lm-studio.me Uptime Monitor
Platform for Downloading and Running Local LLMs
Last 30 Days Performance
Average Uptime
99.11%
Based on a 30-day monitoring period
Average Response Time
656.4ms
Mean response time across all checks
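The two headline figures are simple aggregates over individual checks: uptime is the share of checks that succeeded, and response time is the mean over successful checks. A minimal sketch of that arithmetic, using hypothetical check records (the monitor's actual data model is not shown on this page):

```python
# Sketch: aggregate uptime and mean response time from raw check results.
# The check records below are illustrative examples, not real monitor data.

checks = [
    {"up": True, "response_ms": 610.0},
    {"up": True, "response_ms": 702.5},
    {"up": False, "response_ms": None},  # failed check: no response time
    {"up": True, "response_ms": 655.0},
]

# Uptime: fraction of checks that succeeded, as a percentage.
uptime_pct = 100.0 * sum(c["up"] for c in checks) / len(checks)

# Mean response time, computed over successful checks only.
times = [c["response_ms"] for c in checks if c["response_ms"] is not None]
avg_response_ms = sum(times) / len(times)

print(f"{uptime_pct:.2f}% uptime, {avg_response_ms:.1f}ms average response")
```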
Daily Status Overview
Historical Performance
Month      Uptime    Avg Response Time
Dec-2025   99.8%     578ms
Nov-2025   99.57%    556ms
Oct-2025   99.73%    614ms
Sep-2025   100%      548ms
Aug-2025   99.52%    643ms
Jul-2025   99.73%    558ms
Jun-2025   99.86%    574ms
May-2025   97.8%     560ms
Apr-2025   100%      590ms
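A monthly uptime percentage maps directly to an amount of downtime. As a rough sketch (assuming a 30-day month of 43,200 minutes; the monitor's exact accounting is not shown here), 99.8% uptime implies roughly 86 minutes of downtime, while the May-2025 figure of 97.8% implies over 15 hours:

```python
# Sketch: translate a monthly uptime percentage into implied downtime,
# assuming a 30-day month (43,200 minutes). Illustrative only.

MINUTES_PER_30_DAYS = 30 * 24 * 60  # 43,200 minutes

def downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime implied by an uptime percentage over 30 days."""
    return (100.0 - uptime_pct) / 100.0 * MINUTES_PER_30_DAYS

print(downtime_minutes(99.8))  # ~86.4 minutes (Dec-2025 figure)
print(downtime_minutes(97.8))  # ~950.4 minutes (May-2025 figure)
```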
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
LM Studio (Operational)
Discover, download, and run local LLMs on your computer
LM Studio is a desktop application that allows users to run Large Language Models (LLMs) locally and offline, supporting various architectures including Llama, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.
Last checked: 14 minutes ago

Ollama (Operational)
Get up and running with large language models locally
Ollama is a platform that enables users to run powerful language models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, and Gemma 2 on their local machines.
Last checked: 22 minutes ago

Kolosal AI (Operational)
The Ultimate Local LLM Platform
Kolosal AI is a lightweight, open-source application enabling users to train, run, and chat with local Large Language Models (LLMs) directly on their devices, ensuring complete privacy and control.
Last checked: 50 minutes ago

Msty (Operational)
The easiest way to use local and online AI models
Msty is a user-friendly application that simplifies using local and online AI models, offering offline functionality, privacy, and advanced features like parallel multiverse chats.
Last checked: 39 minutes ago

LocalAI (Operational)
Run Powerful AI Models Locally - Free, OpenAI Alternative
LocalAI provides a free, open-source alternative to run LLMs, autonomous agents, and semantic search locally on your hardware, ensuring privacy and control.
Last checked: 59 minutes ago

LlamaEdge (Operational)
The easiest, smallest, and fastest local LLM runtime and API server.
LlamaEdge is a lightweight and fast local LLM runtime and API server, powered by Rust & WasmEdge, designed for creating cross-platform LLM agents and web services.
Last checked: 5 hours ago