Lora Uptime Monitor
Integrate local LLM with one line of code.
Last 30 Days Performance
Average Uptime: 100% (based on a 30-day monitoring period)
Average Response Time: 199.69ms (mean response time across all checks)
Daily Status Overview
Historical Performance
Month     Uptime   Avg Response Time
Dec-2025  100%     206ms
Nov-2025  99.72%   199ms
Oct-2025  99.87%   251ms
Sep-2025  99.66%   252ms
Aug-2025  96.31%   447ms
Jul-2025  98.39%   452ms
Jun-2025  99.57%   467ms
May-2025  100%     482ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- lm-studio.me (Operational)
  Local LLM Running & Download Platform
  LM Studio is a user-friendly desktop application that allows users to run various large language models (LLMs) locally and offline, including Llama 2, Phi-3, Falcon, Mistral, StarCoder, and Gemma models from Hugging Face.
  Last checked: 1 hour ago
- LM Studio (Operational)
  Discover, download, and run local LLMs on your computer
  LM Studio is a desktop application that allows users to run Large Language Models (LLMs) locally and offline, supporting various architectures including Llama, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.
  Last checked: 1 hour ago
- LlamaEdge (Operational)
  The easiest, smallest, and fastest local LLM runtime and API server.
  LlamaEdge is a lightweight and fast local LLM runtime and API server, powered by Rust and WasmEdge, designed for creating cross-platform LLM agents and web services.
  Last checked: 2 hours ago
- Ollama (Operational)
  Get up and running with large language models locally
  Ollama is a platform that enables users to run powerful language models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, and Gemma 2 on their local machines.
  Last checked: 14 hours ago
- OneLLM (Issues)
  Fine-tune, evaluate, and deploy your next LLM without code.
  OneLLM is a no-code platform enabling users to fine-tune, evaluate, and deploy Large Language Models (LLMs) efficiently. It streamlines LLM development by letting users create datasets, integrate API keys, run fine-tuning processes, and compare model performance.
  Last checked: 2 hours ago
- Kolosal AI (Operational)
  The Ultimate Local LLM Platform
  Kolosal AI is a lightweight, open-source application enabling users to train, run, and chat with local Large Language Models (LLMs) directly on their devices, ensuring complete privacy and control.
  Last checked: 2 hours ago