LMCache Uptime Monitor
Accelerating the Future of AI, One Cache at a Time
Last 30 Days Performance

Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 119.73ms (mean response time across all checks)
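These summary figures are simple aggregates over individual checks. A minimal sketch of the computation, assuming each check records a success flag and a latency (the record format and values here are illustrative, not the monitor's actual data model):

```python
from statistics import mean

# Hypothetical check records: (succeeded, response_time_ms).
checks = [
    (True, 110.0),
    (True, 125.5),
    (False, 0.0),   # failed check: counts against uptime, excluded from latency mean
    (True, 123.0),
]

successes = [c for c in checks if c[0]]
uptime_pct = 100 * len(successes) / len(checks)
avg_response_ms = mean(rt for ok, rt in checks if ok)

print(f"Uptime: {uptime_pct:.2f}%")              # Uptime: 75.00%
print(f"Avg response: {avg_response_ms:.2f}ms")  # Avg response: 119.50ms
```

Note that failed checks lower the uptime figure but are left out of the response-time mean, so the two metrics can move independently.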
Daily Status Overview

Historical Performance
Month      Uptime    Avg Response Time
--------   ------    -----------------
Dec-2025   99.81%    107ms
Nov-2025   99.58%    109ms
Oct-2025   100%      105ms
Sep-2025   100%      115ms
Aug-2025   99.74%    145ms
Jul-2025   99.87%    112ms
Jun-2025   100%      115ms
May-2025   100%      118ms

(Each month also has a daily status breakdown chart, not reproduced here.)
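A monthly uptime percentage can be read as a downtime budget. A minimal sketch of the conversion (the function name and the 30-day month are my assumptions):

```python
def downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Convert a monthly uptime percentage into total downtime minutes,
    assuming `days` days of continuous monitoring."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

# e.g. Nov-2025's 99.58% over 30 days:
print(round(downtime_minutes(99.58), 1))  # 181.4
```

So even the weakest month above corresponds to roughly three hours of total downtime, while a 100% month means no failed checks at all.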
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Cognee (Operational)
  Turn your data into reliable LLM outputs with our AI memory engine
  Cognee is an AI memory engine that enhances the accuracy of LLM applications and agents. It transforms raw data into structured knowledge, improving the reliability of responses from large language models.
  Last checked: 4 hours ago

- FriendliAI (Operational)
  Accelerate Generative AI Inference
  FriendliAI provides a high-performance platform for accelerating generative AI inference, enabling fast, cost-effective, and reliable deployment and serving of Large Language Models (LLMs).
  Last checked: 9 hours ago

- Open Source AI Gateway (Issues)
  Manage multiple LLM providers with built-in failover, guardrails, caching, and monitoring.
  Open Source AI Gateway provides developers with a robust, production-ready solution to manage multiple LLM providers like OpenAI, Anthropic, and Gemini. It offers features like smart failover, caching, rate limiting, and monitoring for enhanced reliability and cost savings.
  Last checked: 4 hours ago

- neutrino AI (Operational)
  Multi-model AI Infrastructure for Optimal LLM Performance
  Neutrino AI provides multi-model AI infrastructure to optimize Large Language Model (LLM) performance for applications. It offers tools for evaluation, intelligent routing, and observability to enhance quality, manage costs, and ensure scalability.
  Last checked: 4 hours ago

- CentML (Issues)
  Better, Faster, Easier AI
  CentML streamlines LLM deployment, offering advanced system optimization and efficient hardware utilization. It provides single-click resource sizing, model serving, and supports diverse hardware and models.
  Last checked: 4 hours ago

- llmChef (Issues)
  Perfect AI responses with zero effort
  llmChef is an AI enrichment engine that provides access to over 100 pre-made prompts (recipes) and leading LLMs, enabling users to get optimal AI responses without crafting perfect prompts.
  Last checked: 3 hours ago