Cognee Uptime Monitor
Turn your data into reliable LLM outputs with our AI memory engine
Last 30 Days Performance
Average Uptime
100%
Based on a 30-day monitoring period
Average Response Time
138.83ms
Mean response time across all checks
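The uptime and response-time figures above are presumably aggregates over individual monitoring checks; a minimal sketch of how such metrics could be computed (the `Check` structure and the hourly check cadence are assumptions, not details from this page):

```python
from dataclasses import dataclass

@dataclass
class Check:
    """One monitoring probe result (hypothetical structure)."""
    up: bool            # True if the service responded successfully
    response_ms: float  # round-trip time in milliseconds

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime percent, mean response time in ms) over a window."""
    uptime = 100.0 * sum(c.up for c in checks) / len(checks)
    mean_ms = sum(c.response_ms for c in checks) / len(checks)
    return uptime, mean_ms

# Example: 719 successes and 1 failure out of 720 hourly checks
# in a 30-day window works out to roughly 99.86% uptime.
checks = [Check(True, 130.0)] * 719 + [Check(False, 0.0)]
uptime, mean_ms = summarize(checks)
```

Real monitors typically also exclude failed checks from the response-time mean and weight uptime by outage duration rather than check count; the sketch above uses the simplest per-check average.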
Daily Status Overview
Historical Performance
Dec-2025
99.77% uptime
Monthly Uptime
99.77%
Monthly Response Time
131ms
Daily Status Breakdown
Nov-2025
99.86% uptime
Monthly Uptime
99.86%
Monthly Response Time
124ms
Daily Status Breakdown
Oct-2025
100% uptime
Monthly Uptime
100%
Monthly Response Time
136ms
Daily Status Breakdown
Sep-2025
100% uptime
Monthly Uptime
100%
Monthly Response Time
136ms
Daily Status Breakdown
Aug-2025
99.87% uptime
Monthly Uptime
99.87%
Monthly Response Time
141ms
Daily Status Breakdown
Jul-2025
100% uptime
Monthly Uptime
100%
Monthly Response Time
129ms
Daily Status Breakdown
Jun-2025
100% uptime
Monthly Uptime
100%
Monthly Response Time
135ms
Daily Status Breakdown
May-2025
100% uptime
Monthly Uptime
100%
Monthly Response Time
133ms
Daily Status Breakdown
Apr-2025
100% uptime
Monthly Uptime
100%
Monthly Response Time
195ms
Daily Status Breakdown
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- LMCache (Operational)
Accelerating the Future of AI, One Cache at a Time
LMCache is an open-source Knowledge Delivery Network (KDN) designed to accelerate LLM applications, making them up to 8x faster and more cost-effective. It improves performance for AI chatbots and RAG queries through prompt caching and KV cache compression.
Last checked: 2 hours ago
- Llongterm (Operational)
Long Term Memory for AI Apps & Agents
Llongterm provides a 'Mind as a Service' for AI, enabling chatbots and agents to retain information and context over extended periods, improving user interactions.
Last checked: 2 hours ago
- Mem0 (Operational)
The Memory Layer for your AI Agents
Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences that save costs and improve over time.
Last checked: 2 hours ago
- MemU (Operational)
Agent Memory for AI
MemU is an agent memory layer for LLM applications that enables autonomous, intelligent memory management for AI agents with higher accuracy, faster retrieval, and lower cost.
Last checked: 2 hours ago
- Supercog (Operational)
Move faster with AI assistants at work
Supercog provides AI assistants to enhance workplace productivity by streamlining information access and analysis from various sources, integrating seamlessly with Slack.
Last checked: 2 hours ago
- LLM Optimize (Issues)
Rank Higher in AI Engine Recommendations
LLM Optimize provides professional website audits to help you rank higher in LLMs like ChatGPT and Google's AI Overview, outranking competitors with tailored, actionable recommendations.
Last checked: 11 hours ago