Agenta Uptime Monitor
End-to-End LLM Engineering Platform
Last 30 Days Performance
Average Uptime
100%
Based on a 30-day monitoring period
Average Response Time
237.8ms
Mean response time across all checks
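The two headline figures are simple aggregates over individual health checks: uptime is the fraction of checks that succeeded, expressed as a percentage, and response time is the mean latency across all checks. A minimal sketch of that arithmetic (the check records here are hypothetical sample data, not Agenta's actual monitoring format):

```python
# Hypothetical check records: (succeeded, response_time_ms)
checks = [
    (True, 231.0),
    (True, 244.5),
    (True, 238.0),
]

# Uptime: fraction of successful checks, as a percentage.
uptime_pct = 100.0 * sum(1 for ok, _ in checks if ok) / len(checks)

# Response time: mean latency across all checks.
avg_response_ms = sum(ms for _, ms in checks) / len(checks)

print(f"Uptime: {uptime_pct:.2f}%")               # Uptime: 100.00%
print(f"Avg response: {avg_response_ms:.1f}ms")   # Avg response: 237.8ms
```

Note that the mean is taken over every check, successful or not; a real monitor might exclude failed checks from the latency average, which would change the figure.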
Daily Status Overview
Historical Performance

Month      Uptime    Avg. Response Time
Dec 2025   99.79%    186ms
Nov 2025   99.71%    194ms
Oct 2025   99.73%    142ms
Sep 2025   100%      138ms
Aug 2025   100%      147ms
Jul 2025   100%      130ms
Jun 2025   99.93%    152ms
May 2025   99.93%    144ms
Apr 2025   100%      213ms
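Each monthly uptime percentage maps directly to a downtime budget: assuming a 30-day month, December's 99.79% corresponds to roughly 91 minutes of total downtime. A quick sketch of that conversion (the function name and 30-day simplification are illustrative assumptions):

```python
def downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Downtime implied by an uptime percentage over a period of `days`."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return (1 - uptime_pct / 100) * total_minutes

# December's 99.79% over a 30-day window:
print(f"{downtime_minutes(99.79):.0f} minutes")  # 91 minutes
```

By the same arithmetic, 99.93% (June, May) is about 30 minutes of downtime, while 100% months recorded none.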
Related Uptime Monitors
Explore the uptime status of similar tools that also have monitoring enabled.
- Promptech (Operational)
  The AI teamspace to streamline your workflows.
  Promptech is a collaborative AI platform that provides prompt engineering tools and teamspace solutions for organizations to make effective use of Large Language Models (LLMs). It offers access to multiple AI models, workspace management, and enterprise-ready features.
  Last checked: 4 hours ago.

- Literal AI (Issues)
  Ship reliable LLM products.
  Literal AI streamlines the development of LLM applications, offering tools for evaluation, prompt management, logging, monitoring, and more to build production-grade AI products.
  Last checked: 6 minutes ago.

- Hegel AI (Operational)
  Developer platform for Large Language Model (LLM) applications.
  Hegel AI provides a developer platform for building, monitoring, and improving large language model (LLM) applications, featuring tools for experimentation, evaluation, and feedback integration.
  Last checked: 4 hours ago.

- Agno (Operational)
  An open-source platform to build, ship, and monitor agentic systems.
  Agno is an open-source platform for building, deploying, and monitoring AI agents. It allows developers to create high-performance agents with memory, knowledge, and tool-integration capabilities.
  Last checked: 4 hours ago.

- AgentOps (Operational)
  An industry-leading developer platform to test, debug, and deploy AI agents.
  AgentOps is a comprehensive developer platform that enables testing, debugging, and deployment of AI agents, with support for 400+ LLMs, Crews, and AI agent frameworks.
  Last checked: 17 hours ago.

- Gentrace (Operational)
  Intuitive evals for intelligent applications.
  Gentrace is an LLM evaluation platform designed for AI teams to test and automate evaluations of generative AI products and agents. It facilitates collaborative development and ensures high-quality LLM applications.
  Last checked: 4 hours ago.