Evidently AI Uptime Monitor
Collaborative AI observability platform for evaluating, testing, and monitoring AI-powered products
Last 30 Days Performance
Average Uptime
100%
Based on 30-day monitoring period
Average Response Time
121.07ms
Mean response time across all checks
Daily Status Overview
Historical Performance
Jan-2026: 100% uptime, 114ms average response time
Dec-2025: 99.87% uptime, 120ms average response time
Nov-2025: 98.56% uptime, 120ms average response time
Oct-2025: 100% uptime, 198ms average response time
Sep-2025: 100% uptime, 200ms average response time
Aug-2025: 100% uptime, 193ms average response time
Jul-2025: 99.87% uptime, 187ms average response time
Jun-2025: 99.86% uptime, 211ms average response time
May-2025: 100% uptime, 200ms average response time
Apr-2025: 100% uptime, 257ms average response time
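For reference, the monthly figures above can be rolled up into overall averages. This is a minimal sketch that assumes equal weighting per month; a real uptime dashboard typically weights by the number of individual checks, so its headline numbers may differ slightly:

```python
# Monthly (uptime %, avg response time in ms) pairs, copied from the
# historical performance data above.
monthly = {
    "Jan-2026": (100.00, 114),
    "Dec-2025": (99.87, 120),
    "Nov-2025": (98.56, 120),
    "Oct-2025": (100.00, 198),
    "Sep-2025": (100.00, 200),
    "Aug-2025": (100.00, 193),
    "Jul-2025": (99.87, 187),
    "Jun-2025": (99.86, 211),
    "May-2025": (100.00, 200),
    "Apr-2025": (100.00, 257),
}

uptimes = [uptime for uptime, _ in monthly.values()]
latencies = [ms for _, ms in monthly.values()]

# Simple (unweighted) means across the ten reported months.
avg_uptime = sum(uptimes) / len(uptimes)
avg_latency = sum(latencies) / len(latencies)

print(f"average uptime:        {avg_uptime:.2f}%")   # → 99.82%
print(f"average response time: {avg_latency:.0f}ms")  # → 180ms
```

Note that this ten-month rollup differs from the "Last 30 Days" figures at the top of the page, which cover only the most recent monitoring window.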
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Censius (Issues)
End-to-end AI observability platform for reliable and trustworthy ML models
Censius is an AI observability platform that provides automated monitoring, proactive troubleshooting, and model explainability tools to help organizations build and maintain reliable machine learning models throughout their lifecycle.
Last checked: 40 minutes ago. View Status
- HoneyHive (Operational)
AI Observability and Evaluation Platform for Building Reliable AI Products
HoneyHive is a comprehensive platform that provides AI observability, evaluation, and prompt management tools to help teams build and monitor reliable AI applications.
Last checked: 6 hours ago. View Status
- Freeplay (Operational)
The All-in-One Platform for AI Experimentation, Evaluation, and Observability
Freeplay provides comprehensive tools for AI teams to run experiments, evaluate model performance, and monitor production, streamlining the development process.
Last checked: 51 minutes ago. View Status
- Autoblocks (Operational)
Improve your LLM Product Accuracy with Expert-Driven Testing & Evaluation
Autoblocks is a collaborative testing and evaluation platform for LLM-based products that automatically improves through user and expert feedback, offering comprehensive tools for monitoring, debugging, and quality assurance.
Last checked: 38 minutes ago. View Status
- Arize (Operational)
Unified Observability and Evaluation Platform for AI
Arize is a comprehensive platform designed to accelerate the development of AI applications and agents and improve their performance in production.
Last checked: 52 minutes ago. View Status
- Keywords AI (Operational)
LLM monitoring for AI startups
Keywords AI is a comprehensive developer platform for LLM applications, offering monitoring, debugging, and deployment tools. It serves as a Datadog-like solution specifically designed for LLM applications.
Last checked: 39 minutes ago. View Status