Scorecard.io Uptime Monitor
Scorecard.io provides testing for production-ready LLM applications, RAG systems, agents, and chatbots.
Last 30 Days Performance

Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 134.83 ms (mean response time across all checks)
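These two figures are simple aggregates over individual health checks. As a minimal sketch of how such numbers are typically derived (the `Check` record and its field names are hypothetical, not Scorecard's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool            # did the endpoint respond successfully?
    latency_ms: float   # measured response time for this check

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime percentage, mean response time in ms) over a list of checks."""
    up = sum(1 for c in checks if c.ok)
    uptime_pct = 100.0 * up / len(checks)
    mean_ms = sum(c.latency_ms for c in checks) / len(checks)
    return round(uptime_pct, 2), round(mean_ms, 2)

# Three successful checks whose latencies average 134.83 ms
checks = [Check(True, 120.5), Check(True, 140.0), Check(True, 144.0)]
print(summarize(checks))  # -> (100.0, 134.83)
```

A real monitor would also need to decide what counts as "ok" (HTTP status, timeout threshold) and how to weight gaps where no check ran.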
Daily Status Overview
(interactive daily status chart omitted)

Historical Performance
Month     Uptime   Avg Response Time
--------  -------  -----------------
Dec-2025  99.67%   128 ms
Nov-2025  98.42%   150 ms
Oct-2025  100%     149 ms
Sep-2025  100%     155 ms
Aug-2025  99.44%   171 ms
Jul-2025  99.85%   138 ms
Jun-2025  100%     148 ms
May-2025  99.29%   151 ms
Apr-2025  100%     222 ms
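A monthly uptime percentage is easier to interpret as implied downtime. As a quick sketch of the arithmetic (function name and rounding are my own, not part of the status page):

```python
def downtime_minutes(uptime_pct: float, days: int) -> float:
    """Minutes of downtime implied by an uptime percentage over a period of whole days."""
    total_minutes = days * 24 * 60
    return round(total_minutes * (1 - uptime_pct / 100), 1)

# Dec-2025 at 99.67% uptime over 31 days
print(downtime_minutes(99.67, 31))  # -> 147.3 (about 2.5 hours)
```

By the same arithmetic, November's 98.42% over 30 days corresponds to roughly 11.4 hours of downtime, which is why small differences in the percentage matter.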
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
Humanloop (Operational)
The LLM evals platform for enterprises to ship and scale AI with confidence.
Humanloop is an enterprise-grade platform that provides tools for LLM evaluation, prompt management, and AI observability, enabling teams to develop, evaluate, and deploy trustworthy AI applications.
Last checked: 1 hour ago

Agenta (Operational)
End-to-End LLM Engineering Platform.
Agenta is an LLM engineering platform offering tools for prompt engineering, versioning, evaluation, and observability in a single, collaborative environment.
Last checked: 1 hour ago

Reprompt (Issues)
Collaborative prompt testing for confident AI deployment.
Reprompt is a developer-focused platform that enables efficient testing and optimization of AI prompts with real-time analysis and comparison capabilities.
Last checked: 1 hour ago

Langtail (Operational)
The low-code platform for testing AI apps.
Langtail is a comprehensive testing platform that enables teams to test and debug LLM-powered applications with a spreadsheet-like interface, offering security features and integration with major LLM providers.
Last checked: 1 hour ago

Langtrace (Operational)
Transform AI Prototypes into Enterprise-Grade Products.
Langtrace is an open-source observability and evaluations platform designed to help developers monitor, evaluate, and enhance AI agents for enterprise deployment.
Last checked: 7 hours ago

LastMile AI (Issues)
Ship generative AI apps to production with confidence.
LastMile AI empowers developers to seamlessly transition generative AI applications from prototype to production with a robust developer platform.
Last checked: 1 hour ago