Conviction Uptime Monitor
The Platform to Evaluate & Test LLMs
Last 30 Days Performance
Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 304.5ms (mean response time across all checks)
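Both figures are simple aggregates over the individual health checks in the window: uptime is the fraction of checks that succeeded, and response time is the mean latency across all checks. A minimal sketch of that computation, assuming each check is stored as a success flag plus a latency in milliseconds (the Check record and summarize helper are illustrative, not Conviction's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool           # did the endpoint respond successfully?
    latency_ms: float  # measured response time in milliseconds

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime_percent, mean_latency_ms) for a monitoring window."""
    if not checks:
        raise ValueError("empty monitoring window")
    up = sum(1 for c in checks if c.ok)
    uptime_pct = 100.0 * up / len(checks)
    mean_latency = sum(c.latency_ms for c in checks) / len(checks)
    return uptime_pct, mean_latency

# Three successful checks: 100% uptime, 304.5ms mean latency
window = [Check(True, 290.0), Check(True, 310.0), Check(True, 313.5)]
print(summarize(window))  # (100.0, 304.5)
```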
Daily Status Overview (daily status chart for the last 30 days)

Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   99.77%           259ms
Nov-2025   99.72%           259ms
Oct-2025   99.86%           264ms
Sep-2025   100%             262ms
Aug-2025   99.71%           254ms
Jul-2025   99.87%           249ms
Jun-2025   99.93%           257ms
May-2025   100%             259ms
Apr-2025   99.18%           335ms
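To put these percentages in concrete terms, a monthly uptime figure converts directly into minutes of downtime. A quick sketch of the arithmetic (the helper name and the month lengths used here are just for illustration):

```python
def downtime_minutes(uptime_pct: float, days_in_month: int) -> float:
    """Convert a monthly uptime percentage into minutes of downtime."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1.0 - uptime_pct / 100.0)

# Apr-2025 (30 days) at 99.18% uptime:
print(round(downtime_minutes(99.18, 30)))  # ~354 minutes, about 5.9 hours
# Dec-2025 (31 days) at 99.77% uptime:
print(round(downtime_minutes(99.77, 31)))  # ~103 minutes, about 1.7 hours
```

By this measure, April 2025 was the weakest month in the period shown, with roughly six hours of cumulative downtime; every other month stayed under two hours.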
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 1 hour ago

- Langtail (Operational)
  The low-code platform for testing AI apps
  Langtail is a comprehensive testing platform that enables teams to test and debug LLM-powered applications with a spreadsheet-like interface, offering security features and integration with major LLM providers.
  Last checked: 1 hour ago

- Helicone (Operational)
  Ship your AI app with confidence
  Helicone is an all-in-one platform for monitoring, debugging, and improving production-ready LLM applications. It provides tools for logging, evaluating, experimenting, and deploying AI applications.
  Last checked: 1 hour ago

- promptfoo (Operational)
  Test & secure your LLM apps with open-source LLM testing
  promptfoo is an open-source LLM testing tool designed to help developers secure and evaluate their language model applications, offering features like vulnerability scanning and continuous monitoring.
  Last checked: 1 hour ago

- Humanloop (Operational)
  The LLM evals platform for enterprises to ship and scale AI with confidence
  Humanloop is an enterprise-grade platform that provides tools for LLM evaluation, prompt management, and AI observability, enabling teams to develop, evaluate, and deploy trustworthy AI applications.
  Last checked: 1 hour ago

- Hegel AI (Operational)
  Developer Platform for Large Language Model (LLM) Applications
  Hegel AI provides a developer platform for building, monitoring, and improving large language model (LLM) applications, featuring tools for experimentation, evaluation, and feedback integration.
  Last checked: 1 hour ago