Gentrace Uptime Monitor
Intuitive evals for intelligent applications
Last 30 Days Performance
Average Uptime
100%
Based on 30-day monitoring period
Average Response Time
85.59ms
Mean response time across all checks
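The headline figures above are simple aggregates over individual checks. A minimal sketch of how such figures could be derived from raw check results (the `Check` type and `summarize` helper here are hypothetical illustrations, not Gentrace's monitoring API):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool            # did the check succeed?
    response_ms: float  # measured response time in milliseconds

def summarize(checks):
    """Return (uptime percent, mean response time in ms) over a list of checks."""
    if not checks:
        raise ValueError("no checks recorded")
    uptime = 100.0 * sum(c.ok for c in checks) / len(checks)
    mean_ms = sum(c.response_ms for c in checks) / len(checks)
    return round(uptime, 2), round(mean_ms, 2)

# 29 passing checks and 1 failure: a single outage lowers uptime below 100%
checks = [Check(True, 80.0)] * 29 + [Check(False, 250.0)]
print(summarize(checks))  # → (96.67, 85.67)
```

Note that "uptime" here is the fraction of successful checks, so its resolution depends on check frequency: with one check per minute, a single failed check over 30 days costs about 0.002% of uptime.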
Daily Status Overview
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   99.82%           97ms
Nov-2025   99.25%           119ms
Oct-2025   100%             341ms
Sep-2025   100%             430ms
Aug-2025   99.71%           464ms
Jul-2025   99.86%           424ms
Jun-2025   99.72%           405ms
May-2025   99.93%           421ms
Apr-2025   100%             422ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
BenchLLM (Operational)
The best way to evaluate LLM-powered apps
BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
Last checked: 10 hours ago

Braintrust (Operational)
The end-to-end platform for building world-class AI apps.
Braintrust provides an end-to-end platform for developing, evaluating, and monitoring Large Language Model (LLM) applications. It helps teams build robust AI products through iterative workflows and real-time analysis.
Last checked: 10 hours ago

Agenta (Operational)
End-to-End LLM Engineering Platform
Agenta is an LLM engineering platform offering tools for prompt engineering, versioning, evaluation, and observability in a single, collaborative environment.
Last checked: 10 hours ago

Humanloop (Operational)
The LLM evals platform for enterprises to ship and scale AI with confidence
Humanloop is an enterprise-grade platform that provides tools for LLM evaluation, prompt management, and AI observability, enabling teams to develop, evaluate, and deploy trustworthy AI applications.
Last checked: 10 hours ago

Laminar (Operational)
The AI engineering platform for LLM products
Laminar is an open-source platform that enables developers to trace, evaluate, label, and analyze Large Language Model (LLM) applications with minimal code integration.
Last checked: 10 hours ago

Langfuse (Operational)
Open Source LLM Engineering Platform
Langfuse provides an open-source platform for tracing, evaluating, and managing prompts to debug and improve LLM applications.
Last checked: 21 hours ago