Humanloop Uptime Monitor
The LLM evals platform for enterprises to ship and scale AI with confidence
Last 30 Days Performance
Average Uptime: 100% (over the 30-day monitoring period)
Average Response Time: 105.3ms (mean across all checks)
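These two aggregates follow directly from the individual health checks: uptime is the fraction of checks that succeeded, and response time is the arithmetic mean of the measured latencies. A minimal sketch of that calculation (the `Check` shape and field names are hypothetical, not the monitor's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool            # did the endpoint respond successfully?
    response_ms: float  # measured response time in milliseconds

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime %, mean response time in ms) over a monitoring window."""
    uptime_pct = 100.0 * sum(c.ok for c in checks) / len(checks)
    mean_ms = sum(c.response_ms for c in checks) / len(checks)
    return uptime_pct, mean_ms

# Three passing checks averaging 105.3ms:
print(summarize([Check(True, 98.0), Check(True, 110.5), Check(True, 107.4)]))
# ≈ (100.0, 105.3)
```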
Daily Status Overview
(interactive per-day status chart; details shown on hover)
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   100%             96ms
Nov-2025   99.42%           93ms
Oct-2025   100%             93ms
Sep-2025   100%             93ms
Aug-2025   100%             107ms
Jul-2025   99.86%           119ms
Jun-2025   100%             133ms
May-2025   100%             126ms
Apr-2025   98.94%           124ms
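A sub-100% month translates directly into implied downtime: (100% minus uptime) times the length of the window. As a back-of-the-envelope sketch (assuming a 30-day month and continuous monitoring):

```python
def downtime_hours(uptime_pct: float, days: int = 30) -> float:
    """Hours of downtime implied by an uptime percentage over `days` days."""
    return (100.0 - uptime_pct) / 100.0 * days * 24

print(f"{downtime_hours(99.42):.1f}h")  # Nov-2025: ~4.2h
print(f"{downtime_hours(99.86):.1f}h")  # Jul-2025: ~1.0h
print(f"{downtime_hours(98.94):.1f}h")  # Apr-2025: ~7.6h
```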
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Hegel AI (Operational)
  Developer Platform for Large Language Model (LLM) Applications
  Hegel AI provides a developer platform for building, monitoring, and improving large language model (LLM) applications, featuring tools for experimentation, evaluation, and feedback integration.
  Last checked: 3 hours ago

- Promptech (Operational)
  The AI teamspace to streamline your workflows
  Promptech is a collaborative AI platform that provides prompt engineering tools and teamspace solutions for organizations to effectively utilize Large Language Models (LLMs). It offers access to multiple AI models, workspace management, and enterprise-ready features.
  Last checked: 3 hours ago

- Braintrust (Operational)
  The end-to-end platform for building world-class AI apps.
  Braintrust provides an end-to-end platform for developing, evaluating, and monitoring Large Language Model (LLM) applications. It helps teams build robust AI products through iterative workflows and real-time analysis.
  Last checked: 3 hours ago

- OpenLIT (Operational)
  Open Source Platform for AI Engineering
  OpenLIT is an open-source observability platform designed to streamline AI development workflows, particularly for Generative AI and LLMs, offering features like prompt management, performance tracking, and secure secrets management.
  Last checked: 3 hours ago

- klu.ai (Operational)
  Next-gen LLM App Platform for Confident AI Development
  Klu is an all-in-one LLM App Platform that enables teams to experiment, version, and fine-tune GPT-4 Apps with collaborative prompt engineering and comprehensive evaluation tools.
  Last checked: 3 hours ago

- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 3 hours ago