BenchLLM Uptime Monitor
The best way to evaluate LLM-powered apps
Last 30 Days Performance
Average Uptime
100%
Based on a 30-day monitoring period
Average Response Time
211.4ms
Mean response time across all checks
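The two figures above are plain aggregates over the individual health checks in the window: uptime is the share of checks that succeeded, and the response time is the mean latency across those checks. A minimal sketch of that calculation (illustrative only; the Check record and its fields are assumptions, not this monitor's actual data model):

```python
# Minimal sketch of how the two summary metrics above are typically derived.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Check:
    up: bool            # did the health check succeed?
    latency_ms: float   # measured response time for this check

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime %, mean response time in ms) over the monitoring window."""
    uptime_pct = 100.0 * sum(c.up for c in checks) / len(checks)
    avg_latency_ms = mean(c.latency_ms for c in checks if c.up)
    return uptime_pct, avg_latency_ms

# Example with three successful checks:
print(summarize([Check(True, 205.0), Check(True, 212.3), Check(True, 216.9)]))
# ≈ (100.0, 211.4)
```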
Daily Status Overview
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   100%             223ms
Nov-2025   99.7%            241ms
Oct-2025   99.87%           243ms
Sep-2025   100%             237ms
Aug-2025   100%             233ms
Jul-2025   100%             222ms
Jun-2025   99.93%           220ms
May-2025   100%             237ms
Apr-2025   100%             207ms

A daily status breakdown is also available for each month.
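For context, the months below 100% translate into small absolute downtime budgets (assuming each figure covers the full calendar month): 99.7% in Nov-2025 corresponds to roughly 130 minutes of downtime over 30 days, 99.87% in Oct-2025 to about 58 minutes over 31 days, and 99.93% in Jun-2025 to about 30 minutes over 30 days.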
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- ModelBench (Operational)
No-Code LLM Evaluations
ModelBench enables teams to rapidly deploy AI solutions with no-code LLM evaluations. It allows users to compare over 180 models, design and benchmark prompts, and trace LLM runs, accelerating AI development.
Last checked: 2 hours ago
- OneLLM (Issues)
Fine-tune, evaluate, and deploy your next LLM without code.
OneLLM is a no-code platform enabling users to fine-tune, evaluate, and deploy Large Language Models (LLMs) efficiently. It streamlines LLM development by letting users create datasets, integrate API keys, run fine-tuning jobs, and compare model performance.
Last checked: 7 hours ago
- Conviction (Operational)
The Platform to Evaluate & Test LLMs
Conviction is an AI platform designed for evaluating, testing, and monitoring Large Language Models (LLMs) to help developers build reliable AI applications faster. It focuses on detecting hallucinations, optimizing prompts, and ensuring security.
Last checked: 2 hours ago
- EvalsOne (Operational)
Evaluate LLMs & RAG Pipelines Quickly
EvalsOne is a platform for rapidly evaluating Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines using various metrics.
Last checked: 2 hours ago
- Gentrace (Operational)
Intuitive evals for intelligent applications
Gentrace is an LLM evaluation platform designed for AI teams to test and automate evaluations of generative AI products and agents. It facilitates collaborative development and ensures high-quality LLM applications.
Last checked: 2 hours ago
- PromptsLabs (Operational)
A Library of Prompts for Testing LLMs
PromptsLabs is a community-driven platform providing copy-paste prompts for testing the performance of new LLMs. Users can explore and contribute to a growing collection of prompts.
Last checked: 6 hours ago