ModelBench Uptime Monitor
No-Code LLM Evaluations
Last 30 Days Performance
Average Uptime
100%
Based on a 30-day monitoring period
Average Response Time
655.4ms
Mean response time across all checks
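The two headline figures above are simple aggregates over individual health checks. As a minimal sketch (the monitor's actual data model and aggregation rules are not published, so the `Check` record and `summarize` helper below are assumptions): uptime is the percentage of checks that succeeded, and response time is the mean of the measured latencies.

```python
from dataclasses import dataclass

# Hypothetical check record; the monitor's real schema is an assumption here.
@dataclass
class Check:
    up: bool            # did the health check succeed?
    response_ms: float  # measured response time in milliseconds

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime_pct, mean_response_ms) for a monitoring window."""
    if not checks:
        return 0.0, 0.0
    uptime_pct = 100.0 * sum(c.up for c in checks) / len(checks)
    mean_ms = sum(c.response_ms for c in checks) / len(checks)
    return uptime_pct, mean_ms

# Example window: 3 successful checks out of 4
window = [Check(True, 600.0), Check(True, 700.0),
          Check(False, 0.0), Check(True, 650.0)]
print(summarize(window))  # (75.0, 487.5)
```

Note one design choice: this sketch includes failed checks (recorded as 0 ms) in the mean, whereas some monitors average only over successful checks; which convention ModelBench's monitor uses is not stated.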
Daily Status Overview
Historical Performance
Month     Monthly Uptime   Monthly Response Time
Dec-2025  70.83%           435ms
Nov-2025  73.71%           458ms
Oct-2025  0%               0ms
Sep-2025  57.53%           363ms
Aug-2025  99.87%           617ms
Jul-2025  100%             609ms
Jun-2025  99.93%           599ms
May-2025  100%             630ms
Apr-2025  99.3%            657ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 2 hours ago
- PromptsLabs (Operational)
  A Library of Prompts for Testing LLMs
  PromptsLabs is a community-driven platform providing copy-paste prompts to test the performance of new LLMs. Explore and contribute to a growing collection of prompts.
  Last checked: 7 hours ago
- Compare AI Models (Issues)
  AI Model Comparison Tool
  Compare AI Models is a platform providing comprehensive comparisons and insights into various large language models, including GPT-4o, Claude, Llama, and Mistral.
  Last checked: 2 hours ago
- LLM Explorer (Operational)
  Discover and Compare Open-Source Language Models
  LLM Explorer is a comprehensive platform for discovering, comparing, and accessing over 46,000 open-source Large Language Models (LLMs) and Small Language Models (SLMs).
  Last checked: 2 hours ago
- OneLLM (Issues)
  Fine-tune, evaluate, and deploy your next LLM without code.
  OneLLM is a no-code platform enabling users to fine-tune, evaluate, and deploy Large Language Models (LLMs) efficiently. Streamline LLM development by creating datasets, integrating API keys, running fine-tuning processes, and comparing model performance.
  Last checked: 7 hours ago
- Braintrust (Operational)
  The end-to-end platform for building world-class AI apps.
  Braintrust provides an end-to-end platform for developing, evaluating, and monitoring Large Language Model (LLM) applications. It helps teams build robust AI products through iterative workflows and real-time analysis.
  Last checked: 2 hours ago