EvalsOne Uptime Monitor
Evaluate LLMs & RAG Pipelines Quickly
Last 30 Days Performance
Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 178.2ms (mean response time across all checks)
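For reference, headline figures like these are typically derived from a log of individual checks: uptime is the share of checks that succeeded, and response time is the mean latency across checks. The sketch below illustrates that calculation over a hypothetical list of check results; the data and the convention of excluding failed checks from the latency average are assumptions, not EvalsOne's actual monitoring pipeline.

```python
# Hypothetical sketch of deriving the headline metrics from a raw check log.
# Illustrative only; not EvalsOne's real monitoring data or implementation.
# Each check is recorded as (succeeded, latency_ms).
checks = [
    (True, 181.0),
    (True, 175.4),
    (False, 0.0),   # a failed check lowers uptime and is excluded from the latency average
    (True, 178.1),
]

total_checks = len(checks)
ok_latencies = [latency for ok, latency in checks if ok]

average_uptime = 100.0 * len(ok_latencies) / total_checks     # percent of checks that passed
average_response_ms = sum(ok_latencies) / len(ok_latencies)   # mean latency of passing checks

print(f"Average Uptime: {average_uptime:.2f}%")
print(f"Average Response Time: {average_response_ms:.1f}ms")
```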
Daily Status Overview
Historical Performance
Month       Monthly Uptime    Monthly Response Time
Dec-2025    99.75%            200ms
Nov-2025    100%              195ms
Oct-2025    99.03%            214ms
Sep-2025    100%              189ms
Aug-2025    100%              201ms
Jul-2025    100%              192ms
Jun-2025    100%              211ms
May-2025    99.93%            212ms
Apr-2025    100%              243ms
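To put the monthly percentages in perspective: assuming a full 31-day month (44,640 minutes), Dec-2025's 99.75% uptime corresponds to roughly 0.25% of 44,640, or about 112 minutes of accumulated downtime, and Oct-2025's 99.03% to about 433 minutes (just over 7 hours). These conversions are illustrative arithmetic, not figures reported by the monitor.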
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- BenchLLM (Operational): The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 12 hours ago

- Reva (Operational): Use the right LLM for your task
  Reva helps businesses test AI configurations and compare LLM outcomes to ensure optimal performance for their specific tasks, focusing on outcome-driven AI testing and model evaluation.
  Last checked: 41 minutes ago

- Compare AI Models (Issues): AI Model Comparison Tool
  Compare AI Models is a platform providing comprehensive comparisons and insights into various large language models, including GPT-4o, Claude, Llama, and Mistral.
  Last checked: 27 minutes ago

- Gentrace (Operational): Intuitive evals for intelligent applications
  Gentrace is an LLM evaluation platform designed for AI teams to test and automate evaluations of generative AI products and agents. It facilitates collaborative development and ensures high-quality LLM applications.
  Last checked: 29 minutes ago

- LastMile AI (Issues): Ship generative AI apps to production with confidence.
  LastMile AI empowers developers to seamlessly transition generative AI applications from prototype to production with a robust developer platform.
  Last checked: 10 minutes ago

- OneLLM (Issues): Fine-tune, evaluate, and deploy your next LLM without code.
  OneLLM is a no-code platform enabling users to fine-tune, evaluate, and deploy Large Language Models (LLMs) efficiently. It streamlines LLM development by creating datasets, integrating API keys, running fine-tuning processes, and comparing model performance.
  Last checked: 41 minutes ago