Rhesis AI Uptime Monitor
Open-source test generation SDK for LLM applications
Last 30 Days Performance

Average Uptime: 100% (over the 30-day monitoring period)
Average Response Time: 236.4ms (mean response time across all checks)
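Both figures follow the usual monitor conventions: uptime is the share of checks that succeeded, and response time is an average of measured latencies. A minimal sketch of that aggregation, assuming check records with an ok flag and a latency field (the names are illustrative, not Rhesis's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool           # did the health endpoint respond successfully?
    latency_ms: float  # round-trip time measured for this check

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime %, mean response time in ms) for a window of checks."""
    if not checks:
        raise ValueError("empty monitoring window")
    uptime_pct = 100.0 * sum(c.ok for c in checks) / len(checks)
    # Failed checks usually record a timeout rather than a real latency,
    # so only successful checks enter the mean; conventions vary by monitor.
    ok_latencies = [c.latency_ms for c in checks if c.ok]
    mean_ms = sum(ok_latencies) / len(ok_latencies) if ok_latencies else float("nan")
    return uptime_pct, mean_ms

# Illustrative window chosen to reproduce the headline figures above
window = [Check(True, 231.0), Check(True, 242.0), Check(True, 236.2)]
print(summarize(window))  # approximately (100.0, 236.4)
```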
Daily Status Overview
(interactive chart; hover a day on the live page for details)
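Each day's status is the product of periodic probes against the service endpoint. A minimal sketch of one such probe, using only the standard library (the URL is a placeholder, and a real monitor would run this on a fixed schedule):

```python
import time
import urllib.request

def probe(url: str, timeout: float = 10.0) -> tuple[bool, float]:
    """One uptime check: did the endpoint answer, and how fast (in ms)?"""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except Exception:  # timeouts and HTTP errors both count as a failed check
        ok = False
    latency_ms = (time.monotonic() - start) * 1000
    return ok, latency_ms

print(probe("https://example.com/"))  # placeholder endpoint
```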
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec 2025   99.64%           128ms
Nov 2025   100%             129ms
Oct 2025   99.87%           138ms
Sep 2025   99.84%           225ms
Aug 2025   99.87%           235ms
Jul 2025   100%             224ms
Jun 2025   100%             242ms
May 2025   99.92%           242ms
Apr 2025   98.83%           301ms

Each month also has a Daily Status Breakdown chart on the live page.
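A monthly uptime percentage maps directly to minutes of downtime, which makes the spread between months easier to judge. A quick conversion (plain arithmetic, not tied to any particular monitoring tool):

```python
def downtime_minutes(uptime_pct: float, days_in_month: int) -> float:
    """Minutes of downtime implied by a monthly uptime percentage."""
    return (1 - uptime_pct / 100) * days_in_month * 24 * 60

print(round(downtime_minutes(98.83, 30)))  # Apr 2025: ~505 min (about 8.4 h)
print(round(downtime_minutes(99.64, 31)))  # Dec 2025: ~161 min (about 2.7 h)
```

By this measure, April's dip to 98.83% amounts to roughly three times the downtime of December's 99.64%.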
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- RoostGPT (Operational)
  Automated Test Case Generation using LLMs for Reliable Software Development
  RoostGPT is an AI-powered testing co-pilot that automates test case generation, providing 100% test coverage while detecting static vulnerabilities. It leverages Large Language Models to enhance software development efficiency and reliability.
  Last checked: 9 hours ago. View Status

- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 3 hours ago. View Status

- Autoblocks (Operational)
  Improve your LLM Product Accuracy with Expert-Driven Testing & Evaluation
  Autoblocks is a collaborative testing and evaluation platform for LLM-based products that automatically improves through user and expert feedback, offering comprehensive tools for monitoring, debugging, and quality assurance.
  Last checked: 3 hours ago. View Status

- Autumn8 (Operational)
  Compliance Assessment for Gen AI Applications
  Autumn8 streamlines enterprise compliance assessments for Gen AI applications, significantly reducing preparation time and improving acceptance rates through automated document analysis.
  Last checked: 3 hours ago. View Status

- Hegel AI (Operational)
  Developer Platform for Large Language Model (LLM) Applications
  Hegel AI provides a developer platform for building, monitoring, and improving large language model (LLM) applications, featuring tools for experimentation, evaluation, and feedback integration.
  Last checked: 3 hours ago. View Status

- Flow AI (Operational)
  The data engine for AI agent testing
  Flow AI accelerates AI agent development by providing continuously evolving, validated test data grounded in real-world information and refined by domain experts.
  Last checked: 3 hours ago. View Status