Autoblocks Uptime Monitor
Improve your LLM Product Accuracy with Expert-Driven Testing & Evaluation
Last 30 Days Performance
- Average Uptime: 100% (based on the 30-day monitoring period)
- Average Response Time: 320.27ms (mean response time across all checks)
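
As a rough illustration of how these two aggregates are typically derived, here is a minimal Python sketch. It assumes each monitoring check records a success flag and a latency in milliseconds; the `Check` structure and the sample values are illustrative, not Autoblocks' actual data model.

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool           # whether the probe received a healthy response
    latency_ms: float  # round-trip time of the probe, in milliseconds

def uptime_percent(checks: list[Check]) -> float:
    """Share of checks that succeeded, as a percentage."""
    return 100.0 * sum(c.ok for c in checks) / len(checks)

def mean_response_ms(checks: list[Check]) -> float:
    """Mean latency across all checks."""
    return sum(c.latency_ms for c in checks) / len(checks)

# Illustrative data: three healthy checks averaging 320.27ms
checks = [Check(True, 310.0), Check(True, 325.5), Check(True, 325.3)]
print(f"{uptime_percent(checks):.0f}%")      # 100%
print(f"{mean_response_ms(checks):.2f}ms")   # 320.27ms
```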
Daily Status Overview
(Interactive chart: hover over a day for details.)
Historical Performance
| Month | Monthly Uptime | Monthly Response Time |
| --- | --- | --- |
| Jan-2026 | 99.73% | 193ms |
| Dec-2025 | 100% | 157ms |
| Nov-2025 | 99.42% | 169ms |
| Oct-2025 | 99.87% | 167ms |
| Sep-2025 | 100% | 160ms |
| Aug-2025 | 100% | 158ms |
| Jul-2025 | 99.66% | 149ms |
| Jun-2025 | 100% | 174ms |
| May-2025 | 99.63% | 184ms |
| Apr-2025 | 97.76% | 209ms |

A daily status breakdown is available for each month.
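
To put the monthly percentages in perspective, an uptime figure converts to implied downtime as (100 - uptime)% of the month's minutes. A small sketch of that arithmetic (the function name is ours, and it assumes outage time is spread across the month rather than tied to specific incidents):

```python
def downtime_minutes(uptime_pct: float, days_in_month: int) -> float:
    """Approximate downtime implied by a monthly uptime percentage."""
    total_minutes = days_in_month * 24 * 60
    return (100.0 - uptime_pct) / 100.0 * total_minutes

# Jan-2026 at 99.73% uptime over 31 days implies roughly two hours down
print(f"{downtime_minutes(99.73, 31):.0f} minutes")  # -> 121 minutes
```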
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Humanloop (Operational)
  The LLM evals platform for enterprises to ship and scale AI with confidence.
  Humanloop is an enterprise-grade platform that provides tools for LLM evaluation, prompt management, and AI observability, enabling teams to develop, evaluate, and deploy trustworthy AI applications.
  Last checked: 4 hours ago
- Langtail (Operational)
  The low-code platform for testing AI apps.
  Langtail is a comprehensive testing platform that enables teams to test and debug LLM-powered applications with a spreadsheet-like interface, offering security features and integration with major LLM providers.
  Last checked: 4 hours ago
- Conviction (Operational)
  The platform to evaluate and test LLMs.
  Conviction is an AI platform designed for evaluating, testing, and monitoring large language models (LLMs) to help developers build reliable AI applications faster. It focuses on detecting hallucinations, optimizing prompts, and ensuring security.
  Last checked: 5 hours ago
- Literal AI (Issues)
  Ship reliable LLM products.
  Literal AI streamlines the development of LLM applications, offering tools for evaluation, prompt management, logging, monitoring, and more to build production-grade AI products.
  Last checked: 5 hours ago
- LangWatch (Operational)
  Monitor, evaluate, and optimize your LLM performance with one click.
  LangWatch empowers AI teams to ship faster with quality assurance at every step, providing tools to measure, maximize, and collaborate on LLM performance.
  Last checked: 4 hours ago
- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps.
  BenchLLM is a tool for evaluating LLM-powered applications. It lets users build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 4 hours ago