WebLLM Uptime Monitor
High-Performance In-Browser LLM Inference Engine
Last 30 Days Performance
Average Uptime: 100% (based on a 30-day monitoring period)
Average Response Time: 86ms (mean response time across all checks)
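The two headline numbers above follow directly from the raw check results: uptime is the fraction of successful checks, and response time is the mean measured latency. The sketch below is purely illustrative, assuming a hypothetical CheckResult shape; it is not the monitor's actual code.

```typescript
// Illustrative sketch only: how headline metrics like the ones above
// can be derived from raw check results. The CheckResult shape and
// function names are hypothetical, not the monitor's real API.
interface CheckResult {
  up: boolean;       // did the endpoint respond successfully?
  latencyMs: number; // measured response time for this check
}

// Percentage of checks that succeeded.
function averageUptime(checks: CheckResult[]): number {
  const upCount = checks.filter((c) => c.up).length;
  return (upCount / checks.length) * 100;
}

// Mean response time across all checks, in milliseconds.
function averageResponseTime(checks: CheckResult[]): number {
  const total = checks.reduce((sum, c) => sum + c.latencyMs, 0);
  return total / checks.length;
}

// Example: four successful checks averaging 86ms.
const checks: CheckResult[] = [
  { up: true, latencyMs: 80 },
  { up: true, latencyMs: 90 },
  { up: true, latencyMs: 84 },
  { up: true, latencyMs: 90 },
];
console.log(averageUptime(checks));       // 100
console.log(averageResponseTime(checks)); // 86
```

A real monitor would also bucket checks by calendar day and month to produce the per-month figures shown below, but the arithmetic is the same.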
Daily Status Overview
Historical Performance
Month     Monthly Uptime  Monthly Response Time
Dec-2025  100%            105ms
Nov-2025  100%            106ms
Oct-2025  100%            118ms
Sep-2025  100%            118ms
Aug-2025  100%            111ms
Jul-2025  100%            97ms
Jun-2025  100%            98ms
May-2025  100%            95ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- BrowserAI (Operational)
  Run Local LLMs Inside Your Browser
  BrowserAI is an open-source library enabling developers to run local Large Language Models (LLMs) directly within a user's browser, offering a privacy-focused AI solution with zero infrastructure costs.
  Last checked: 1 hour ago

- OneLLM (Issues)
  Fine-tune, evaluate, and deploy your next LLM without code.
  OneLLM is a no-code platform enabling users to fine-tune, evaluate, and deploy Large Language Models (LLMs) efficiently: create datasets, integrate API keys, run fine-tuning jobs, and compare model performance.
  Last checked: 1 hour ago

- LLM API (Operational)
  Access 200+ AI Models with One Unified API
  LLM API provides access to over 200 leading AI models from providers such as OpenAI, Anthropic, Google, and Meta through a single, reliable API.
  Last checked: 2 hours ago

- Avian API (Operational)
  Fastest, production-grade API for Open Source LLMs
  Avian API is an enterprise-grade language model inference platform offering state-of-the-art LLMs with high speed and competitive pricing, powered by Meta's Llama models and NVIDIA H200 SXM hardware.
  Last checked: 1 hour ago

- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 1 hour ago

- WasmEdge (Operational)
  Fast, lightweight, portable, and OpenAI-compatible WebAssembly runtime for edge AI and LLM inference
  WasmEdge is a cloud-native WebAssembly runtime that enables fast, lightweight, and secure AI inference and LLM applications at the edge, with native GPU support and OpenAI compatibility.
  Last checked: 2 hours ago