W&B Weave Uptime Monitor
A Framework for Developing and Deploying LLM-Based Applications
Last 30 Days Performance
Average Uptime: 100% (based on a 30-day monitoring period)
Average Response Time: 188.57ms (mean response time across all checks)
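These two aggregates are easy to derive from raw check results. Below is a minimal sketch of how a monitor like this might compute them, assuming periodic HTTP GET probes; the `check` helper, the probe URL, and the "any 2xx/3xx counts as up" rule are illustrative assumptions, not details published by this monitor.

```python
# Illustrative sketch only -- not this monitor's actual implementation.
import time
import urllib.request

def check(url: str, timeout: float = 10.0) -> tuple[bool, float]:
    """Run one probe: return (is_up, response_time_ms)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Assumption: any 2xx/3xx response counts as "up".
            up = 200 <= resp.status < 400
    except Exception:
        # HTTP errors, timeouts, and connection failures count as "down".
        up = False
    return up, (time.perf_counter() - start) * 1000.0

def summarize(results: list[tuple[bool, float]]) -> tuple[float, float]:
    """Aggregate probes into the two dashboard figures above."""
    uptime_pct = 100.0 * sum(up for up, _ in results) / len(results)
    mean_ms = sum(ms for _, ms in results) / len(results)
    return uptime_pct, mean_ms

if __name__ == "__main__":
    # Hypothetical target URL; the real probe endpoint is not published.
    samples = [check("https://wandb.ai") for _ in range(5)]
    print(summarize(samples))
```

Note one design choice here: failed checks still contribute a latency sample (the time until the error or timeout), which matches the dashboard's "mean response time across all checks" wording; a monitor could instead average over successful checks only.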
Daily Status Overview
[Interactive chart: hover over a day for details on the live page]

Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   99.64%           132ms
Nov-2025   99.1%            105ms
Oct-2025   99.63%           110ms
Sep-2025   100%             108ms
Aug-2025   99.85%           136ms
Jul-2025   100%             115ms
Jun-2025   100%             130ms
May-2025   99.93%           133ms
Apr-2025   100%             239ms
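To put those percentages in wall-clock terms: a monthly uptime figure maps directly to accumulated downtime, assuming checks are spread evenly across the month. A quick calculation using the table's own values:

```python
def downtime_minutes(uptime_pct: float, days: int) -> float:
    """Downtime implied by a monthly uptime percentage."""
    return (1 - uptime_pct / 100) * days * 24 * 60

print(downtime_minutes(99.64, 31))  # Dec-2025: ~160.7 min (~2.7 hours)
print(downtime_minutes(99.10, 30))  # Nov-2025: ~388.8 min (~6.5 hours)
```

So even a month that stays above 99% uptime (Nov-2025) corresponds to roughly six and a half hours of accumulated downtime.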
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Weavel (Operational): Automate Prompt Engineering 50x Faster
  Weavel optimizes prompts for LLM applications, achieving significantly higher performance than manual methods. Streamline your workflow and enhance your AI's accuracy with just a few lines of code.
  Last checked: 10 hours ago

- PromptMage (Operational): A Python framework for simplified LLM-based application development
  PromptMage is a Python framework that streamlines the development of complex, multi-step applications powered by Large Language Models (LLMs), offering version control, testing capabilities, and automated API generation.
  Last checked: 5 hours ago

- Braintrust (Operational): The end-to-end platform for building world-class AI apps.
  Braintrust provides an end-to-end platform for developing, evaluating, and monitoring Large Language Model (LLM) applications. It helps teams build robust AI products through iterative workflows and real-time analysis.
  Last checked: 6 minutes ago

- BenchLLM (Operational): The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 5 hours ago

- ModelBench (Operational): No-Code LLM Evaluations
  ModelBench enables teams to rapidly deploy AI solutions with no-code LLM evaluations. It allows users to compare over 180 models, design and benchmark prompts, and trace LLM runs, accelerating AI development.
  Last checked: 5 hours ago

- Weave (Issues): Plug & Play AI
  Weave enables users to leverage generative AI without coding. Select a template, personalize it, and automate your workflow.
  Last checked: 5 hours ago