Rig Uptime Monitor
Build Modular and Scalable LLM Applications in Rust
Last 30 Days Performance
Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 120.27 ms (mean across all checks)
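The two headline figures are simple aggregates over individual health checks. A minimal sketch of how such averages can be computed, using hypothetical per-check samples chosen to reproduce the numbers above (this is illustrative, not the monitor's actual implementation):

```rust
fn main() {
    // Hypothetical check results: (check succeeded, response time in ms).
    let checks: Vec<(bool, f64)> = vec![
        (true, 110.0),
        (true, 125.5),
        (true, 125.31),
    ];

    // Uptime: fraction of checks that succeeded, as a percentage.
    let up = checks.iter().filter(|(ok, _)| *ok).count();
    let uptime_pct = 100.0 * up as f64 / checks.len() as f64;

    // Mean response time across all checks.
    let mean_ms: f64 =
        checks.iter().map(|(_, ms)| ms).sum::<f64>() / checks.len() as f64;

    println!("uptime: {:.2}%, mean response: {:.2} ms", uptime_pct, mean_ms);
}
```

With these three samples the output matches the dashboard: 100.00% uptime and a 120.27 ms mean.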
Daily Status Overview
Historical Performance
Dec-2025: 100% uptime, 229 ms average response time
Nov-2025: 100% uptime, 203 ms average response time
Oct-2025: 100% uptime, 202 ms average response time
Sep-2025: 99.85% uptime, 221 ms average response time
Aug-2025: 99.73% uptime, 243 ms average response time
Jul-2025: 99.87% uptime, 224 ms average response time
Jun-2025: 99.93% uptime, 268 ms average response time
May-2025: 99.93% uptime, 270 ms average response time
Apr-2025: 99.26% uptime, 298 ms average response time
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
LlamaHub (Operational): Kickstart Your RAG Application with Data Loaders and Agent Tools
LlamaHub is a repository providing data loaders, agent tools, and LlamaPacks to quickly build and customize Retrieval-Augmented Generation (RAG) applications using frameworks like LlamaIndex and LangChain.
Last checked: 4 hours ago

Lintrule (Operational): Let the LLM review your code
Lintrule is a command-line tool that uses large language models to perform automated code reviews, enforce coding policies, and detect bugs beyond traditional linting capabilities.
Last checked: 1 hour ago

Lega (Operational): Large Language Model Governance
Lega empowers law firms and enterprises to safely explore, assess, and implement generative AI technologies. It provides enterprise guardrails for secure LLM exploration and a toolset to capture and scale critical learnings.
Last checked: 1 hour ago

PromptMage (Operational): A Python framework for simplified LLM-based application development
PromptMage is a Python framework that streamlines the development of complex, multi-step applications powered by Large Language Models (LLMs), offering version control, testing capabilities, and automated API generation.
Last checked: 1 hour ago

LlamaEdge (Operational): The easiest, smallest, and fastest local LLM runtime and API server
LlamaEdge is a lightweight and fast local LLM runtime and API server, powered by Rust and WasmEdge, designed for creating cross-platform LLM agents and web services.
Last checked: 1 hour ago