Gorilla Uptime Monitor
Large Language Model Connected with Massive APIs
Last 30 Days Performance
Average Uptime
100%
Based on 30-day monitoring period
Average Response Time
186.93ms
Mean response time across all checks
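The two headline figures above are simple aggregates over individual checks. A minimal sketch of how they might be computed, assuming each check records a success flag and a round-trip latency in milliseconds (the `Check` structure and the sample values are hypothetical, not taken from this monitor):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool            # did the endpoint respond successfully?
    latency_ms: float   # round-trip time measured for this check

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime percent, mean response time in ms) for a monitoring window."""
    if not checks:
        return 0.0, 0.0
    uptime = 100.0 * sum(c.ok for c in checks) / len(checks)
    mean_latency = sum(c.latency_ms for c in checks) / len(checks)
    return uptime, mean_latency

# Example: four checks, all successful
checks = [Check(True, 180), Check(True, 190), Check(True, 185), Check(True, 193)]
uptime, latency = summarize(checks)
print(f"{uptime:.0f}% uptime, {latency:.2f}ms average")  # → 100% uptime, 187.00ms average
```

A real monitor would compute these over every check in the 30-day window rather than four samples, but the arithmetic is the same.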
Daily Status Overview
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   100%             321ms
Nov-2025   99.57%           314ms
Oct-2025   100%             252ms
Sep-2025   100%             275ms
Aug-2025   99.58%           247ms
Jul-2025   99.9%            267ms
Jun-2025   100%             214ms
May-2025   99.8%            180ms
Apr-2025   100%             170ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- OpenTools (Operational)
  The API for Enhanced LLM Tool Use
  OpenTools provides a unified API enabling developers to connect Large Language Models (LLMs) with a diverse ecosystem of tools, simplifying integration and management.
  Last checked: 4 hours ago

- LLM Explorer (Operational)
  Discover and Compare Open-Source Language Models
  LLM Explorer is a comprehensive platform for discovering, comparing, and accessing over 46,000 open-source Large Language Models (LLMs) and Small Language Models (SLMs).
  Last checked: 1 hour ago

- Astra Platform (Issues)
  The Universal API for LLM Function Calling
  Astra Platform is a universal API designed to enhance Large Language Models (LLMs) with function calling capabilities, enabling seamless integration with over 2,200 applications.
  Last checked: 4 hours ago

- BenchLLM (Operational)
  The best way to evaluate LLM-powered apps
  BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
  Last checked: 1 hour ago

- Rubra (Issues)
  Open-weight, tool-calling LLMs
  Rubra provides a collection of open-weight large language models (LLMs) enhanced with tool-calling capabilities, ideal for building AI agents.
  Last checked: 1 hour ago