Groq Uptime Monitor
Fast AI Inference for Openly Available Models
Last 30 Days Performance
Average Uptime
99.58%
Based on a 30-day monitoring period
Average Response Time
159.9ms
Mean response time across all checks
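The two summary figures above follow directly from the raw check results: uptime is the fraction of checks that succeeded, and the response-time figure is the mean over successful checks only. A minimal sketch of that arithmetic, using invented check data (the real monitor's data format is not shown here):

```python
# Hypothetical check records for illustration; a failed check contributes
# to the uptime denominator but has no response time.
checks = [
    {"ok": True, "response_ms": 150.0},
    {"ok": True, "response_ms": 170.0},
    {"ok": False, "response_ms": None},
    {"ok": True, "response_ms": 160.0},
]

# Uptime: share of successful checks, as a percentage.
uptime_pct = 100.0 * sum(c["ok"] for c in checks) / len(checks)

# Mean response time, computed over successful checks only.
times = [c["response_ms"] for c in checks if c["ok"]]
avg_response_ms = sum(times) / len(times)

print(f"{uptime_pct:.2f}%")        # 75.00%
print(f"{avg_response_ms:.1f}ms")  # 160.0ms
```

With real data the same two formulas, applied over the last 30 days of checks, would yield figures like the 99.58% and 159.9ms shown above.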
Daily Status Overview
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Dec-2025   99.8%            165ms
Nov-2025   97.98%           168ms
Oct-2025   100%             187ms
Sep-2025   100%             154ms
Aug-2025   99.87%           159ms
Jul-2025   99.6%            145ms
Jun-2025   99.93%           197ms
May-2025   100%             221ms
Apr-2025   100%             242ms
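As a rough sanity check, the nine months of history above can be averaged (unweighted, ignoring differing month lengths), which is why the resulting figures differ slightly from the 30-day summary at the top of the page:

```python
# Monthly figures copied from the history above (Dec-2025 back to Apr-2025).
monthly_uptime_pct = [99.8, 97.98, 100, 100, 99.87, 99.6, 99.93, 100, 100]
monthly_response_ms = [165, 168, 187, 154, 159, 145, 197, 221, 242]

# Simple unweighted means across the nine months.
avg_uptime = sum(monthly_uptime_pct) / len(monthly_uptime_pct)
avg_response = sum(monthly_response_ms) / len(monthly_response_ms)

print(f"{avg_uptime:.2f}%")    # 99.69%
print(f"{avg_response:.1f}ms") # 182.0ms
```

The nine-month mean response time (182ms) is higher than the recent 30-day figure (159.9ms), consistent with the response-time improvement visible from Apr-2025 onward.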
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
chat.groq.com (Operational)
Experience the World's Fastest AI Inference Engine
Groq provides access to its AI chatbot, demonstrating the exceptional speed of its LPU™ inference engine for large language models.
Last checked: 3 hours ago

Grok (Issues)
Bringing Grok to Everyone
Grok is an AI model now available on the 𝕏 platform, offering improved speed, multilingual support, and enhanced capabilities.
Last checked: 8 hours ago

Horay.ai (Operational)
High-Speed AI Model Inference Platform
Horay.ai provides developers with a high-speed API platform for various AI models, including LLMs, image generation, and voice generation, focusing on efficiency and scalability.
Last checked: 3 hours ago

Dialoq AI (Issues)
Run any AI models through one simple unified API
Dialoq AI is a comprehensive API gateway that enables developers to access and integrate 200+ large language models (LLMs) through a single, unified API, streamlining AI application development with enhanced reliability and cost predictability.
Last checked: 2 hours ago

FriendliAI (Operational)
Accelerate Generative AI Inference
FriendliAI provides a high-performance platform for accelerating generative AI inference, enabling fast, cost-effective, and reliable deployment and serving of Large Language Models (LLMs).
Last checked: 8 hours ago

Avian API (Operational)
Fastest, production-grade API for Open Source LLMs
Avian API is an enterprise-grade language model inference platform offering state-of-the-art LLMs with superior speed and competitive pricing, powered by Meta's Llama models and Nvidia H200 SXM technology.
Last checked: 2 hours ago