FriendliAI Uptime Monitor
Accelerate Generative AI Inference
Last 30 Days Performance
Average Uptime
98.61%
Based on a 30-day monitoring period
Average Response Time
760.73ms
Mean response time across all checks
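The two headline metrics are straightforward aggregates over individual health checks. As a hypothetical sketch (field names and the treatment of failed checks are illustrative assumptions, not FriendliAI's or the monitor's actual implementation), they can be computed like this:

```python
# Illustrative sketch of how an uptime monitor might derive its headline
# metrics from raw check results. All names here are assumptions.
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool            # did the endpoint respond successfully?
    response_ms: float  # measured response time in milliseconds

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime_percent, mean_response_ms) over a monitoring window."""
    up = sum(1 for c in checks if c.ok)
    uptime_pct = 100.0 * up / len(checks)
    # "Mean response time across all checks": average over every check,
    # including failures (an assumption; monitors differ on this).
    mean_ms = sum(c.response_ms for c in checks) / len(checks)
    return uptime_pct, mean_ms

# Example: three successful checks and one failure
checks = [Check(True, 500.0), Check(True, 700.0),
          Check(True, 900.0), Check(False, 0.0)]
print(summarize(checks))  # → (75.0, 525.0)
```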
Daily Status Overview
Historical Performance
Month     | Monthly Uptime | Monthly Response Time
Dec-2025  | 99.82%         | 560ms
Nov-2025  | 99.86%         | 169ms
Oct-2025  | 99.73%         | 436ms
Sep-2025  | 100%           | 216ms
Aug-2025  | 99.72%         | 176ms
Jul-2025  | 99.87%         | 173ms
Jun-2025  | 100%           | 215ms
May-2025  | 99.81%         | 186ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
klu.ai (Operational)
Next-gen LLM App Platform for Confident AI Development
Klu is an all-in-one LLM App Platform that enables teams to experiment with, version, and fine-tune GPT-4 apps using collaborative prompt engineering and comprehensive evaluation tools.
Last checked: 3 hours ago
Neutrino AI (Operational)
Multi-model AI Infrastructure for Optimal LLM Performance
Neutrino AI provides multi-model AI infrastructure to optimize Large Language Model (LLM) performance for applications. It offers tools for evaluation, intelligent routing, and observability to enhance quality, manage costs, and ensure scalability.
Last checked: 4 hours ago
CentML (Issues)
Better, Faster, Easier AI
CentML streamlines LLM deployment, offering advanced system optimization and efficient hardware utilization. It provides single-click resource sizing, model serving, and support for diverse hardware and models.
Last checked: 3 hours ago
WebLLM (Operational)
High-Performance In-Browser LLM Inference Engine
WebLLM enables running large language models (LLMs) directly in a web browser using WebGPU for hardware acceleration, reducing server costs and enhancing privacy.
Last checked: 4 hours ago
Axolotl AI (Operational)
We make fine-tuning accessible, scalable, fun
Axolotl AI is a free, open-source tool designed to make fine-tuning Large Language Models (LLMs) faster, more accessible, and scalable across various AI models and platforms.
Last checked: 4 hours ago
Featherless.ai (Operational)
Instant, unlimited hosting for any Llama model on HuggingFace
Featherless.ai offers serverless AI inference hosting, providing API access to a vast library of open-weight models from HuggingFace without requiring server management.
Last checked: 4 hours ago