LlamaChat Uptime Monitor
Chat with LLaMA, Alpaca, and GPT4All models locally on your Mac
Last 30 Days Performance
Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 151.53 ms (mean response time across all checks)
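The two summary figures above are straightforward aggregates over individual check results. The sketch below shows one plausible way to compute them; the `summarize` helper and the sample data are hypothetical, not the monitor's actual implementation.

```python
def summarize(checks):
    """Compute uptime percentage and mean response time.

    checks: list of (ok, response_ms) tuples, one per health check.
    Returns (uptime_percent, mean_response_ms).
    """
    up = sum(1 for ok, _ in checks if ok)
    uptime_pct = 100.0 * up / len(checks)
    mean_ms = sum(ms for _, ms in checks) / len(checks)
    return uptime_pct, mean_ms

# Hypothetical sample: four successful checks.
checks = [(True, 140.0), (True, 155.0), (True, 160.0), (True, 150.0)]
uptime, mean_ms = summarize(checks)
print(f"Uptime: {uptime:.0f}%  Mean response: {mean_ms:.2f} ms")
# → Uptime: 100%  Mean response: 151.25 ms
```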
Daily Status Overview
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Bodhi (Operational)
  Run LLMs locally, powered by Open Source
  Bodhi is a free, privacy-focused application allowing users to run Large Language Models (LLMs) locally on their macOS devices without technical setup.
  Last checked: 3 hours ago · View Status
- Privacy AI (Operational)
  Offline AI, Ultimate Privacy
  Privacy AI is a local AI chatbot hub that runs entirely on-device for iOS and macOS, offering access to open-source language models while ensuring complete data privacy and offline functionality.
  Last checked: 3 hours ago · View Status
- ChatPDFLocal (Operational)
  Chat locally and privately with PDFs using top AI models on your Mac.
  ChatPDFLocal is a native macOS application enabling private, local chat interactions with PDF documents using leading AI models like GPT, Claude, Llama, and Gemini. It ensures data security by storing everything locally.
  Last checked: 3 hours ago · View Status
- Fullmoon (Operational)
  A billion parameters in your pocket: chat with private and local large language models
  Fullmoon is an open-source app that enables users to run local large language models directly on Apple devices, offering completely offline functionality and performance optimized for Apple silicon.
  Last checked: 3 hours ago · View Status
- Ollama (Operational)
  Get up and running with large language models locally
  Ollama is a platform that enables users to run powerful language models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, and Gemma 2 on their local machines.
  Last checked: 16 hours ago · View Status
- LM Studio (Operational)
  Discover, download, and run local LLMs on your computer
  LM Studio is a desktop application that allows users to run Large Language Models (LLMs) locally and offline, supporting various architectures including Llama, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.
  Last checked: 3 hours ago · View Status