What is Fullmoon?
Fullmoon is an app for private AI computing that lets users run large language models directly on their Apple devices. It is optimized for Apple silicon and uses Metal 3 graphics to execute models efficiently.
Built on MLX Swift, the Swift API for Apple's MLX array framework, Fullmoon supports iOS, iPadOS, macOS, and visionOS. The app ships with two pre-configured models, Llama-3.2-1B-Instruct-4bit (0.7 GB) and Llama-3.2-3B-Instruct-4bit (1.8 GB), both quantized to 4-bit precision for efficient on-device inference.
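The quoted download sizes follow roughly from the models' parameter counts: at 4-bit quantization each weight occupies half a byte, plus some overhead for metadata and layers kept at higher precision. A back-of-envelope sketch (the parameter counts are the published Llama 3.2 figures; the ~10% overhead factor is an assumption for illustration):

```python
def approx_size_gb(params_billions, bits_per_weight=4, overhead=1.1):
    """Approximate on-disk size in GB for a quantized model.

    The overhead factor (quantization scales, norms, metadata)
    is an assumed ~10%, not a published figure.
    """
    bytes_total = params_billions * 1e9 * (bits_per_weight / 8) * overhead
    return bytes_total / 1e9

print(f"1B model: ~{approx_size_gb(1.24):.1f} GB")  # close to the listed 0.7 GB
print(f"3B model: ~{approx_size_gb(3.21):.1f} GB")  # close to the listed 1.8 GB
```

This also shows why 4-bit quantization matters on mobile hardware: the same models stored at 16-bit precision would be roughly four times larger.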
Features
- Offline Operation: Works completely without an internet connection
- Local Processing: Runs models directly on device for privacy
- Apple Silicon Optimization: Specially optimized for Apple's custom chips
- Cross-Platform Support: Works on iOS, iPadOS, macOS, and visionOS
- Customization Options: Adjustable themes, fonts, and system prompts
- Shortcut Integration: Compatible with Apple's Shortcuts app
Use Cases
- Private AI conversations without internet connectivity
- Secure local language processing
- Integration with Apple device automation
- Offline language model deployment
- Personal AI assistant with privacy focus
FAQs
- What are the minimum system requirements for Fullmoon?
  Fullmoon requires an Apple device with an Apple silicon chip and Metal 3 graphics support.
- How much storage space do the models require?
  The Llama-3.2-1B-Instruct-4bit model requires 0.7 GB, while the Llama-3.2-3B-Instruct-4bit model needs 1.8 GB of storage space.
- Can I use Fullmoon without an internet connection?
  Yes, Fullmoon works completely offline because it runs the language models locally on your device.