
ONNX Runtime

Production-grade AI engine for accelerated training and inferencing.

What is ONNX Runtime?

ONNX Runtime is a production-grade engine developed to accelerate machine learning model training and inferencing within existing technology stacks. It provides broad language compatibility, with APIs for Python, C#, C++, Java, JavaScript, and Rust, and runs across operating systems including Linux, Windows, macOS, iOS, and Android, as well as inside web browsers. It powers major Microsoft products such as Windows, Office, Azure Cognitive Services, and Bing.

The platform offers robust capabilities for both AI inferencing and training. ONNX Runtime Inferencing enables AI deployment in the cloud, on edge devices, in mobile applications (Android and iOS via ONNX Runtime Mobile), and in web browsers (via ONNX Runtime Web). For training, it accelerates large models, including those from Hugging Face and Azure AI Studio, and supports on-device training for personalized user experiences. It also powers Generative AI and Large Language Model (LLM) workloads such as image synthesis and text generation, optimizing for latency, throughput, memory utilization, and binary size across CPU, GPU, and NPU hardware.

Features

  • Cross-Platform Compatibility: Runs on Linux, Windows, Mac, iOS, Android, and web browsers.
  • Multi-Language Support: Compatible with Python, C#, C++, Java, JavaScript, Rust, and more.
  • Performance Optimization: Optimizes latency, throughput, memory utilization, and binary size across CPU, GPU, and NPU.
  • Generative AI Integration: Supports Generative AI and Large Language Models (LLMs) for image synthesis and text generation.
  • Accelerated Model Training: Speeds up training for large models, including Hugging Face models.
  • On-Device Training: Enables training models locally on user devices for personalization.
  • Flexible Inferencing: Deploys models for inference on cloud, edge, mobile, and web platforms.

Use Cases

  • Accelerating ML model inference in production environments.
  • Speeding up the training process for large language models.
  • Deploying AI models on edge devices with limited resources.
  • Running machine learning models directly within web browsers.
  • Integrating Generative AI features into applications across different platforms.
  • Enabling personalized AI experiences through on-device training.
  • Optimizing ML model performance across diverse hardware (CPU, GPU, NPU).
