
Apache TVM

Apache TVM is an open-source compiler framework for deep learning that provides performance portability across diverse hardware backends, including CPUs, GPUs, FPGAs, and specialized accelerators from vendors such as ARM, NVIDIA, AMD, and Qualcomm. It automatically optimizes deep learning models from frameworks like TensorFlow, PyTorch, ONNX, MXNet, and Keras for deployment on edge and cloud targets. TVM is an Apache Software Foundation top-level project.

Tags: AI Compiler, Deep Learning, Edge Computing, Model Optimization, Open Source

APIs

Apache TVM Python API

The TVM Python API provides a comprehensive interface for model compilation, optimization, and deployment. Key modules include tvm.relay for defining and optimizing computational graphs.
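As a minimal sketch of the Python API described above, the following compiles a tiny hand-written Relay graph for the host CPU and runs it with the graph executor. The softmax model and input name are illustrative assumptions, not part of the listing.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Define a tiny computation graph in Relay (hypothetical example model).
x = relay.var("x", shape=(1, 3), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.softmax(x)))

# Compile for the local CPU with standard optimizations enabled.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Execute the compiled module via the graph executor.
dev = tvm.cpu(0)
gmod = graph_executor.GraphModule(lib["default"](dev))
gmod.set_input("x", np.random.rand(1, 3).astype("float32"))
gmod.run()
out = gmod.get_output(0).numpy()
```

The same `relay.build` call retargets the model by changing the `target` string (for example, `"cuda"` for NVIDIA GPUs).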

Apache TVM RPC API

The TVM RPC (Remote Procedure Call) system enables remote compilation, deployment, and profiling of optimized models on target devices. It provides server/client APIs for uploading compiled modules to remote devices, executing them, and collecting timing results.
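A hedged sketch of the RPC workflow above: cross-compile a module on the host, then upload and run it on a device that is already running a TVM RPC server. The device address, port, and ARM target triple are assumptions for illustration.

```python
import numpy as np
import tvm
from tvm import relay, rpc
from tvm.contrib import graph_executor, utils

# A trivial Relay module to deploy (hypothetical example model).
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Cross-compile for a 64-bit ARM Linux device (target string is an assumption).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -mtriple=aarch64-linux-gnu")

# Export the compiled library, then upload and load it over RPC.
tmp = utils.tempdir()
path = tmp.relpath("net.tar")
lib.export_library(path)

remote = rpc.connect("192.168.1.42", 9090)  # hypothetical device address/port
remote.upload(path)
rlib = remote.load_module("net.tar")

# Run on the remote device's CPU.
dev = remote.cpu(0)
gmod = graph_executor.GraphModule(rlib["default"](dev))
gmod.set_input("x", np.random.rand(1, 8).astype("float32"))
gmod.run()
```

For profiling, the remote module's `time_evaluator` can be used to measure kernel latency on the device itself.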

Features

Multi-Framework Support

Import models from TensorFlow, PyTorch, ONNX, MXNet, Keras, and other frameworks.

Hardware-Specific Optimization

Automatic operator scheduling and kernel fusion for CPUs, GPUs, and custom accelerators.

Auto-Tuning

AutoTVM and AutoScheduler for automated hyperparameter optimization of compute kernels.
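To make the auto-tuning feature concrete, here is a minimal AutoScheduler sketch that searches for a matmul schedule on the local CPU. The workload, trial count, and log filename are assumptions; real tuning runs use far more trials.

```python
import tvm
from tvm import te, auto_scheduler

# Register a matmul workload for the auto-scheduler to tune.
@auto_scheduler.register_workload
def matmul(N, M, K):
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M),
                   lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
                   name="C")
    return [A, B, C]

target = tvm.target.Target("llvm")
task = auto_scheduler.SearchTask(func=matmul, args=(128, 128, 128), target=target)

# Run a short search and record results to a log file (small trial count for brevity).
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=10,
    measure_callbacks=[auto_scheduler.RecordToFile("matmul_tuning.json")],
)
task.tune(tune_option)

# Apply the best schedule found in the log.
sch, args = task.apply_best("matmul_tuning.json")
func = tvm.build(sch, args, target)
```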

MicroTVM

Deploy optimized models on microcontrollers and bare-metal devices without an OS.

BYOC Framework

Bring Your Own Codegen framework for integrating custom hardware accelerators.

Relay IR

High-level intermediate representation for end-to-end model optimization.

Use Cases

Edge AI Deployment

Deploy optimized deep learning models on edge devices and microcontrollers.

Model Serving Optimization

Optimize inference performance for cloud GPU/CPU model serving.

Cross-Platform Deployment

Compile a single model for multiple hardware targets from one codebase.

Custom Accelerator Integration

Integrate custom AI accelerators using TVM's BYOC framework.

Integrations

ONNX

Import and optimize ONNX models from any ONNX-compatible ML framework.
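A brief sketch of the ONNX import path: load a serialized model, convert it to Relay, and compile. The filename, input name, and shape are placeholders, not values from the listing.

```python
import onnx
import tvm
from tvm import relay

# Load a serialized ONNX model (hypothetical file and input signature).
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Convert to a Relay module plus weight parameters, then compile.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```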

PyTorch

TorchScript to TVM compilation for PyTorch model optimization.
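The TorchScript route mentioned above can be sketched as follows: trace a PyTorch model to TorchScript, then convert it with the Relay PyTorch frontend. The ResNet-18 model and input shape are illustrative assumptions.

```python
import torch
import torchvision
import tvm
from tvm import relay

# Trace a PyTorch model to TorchScript (example model; any traceable module works).
model = torchvision.models.resnet18(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)
scripted_model = torch.jit.trace(model, example_input)

# Convert the traced graph to Relay; shape_list pairs input names with shapes.
shape_list = [("input0", (1, 3, 224, 224))]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```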

TensorFlow

TensorFlow and TFLite model import and optimization.

NVIDIA CUDA

CUDA/cuDNN backend for NVIDIA GPU kernel generation and optimization.

ARM

ARM CPU (Cortex-A, Cortex-M) and ARM Mali GPU backend support.

Resources

- GitHub Repository
- Documentation
- Portal
- Getting Started
- Release Notes
- Support
- Terms of Service