Overview of Qualcomm® AI Hub
Examples
- Compiling Models
  - Compiling PyTorch to TensorFlow Lite
  - Compiling a PyTorch model to a QNN Model Library
  - Compiling a PyTorch model to a QNN Context Binary
  - Compiling Precompiled QNN ONNX
  - Compiling a PyTorch model for the ONNX Runtime
  - Compiling ONNX models to TensorFlow Lite or QNN
  - Compiling models quantized with AIMET to TensorFlow Lite or QNN
- Profiling Models
- Running Inference
- Quantization (Beta)
- Linking
- Devices
- Working with jobs
- Command Line Interface
- Deployment
API Documentation
- API documentation
  - Core API
  - Managed objects
  - Exceptions
  - Common Options
    - Compile Options
    - Quantize Options
    - Link Options
    - Profile & Inference Options
      - Profile Options
      - ONNX Runtime Options
        - ONNX Runtime QNN Execution Provider options
        - ONNX Runtime DirectML Execution Provider options
      - TensorFlow Lite Options
        - TensorFlow Lite Delegate Options for Qualcomm® AI Engine Direct
        - TensorFlow Lite Delegate Options for GPUv2
        - TensorFlow Lite Delegate Options for NNAPI
      - Qualcomm® AI Engine Direct Options
Release Notes
- Release Notes
  - Released November 25, 2024
  - Released November 11, 2024
  - Released October 28, 2024
  - Released October 14, 2024
  - Released October 7, 2024
  - Released September 23, 2024
  - Released September 11, 2024
  - Released August 26, 2024
  - Released August 12, 2024
  - Released July 29, 2024
  - Released July 15, 2024
  - Released July 1, 2024
  - Released June 17, 2024
  - Released June 4, 2024
  - Released May 17, 2024
  - Released May 6, 2024
  - Released April 22, 2024
  - Released April 8, 2024
  - Released March 25, 2024
  - Released March 11, 2024
  - Released February 28, 2024