Overview of Qualcomm® AI Hub
Examples
- Compiling Models
- Compiling a PyTorch Model to TensorFlow Lite
- Compiling a PyTorch Model to a QNN Model Library
- Compiling a PyTorch Model to a QNN Context Binary
- Compiling a Precompiled QNN ONNX Model
- Compiling a PyTorch Model for ONNX Runtime
- Compiling ONNX Models to TensorFlow Lite or QNN
- Compiling AIMET-Quantized Models to TensorFlow Lite or QNN
- Profiling Models
- Running Inference
- Quantization (Beta)
- Linking
- Devices
- Working with Jobs
- Command Line Interface
- Deployment
API Documentation
- Core API
- Managed Objects
- Exceptions
- Common Options
- Compile Options
- Quantize Options
- Link Options
- Profile & Inference Options
- Profile Options
- ONNX Runtime Options
- ONNX Runtime QNN Execution Provider Options
- ONNX Runtime DirectML Execution Provider Options
- TensorFlow Lite Options
- TensorFlow Lite Delegate Options for Qualcomm® AI Engine Direct
- TensorFlow Lite Delegate Options for GPUv2
- TensorFlow Lite Delegate Options for NNAPI
- Qualcomm® AI Engine Direct Options
Release Notes
- Released October 28
- Released October 14
- Released October 7
- Released September 23
- Released September 11
- Released August 26
- Released August 12
- Released July 29
- Released July 15
- Released July 1
- Released June 17
- Released June 4
- Released May 17
- Released May 6
- Released April 22
- Released April 8
- Released March 25
- Released March 11
- Released February 28