Overview of Qualcomm® AI Hub
Examples
- Compiling Models (see the sketch after this list)
  - Compiling a PyTorch model to TensorFlow Lite
  - Compiling a PyTorch model to a QNN Model Library
  - Compiling a PyTorch model to a QNN Context Binary
  - Compiling a precompiled QNN ONNX model
  - Compiling a PyTorch model for ONNX Runtime
  - Compiling ONNX models to TensorFlow Lite or QNN
  - Compiling models quantized with AIMET to TensorFlow Lite or QNN
- Profiling Models
- Running Inference
- Devices
- Working with Jobs
- Command Line Interface
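
The example topics above follow a single workflow: trace or upload a model, compile it for a target device, then profile it and run inference on real hardware. A minimal sketch of that flow with the qai-hub Python client follows; the device name, input shapes, and the `--target_runtime tflite` option string are illustrative assumptions, so check `hub.get_devices()` and the Compile Options page for what applies to your account.

```python
import numpy as np
import torch
import torchvision

import qai_hub as hub

# Trace a pretrained PyTorch model so it can be uploaded.
torch_model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)

# Assumed device name; list real options with hub.get_devices().
device = hub.Device("Samsung Galaxy S24 (Family)")

# Compile the traced model to TensorFlow Lite.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),
    options="--target_runtime tflite",
)

# Profile the compiled model on a physical device in the cloud.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)

# Run inference on the same device with sample input data.
inference_job = hub.submit_inference_job(
    model=compile_job.get_target_model(),
    device=device,
    inputs=dict(image=[np.random.rand(1, 3, 224, 224).astype(np.float32)]),
)
output = inference_job.download_output_data()
```

Each submission returns a job object; jobs run asynchronously, and accessors such as `get_target_model()` and `download_output_data()` block until the corresponding job completes.
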
API Documentation
- Core API
- Managed Objects
- Exceptions
- Common Options (see the sketch after this list)
  - Compile Options
  - Profile & Inference Options
    - Profile Options
    - ONNX Runtime Options
      - ONNX Runtime QNN Execution Provider Options
      - ONNX Runtime DirectML Execution Provider Options
    - TensorFlow Lite Options
      - TensorFlow Lite Delegate Options for Qualcomm® AI Engine Direct
      - TensorFlow Lite Delegate Options for GPUv2
      - TensorFlow Lite Delegate Options for NNAPI
    - Qualcomm® AI Engine Direct Options
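
The option pages above share one convention: options are passed to the job-submission calls as a single string of command-line-style flags. Below is a minimal sketch, assuming the qai-hub client, a placeholder model ID, and `--compute_unit npu` as an illustrative profile option; the authoritative flag lists live on the Compile Options and Profile Options pages.

```python
import qai_hub as hub

# Placeholder ID; replace with one of your own compiled models.
model = hub.get_model("mq3k81zo2")

# Pick any device visible to your account.
device = hub.get_devices()[0]

# Runtime-specific flags (ONNX Runtime execution providers,
# TensorFlow Lite delegates, Qualcomm AI Engine Direct) go in
# the same options string as the common flags.
profile_job = hub.submit_profile_job(
    model=model,
    device=device,
    options="--compute_unit npu",
)
print(profile_job)
```
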