qai_hub.submit_compile_job

submit_compile_job(model, device, name=None, input_specs=None, options='', single_compile=True, calibration_data=None, retry=True)

Submits a compile job.

Parameters:
  • model (Union[Model, TopLevelTracedModule, MLModel, ModelProto, bytes, str, Path]) – Model to compile. The model must be a PyTorch model or an ONNX model.

  • device (Union[Device, List[Device]]) – Devices for which to compile the input model.

  • name (Optional[str]) – Optional name for the job. Job names need not be unique.

  • input_specs (Optional[Mapping[str, Union[Tuple[int, ...], Tuple[Tuple[int, ...], str]]]]) –

    Required if model is a PyTorch model. Keys of the mapping (dicts are ordered in Python 3.7+) define the input names for the target model (e.g., a Core ML model) created from this compile job, and may differ from the input names in the PyTorch model.

    An input shape can either be a Tuple[int, ...], e.g. (1, 2, 3), or a Tuple[Tuple[int, ...], str], e.g. ((1, 2, 3), "int32"). The latter form specifies the type of the input. If a type is not specified, it defaults to "float32". Currently, only "float32", "int8", "int16", "int32", "uint8", and "uint16" are accepted types.

    For example, a PyTorch module with forward(self, x, y) may have input_specs=dict(a=(1, 2), b=(1, 3)). When using the resulting target model (e.g., a Core ML model) from this compile job, the inputs must have keys a and b, not x and y. Similarly, if this target model is used in an inference job (see qai_hub.submit_inference_job()), the dataset must have entries a, b in this order, not x, y.

    If model is an ONNX model, input_specs is optional. It can be used to override the model's input names and the dynamic extents of the input shapes. If input_specs is not None, it must be compatible with the model, or the server will return an error.

  • options (str) – CLI-like flag options. See Compile Options.

  • single_compile (bool) – If True, create a single compile job targeting a model compatible with all the given devices. If False, create a separate job for each device.

  • calibration_data (Union[Dataset, Mapping[str, List[ndarray]], str, None]) – Data, Dataset, or Dataset ID to use for post-training quantization. PTQ will be applied to the model during translation.

  • retry (bool) – If job creation fails due to rate-limiting, keep retrying periodically until creation succeeds.
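
For illustration, the snippet below builds a typed input_specs mapping and a matching calibration_data dict. The input names, shapes, and random arrays are made up for this sketch; real calibration samples should be representative of production inputs.

```python
import numpy as np

# Typed input specs: a bare shape tuple defaults to "float32";
# a (shape, dtype) pair pins the input type explicitly.
input_specs = {
    "image": (1, 3, 224, 224),          # float32 by default
    "mask": ((1, 224, 224), "uint8"),   # explicit dtype
}

# Calibration data for post-training quantization: one list of arrays
# per input, keyed by the same names as input_specs.
calibration_data = {
    "image": [np.random.rand(1, 3, 224, 224).astype(np.float32)],
    "mask": [np.random.randint(0, 2, size=(1, 224, 224)).astype(np.uint8)],
}

# Both would then be passed to the call, e.g.:
# job = hub.submit_compile_job(model,
#                              device=hub.Device("Samsung Galaxy S23"),
#                              input_specs=input_specs,
#                              calibration_data=calibration_data)
```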

Returns:

job – The resulting compile job(s): always a single job if single_compile is True, and possibly multiple jobs if it is False.

Return type:

CompileJob | List[CompileJob]
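
Because the return type depends on single_compile, callers that pass single_compile=False may want to normalize the result before iterating. A minimal sketch (as_job_list is a local helper, not part of the qai_hub API):

```python
def as_job_list(result):
    """Normalize CompileJob | List[CompileJob] into a list of jobs."""
    return result if isinstance(result, list) else [result]

# Usage (commented out, since it requires a configured API token):
# jobs = as_job_list(hub.submit_compile_job(model, device=devices,
#                                           single_compile=False))
# for job in jobs:
#     job.wait()
```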

Examples

Submit a traced Torch model for compilation on a Samsung Galaxy S23:

import qai_hub as hub
import torch

# Load a traced TorchScript module from disk.
pt_model = torch.jit.load("mobilenet.pt")

input_specs = (1, 3, 224, 224)

# Upload the model once; the returned handle can be reused across jobs.
model = hub.upload_model(pt_model)

job = hub.submit_compile_job(model, device=hub.Device("Samsung Galaxy S23"),
                             name="mobilenet (1, 3, 224, 224)",
                             input_specs=dict(x=input_specs))
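
The input_specs keys (x above) become the target model's input names. The sketch below illustrates the renaming rule described under input_specs; the names a and b and their shapes are illustrative only.

```python
import numpy as np

# A PyTorch module with forward(self, x, y) can be given renamed inputs:
input_specs = dict(a=(1, 2), b=(1, 3))

# A dataset for a later inference job must use the renamed keys,
# in the same order as input_specs:
dataset = {
    "a": [np.zeros((1, 2), dtype=np.float32)],
    "b": [np.zeros((1, 3), dtype=np.float32)],
}
assert list(dataset) == list(input_specs)  # same names, same order
```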

For more examples, see Compiling Models.