apis

apis/tensorrt

mmdeploy.apis.tensorrt.from_onnx(onnx_model: Union[str, onnx.onnx_ml_pb2.ModelProto], output_file_prefix: str, input_shapes: Dict[str, Sequence[int]], max_workspace_size: int = 0, fp16_mode: bool = False, int8_mode: bool = False, int8_param: Optional[dict] = None, device_id: int = 0, log_level: tensorrt.Logger.Severity = tensorrt.Logger.ERROR, **kwargs) → tensorrt.ICudaEngine [source]

Create a TensorRT engine from ONNX.

Parameters
  • onnx_model (str or onnx.ModelProto) -- Input onnx model to convert from.

  • output_file_prefix (str) -- The path prefix of the output TensorRT engine file.

  • input_shapes (Dict[str, Sequence[int]]) -- The min/opt/max shape of each input.

  • max_workspace_size (int) -- The maximum workspace size of the TensorRT engine; some tactics and layers need a large workspace. Defaults to 0.

  • fp16_mode (bool) -- Whether to enable fp16 mode. Defaults to False.

  • int8_mode (bool) -- Whether to enable int8 mode. Defaults to False.

  • int8_param (dict) -- A dict of parameters for int8 mode. Defaults to None.

  • device_id (int) -- The device on which to create the engine. Defaults to 0.

  • log_level (trt.Logger.Severity) -- The log level of TensorRT. Defaults to trt.Logger.ERROR.

Returns

The TensorRT engine created from onnx_model.

Return type

tensorrt.ICudaEngine

Examples

>>> import tensorrt as trt
>>> from mmdeploy.apis.tensorrt import from_onnx
>>> engine = from_onnx(
...     'onnx_model.onnx',
...     'work_dir/end2end',
...     {'input': {'min_shape': [1, 3, 160, 160],
...                'opt_shape': [1, 3, 320, 320],
...                'max_shape': [1, 3, 640, 640]}},
...     log_level=trt.Logger.WARNING,
...     fp16_mode=True,
...     max_workspace_size=1 << 30,
...     device_id=0)
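
The returned engine can be serialized to disk with the save helper documented below and restored later with load.
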
mmdeploy.apis.tensorrt.is_available() [source]

Check whether the TensorRT package is installed and CUDA is available.

Returns

True if the TensorRT package is installed and CUDA is available.

Return type

bool

mmdeploy.apis.tensorrt.is_custom_ops_available() [source]

Check whether TensorRT custom ops are installed.

Returns

True if TensorRT custom ops are compiled.

Return type

bool
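
Together with is_available(), this check can gate conversion code before it touches TensorRT; a minimal sketch using only the helpers documented in this section:

>>> from mmdeploy.apis.tensorrt import is_available, is_custom_ops_available
>>> assert is_available(), 'TensorRT or CUDA is unavailable'
>>> assert is_custom_ops_available(), 'custom ops are not compiled'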

mmdeploy.apis.tensorrt.load(path: str) → tensorrt.ICudaEngine [source]

Deserialize TensorRT engine from disk.

Parameters

path (str) -- The disk path to read the engine from.

Returns

The TensorRT engine loaded from disk.

Return type

tensorrt.ICudaEngine

mmdeploy.apis.tensorrt.onnx2tensorrt(work_dir: str, save_file: str, model_id: int, deploy_cfg: Union[str, mmcv.utils.config.Config], onnx_model: Union[str, onnx.onnx_ml_pb2.ModelProto], device: str = 'cuda:0', partition_type: str = 'end2end', **kwargs) [source]

Convert ONNX to TensorRT.

Examples

>>> from mmdeploy.backend.tensorrt.onnx2tensorrt import onnx2tensorrt
>>> work_dir = 'work_dir'
>>> save_file = 'end2end.engine'
>>> model_id = 0
>>> deploy_cfg = ('configs/mmdet/detection/'
...               'detection_tensorrt_dynamic-320x320-1344x1344.py')
>>> onnx_model = 'work_dir/end2end.onnx'
>>> onnx2tensorrt(work_dir, save_file, model_id, deploy_cfg,
...               onnx_model, 'cuda:0')

Parameters
  • work_dir (str) -- A working directory.

  • save_file (str) -- The base name of the file to save the TensorRT engine, e.g. end2end.engine.

  • model_id (int) -- Index of the input model.

  • deploy_cfg (str | mmcv.Config) -- Deployment config.

  • onnx_model (str | onnx.ModelProto) -- Input onnx model.

  • device (str) -- A string specifying the cuda device. Defaults to 'cuda:0'.

  • partition_type (str) -- The partition type of the model. Defaults to 'end2end'.

mmdeploy.apis.tensorrt.save(engine: tensorrt.ICudaEngine, path: str) → None [source]

Serialize TensorRT engine to disk.

Parameters
  • engine (tensorrt.ICudaEngine) -- TensorRT engine to be serialized.

  • path (str) -- The absolute disk path to write the engine to.
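
Since save() and load() are inverses, an engine built with from_onnx can be cached on disk and restored later. A minimal round-trip sketch, where the path is a placeholder:

>>> from mmdeploy.apis.tensorrt import load, save
>>> save(engine, '/tmp/end2end.engine')   # engine is a tensorrt.ICudaEngine
>>> engine = load('/tmp/end2end.engine')  # deserialize it back from disk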

apis/onnxruntime

mmdeploy.apis.onnxruntime.is_available() [source]

Check whether the ONNX Runtime package is installed.

Returns

True if the ONNX Runtime package is installed.

Return type

bool

mmdeploy.apis.onnxruntime.is_custom_ops_available() [source]

Check whether ONNX Runtime custom ops are installed.

Returns

True if ONNX Runtime custom ops are compiled.

Return type

bool
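
A minimal guard built from these two checks; the error messages are illustrative:

>>> from mmdeploy.apis.onnxruntime import is_available, is_custom_ops_available
>>> if not is_available():
...     raise RuntimeError('ONNX Runtime is not installed')
>>> if not is_custom_ops_available():
...     raise RuntimeError('mmdeploy ONNX Runtime custom ops are not compiled')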

apis/ncnn

mmdeploy.apis.ncnn.from_onnx(onnx_model: Union[onnx.onnx_ml_pb2.ModelProto, str], output_file_prefix: str) [source]

Convert ONNX to ncnn.

The inputs of ncnn include a model file and a weight file. An executable program is used to convert the .onnx file into a .param file and a .bin file. The output files are saved at the path given by output_file_prefix.

Examples

>>> from mmdeploy.apis.ncnn import from_onnx
>>> onnx_path = 'work_dir/end2end.onnx'
>>> output_file_prefix = 'work_dir/end2end'
>>> from_onnx(onnx_path, output_file_prefix)
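
Given this prefix, the conversion is expected to produce the two files ncnn consumes: work_dir/end2end.param (the model file) and work_dir/end2end.bin (the weight file).
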
Parameters
  • onnx_model (ModelProto|str) -- The onnx model or the path to it.

  • output_file_prefix (str) -- The path prefix of the output ncnn files.

mmdeploy.apis.ncnn.is_available() [source]

Check whether ncnn and the onnx2ncnn tool are installed.

Returns

True if ncnn and the onnx2ncnn tool are installed.

Return type

bool

mmdeploy.apis.ncnn.is_custom_ops_available() [source]

Check whether the ncnn extension and custom ops are installed.

Returns

True if the ncnn extension and custom ops are compiled.

Return type

bool

apis/pplnn

mmdeploy.apis.pplnn.is_available() [source]

Check whether pplnn is installed.

Returns

True if the pplnn package is installed.

Return type

bool
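
Every backend module on this page exposes the same is_available() helper, so backend discovery can be written generically. A sketch, assuming all four submodules import cleanly:

>>> from mmdeploy.apis import ncnn, onnxruntime, pplnn, tensorrt
>>> backends = {'tensorrt': tensorrt, 'onnxruntime': onnxruntime,
...             'ncnn': ncnn, 'pplnn': pplnn}
>>> [name for name, mod in backends.items() if mod.is_available()]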
