Post-training Optimization Tool API Examples
The Post-training Optimization Tool contains multiple examples that demonstrate how to use its API to optimize DL models. All available examples can be found on GitHub.
The following examples demonstrate the implementation of Engine, Metric, and DataLoader interfaces for various use cases:
Quantizing Image Classification model

- Uses a single MobilenetV2 model from TensorFlow
- Implements `DataLoader` to load .JPEG images and annotations of the ImageNet database
- Implements the `Metric` interface to calculate the Accuracy at top-1 metric
- Uses the DefaultQuantization algorithm for model quantization
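The `DataLoader` and `Metric` contracts used in this example can be sketched with minimal stand-ins. The class names, the sample format, and the `accuracy@top1` key below are illustrative assumptions; in the real example these classes subclass `openvino.tools.pot.DataLoader` and `openvino.tools.pot.Metric`.

```python
class ImageNetDataLoader:
    """Stand-in for POT's DataLoader: indexable, returns (annotation, image)."""

    def __init__(self, samples):
        self._samples = samples  # list of (label, image) pairs

    def __len__(self):
        return len(self._samples)

    def __getitem__(self, index):
        label, image = self._samples[index]
        # POT expects the annotation as (index, label)
        return (index, label), image


class Accuracy:
    """Top-1 accuracy with the update/avg_value/reset contract POT expects."""

    def __init__(self):
        self._matches = []

    def update(self, output, target):
        # output: per-class scores; target: ground-truth class id
        predicted = max(range(len(output)), key=output.__getitem__)
        self._matches.append(1.0 if predicted == target else 0.0)

    @property
    def avg_value(self):
        return {'accuracy@top1': sum(self._matches) / len(self._matches)}

    def reset(self):
        self._matches = []
```

With two samples where only the first prediction is correct, `avg_value` yields `{'accuracy@top1': 0.5}`.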
Quantizing Object Detection Model with Accuracy Control

- Uses a single MobileNetV1 FPN model from TensorFlow
- Implements `DataLoader` to load images of the COCO database
- Implements the `Metric` interface to calculate the mAP@[.5:.95] metric
- Uses the AccuracyAwareQuantization algorithm for model quantization
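An accuracy-aware run is configured by the algorithm description passed to the pipeline. The following is a hedged sketch of such a configuration; the specific parameter values are illustrative assumptions, not the example's actual settings.

```python
# Sketch of an AccuracyAwareQuantization configuration.
# Parameter values here are illustrative; maximal_drop bounds the
# accuracy degradation (e.g. of mAP@[.5:.95]) the algorithm may accept.
algorithms = [
    {
        'name': 'AccuracyAwareQuantization',
        'params': {
            'target_device': 'CPU',
            'preset': 'mixed',
            'stat_subset_size': 300,  # samples used for statistics collection
            'maximal_drop': 0.01,     # accepted absolute accuracy drop
        },
    }
]
```

The algorithm quantizes the model, measures the metric, and reverts layers to higher precision until the measured drop fits within `maximal_drop`.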
Quantizing Semantic Segmentation Model

- Uses a single DeepLabV3 model from TensorFlow
- Implements `DataLoader` to load .JPEG images and annotations of the Pascal VOC 2012 database
- Implements the `Metric` interface to calculate the Mean Intersection Over Union metric
- Uses the DefaultQuantization algorithm for model quantization
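The Mean Intersection Over Union computation behind such a `Metric` can be sketched as a simplified per-pixel Python function; the function name and the flat-list label maps are assumptions for illustration.

```python
def mean_iou(prediction, target, num_classes):
    """Average per-class IoU over flattened label maps.

    Classes absent from both prediction and target are skipped,
    so they do not dilute the average.
    """
    ious = []
    for c in range(num_classes):
        intersection = sum(p == c and t == c for p, t in zip(prediction, target))
        union = sum(p == c or t == c for p, t in zip(prediction, target))
        if union:
            ious.append(intersection / union)
    return sum(ious) / len(ious)
```

For example, `mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)` averages a class-0 IoU of 1/2 and a class-1 IoU of 2/3, giving 7/12.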
Quantizing 3D Segmentation Model

- Uses a single Brain Tumor Segmentation model from PyTorch
- Implements `DataLoader` to load images in NIfTI format from the Medical Segmentation Decathlon BRATS 2017 database
- Implements the `Metric` interface to calculate the Dice Index metric
- Demonstrates how to use image metadata obtained during data loading to post-process the raw model output
- Uses the DefaultQuantization algorithm for model quantization
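The Dice Index used here is 2·|A∩B| / (|A| + |B|) over the predicted and ground-truth masks. A minimal stand-in for the `Metric` contract is sketched below; the binary flat-list masks and the `dice` result key are assumptions for illustration.

```python
class DiceIndex:
    """Dice coefficient 2*|A∩B| / (|A|+|B|) over binary masks,
    averaged across the evaluated samples."""

    def __init__(self):
        self._values = []

    def update(self, prediction, target):
        # prediction/target: flat binary masks of equal length
        intersection = sum(p and t for p, t in zip(prediction, target))
        total = sum(prediction) + sum(target)
        # convention: two empty masks agree perfectly
        self._values.append(2.0 * intersection / total if total else 1.0)

    @property
    def avg_value(self):
        return {'dice': sum(self._values) / len(self._values)}
```

With prediction `[1, 1, 0, 0]` against target `[1, 0, 0, 0]`, the overlap is 1 voxel out of 3 mask voxels total, giving a Dice of 2/3.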
Quantizing Cascaded Model

- Uses a cascaded (composite) MTCNN model from Caffe that consists of three separate models in OpenVINO Intermediate Representation (IR)
- Implements `DataLoader` to load .jpg images of the WIDER FACE database
- Implements the `Metric` interface to calculate the Recall metric
- Implements an `Engine` class inherited from `IEEngine` to create a complex staged pipeline that sequentially executes each of the three stages of the MTCNN model, represented by multiple models in IR. It uses engine helpers to set the model in the OpenVINO Inference Engine and to process raw model output for correct statistics collection
- Uses the DefaultQuantization algorithm for model quantization
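The staged execution idea can be sketched as follows. `CascadedEngine` and its callable stages are hypothetical stand-ins; the real example instead overrides `openvino.tools.pot.IEEngine` and runs the three MTCNN networks in sequence.

```python
class CascadedEngine:
    """Sketch of a staged engine: each stage consumes the previous
    stage's output, mimicking MTCNN's PNet -> RNet -> ONet cascade."""

    def __init__(self, stages):
        self._stages = list(stages)  # ordered callables, one per model in IR

    def predict(self, image):
        result = image
        for stage in self._stages:
            # candidates from one stage are refined by the next
            result = stage(result)
        return result
```

For instance, `CascadedEngine([f, g, h]).predict(x)` computes `h(g(f(x)))`, which is the shape of pipeline the example builds so that statistics can be collected at each stage.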
Quantizing for GNA Device

- Uses models from Kaldi
- Implements `DataLoader` to load data in .ark format
- Uses the DefaultQuantization algorithm for model quantization
After executing each example above, the quantized model is placed into the `optimized` folder. Accuracy validation of the quantized model is performed right after quantization.