This article explains how to experiment with ONNX Runtime [1] applications for image classification, based on the MobileNet v1 model, using the ONNX Python runtime.
1. Description[edit source]
The image classification neural network model identifies the subject of an image by classifying it into one of a set of predefined classes.
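Conceptually, the network outputs one score per class and the application reports the most probable one. A minimal sketch of that final step, using hypothetical labels (a real MobileNet model outputs roughly 1000 class scores):

```python
import numpy as np

def classify(logits, labels):
    # Numerically stable softmax over the raw class scores
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()
    top = int(np.argmax(probs))          # index of the most probable class
    return labels[top], float(probs[top])

# Hypothetical 3-class example
label, confidence = classify(np.array([0.5, 2.0, 0.1]), ["cat", "dog", "bird"])
print(label, round(confidence, 2))
```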
The application enables three main features:
- A camera streaming preview implemented using GStreamer
- A NN inference based on the camera (or test data picture) inputs, run by the ONNX Runtime [1] interpreter
- A user interface implemented using Python GTK.
Performance depends on the number of CPUs available. The camera preview is limited to one CPU core, while the ONNX Runtime [1] interpreter is configured to use the maximum of the available resources.
The model used with this application is MobileNet v1, downloaded from the TensorFlow Lite Hub [2] and converted to the ONNX opset 16 format using tf2onnx.
1.1. Convert a Tensorflow Lite model to ONNX using tf2onnx[edit source]
To convert a .tflite model to the ONNX format, the ONNX community provides a tool named tf2onnx [3], which is very simple to use.
The first step is to install TensorFlow on the host computer; for test purposes, it can also be useful to install ONNX Runtime. tf2onnx uses the already installed versions of TensorFlow and ONNX Runtime; if it does not find any, it installs the most recent versions.
The second step is to install tf2onnx:
- Install from PyPI:
pip install -U tf2onnx
or
- Install the latest version from GitHub:
pip install git+https://github.com/onnx/tensorflow-onnx
After the installation, you can convert the tflite model directly using the following command line:
python -m tf2onnx.convert --opset 16 --tflite path/to/tflite/model.tflite --output path/to/onnx/model/model.onnx
Native ONNX models are also available in the ONNX Model Zoo [4].
2. Installation[edit source]
2.1. Install from the OpenSTLinux AI package repository[edit source]
After having configured the OpenSTLinux AI package repository, you can install the X-LINUX-AI components for this application:
apt-get install onnx-cv-apps-image-classification-python
And restart the demo launcher:
- For an OpenSTLinux distribution with a version lower than 4.0, use:
systemctl restart weston@root
- For other OpenSTLinux distributions, use:
systemctl restart weston-launch
2.2. Source code location[edit source]
The label_onnx.py Python script is available:
- in the Openembedded OpenSTLinux Distribution with X-LINUX-AI Expansion Package:
- <Distribution Package installation directory>/layers/meta-st/meta-st-stm32mpu-ai/recipes-samples/onnxrt-cv-apps/files/image-classification/python/label_onnx.py
- on the target:
- /usr/local/demo-ai/computer-vision/onnx-image-classification/python/label_onnx.py
- on GitHub:
3. How to use the application[edit source]
3.1. Launching via the demo launcher[edit source]
3.2. Executing with the command line[edit source]
The label_onnx.py Python script is located in the userfs partition:
/usr/local/demo-ai/computer-vision/onnx-image-classification/python/label_onnx.py
It accepts the following input parameters:
usage: label_onnx.py [-h] [-i IMAGE] [-v VIDEO_DEVICE] [--frame_width FRAME_WIDTH]
                     [--frame_height FRAME_HEIGHT] [--framerate FRAMERATE]
                     [-m MODEL_FILE] [-l LABEL_FILE] [--input_mean INPUT_MEAN]
                     [--input_std INPUT_STD] [--validation]
                     [--num_threads NUM_THREADS]

options:
  -h, --help            show this help message and exit
  -i IMAGE, --image IMAGE
                        image directory with image to be classified
  -v VIDEO_DEVICE, --video_device VIDEO_DEVICE
                        video device (default /dev/video0)
  --frame_width FRAME_WIDTH
                        width of the camera frame (default is 640)
  --frame_height FRAME_HEIGHT
                        height of the camera frame (default is 480)
  --framerate FRAMERATE
                        framerate of the camera (default is 15fps)
  -m MODEL_FILE, --model_file MODEL_FILE
                        .onnx model to be executed
  -l LABEL_FILE, --label_file LABEL_FILE
                        name of file containing labels
  --input_mean INPUT_MEAN
                        input mean
  --input_std INPUT_STD
                        input standard deviation
  --validation          enable the validation mode
  --num_threads NUM_THREADS
                        number of threads used by the ONNX interpreter to run inference
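The --input_mean and --input_std options control how pixel values are normalized before inference. A hedged sketch of that preprocessing step (nearest-neighbour resize with NumPy; the actual script may use a different resize method and tensor layout):

```python
import numpy as np

def preprocess(frame, width=128, height=128, input_mean=127.5, input_std=127.5):
    # Nearest-neighbour resize of an (H, W, 3) uint8 camera frame
    ys = np.linspace(0, frame.shape[0] - 1, height).astype(int)
    xs = np.linspace(0, frame.shape[1] - 1, width).astype(int)
    resized = frame[ys][:, xs]
    # Normalize as (pixel - mean) / std, matching the script's options
    normalized = (resized.astype(np.float32) - input_mean) / input_std
    return normalized[np.newaxis, ...]   # add batch dimension -> (1, H, W, 3)

frame = np.full((480, 640, 3), 200, dtype=np.uint8)   # synthetic camera frame
tensor = preprocess(frame)
print(tensor.shape)
```

The resulting tensor would then be fed to the ONNX Runtime session, for example with session.run(None, {input_name: tensor}).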
3.3. Testing with MobileNet V1[edit source]
The model used for testing is mobilenet_v1_0.5_128_quant.onnx, downloaded from the TensorFlow Lite Hub [2] and converted to the ONNX format.
To ease launching the Python script, two shell scripts are available:
- launch image classification based on camera frame inputs:
/usr/local/demo-ai/computer-vision/onnx-image-classification/python/launch_python_label_onnx_mobilenet.sh
- launch image classification based on the pictures located in /usr/local/demo-ai/computer-vision/models/mobilenet/testdata directory:
/usr/local/demo-ai/computer-vision/onnx-image-classification/python/launch_python_label_onnx_mobilenet_testdata.sh
4. References[edit source]