Image classification

Applicable for STM32MP13x lines, STM32MP15x lines, STM32MP25x lines

This article explains how to use a TensorFlow Lite[1], ONNX[2], or OpenVX[3] model in image classification applications based on the MobileNet v1 and MobileNet v2 neural networks, using the stai_mpu API on the STM32 MPU series.

1. Description

An image classification neural network model identifies the subject represented by an image by classifying it into one of a set of predefined classes.
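A classification model typically outputs one raw score (logit) per class; the application then picks the most likely labels. The sketch below, which is illustrative and not the application's actual code (the labels and scores are invented), shows the usual post-processing: a softmax to turn scores into probabilities, then a top-k selection.

```python
import numpy as np

def top_k_classes(logits, labels, k=3):
    """Return the k most likely (label, probability) pairs.

    `logits` are the raw per-class scores output by a model;
    `labels` is the matching class-name list.
    """
    # Softmax turns raw scores into probabilities that sum to 1;
    # subtracting the max keeps the exponentials numerically stable.
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()
    # Indices of the k highest probabilities, best first.
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# Hypothetical label set and scores, for illustration only.
labels = ["cat", "dog", "bird"]
logits = np.array([2.0, 1.0, 0.1])
print(top_k_classes(logits, labels, k=2))
```

The top entry of the returned list is the class the application overlays on the camera preview.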

C/C++ TensorFlow Lite image classification application

The application demonstrates a computer vision use case for image classification, in which frames are grabbed from a camera input (/dev/videox) and analyzed by a neural network model interpreted by the OpenVX, TFLite, or ONNX framework.
A GStreamer pipeline is used to stream camera frames (using v4l2src), to display a preview (using gtkwaylandsink), and to execute the neural network inference (using appsink).
The inference result is displayed in the preview; the overlay is drawn with a GtkWidget using Cairo.
This combination is simple and efficient in terms of CPU overhead.
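The pipeline described above can be sketched with gst-launch-1.0. This is a config-style illustration of the element layout, not the application's exact pipeline: the device path, resolution, and appsink name are assumptions.

```shell
# Illustrative sketch of the camera -> preview + inference pipeline.
# /dev/video0, 640x480 and the sink name "nn_input" are assumptions.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    video/x-raw,width=640,height=480 ! tee name=t \
    t. ! queue ! gtkwaylandsink \
    t. ! queue ! videoconvert ! appsink name=nn_input
```

The tee element duplicates the camera stream so that the preview branch and the inference branch each receive every frame; the queues decouple the two branches so a slow inference does not stall the preview.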

The models used with this application are MobileNet v1, downloaded from the TensorFlow Lite Hub[4], and MobileNet v2, downloaded from the ST model zoo[5].
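Before inference, each camera frame has to be converted into the tensor layout these models expect. As a minimal sketch (resizing is omitted, and whether a given model file is quantized depends on the download you pick): MobileNet v1/v2 take a 224x224 RGB input, float variants expect pixels scaled to [-1, 1], and quantized variants consume raw uint8 pixels.

```python
import numpy as np

def preprocess_mobilenet(frame, quantized=True):
    """Prepare a 224x224 RGB frame for a MobileNet v1/v2 model.

    `frame` is an HxWx3 uint8 array already resized to 224x224
    (the resize step itself is omitted from this sketch).
    """
    if quantized:
        # Quantized models consume the uint8 pixels directly.
        return np.expand_dims(frame, axis=0)        # shape (1, 224, 224, 3)
    # Float models: map pixel range [0, 255] to [-1, 1].
    scaled = frame.astype(np.float32) / 127.5 - 1.0
    return np.expand_dims(scaled, axis=0)

frame = np.zeros((224, 224, 3), dtype=np.uint8)     # dummy black frame
batch = preprocess_mobilenet(frame, quantized=False)
print(batch.shape)
```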

2. Installation

2.1. Install from the OpenSTLinux AI package repository

Warning
The software package is provided AS IS, and by downloading it, you agree to be bound to the terms of the software license agreement (SLA0048). The detailed content licenses can be found here.

After having configured the AI OpenSTLinux package, you can install the X-LINUX-AI components for the image classification application:

2.1.1. On STM32MP2x board

The OpenVX application will be installed to take advantage of the neural processing unit (NPU) and graphics processing unit (GPU) hardware acceleration.

  • To install this application, please use the following command:
 x-linux-ai -i stai-mpu-image-classification-cpp-ovx
Important
You can install the Python version of this application by installing this package: stai-mpu-image-classification-python-ovx


  • Then, restart the demo launcher:
 systemctl restart weston-graphical-session.service

2.1.2. On STM32MP1x board


The TFLite application will be installed with the XNNPACK delegate to accelerate the neural network inference on the CPU.

  • To install this application, please use the following command:
 x-linux-ai -i stai-mpu-image-classification-cpp-tflite
Important
You can install the Python version of this application by installing this package: stai-mpu-image-classification-python-tflite


  • Then, restart the demo launcher:
 systemctl restart weston-graphical-session.service


2.2. Source code location

  • in the OpenEmbedded OpenSTLinux Distribution with the X-LINUX-AI Expansion Package:
<Distribution Package installation directory>/layers/meta-st/meta-st-x-linux-ai/recipes-samples/image-classification/files/stai_mpu
  • on GitHub:
recipes-samples/image-classification/files/stai_mpu

2.3. Regenerate the package from OpenSTLinux Distribution (optional)

Using the OpenSTLinux Distribution, you can rebuild the application.

Information
If not already installed, the X-LINUX-AI OpenSTLinux Distribution needs to be installed by following this link


  • Set up the build environment:
 cd <Distribution Package installation directory>
 source layers/meta-st/scripts/envsetup.sh
  • Rebuild the application on STM32MP2x:
 bitbake stai-mpu-image-classification-cpp-ovx -c compile

The generated binary is available here:

<Distribution Package installation directory>/<build directory>/tmp-glibc/work/cortexa35-ostl-linux/stai-mpu-image-classification-cpp/5.0.0-r0/stai-mpu-image-classification-cpp-5.0.0/stai_mpu
Important
You can generate the Python version of this application: stai-mpu-image-classification-python-ovx


  • Rebuild the application on STM32MP1x:
 bitbake stai-mpu-image-classification-cpp-tflite -c compile

The generated binary is available here:

<Distribution Package installation directory>/<build directory>/tmp-glibc/work/cortexa35-ostl-linux/stai-mpu-image-classification-cpp/5.0.0-r0/stai-mpu-image-classification-cpp-5.0.0/stai_mpu
Important
You can generate the Python version of this application: stai-mpu-image-classification-python-tflite