X-LINUX-AI-CV OpenSTLinux expansion package

This article describes the content of the X-LINUX-AI-CV Expansion Package and explains how to use it.

1 Description

X-LINUX-AI-CV is the STM32 MPU OpenSTLinux Expansion Package that targets artificial intelligence for computer vision.
This package contains AI and computer vision frameworks, as well as application examples to get started with some basic use cases.

STM32MPU Embedded Software with the X-LINUX-AI-CV OpenSTLinux expansion package

1.1 Current version

X-LINUX-AI-CV v1.0.0

1.2 Contents

1.3 Software structure

X-LINUX-AI-CV v1.0.0 Expansion Package Software structure

1.4 Supported hardware

Like any software expansion package, X-LINUX-AI-CV is supported on all STM32MP1 series devices and is compatible with the following boards:

  • STM32MP157C-DK2[4]
  • STM32MP157C-EV1[5]
  • STM32MP157A-EV1[6]

2 How to use the X-LINUX-AI-CV Expansion Package

2.1 Software installation

Please refer to the STM32MP1 artificial intelligence expansion packages article to build and install the X-LINUX-AI-CV software.

2.2 Material needed

To use the X-LINUX-AI-CV OpenSTLinux Expansion Package, choose one of the following materials:

  • STM32MP157C-DK2[4] + a UVC USB webcam
  • STM32MP157C-EV1[5] with the built-in camera
  • STM32MP157A-EV1[6] with the built-in camera

3 AI application examples

3.1 Python TensorFlowLite applications

This part provides Python application examples based on TensorFlow Lite and OpenCV.
Each application takes its input frames from a camera preview (or from test data pictures) and feeds them to the chosen TensorFlow Lite model.
Two Python application examples are available and are described below:

3.1.1 Image classification application

3.1.1.1 Description

The image classification[7] neural network model allows identification of the subject represented by an image. It classifies an image into various classes.

Image classification application

The label_tfl_multiprocessing.py Python script (located in the userfs partition: /usr/local/demo-ai/ai-cv/python/label_tfl_multiprocessing.py) is a multi-process Python application for image classification.
The application streams frames from the camera with OpenCV (or reads test data pictures) and runs the NN inference on these inputs with the TensorFlow Lite interpreter.
The user interface is implemented using Python GTK.
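The multi-process structure described above can be sketched as follows. This is an illustrative reconstruction only, not the shipped script: the real application captures frames with OpenCV (`cv2.VideoCapture`) and runs a TensorFlow Lite interpreter, which are replaced here by stub functions so the producer/consumer pattern is visible on its own.

```python
# Sketch of a two-process pipeline: one process produces frames,
# another consumes them and runs "inference" (stubbed here).
import multiprocessing as mp

def capture(frames, frame_queue):
    # Stand-in for the OpenCV capture loop (cv2.VideoCapture in the real script).
    for frame in frames:
        frame_queue.put(frame)
    frame_queue.put(None)  # sentinel: no more frames

def infer(frame_queue, results):
    # Stand-in for the TensorFlow Lite interpreter loop.
    while True:
        frame = frame_queue.get()
        if frame is None:
            results.put(None)  # propagate the sentinel
            break
        results.put(("label_for_" + frame, 0.9))  # dummy (label, score) pair

def run_pipeline(frames):
    frame_queue, results = mp.Queue(), mp.Queue()
    p_cap = mp.Process(target=capture, args=(frames, frame_queue))
    p_inf = mp.Process(target=infer, args=(frame_queue, results))
    p_cap.start()
    p_inf.start()
    out = []
    while True:  # drain results until the sentinel arrives
        item = results.get()
        if item is None:
            break
        out.append(item)
    p_cap.join()
    p_inf.join()
    return out

if __name__ == "__main__":
    print(run_pipeline(["frame0", "frame1"]))
```

Running capture and inference in separate processes keeps the camera preview responsive while the (slower) inference runs on another core.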

3.1.1.2 How to use it

The Python script label_tfl_multiprocessing.py accepts the following input parameters:

-i, --image          image directory with images to be classified
-v, --video_device   video device (default /dev/video0)
--frame_width        width of the camera frame (default is 640)
--frame_height       height of the camera frame (default is 480)
--framerate          framerate of the camera (default is 30fps)
-m, --model_file     tflite model to be executed
-l, --label_file     name of file containing labels
--input_mean         input mean
--input_std          input standard deviation
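The option list above can be mirrored with a minimal argparse sketch. This is a reconstruction for illustration, not the parser from the shipped script; in particular, the 127.5 defaults for --input_mean and --input_std are an assumption (a common choice for MobileNet-style models), not documented values.

```python
# Illustrative argparse sketch of the options listed above.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="TensorFlow Lite image classification")
    p.add_argument("-i", "--image", help="image directory with images to be classified")
    p.add_argument("-v", "--video_device", default="/dev/video0", help="video device")
    p.add_argument("--frame_width", type=int, default=640, help="width of the camera frame")
    p.add_argument("--frame_height", type=int, default=480, help="height of the camera frame")
    p.add_argument("--framerate", type=int, default=30, help="framerate of the camera")
    p.add_argument("-m", "--model_file", help=".tflite model to be executed")
    p.add_argument("-l", "--label_file", help="name of file containing labels")
    # 127.5 defaults are an assumption, not taken from the shipped script:
    p.add_argument("--input_mean", type=float, default=127.5, help="input mean")
    p.add_argument("--input_std", type=float, default=127.5, help="input standard deviation")
    return p

args = build_parser().parse_args([])  # no arguments: defaults only
print(args.video_device, args.frame_width, args.frame_height, args.framerate)
```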
3.1.1.3 Testing with MobileNet V1
3.1.1.3.1 Default model: MobileNet V1 0.5 128 quant

The default model used for the tests is mobilenet_v1_0.5_128_quant.tflite, downloaded from the TensorFlow Lite hosted models[8].


To ease launching of the Python script, two shell scripts are available:

  • launch image classification based on camera frame inputs
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_label_tfl_mobilenet.sh
  • launch image classification based on the picture located in /usr/local/demo-ai/ai-cv/models/mobilenet/testdata directory
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_label_tfl_mobilenet_testdata.sh
Note that you need to populate the testdata directory with your own data sets.

The pictures are then read in random order from the testdata directory.

3.1.1.3.2 Testing another MobileNet v1 model

You can test other models by downloading them directly to the STM32MP1 board. For example:

Board $> curl http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz | tar xzv -C /usr/local/demo-ai/ai-cv/models/mobilenet/
Board $> python3 /usr/local/demo-ai/ai-cv/python/label_tfl_multiprocessing.py -m /usr/local/demo-ai/ai-cv/models/mobilenet/mobilenet_v1_1.0_224_quant.tflite -l /usr/local/demo-ai/ai-cv/models/mobilenet/labels.txt -i /usr/local/demo-ai/ai-cv/models/mobilenet/testdata/
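Once a model and its labels file are on the board, the raw classification output has to be mapped back to label names. The sketch below shows the idea for a quantized MobileNet, which returns one uint8 score (0 to 255) per class while labels.txt holds one class name per line in the same order. The `top_k_labels` helper is illustrative, not part of the shipped script.

```python
# Map a quantized classifier's raw output to human-readable labels.
def top_k_labels(scores, labels, k=3):
    """Return the k (label, probability) pairs with the highest scores.

    scores: one uint8 score (0-255) per class, in label-file order.
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(labels[i], scores[i] / 255.0) for i in ranked[:k]]

# Example with dummy data; a real run would take the scores from the
# TensorFlow Lite interpreter's output tensor instead.
scores = [0, 204, 25, 255]
labels = ["cat", "dog", "bird", "fish"]
print(top_k_labels(scores, labels, k=2))  # -> [('fish', 1.0), ('dog', 0.8)]
```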
3.1.1.4 Testing with your own model

The label_tfl_multiprocessing.py Python script supports the TensorFlow Lite model format for image classification. Any model with a .tflite extension, together with its label file, can be used with the label_tfl_multiprocessing.py Python script.
You are free to update the label_tfl_multiprocessing.py Python script to perfectly fit your needs.

3.1.2 Object detection application

3.1.2.1 Description

The object detection[9] neural network model allows identification and localization of a known object within an image.

Object detection application

The objdetect_tfl_multiprocessing.py Python script (located in the userfs partition: /usr/local/demo-ai/ai-cv/python/objdetect_tfl_multiprocessing.py) is a multi-process Python application for object detection.
The application streams frames from the camera with OpenCV (or reads test data pictures) and runs the NN inference on these inputs with the TensorFlow Lite interpreter.
The user interface is implemented using Python GTK.
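An SSD detection model such as detect.tflite returns normalized bounding boxes [ymin, xmin, ymax, xmax], class indices, and confidence scores; before drawing, detections below a confidence threshold are discarded and the boxes are scaled to pixel coordinates. A minimal post-processing sketch, assuming this output layout (the `filter_detections` helper is illustrative, not part of the shipped script):

```python
# Post-process SSD outputs: threshold by score, scale boxes to pixels.
def filter_detections(boxes, classes, scores, width, height, threshold=0.5):
    """Keep detections with score >= threshold, with boxes in pixel coords.

    boxes: normalized [ymin, xmin, ymax, xmax] per detection.
    """
    kept = []
    for box, cls, score in zip(boxes, classes, scores):
        if score < threshold:
            continue
        ymin, xmin, ymax, xmax = box
        kept.append({
            "class": int(cls),
            "score": score,
            # (left, top, right, bottom) in pixels, for drawing:
            "box_px": (round(xmin * width), round(ymin * height),
                       round(xmax * width), round(ymax * height)),
        })
    return kept

# Dummy data in place of the interpreter's output tensors:
boxes = [[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 1.0, 1.0]]
classes = [17, 0]
scores = [0.9, 0.3]
print(filter_detections(boxes, classes, scores, 640, 480))
```

Only the first detection survives the default 0.5 threshold; its normalized box is scaled against the 640x480 frame.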

3.1.2.2 How to use it

The Python script objdetect_tfl_multiprocessing.py accepts the following input parameters:

-i, --image          image directory with images to be classified
-v, --video_device   video device (default /dev/video0)
--frame_width        width of the camera frame (default is 640)
--frame_height       height of the camera frame (default is 480)
--framerate          framerate of the camera (default is 30fps)
-m, --model_file     tflite model to be executed
-l, --label_file     name of file containing labels
--input_mean         input mean
--input_std          input standard deviation
3.1.2.3 Testing with COCO ssd MobileNet v1

The model used for the tests is detect.tflite, downloaded from the object detection overview[9].

To ease launching of the Python script, two shell scripts are available:

  • launch object detection based on camera frame inputs
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_objdetect_tfl_coco_ssd_mobilenet.sh
  • launch object detection based on the picture located in /usr/local/demo-ai/ai-cv/models/coco_ssd_mobilenet/testdata directory
Board $> /usr/local/demo-ai/ai-cv/python/launch_python_objdetect_tfl_coco_ssd_mobilenet_testdata.sh
Note that you need to populate the testdata directory with your own data sets.

The pictures are then read in random order from the testdata directory.

4 Enjoy running your own CNN

The two examples above provide application samples that demonstrate how to easily execute a TensorFlow Lite CNN on the STM32MP1.

You are free to update the Python scripts for your own purposes, using your own TensorFlow Lite CNN models.
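When plugging in your own model, one detail worth checking is the input scaling driven by the --input_mean and --input_std options: a floating-point model typically expects (pixel - mean) / std instead of the raw 0-255 values, while a quantized model takes the uint8 pixels unchanged. A minimal sketch of that convention (the `normalize` helper and the 127.5 defaults are illustrative assumptions, not taken from the shipped scripts):

```python
# Map raw 0-255 pixel values to the range a model expects.
def normalize(pixels, mean=127.5, std=127.5, floating_model=True):
    """Apply (pixel - mean) / std for float models; pass-through otherwise."""
    if not floating_model:
        return list(pixels)  # quantized model: feed uint8 values as-is
    return [(p - mean) / std for p in pixels]

print(normalize([0, 127.5, 255]))  # -> [-1.0, 0.0, 1.0]
```

With the 127.5/127.5 defaults the 0-255 range maps onto [-1.0, 1.0], which is what MobileNet-style float models commonly expect.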

5 References

