SUMMARY
The AI extension package contains AI frameworks such as neural network and computer vision frameworks, enabling AI application examples that can be run on STM32MP1 hardware.
This package consists of an OpenEmbedded meta layer, meta-st-stm32mpu-ai, to be added on top of the STM32MP1 Distribution Package. It provides a complete and coherent environment, easy to build and install, to take advantage of AI on STM32MP1 hardware.
1. Installation of the meta layer[edit source]
- Clone the following git repository into [your STM32MP1 Distribution path]/layers/meta-st
cd [your STM32MP1 Distribution path]/layers/meta-st
git clone https://gerrit.st.com/stm32mpuapp//meta/meta-st-stm32mpu-ai.git -b thud
- Set up the build environment
source layers/meta-st/scripts/envsetup.sh
Select your DISTRO (ex: openstlinux-weston)
Select your MACHINE (ex: stm32mp1)
- Add the new layers
bitbake-layers add-layer ../layers/meta-st/meta-st-stm32mpu-ai
- Build the AI image
bitbake st-image-ai
2. AI application examples[edit source]
2.1. Python TensorFlowLite applications[edit source]
This part provides Python application examples based on TensorFlow Lite and OpenCV.
The applications take a camera preview (or test data pictures) as input, which is then fed to the chosen TensorFlow Lite model.
2.1.1. Image classification[edit source]
2.1.1.1. Description[edit source]
The label_tfl_multiprocessing.py Python script is a multi-process Python application for image classification.
The application enables OpenCV camera streaming (or test data pictures) and a TensorFlow Lite interpreter that runs the NN inference on the camera (or test data picture) inputs.
The user interface is implemented with Python GTK.
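The split between the capture process and the inference process can be sketched with Python's standard multiprocessing module. This is a simplified illustration of the pattern, not the actual script: the OpenCV camera loop and the TensorFlow Lite interpreter are replaced by dummy stand-ins so the sketch is self-contained.

```python
import multiprocessing as mp

def capture_process(frame_queue, num_frames):
    # Stand-in for the OpenCV camera loop: push dummy "frames"
    # (plain lists of pixel values) into a shared queue.
    for i in range(num_frames):
        frame_queue.put([i, i + 1, i + 2])
    frame_queue.put(None)  # sentinel: no more frames

def inference_process(frame_queue, result_queue):
    # Stand-in for the TensorFlow Lite interpreter: consume frames
    # and emit a dummy "classification" (index of the max value).
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        result_queue.put(frame.index(max(frame)))
    result_queue.put(None)  # sentinel: no more results

def run_pipeline(num_frames=3):
    frame_q, result_q = mp.Queue(), mp.Queue()
    cap = mp.Process(target=capture_process, args=(frame_q, num_frames))
    inf = mp.Process(target=inference_process, args=(frame_q, result_q))
    cap.start()
    inf.start()
    results = []
    while True:
        r = result_q.get()
        if r is None:
            break
        results.append(r)
    cap.join()
    inf.join()
    return results

if __name__ == "__main__":
    print(run_pipeline())
```

In the real application the capture side also drives the GTK preview, while the inference side keeps the interpreter busy without blocking the camera stream.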
2.1.1.2. How to use it[edit source]
The Python script label_tfl_multiprocessing.py accepts the following input parameters:
-i, --image         image directory with images to be classified
-v, --video_device  video device (default /dev/video0)
--frame_width       width of the camera frame (default is 640)
--frame_height      height of the camera frame (default is 480)
--framerate         framerate of the camera (default is 30fps)
-m, --model_file    tflite model to be executed
-l, --label_file    name of file containing labels
--input_mean        input mean
--input_std         input standard deviation
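The option set above can be mirrored with a standard argparse parser. This is a sketch of the documented command-line interface, not the script's actual source; defaults are only set where the list above states them.

```python
import argparse

def build_parser():
    # Mirrors the documented options of label_tfl_multiprocessing.py.
    p = argparse.ArgumentParser(description="TensorFlow Lite image classification demo")
    p.add_argument("-i", "--image", help="image directory with images to be classified")
    p.add_argument("-v", "--video_device", default="/dev/video0", help="video device")
    p.add_argument("--frame_width", type=int, default=640, help="width of the camera frame")
    p.add_argument("--frame_height", type=int, default=480, help="height of the camera frame")
    p.add_argument("--framerate", type=int, default=30, help="framerate of the camera (fps)")
    p.add_argument("-m", "--model_file", help="tflite model to be executed")
    p.add_argument("-l", "--label_file", help="name of file containing labels")
    p.add_argument("--input_mean", type=float, help="input mean")
    p.add_argument("--input_std", type=float, help="input standard deviation")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args(
        ["-m", "model.tflite", "-l", "labels.txt", "--frame_width", "320"])
    print(args.video_device, args.frame_width, args.model_file)
```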
To ease launching the Python script, two shell scripts are available:
- launch image classification based on camera frame inputs
/usr/local/demo-ai/python/launch_python_label_tfl_mobilenet.sh
- launch image classification based on the picture located in /usr/local/demo-ai/models/mobilenet/testdata directory
/usr/local/demo-ai/python/launch_python_label_tfl_mobilenet_testdata.sh
2.1.1.3. Mobilenet V1[edit source]
2.1.1.3.1. Default model is Mobilenet V1 0.5 128 quant[edit source]
The default model used for tests is mobilenet_v1_0.5_128_quant.tflite, downloaded from https://www.tensorflow.org/lite/guide/hosted_models.
2.1.1.3.2. Testing another Mobilenet V1 model[edit source]
You can test other models by downloading them directly to the STM32MP1 board. For example:
curl http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz | tar xzv -C /usr/local/demo-ai/models/mobilenet/
python3 /usr/local/demo-ai/python/label_tfl_multiprocessing.py -m /usr/local/demo-ai/models/mobilenet/mobilenet_v1_1.0_224_quant.tflite -l /usr/local/demo-ai/models/mobilenet/labels.txt -i /usr/local/demo-ai/models/mobilenet/testdata/
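Whatever Mobilenet variant is loaded, the labels file passed with -l maps output tensor indices to class names. A minimal sketch of that post-processing step, assuming the usual one-label-per-line labels.txt format of the hosted TensorFlow models, with synthetic scores standing in for a real output tensor:

```python
def load_labels(path):
    # labels.txt is assumed to contain one class name per line.
    with open(path) as f:
        return [line.strip() for line in f]

def top_k(scores, labels, k=3):
    # Rank class indices by score and map them to their label names.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(labels[i], scores[i]) for i in ranked[:k]]

if __name__ == "__main__":
    import os
    import tempfile
    # Synthetic labels file and scores standing in for real model output.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("cat\ndog\nbird\n")
    labels = load_labels(f.name)
    print(top_k([0.1, 0.7, 0.2], labels, k=2))
    os.unlink(f.name)
```

A quantized model outputs uint8 scores instead of floats, but the index-to-label mapping works the same way.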