How to reproduce an example using PyCoral API

Revision as of 09:18, 16 June 2022 by Registered User
Applicable for STM32MP13x lines, STM32MP15x lines

This article explains how to reproduce an example using the PyCoral API[1]. PyCoral is a Python API used for three main purposes:

  • Inferencing: facilitates the implementation of neural-network inference on the Coral Edge TPU
  • Pipelining: provides functions to pipeline a model across multiple Coral Edge TPUs
  • Transfer learning: enables on-device transfer learning

This API is the Python equivalent of the C/C++ libcoral API[2].

Several examples can be found on the PyCoral GitHub[3]. This article only shows how to build a semantic segmentation example from scratch; the method used here can easily be applied to the other examples.

1. Description

This example is based on a semantic segmentation model, which identifies and clusters together the pixels of an image that belong to the same class, that is, to the same kind of object.

PyCoral API semantic segmentation example

Beyond the semantic segmentation aspect, the purpose of this example is to demonstrate how to use the PyCoral API to easily run inferences on the Google Coral Edge TPU[4].
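The core inference flow that such a script implements with the PyCoral API can be sketched as follows. This is a minimal sketch, not the example script itself: `run_segmentation` is a hypothetical helper, the paths are placeholders, and actually running it requires a Coral Edge TPU with the pycoral and Pillow packages installed (which is why the imports are kept inside the function).

```python
def run_segmentation(model_path, image_path):
    """Run a segmentation model on a Coral Edge TPU and return the raw
    segmentation map (one class id per pixel).

    Hypothetical helper: requires a Coral Edge TPU plus the pycoral and
    Pillow packages; model_path and image_path are placeholders.
    """
    # Imports kept inside the function so the sketch can be read (and the
    # function defined) even without pycoral installed.
    from PIL import Image
    from pycoral.adapters import common, segment
    from pycoral.utils.edgetpu import make_interpreter

    # Create a TensorFlow Lite interpreter that delegates to the Edge TPU
    interpreter = make_interpreter(model_path)
    interpreter.allocate_tensors()

    # Resize the input image to the size expected by the model
    width, height = common.input_size(interpreter)
    img = Image.open(image_path).convert('RGB').resize((width, height),
                                                       Image.LANCZOS)

    # Copy the image into the input tensor and run the inference
    common.set_input(interpreter, img)
    interpreter.invoke()

    # Retrieve the segmentation result
    return segment.get_output(interpreter)
```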

The model used in this example is a DeepLabv3 downloaded from the Coral GitHub testing models[5].
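For display, the class ids produced by such a segmentation model are usually mapped to colors. For Pascal VOC models such as this DeepLabv3, the conventional colormap spreads the bits of each class id across the RGB channels; the following is a sketch of that construction using numpy (assumed available), along the lines of what the example script does:

```python
import numpy as np

def create_pascal_label_colormap():
    """Build the 256-entry Pascal VOC colormap: each class id gets an RGB
    color derived from the bits of its index."""
    colormap = np.zeros((256, 3), dtype=int)
    indices = np.arange(256, dtype=int)
    for shift in reversed(range(8)):
        for channel in range(3):
            # Spread bit (3*k + channel) of the class id into bit `shift`
            # of the corresponding color channel
            colormap[:, channel] |= ((indices >> channel) & 1) << shift
        indices >>= 3
    return colormap

colormap = create_pascal_label_colormap()
print(colormap[0])  # background (class 0) -> black   [0 0 0]
print(colormap[1])  # class 1 -> dark red             [128 0 0]
```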

2. Installation

2.1. Install from the OpenSTLinux AI package repository

Warning
The software package is provided AS IS, and by downloading it, you agree to be bound to the terms of the software license agreement (SLA0048). The detailed content licenses can be found here.

After having configured the AI OpenSTLinux package repository, you can install the X-LINUX-AI components for this example:

 apt-get install packagegroup-x-linux-ai-tflite-edgetpu

Then restart the demo launcher:

- For an OpenSTLinux distribution with a version lower than 4.0, use:

 systemctl restart weston@root

- For other OpenSTLinux distributions, use:

 systemctl restart weston-launch

2.2. Source code location

The Python script semantic_segmentation.py of this example is located in the PyCoral API examples GitHub[6].

3. How to use the example

3.1. Download the example

As mentioned before, the Python script must be downloaded from the PyCoral API examples GitHub[6].

Clone the PyCoral GitHub repository:

 git clone https://github.com/google-coral/pycoral
 cd pycoral/examples

To download all the files required by the example, a script named install_requirements.sh is provided in the examples directory. It is very easy to use: pass the filename semantic_segmentation.py as an argument to automatically download all the needed files:

 ./install_requirements.sh semantic_segmentation.py

3.2. Launch the application

Copy the Python script semantic_segmentation.py and the test data directory to the board:

 scp -r  ../test_data/ root@<board_ip>:/path/
 scp semantic_segmentation.py root@<board_ip>:/path/

Connect to the board. The script accepts the following input parameters:

usage: semantic_segmentation.py [-h] --model MODEL --input INPUT [--output OUTPUT] [--keep_aspect_ratio]

options:
  -h, --help           show this help message and exit
  --model MODEL        Path of the segmentation model.
  --input INPUT        File path of the input image.
  --output OUTPUT      File path of the output image.
  --keep_aspect_ratio  keep the image aspect ratio when down-sampling the image by adding black pixel padding (zeros) on bottom or right. By default the image is resized and reshaped without cropping. This
                       option should be the same as what is applied on input images during model training. Otherwise the accuracy may be affected and the bounding box of detection result may be stretched.
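The effect of --keep_aspect_ratio can be illustrated with a short sketch. The helper below is hypothetical (not part of the script): the image is scaled by a single factor so that it fits inside the model input, and the remaining area is filled with black (zero) padding on the bottom and right:

```python
def fit_with_padding(img_w, img_h, target_w, target_h):
    """Compute resized dimensions that preserve the aspect ratio inside a
    target_w x target_h model input, plus the zero padding to add on the
    right and bottom. Hypothetical helper for illustration only."""
    # Single scale factor: the image must fit in both dimensions
    scale = min(target_w / img_w, target_h / img_h)
    new_w, new_h = int(img_w * scale), int(img_h * scale)
    # Whatever is left of the model input is black padding
    pad_right = target_w - new_w
    pad_bottom = target_h - new_h
    return (new_w, new_h), (pad_right, pad_bottom)

# A 200x50 image fit into a 100x100 model input keeps its 4:1 shape:
size, pad = fit_with_padding(200, 50, 100, 100)
print(size, pad)  # (100, 25) (0, 75)
```

Without this option the image is simply stretched to the model input size, which distorts objects when the aspect ratios differ.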
Information
The Coral Edge TPU must be plugged into the board before launching the script.

Launch the example:

 python3 semantic_segmentation.py --model test_data/deeplabv3_mnv2_pascal_quant_edgetpu.tflite  --input test_data/bird.bmp

If everything goes well, the result should look like the following:

Done. Results saved at semantic_segmentation_result.jpg

Get the results of the segmentation:

 scp root@<board_ip>:/path/semantic_segmentation_result.jpg .

4. References