
X-LINUX-AI - object detection using Coral Edge TPU TensorFlow Lite Python runtime


This article explains how to experiment with Coral Edge TPU[1] applications for object detection, based on the COCO SSD MobileNet v1 model, using the TensorFlow Lite Python runtime.

1. Description

The object detection[2] neural network model allows identification and localization of a known object within an image.

Python Coral Edge TPU object detection application

The application enables three main features:

  • A camera streaming preview implemented using GStreamer
  • An NN inference, based on the camera (or test data picture) inputs, run by the Coral Edge TPU[1] TensorFlow Lite[3] interpreter
  • A user interface implemented using Python GTK.

With this application, the NN inference is mainly handled by the Coral Edge TPU[1], while the CPU mostly deals with camera streaming and the GUI.

The model used with this application is the COCO SSD MobileNet v1 downloaded from the object detection overview[2] and converted for the Coral Edge TPU.
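
To illustrate how inference is delegated to the Coral device, the following minimal Python sketch creates a TensorFlow Lite interpreter with the libedgetpu delegate, which is the mechanism this application relies on. This is not the application code itself, and the model file name is illustrative:

 import numpy as np
 from tflite_runtime.interpreter import Interpreter, load_delegate

 # Create a TensorFlow Lite interpreter that offloads the model to the
 # Coral Edge TPU through the libedgetpu delegate.
 interpreter = Interpreter(
     model_path="detect_edgetpu.tflite",  # illustrative file name
     experimental_delegates=[load_delegate("libedgetpu.so.1")],
 )
 interpreter.allocate_tensors()

 input_details = interpreter.get_input_details()
 output_details = interpreter.get_output_details()

 # COCO SSD MobileNet v1 expects a 300x300 RGB uint8 tensor; a camera frame
 # would be resized to this shape. A zero array stands in for a real frame.
 _, height, width, _ = input_details[0]["shape"]
 frame = np.zeros((1, height, width, 3), dtype=np.uint8)

 interpreter.set_tensor(input_details[0]["index"], frame)
 interpreter.invoke()

 # SSD post-processed outputs: normalized boxes, class indices, and scores.
 boxes = interpreter.get_tensor(output_details[0]["index"])
 classes = interpreter.get_tensor(output_details[1]["index"])
 scores = interpreter.get_tensor(output_details[2]["index"])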

2. Installation

2.1. Install from the OpenSTLinux AI package repository

After having configured the AI OpenSTLinux package repository, you can install the X-LINUX-AI components for this application:

 apt-get install tflite-cv-apps-edgetpu-object-detection-python

Then restart the demo launcher:

 systemctl restart weston-launch

2.2. Source code location

The objdetect_tfl.py Python script is available:

  • in the OpenEmbedded OpenSTLinux Distribution with the X-LINUX-AI Expansion Package:
<Distribution Package installation directory>/layers/meta-st/meta-st-stm32mpu-ai/recipes-samples/tflite-cv-apps-edgetpu/files/object-detection/python/objdetect_tfl.py
  • on the target:
/usr/local/demo-ai/computer-vision/tflite-object-detection-edgetpu/python/objdetect_tfl.py
  • on GitHub:
https://github.com/STMicroelectronics/meta-st-stm32mpu-ai/tree/v3.0.0/recipes-samples/tflite-cv-apps-edgetpu/files/object-detection/python/objdetect_tfl.py

3. How to use the application

3.1. Launching via the demo launcher

Demo launcher

3.2. Executing with the command line

The objdetect_tfl.py Python script is located in the userfs partition:

/usr/local/demo-ai/computer-vision/tflite-object-detection-edgetpu/python/objdetect_tfl.py

It accepts the following input parameters:

usage: objdetect_tfl.py [-h] [-i IMAGE] [-v VIDEO_DEVICE] [--frame_width FRAME_WIDTH] [--frame_height FRAME_HEIGHT] [--framerate FRAMERATE] [-m MODEL_FILE] [-l LABEL_FILE]
                        [-e EXT_DELEGATE] [-p {std,max}] [--edgetpu] [--input_mean INPUT_MEAN] [--input_std INPUT_STD] [--validation] [--num_threads NUM_THREADS]
                        [--maximum_detection MAXIMUM_DETECTION] [--threshold THRESHOLD]

options:
  -h, --help            show this help message and exit
  -i IMAGE, --image IMAGE
                        image directory with image to be classified
  -v VIDEO_DEVICE, --video_device VIDEO_DEVICE
                        video device ex: video0
  --frame_width FRAME_WIDTH
                        width of the camera frame (default is 320)
  --frame_height FRAME_HEIGHT
                        height of the camera frame (default is 240)
  --framerate FRAMERATE
                        framerate of the camera (default is 15fps)
  -m MODEL_FILE, --model_file MODEL_FILE
                        .tflite model to be executed
  -l LABEL_FILE, --label_file LABEL_FILE
                        name of file containing labels
  -e EXT_DELEGATE, --ext_delegate EXT_DELEGATE
                        external_delegate_library path
  -p {std,max}, --perf {std,max}
                        [EdgeTPU ONLY] Select the performance of the Coral EdgeTPU
  --edgetpu             enable Coral EdgeTPU acceleration
  --input_mean INPUT_MEAN
                        input mean
  --input_std INPUT_STD
                        input standard deviation
  --validation          enable the validation mode
  --num_threads NUM_THREADS
                        Select the number of threads used by tflite interpreter to run inference
  --maximum_detection MAXIMUM_DETECTION
                        Adjust the maximum number of object detected in a frame accordingly to your NN model (default is 10)
  --threshold THRESHOLD
                        threshold of accuracy above which the boxes are displayed (default 0.60)
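
For example, the following command (with illustrative model and label file names) would run the application on /dev/video0 with Edge TPU acceleration and a 0.70 display threshold:

 python3 objdetect_tfl.py -v video0 --edgetpu -m detect_edgetpu.tflite -l labels.txt --threshold 0.70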

3.3. Testing with COCO SSD MobileNet V1

The model used for testing is detect_edgetpu.tflite, downloaded from the object detection overview[2] and converted for the Coral Edge TPU. If you are interested, please take a look at how this model has been converted.
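
The conversion itself is not detailed here. As a rough sketch of the usual Coral workflow (an assumption about how this particular model was produced), a fully quantized TensorFlow Lite model is compiled for the Edge TPU with the edgetpu_compiler tool, which appends _edgetpu to the output file name:

 edgetpu_compiler detect.tflite    # produces detect_edgetpu.tflite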


To ease the launch of the application, two shell scripts are available; a sketch of what such a wrapper looks like is shown after the list:

  • launch object detection based on camera frame inputs
/usr/local/demo-ai/computer-vision/tflite-object-detection-edgetpu/python/launch_python_objdetect_tfl_edgetpu_coco_ssd_mobilenet.sh
  • launch object detection based on the pictures located in /usr/local/demo-ai/computer-vision/models/mobilenet/testdata directory
/usr/local/demo-ai/computer-vision/tflite-object-detection-edgetpu/python/launch_python_objdetect_tfl_edgetpu_coco_ssd_mobilenet_testdata.sh
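
Each script is a thin wrapper around objdetect_tfl.py. As a hypothetical sketch (the actual paths and options inside the scripts are not reproduced here), the camera-based launcher essentially amounts to:

 #!/bin/sh
 # Hypothetical wrapper: real model and label paths may differ.
 cd /usr/local/demo-ai/computer-vision/tflite-object-detection-edgetpu/python
 python3 objdetect_tfl.py --edgetpu \
     -m /usr/local/demo-ai/computer-vision/models/mobilenet/detect_edgetpu.tflite \
     -l /usr/local/demo-ai/computer-vision/models/mobilenet/labels.txt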

4. References