
How to run Coral Edge TPU inference using Python TensorFlow Lite API

Applicable for STM32MP13x lines, STM32MP15x lines

1. Article purpose

This article describes how to run an inference on the STM32MP1 using a Google Coral Edge TPU device and the Python TensorFlow Lite API. The example is based on an image classification application.

Information
There are many ways to achieve this result; this article provides a simple example. You are free to explore other methods that are better adapted to your development constraints.

2. Libedgetpu and TensorFlow Lite Python APIs

The X-LINUX-AI artificial intelligence expansion package comes with the TensorFlow Lite Python API and with libedgetpu (which provides Coral Edge TPU support), rebuilt from source to be compatible with the embedded TensorFlow Lite runtime.

The next section shows, through a basic image classification example, how to run inference on your models on the board using the Coral Edge TPU device.
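In practice, libedgetpu is loaded as a TensorFlow Lite external delegate so that the Edge TPU operations are dispatched to the Coral device. Below is a minimal sketch of this pattern, using the libedgetpu-max.so.2 library name from the script later in this article; the model path is a placeholder for an Edge TPU compiled .tflite file.

import tflite_runtime.interpreter as tflite

#Load libedgetpu as an external delegate so that the Edge TPU custom op
#runs on the Coral device instead of the CPU
delegate = tflite.load_delegate('libedgetpu-max.so.2')
interpreter = tflite.Interpreter(model_path='model_edgetpu.tflite',
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()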

3. Running an inference on Coral Edge TPU using the TensorFlow Lite Python API

3.1. Installing prerequisites on the target

After configuring the AI OpenSTLinux package, you can install the X-LINUX-AI components and the packages needed to run our example.
The main packages are Python NumPy[1], Python OpenCV[2], Python TensorFlow Lite runtime[3], and libedgetpu.

Warning
The software package is provided AS IS, and by downloading it, you agree to be bound by the terms of the software license agreement (SLA). The detailed content licenses can be found here.
 apt-get install python3-numpy python3-opencv python3-tensorflow-lite libedgetpu
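To verify that the packages are correctly installed before going further, a quick import check can be run in a Python3 shell on the target (a minimal sketch, not specific to this example):

import numpy as np
import cv2
import tflite_runtime.interpreter as tflite

#If any of the imports above fails, the corresponding package
#is not correctly installed
print("numpy:", np.__version__)
print("opencv:", cv2.__version__)
print("tflite_runtime interpreter module loaded")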

3.2. Preparing the workspace on the target

 cd /usr/local/ && mkdir -p workspace
 cd workspace && mkdir -p models testdata 

In this example, we use the mobilenet_v1_1.0_224_quant_edgetpu.tflite model to classify a downloaded image, together with the associated labels file, both taken from the Coral[4] website.

 wget https://github.com/google-coral/edgetpu/raw/master/test_data/mobilenet_v1_1.0_224_quant_edgetpu.tflite -O models/mobilenet_v1_1.0_224_quant_edgetpu.tflite
 wget https://github.com/google-coral/edgetpu/raw/master/test_data/imagenet_labels.txt -O models/labels.txt
 wget https://github.com/google-coral/edgetpu/raw/master/test_data/bird.bmp -O testdata/bird.bmp
Information
You can run your own model, but you must make sure that your .tflite model has been compiled for inference on the Coral Edge TPU. Refer first to Compile your custom model.
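As a quick sanity check, an Edge TPU compiled model embeds the "edgetpu-custom-op" custom operator, whose name appears as a string in the .tflite flatbuffer. The sketch below is only a heuristic based on that observation; it uses the model downloaded above, so replace the path with your own model as needed.

#Heuristic check: look for the Edge TPU custom operator name
#in the .tflite flatbuffer
model_path = "models/mobilenet_v1_1.0_224_quant_edgetpu.tflite"
with open(model_path, "rb") as f:
    model_content = f.read()

if b"edgetpu-custom-op" in model_content:
    print("The model appears to be compiled for the Coral Edge TPU")
else:
    print("No Edge TPU custom op found: compile the model first")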

3.3. Running the inference

Here is a simple Python script example that executes a neural network inference on the Google Coral Edge TPU. Note that the script runs two inferences on the board: the first one includes the time needed to load the model into the Coral Edge TPU RAM, so the second one gives a clearer idea of the real inference time.

#!/usr/bin/python3
#
# Copyright (c) 2020 STMicroelectronics. All rights reserved.
#
# This software component is licensed by ST under BSD 3-Clause license,
# the "License"; You may not use this file except in compliance with the
# License. You may obtain a copy of the License at:
#                        opensource.org/licenses/BSD-3-Clause

import sys
import numpy as np
import tflite_runtime.interpreter as tflite
import time
import cv2

label_file = "/usr/local/workspace/models/labels.txt"
with open(label_file, 'r') as f:
    labels = [line.strip() for line in f.readlines()]
model_file = "/usr/local/workspace/models/mobilenet_v1_1.0_224_quant_edgetpu.tflite"

#Create the interpreter and allocate tensors
interpreter = tflite.Interpreter(model_path = model_file, experimental_delegates = [tflite.load_delegate('libedgetpu-max.so.2')])
interpreter.allocate_tensors()

#Getting the model input and output details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

#Read the picture, convert it from BGR to RGB encoding,
#resize it to fit the model input size and expand its
#dimensions by one
image = cv2.imread(sys.argv[1])
nn_img_rgb = cv2.cvtColor(np.array(image), cv2.COLOR_BGR2RGB)
nn_img_rgb_resized = cv2.resize(nn_img_rgb, (width, height))
input_data = np.expand_dims(nn_img_rgb_resized, axis=0)

#Set the input data and execute the first inference (which may take
#longer since the model is being loaded into the Coral Edge TPU RAM)
interpreter.set_tensor(input_details[0]['index'], input_data)
start = time.perf_counter()
interpreter.invoke()
inference_time = time.perf_counter() - start
print("1st inference:", inference_time, "s")

# Execute the second inference and measure the inference duration
start = time.perf_counter()
interpreter.invoke()
inference_time = time.perf_counter() - start
print("2nd inference:", inference_time, "s")

#Print the results
results = np.squeeze(interpreter.get_tensor(output_details[0]['index']))
top_k = results.argsort()[-5:][::-1]
for i in top_k:
    print('{0:08.6f}'.format(float(results[i]) * 100 / 255.0) + ":", labels[i])
print("\n")

Copy this Python script to the target:

 scp path/to/your/script/classify_on_stm32mp1.py root@<board_ip_address>:/usr/local/workspace

3.4. Running the inference from the board on the Coral Edge TPU

 cd /usr/local/workspace
 python3 classify_on_stm32mp1.py testdata/bird.bmp

1st inference: 0.1224343549997684 s
2nd inference: 0.011332693999975163 s

88.627451: 20  chickadee
4.313725: 19  magpie
3.137255: 18  jay
2.745098: 21  water ouzel, dipper
1.176471: 14  junco, snowbird
Information
The first inference may take longer because the model is being loaded into the Coral Edge TPU RAM. The real hardware-accelerated inference time is the one measured from the second inference onwards.
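To obtain a more stable figure than a single measurement, the timed inference can be repeated several times after the warm-up run. The sketch below is a minimal example reusing the interpreter already set up in the script above.

import time

#Warm-up run: loads the model into the Coral Edge TPU RAM
interpreter.invoke()

#Average several timed runs for a more stable inference time
timings = []
for _ in range(10):
    start = time.perf_counter()
    interpreter.invoke()
    timings.append(time.perf_counter() - start)
print("average inference time:", sum(timings) / len(timings), "s")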

4. References