
How to measure performance of your NN models using the Coral Edge TPU

Applicable for STM32MP13x lines, STM32MP15x lines, STM32MP25x lines


This article describes how to measure the performance of a neural network model compiled for the Coral Edge TPU on STM32MPU platforms.

1. Installation

1.1. Installing from the OpenSTLinux AI package repository

Warning
The software package is provided AS IS, and by downloading it, you agree to be bound to the terms of the software license agreement (SLA0048). The detailed content licenses can be found here.

After having configured the OpenSTLinux AI package repository, you can install the X-LINUX-AI components for this application. The minimum required package is coral-edgetpu-benchmark, which can be installed directly on your board using the following command:

 x-linux-ai -i coral-edgetpu-benchmark
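
To check that the benchmark tool is properly installed, you can list the content of its installation directory (the path is the one given later in this article; the wildcard matches the versioned package directory):

 ls /usr/local/bin/coral-edgetpu-*/tools/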

The model used in this example can be installed from the following package:

  • On STM32MP25x lines:
 x-linux-ai -i img-models-mobilenetv2-10-224
Information
On STM32MP1x lines, you can benchmark MobileNetV1 following the same process by installing the img-models-mobilenetv1-05-128 package.
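
Before launching the benchmark, you can verify that the model file is installed and that the Coral Edge TPU USB accelerator is enumerated on the USB bus. If the lsusb utility is available on your image, look for the ID 18d1:9302 Google Inc. device line, matching the one printed in the benchmark output later in this article (note that the accelerator may enumerate under a different vendor ID until the Edge TPU runtime has initialized it once):

 lsusb
 ls -l /usr/local/x-linux-ai/image-classification/models/mobilenet/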

2. How to use the benchmark application

2.1. Executing with the command line

The coral_edgetpu_benchmark application is located in the userfs partition:

/usr/local/bin/coral-edgetpu-*/tools/coral_edgetpu_benchmark

It accepts the following input parameters:

Usage: ./coral-edgetpu-benchmark

        -m --model_file <.tflite file path>:  .tflite model to be executed
        -l --loops <int>:                     provide the number of times the inference will be executed
                                              (by default nb_loops=1)
        --help:                               show this help
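
For example, this help message can be displayed directly on the target with the following command (the wildcard expands to the versioned installation directory):

 /usr/local/bin/coral-edgetpu-*/tools/coral_edgetpu_benchmark --help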

2.2. Testing with MobileNet

The model used for testing is mobilenet_v2_1.0_224_int8_per_tensor_edgetpu.tflite, a MobileNetV2 model quantized per tensor in int8 and compiled for the Coral Edge TPU. It is used for image classification.
On the target, the model is located here:

/usr/local/x-linux-ai/image-classification/models/mobilenet/

To launch the application, use the following command:

 /usr/local/bin/coral-edgetpu-*/tools/coral_edgetpu_benchmark -m /usr/local/x-linux-ai/image-classification/models/mobilenet/mobilenet_v2_1.0_224_int8_per_tensor_edgetpu.tflite -l 5

Console output:

model file set to: /usr/local/x-linux-ai/image-classification/models/mobilenet/mobilenet_v2_1.0_224_int8_per_tensor_edgetpu.tflite
This benchmark will execute 5 inference(s)
Bus 001 Device 006: ID 18d1:9302 Google Inc. 
Loaded model /usr/local/x-linux-ai/image-classification/models/mobilenet/mobilenet_v2_1.0_224_int8_per_tensor_edgetpu.tflite
resolved reporter

inferences are running: INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
# # # # # # 

inference time: min=16346us  max=16476us  avg=16410.2us
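
From the reported average inference time, an approximate throughput can be derived by inverting the latency: with avg=16410.2us, 1000000 / 16410.2 ≈ 60.9 inferences per second. As a minimal sketch, this conversion can be done directly in the console, using the average value reported above:

 awk 'BEGIN { printf "%.1f inferences per second\n", 1000000 / 16410.2 }'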