How to measure performance of your NN models using the Coral Edge TPU

Revision as of 19:24, 1 December 2021 by Registered User

This article describes how to measure the performance of a Coral Edge TPU neural network model on an STM32MP1x platform.

1. Installation

1.1. Install from the OpenSTLinux AI package repository

Warning
The software package is provided AS IS, and by downloading it, you agree to be bound to the terms of the software license agreement (SLA). The detailed content licenses can be found here.

After having configured the AI OpenSTLinux package, you can install the X-LINUX-AI components for this application. The minimum package required is:

 apt-get install tflite-edgetpu-benchmark

The model used in this example can be installed from the following package:

 apt-get install tflite-models-coco-ssd-mobilenetv1-edgetpu

2. How to use the Benchmark application

2.1. Executing with the command line

The "tflite-edgetpu-benchmark" application is located in the userfs partition:

/usr/local/bin/demo-ai/benchmark

It accepts the following input parameters:

Usage: ./tflite-edgetpu-benchmark

        -m --model_file <.tflite file path>:  .tflite model to be executed
        -l --loops <int>:                     provide the number of times the inference will be executed 
                                              (by default nb_loops=1)
        --help:                               show this help

2.2. Testing with COCO SSD MobileNet V1

The model used for testing is detect_edgetpu.tflite, a COCO SSD MobileNet V1 model used for object detection.
On the target, the model is located here:

/usr/local/demo-ai/computer-vision/models/coco_ssd_mobilenet/

To launch the application, use the following command:

  ./tflite-edgetpu-benchmark -m <model .tflite> -l <number of loops>
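For example, the command below benchmarks the model installed earlier over 10 inference loops, using the binary and model paths given in this article. The run is guarded with a file-existence test (an addition for illustration), so the snippet is a no-op on a machine where the benchmark is not installed:

```shell
# Paths as given earlier in this article
BENCH=/usr/local/bin/demo-ai/benchmark/tflite-edgetpu-benchmark
MODEL=/usr/local/demo-ai/computer-vision/models/coco_ssd_mobilenet/detect_edgetpu.tflite

# Run 10 loops so min/max/avg are computed over several inferences;
# the guard keeps this a no-op off-target.
if [ -x "$BENCH" ]; then
    "$BENCH" -m "$MODEL" -l 10
fi
```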

As output, this benchmark application returns a line such as:

inference time: min=65734us  max=77319us  avg=74377.3us

These figures give you an idea of your model's performance.
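If you want to track these numbers automatically, for instance across model revisions, the output line can be parsed in a shell script. A minimal sketch, assuming the exact output format shown above:

```shell
# Sample output line, in the format shown above
line="inference time: min=65734us  max=77319us  avg=74377.3us"

# Extract the min, max and average inference times (in microseconds)
min_us=$(printf '%s\n' "$line" | sed -n 's/.*min=\([0-9.]*\)us.*/\1/p')
max_us=$(printf '%s\n' "$line" | sed -n 's/.*max=\([0-9.]*\)us.*/\1/p')
avg_us=$(printf '%s\n' "$line" | sed -n 's/.*avg=\([0-9.]*\)us.*/\1/p')

echo "min=$min_us max=$max_us avg=$avg_us"
```

In a real setup, `line` would come from the benchmark's output (e.g. via a pipe) rather than being hard-coded.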
