This article describes how to measure the performance of a Coral Edge TPU neural network model on the STM32MP1x platform.
1. Installation
1.1. Install from the OpenSTLinux AI package repository
After configuring the AI OpenSTLinux package repository, you can install the X-LINUX-AI components for this application. The minimum required package is:
apt-get install tflite-edgetpu-benchmark
The model used in this example can be installed from the following package:
apt-get install tflite-models-coco-ssd-mobilenetv1-edgetpu
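Before running the benchmark, you may want to confirm that both packages were installed and that the Edge TPU is enumerated on USB. The sketch below is a suggestion, not part of the official procedure; the package names come from the commands above, and the USB ID `18d1:9302` is the Google Inc. identifier reported in the benchmark console output later in this article.

```shell
# Sketch: check that the benchmark and model packages are installed
# (package names taken from the apt-get commands above)
apt list --installed 2>/dev/null | grep -E 'tflite-edgetpu-benchmark|tflite-models-coco-ssd-mobilenetv1-edgetpu'

# Check that the Coral Edge TPU accelerator is enumerated on USB
# (it reports the Google Inc. ID 18d1:9302, as seen in the benchmark output)
lsusb | grep '18d1:9302'
```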
2. How to use the Benchmark application
2.1. Executing with the command line
The tflite_edgetpu_benchmark application is located in the userfs partition:
/usr/local/demo-ai/benchmark/tflite-edgetpu/tflite_edgetpu_benchmark
It accepts the following input parameters:
Usage: ./tflite-edgetpu-benchmark
-m --model_file <.tflite file path>: .tflite model to be executed
-l --loops <int>: provide the number of times the inference will be executed (by default nb_loops=1)
--help: show this help
2.2. Testing with COCO SSD MobileNet V1
The model used for testing is detect_edgetpu.tflite, a COCO SSD MobileNet V1 model used for object detection.
On the target, the model is located here:
/usr/local/demo-ai/computer-vision/models/coco_ssd_mobilenet/
To launch the application, use the following command:
/usr/local/demo-ai/benchmark/tflite-edgetpu/tflite_edgetpu_benchmark -m /usr/local/demo-ai/computer-vision/models/coco_ssd_mobilenet/detect_edgetpu.tflite -l 50
Console output:
model file set to: /usr/local/demo-ai/computer-vision/models/coco_ssd_mobilenet/detect_edgetpu.tflite
This benchmark will execute 50 inference(s)
Bus 002 Device 004: ID 18d1:9302 Google Inc.
Loaded model /usr/local/demo-ai/computer-vision/models/coco_ssd_mobilenet/detect_edgetpu.tflite
resolved reporter
inferences are running: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
inference time: min=58315us max=109714us avg=66009.4us
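The summary line reports latencies in microseconds. If you want a throughput figure (frames per second), a small script like this hypothetical one can derive it from the average latency; the parsing pattern assumes the exact min/max/avg format shown in the console output above.

```python
import re

# Summary line copied from the benchmark console output above
line = "inference time: min=58315us max=109714us avg=66009.4us"

# Extract the microsecond values (assumes the min=...us max=...us avg=...us format)
match = re.search(r"min=([\d.]+)us max=([\d.]+)us avg=([\d.]+)us", line)
min_us, max_us, avg_us = (float(v) for v in match.groups())

# An average latency of ~66 ms corresponds to roughly 15 inferences per second
fps = 1_000_000 / avg_us
print(f"avg latency: {avg_us / 1000:.1f} ms -> {fps:.1f} FPS")
```

This is only a convenience for interpreting the numbers; the benchmark application itself reports latency, not throughput.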