How to run Coral Edge TPU inference using Python TensorFlow Lite API


1. Article purpose

This article aims to describe how to run an inference on the STM32MP1 using a Google Coral EdgeTPU device and the Python TensorFlow-Lite API.

Information
There are many ways to achieve the same result; this article aims to provide at least one simple example. You are free to explore other methods that are better adapted to your development constraints.

2. Difference between TensorFlow-Lite Python APIs

The Artificial Intelligence expansion package X-LINUX-AI comes with two versions of TensorFlow-Lite. The first runtime is based on TensorFlow release 2.2.0, and the second runtime is based on XXX. This is because TensorFlow Lite[1] 2.2.0 does not support the Coral Edge TPU runtime. The following figure explains the software structure.
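From the Python side, the practical difference between the two runtimes is how the interpreter is created. The snippet below is a minimal sketch of that difference, assuming the tflite_runtime Python package and the Edge TPU shared library (libedgetpu.so.1) are installed on the board; the module and file names are illustrative and may differ depending on the X-LINUX-AI release.

 # Minimal sketch: same TensorFlow Lite Python API, with or without the Edge TPU delegate.
 # Assumes tflite_runtime and libedgetpu.so.1 are installed (names may vary per release).
 from tflite_runtime.interpreter import Interpreter, load_delegate
 
 # CPU-only inference with the standard TensorFlow Lite runtime
 cpu_interpreter = Interpreter(model_path="model.tflite")
 
 # Coral Edge TPU inference: the same API plus the Edge TPU delegate
 tpu_interpreter = Interpreter(
     model_path="model_edgetpu.tflite",
     experimental_delegates=[load_delegate("libedgetpu.so.1")],
 )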

3. Running an inference using XXXX

Before running the inference, make sure that your .tflite model has been compiled for inference on the Coral Edge TPU. Refer to the article explaining how to compile your custom model and send it to the board.
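The script below is a minimal sketch of an image-classification inference using the TensorFlow Lite Python API with the Edge TPU delegate. It assumes the tflite_runtime, NumPy and Pillow packages and the libedgetpu.so.1 library are available on the board; the model and image file names are placeholders chosen for illustration and should be replaced with your own files.

 #!/usr/bin/env python3
 # Minimal sketch of a classification inference on the Coral Edge TPU.
 # Assumptions: tflite_runtime, numpy and Pillow are installed, and the
 # model has already been compiled for the Edge TPU (..._edgetpu.tflite).
 import numpy as np
 from PIL import Image
 from tflite_runtime.interpreter import Interpreter, load_delegate
 
 MODEL = "mobilenet_v1_1.0_224_quant_edgetpu.tflite"  # placeholder name
 IMAGE = "test_image.jpg"                             # placeholder name
 
 # Create the interpreter and attach the Edge TPU delegate
 interpreter = Interpreter(
     model_path=MODEL,
     experimental_delegates=[load_delegate("libedgetpu.so.1")],
 )
 interpreter.allocate_tensors()
 
 input_details = interpreter.get_input_details()
 output_details = interpreter.get_output_details()
 
 # Resize the input image to the size expected by the model
 _, height, width, _ = input_details[0]["shape"]
 image = Image.open(IMAGE).convert("RGB").resize((width, height))
 input_data = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)
 
 # Run the inference
 interpreter.set_tensor(input_details[0]["index"], input_data)
 interpreter.invoke()
 
 # Print the top-1 result (raw class index and score)
 output = np.squeeze(interpreter.get_tensor(output_details[0]["index"]))
 top1 = int(np.argmax(output))
 print("Top-1 class index: {}, score: {}".format(top1, output[top1]))

The only Edge TPU specific part is the experimental_delegates argument: if the delegate is omitted, the same quantized model (in its non Edge TPU compiled form) runs on the CPU through the standard TensorFlow Lite runtime.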