1. Article purpose[edit source]
This article aims to describe how to run an inference on the STM32MP1 using a Google Coral EdgeTPU device and the Python TensorFlow-Lite API.
2. Difference between TensorFlow-Lite Python APIs[edit source]
The Artificial Intelligence expansion package X-LINUX-AI comes with two versions of TensorFlow-Lite. The first runtime is based on the TensorFlow 2.2.0 release, and the second runtime is based on XXX. This is because TensorFlow Lite[1] 2.2.0 does not support the Coral EdgeTPU runtime. The following figure shows the software structure.
- Example.jpg
Caption1
3. Running an inference using XXXX[edit source]
Before running the inference, make sure that your .tflite model has been compiled for the Coral EdgeTPU. Refer to the article explaining how to Compile your custom model and send it to the board.
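Once an EdgeTPU-compiled model is on the board, the inference can be sketched with the TensorFlow-Lite Python API as follows. This is a minimal sketch, not the package's reference example: the model path model_edgetpu.tflite is a placeholder for your own compiled model, libedgetpu.so.1 is the usual name of the Coral EdgeTPU delegate library, and the snippet feeds a dummy zero-filled input only to exercise the interpreter.

```python
# Hedged sketch: run one inference on the Coral EdgeTPU with the
# tflite_runtime Python API. Adapt the model path to your setup.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the EdgeTPU-compiled model and attach the EdgeTPU delegate
# (libedgetpu.so.1 is provided by the Coral EdgeTPU runtime package).
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder: your compiled model
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape and dtype;
# replace this with real preprocessed data (for example an image buffer).
input_data = np.zeros(input_details[0]["shape"],
                      dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)

# Run the inference on the EdgeTPU and fetch the output tensor.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", output_data.shape)
```

If the delegate fails to load, check that the Coral EdgeTPU device is connected and that the EdgeTPU runtime library is installed; a model that was not compiled for the EdgeTPU will instead run entirely on the CPU.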