How to measure machine learning model power consumption with STM32Cube.AI generated application


This article describes how to easily modify the system performance application generated by STM32Cube.AI in order to run power and energy measurements in an optimal configuration.

The system performance application automatically runs inferences of a machine learning model generated by STM32Cube.AI (neural network or traditional machine learning model). It measures the inference time directly on the target and can also be used to measure power consumption. However, the default settings are not fully optimal for measuring the processing alone: peripherals remain enabled and unused GPIOs can cause power leakage, which distorts the measurement. As an example, we use the NUCLEO-L4R5ZI, but the process can be adapted to any board supported by STM32Cube.AI.
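For instance, one common preparation step is to configure all unused GPIOs as analog inputs with no pull resistors, so that floating pins do not add leakage current to the measurement. The following is a minimal sketch using the STM32 HAL; the port list and the pin mask are assumptions for illustration, and must be adapted to your board and to the pins actually used by the application (for example the UART pins that report the benchmark results).

#include "stm32l4xx_hal.h"

/* Sketch: put unused GPIOs in analog mode with no pull-up/pull-down
 * to avoid leakage on floating pins during power measurements.
 * Call this early in main(), before starting the inference loop.
 * Adjust the port list and restrict the pin mask to exclude the pins
 * your application really uses. */
static void GPIO_ConfigureUnusedPinsAnalog(void)
{
  GPIO_InitTypeDef GPIO_InitStruct = {0};

  /* Enable the clocks of the GPIO ports to configure (assumed list) */
  __HAL_RCC_GPIOA_CLK_ENABLE();
  __HAL_RCC_GPIOB_CLK_ENABLE();
  __HAL_RCC_GPIOC_CLK_ENABLE();

  GPIO_InitStruct.Pin  = GPIO_PIN_All;       /* restrict this mask to unused pins */
  GPIO_InitStruct.Mode = GPIO_MODE_ANALOG;   /* analog mode: lowest leakage */
  GPIO_InitStruct.Pull = GPIO_NOPULL;

  HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
  HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);
  HAL_GPIO_Init(GPIOC, &GPIO_InitStruct);

  /* Optionally gate the GPIO clocks again once configured:
   * analog-mode pins keep their configuration without the clock. */
  __HAL_RCC_GPIOA_CLK_DISABLE();
  __HAL_RCC_GPIOB_CLK_DISABLE();
  __HAL_RCC_GPIOC_CLK_DISABLE();
}

The same idea applies to peripherals: keep only the clocks and peripherals that the benchmark actually needs enabled, so that the measured current reflects the inference processing rather than the surrounding system.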


Information
  • STM32Cube.AI is a software tool that generates optimized C code for STM32 microcontrollers to run neural network inference. It is delivered under the Mix Ultimate Liberty+OSS+3rd-party V1 software license agreement (SLA0048).

1. Prerequisites

1.1. Hardware

1.2. Software

The following section describes how to start from STM32CubeMX to generate the project. Pre-defined STM32CubeMX project files (.ioc) for some boards will soon be provided on our GitHub; you can load one of them and jump directly to the Import your model section. To load an .ioc file, select File > Load Project:
