Creating a project in STM32Cube AI Studio


STM32Cube AI Studio is the next-generation tool for deploying AI models on STM32 microcontrollers.

This guide provides a comprehensive, step-by-step walkthrough for creating a new project, inspired by best practices from the legacy X-CUBE-AI expansion package.

1. Introduction

STM32Cube AI Studio enables:

  • Import and validation of AI models (supported formats: ONNX, TFLite, Keras)
  • Selection and configuration of STM32 hardware targets
  • Analysis, optimization, and visualization of models
  • Management of memory pools and hardware accelerators
  • Export of ready-to-deploy code and binaries

2. Project workflow

2.1. Launch and Prepare

  1. Open STM32Cube AI Studio from the host PC desktop or start menu.
  2. Ensure the latest version is installed on the host PC to benefit from new features and device support.

2.2. Create a new project

Start a project

  • Click the Project button on the dashboard or sidebar.
  • Enter a Project Name (for example, gesture_recognition) and define the Workspace Path.
  • Select a target (for example, STM32F746G-DISCO).
  • Select a toolchain (for example, STM32CubeIDE, labelled as "IDE").


Project Structure

Once the project is created, a directory structure is set up. The project includes:

  • Model files
  • Configuration files
  • Optimization results
  • Exported code and binaries

For a generated project, these files are directly included in the project code structure, in:

  • Middlewares/ST/AI: Includes ST Edge AI Core middleware and libraries
  • AI/App: Application code


Note

ST Edge AI Core outputs are stored in a .ai folder, which is also used as a temporary and backup folder.

2.3. Start a Run

The Run button generates a report summarizing model performance.
Before running, configure the following fields:


Target

Configure target-specific options, such as:

  • activating the Neural-ART NPU or other specific hardware
  • selecting optimization profiles
  • defining custom runtime arguments


Model

Click to import a model. Supported formats are:

  • ONNX (.onnx)
  • TensorFlow Lite (.tflite)
  • Keras (.h5, .keras, .hdf5)
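
As an illustration, the extension-to-framework mapping above can be sketched in a few lines of Python. This is a hypothetical helper, not part of the Studio API; only the extension list itself comes from the documentation:

```python
from pathlib import Path

# Map the supported file extensions (listed above) to their framework.
# The helper itself is illustrative, not a Studio function.
SUPPORTED_FORMATS = {
    ".onnx": "ONNX",
    ".tflite": "TensorFlow Lite",
    ".h5": "Keras",
    ".keras": "Keras",
    ".hdf5": "Keras",
}

def model_framework(model_path: str) -> str:
    """Return the framework for a model file, or raise if unsupported."""
    ext = Path(model_path).suffix.lower()
    if ext not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported model format: {ext!r}")
    return SUPPORTED_FORMATS[ext]

print(model_framework("gesture_model.onnx"))  # → ONNX
```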


Memory pool

Define where the model activations will be allocated in RAM, either automatically or manually (customize pool priority, size, and offset).

Below is an example on the NUCLEO-H743ZI using the default memory pools, where DTCMRAM is filled first, then RAM_D1, then RAM_D2, and so on. Add a new memory pool row manually if needed, with a custom offset and size, for example when using external RAM.

Configuring memory pools manually
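
The priority-ordered fill described above can be sketched as a greedy first-fit allocation. The pool names match the NUCLEO-H743ZI example, but the sizes and buffer placements below are illustrative placeholders, not the tool's actual values:

```python
# Illustrative pool list in priority order: highest-priority pool first.
# Sizes are placeholders, not the tool's defaults.
pools = [
    ("DTCMRAM", 128 * 1024),
    ("RAM_D1", 512 * 1024),
    ("RAM_D2", 288 * 1024),
]

def place_buffers(buffer_sizes, pools):
    """Greedily assign each activation buffer to the first pool with room."""
    remaining = {name: size for name, size in pools}
    placement = {}
    for i, size in enumerate(buffer_sizes):
        for name, _ in pools:
            if remaining[name] >= size:
                remaining[name] -= size
                placement[f"act_{i}"] = name
                break
        else:
            raise MemoryError(f"buffer {i} ({size} B) does not fit in any pool")
    return placement

# DTCMRAM fills first; once it cannot hold a buffer, RAM_D1 is used.
print(place_buffers([100 * 1024, 60 * 1024, 300 * 1024], pools))
# → {'act_0': 'DTCMRAM', 'act_1': 'RAM_D1', 'act_2': 'RAM_D1'}
```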

For more information about memory pools, see this section.


Validation

This section defines:

  • which data will be used for model validation
  • how many samples will be used
  • where validation will be performed: on host (PC) or on target (STM32 board)
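
Conceptually, these three choices form a small configuration record. The sketch below uses hypothetical field names to show how the choices combine; the Studio exposes them as UI fields, not as a Python API:

```python
from dataclasses import dataclass

@dataclass
class ValidationConfig:
    """Hypothetical record of the three validation choices listed above."""
    data: str          # path to a validation dataset, or "random" for generated inputs
    num_samples: int   # how many samples to evaluate
    where: str         # "host" (PC) or "target" (STM32 board)

    def __post_init__(self) -> None:
        if self.where not in ("host", "target"):
            raise ValueError("where must be 'host' or 'target'")
        if self.num_samples <= 0:
            raise ValueError("num_samples must be positive")

cfg = ValidationConfig(data="random", num_samples=10, where="host")
print(cfg.where)  # → host
```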


Note
  • Run on host relies on the ST Edge AI Core validate command, or the analyze command when validate is not supported.
  • Run on target relies on STM32CubeMX (to build a validation project), ST Edge AI Core (to generate the model C code via the generate command), and STM32CubeProgrammer (to flash the validation application on the target).

For more information, refer to the ST Edge AI Core CLI command workflow.

2.4. Generate code

The Generate code button either:

  • generates optimized C code (.c and .h) for the imported model, or
  • generates a hello_world project template for the selected IDE

Both options rely on the ST Edge AI Core generate command.


3. Best Practices & Troubleshooting

Model compatibility

Memory optimization

  • If memory errors occur, reduce the model size or optimize memory pools.
  • Select a device with more resources if needed.

4. Example: Creating a gesture recognition project

  1. Launch STM32Cube AI Studio.
  2. Create a new project named gesture_recognition.
  3. Import a model, such as gesture_model.onnx.
  4. Select the target board, such as STM32H747I-DISCO.
  5. Execute a run, choosing to run either on the target or on the host.
  6. Visualize the model architecture.
  7. Generate an STM32 project.
  8. Integrate it into the application.

5. Resources

6. Related ST Edge AI Core Documentation

7. Next Steps

  1. Validation and Performance Measurement
  2. Library integration and API