STM32Cube AI Studio documentation

STM32Cube AI Studio is a free-of-charge GUI tool that automatically converts pretrained artificial intelligence (AI) models, including neural network (NN) and classical machine learning models, into equivalent optimized C code to be embedded in the application.

STM32Cube AI Studio replaces X-CUBE-AI in the ST AI product offering to cover new STM32 devices, offer new AI-oriented features, and provide a modern user experience focused on adding AI to embedded applications.


The generated optimized library offers an easy-to-use, developer-friendly way to deploy AI on edge devices. When optimizing NN models for the Neural-ART accelerator NPU (neural processing unit), the tool generates the microcode that maps AI operations onto the NPU when possible and falls back to the CPU when not. This scheduling is done at the operator level to maximize AI hardware acceleration.

STM32Cube AI Studio is built on ST Edge AI Core, the STMicroelectronics command-line (CLI) technology that optimizes NN models for any STMicroelectronics product with artificial intelligence (AI) capabilities.

1. Positioning & Purpose

Important
  • STM32Cube AI Studio replaces the X-CUBE-AI plugin for STM32CubeMX, which is not recommended for new development (NRND) and is no longer supported from ST Edge AI Core version 3.0.0 onwards.
  • For new developments, use STM32Cube AI Studio instead of X-CUBE-AI.


STM32Cube AI Studio

  • Combines: model import, optimization, validation, and code generation in one tool.
  • Provides advanced memory management: external RAM/Flash, memory pools, compression, and visualization.
  • Enhances validation workflows with desktop and on-target validation, performance metrics, and error diagnostics.
  • Provides a modern and intuitive user interface (UI), project templates, and a CLI for CI/CD.


Key Features

  • Generates an STM32-optimized library from pretrained NN and classical machine learning (ML) models
  • Advanced memory management: external RAM/Flash, memory pools, and weight compression
  • Integrated validation: checks the optimized model's accuracy and performance against the reference model, both on host and on target
  • Support for the STMicroelectronics Neural-ART Accelerator neural processing unit (NPU) for hardware-accelerated AI/ML models
  • Support for 32-bit float and quantized neural network formats (TensorFlow™ Lite and ONNX Tensor-oriented QDQ)
  • Native support for frameworks such as Keras, TensorFlow™ Lite, LiteRT, and any framework exporting to ONNX such as PyTorch™, MATLAB®, and more
  • Support for various built-in scikit-learn models such as isolation forest, support vector machine (SVM), and K-means via ONNX
  • Easy portability across different STM32 microcontroller series through STM32Cube ecosystem compatibility
  • Free-of-charge, user-friendly license terms


Place in the ST Edge AI Ecosystem


Inputs/outputs

  • Inputs:
    • AI model (mandatory),
    • STM32CubeMX project (.ioc) (optional),
    • datasets (optional)
  • Outputs:
    • optimized C code and headers (.c/.h) for the AI model,
    • STM32CubeMX project (.ioc) for a specific STM32 target, and associated C files


2. Installation and Requirements

Installation

  1. Download STM32Cube AI Studio.
  2. Install required tools: ST Edge AI Core, STM32CubeMX, STM32CubeProgrammer, STM32CubeIDE (optional: IAR, Keil).
  3. Follow the installation guide for more details.


OS Support

  • Windows

3. Typical Workflow Example

  1. Select your STM32 target
  2. Import your model (Keras/TFLite/ONNX)
  3. Configure memories (optional)
  4. Run the STM32Cube AI Studio optimization
  5. Validate the results on host, or validate them directly on the STM32 target
  6. Generate the model C files, or get an STM32 Hello World project including the AI application


More information: see Validation and Performance Measurement.


4. Migration & Compatibility

Can I keep my current workflow unchanged?

  • Yes, migration is optional. X-CUBE-AI and STM32CubeIDE/MX can still be used as before.
  • However, to use a newer version of ST Edge AI Core, using STM32Cube AI Studio is recommended.


How do I migrate an existing project?

  1. Open STM32Cube AI Studio.
  2. Create a new project and import your existing model (Keras, TFLite, ONNX).
  3. Configure target MCU and memory settings.
  4. Generate code and integrate it within your project.


More information: see FAQ and Troubleshooting.


Compatibility

  • Supported MCUs: all STM32 families supported by STM32CubeMX, except STM32F1/F2, STM32L1, STM32U0, and STM32MP1/MP2. For more details, see the STM32CubeMX MCU selector, in the "all supported MCUs" section.
  • Supported IDEs: STM32CubeIDE, IAR, Keil, GNU Arm Embedded Toolchain.
  • Supported model formats: Keras (.h5, .keras), TFLite (.tflite), ONNX (.onnx).

5. Next steps

Related STM32Cube AI Documentation

  1. STM32Cube AI Studio installation
  2. Creating a project in STM32Cube AI Studio
  3. Validation and Performance Measurement
  4. Library integration and API
  5. FAQ and Troubleshooting


Useful resources