Library integration and API

STM32Cube AI Studio generates a set of C source and header files, plus a static runtime library, for integration into your application project.

1. File structure

Generated artifacts are placed in your exported project and typically include:

  • App/AI: Network sources and headers, application code, glue code
  • Middlewares/: AI runtime library integration

Middleware and application content may vary depending on the toolchain, the project variant (application or validation), and the project settings.

2. Integration steps

  1. Include headers: Include stai.h and the generated network header(s); the sketch after this list shows these steps end to end.
  2. Runtime initialization: Call stai_runtime_init().
  3. Network context: Declare the context with STAI_NETWORK_CONTEXT_DECLARE(...) and initialize it with stai_network_init().
  4. Activation buffers: Allocate buffers and bind them with stai_network_set_activations().
  5. Inputs/outputs: Bind buffers using stai_network_set_inputs() and stai_network_set_outputs().
  6. Inspect/run: Optionally call stai_network_get_info(), then run with stai_network_run().
  7. Deinit: Call stai_network_deinit() and stai_runtime_deinit().
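
The following is a minimal sketch of steps 1 to 7 for a single network with one activation buffer, one input, and one output. The function names and the STAI_NETWORK_CONTEXT_DECLARE macro come from the list above; the header name network.h, the STAI_SUCCESS return code, the STAI_MODE_SYNC run mode, and the size/count macros (STAI_NETWORK_CONTEXT_SIZE, STAI_NETWORK_ACTIVATIONS_NUM, STAI_NETWORK_IN_NUM, STAI_NETWORK_OUT_NUM, and the *_SIZE_BYTES values) are assumptions modeled on a typical generated header, so check your generated files for the exact identifiers.

    #include <stdint.h>
    #include "stai.h"
    #include "network.h"   /* assumed name of the generated network header */

    /* Step 3: static context storage (STAI_NETWORK_CONTEXT_SIZE is assumed
       to be defined by the generated header) */
    STAI_NETWORK_CONTEXT_DECLARE(network_ctx, STAI_NETWORK_CONTEXT_SIZE);

    /* Steps 4-5: buffers sized with assumed generated macros */
    static uint8_t activations[STAI_NETWORK_ACTIVATION_1_SIZE_BYTES];
    static uint8_t in_data[STAI_NETWORK_IN_1_SIZE_BYTES];
    static uint8_t out_data[STAI_NETWORK_OUT_1_SIZE_BYTES];

    int run_inference_once(void)
    {
      stai_network *net = (stai_network *)network_ctx;

      /* Step 2: runtime initialization */
      if (stai_runtime_init() != STAI_SUCCESS) return -1;
      /* Step 3: network context initialization */
      if (stai_network_init(net) != STAI_SUCCESS) return -1;

      /* Step 4: bind the activation buffer(s) */
      stai_ptr acts[STAI_NETWORK_ACTIVATIONS_NUM] = { (stai_ptr)activations };
      stai_network_set_activations(net, acts, STAI_NETWORK_ACTIVATIONS_NUM);

      /* Step 5: bind input/output buffers (fill in_data before running) */
      stai_ptr ins[STAI_NETWORK_IN_NUM] = { (stai_ptr)in_data };
      stai_ptr outs[STAI_NETWORK_OUT_NUM] = { (stai_ptr)out_data };
      stai_network_set_inputs(net, ins, STAI_NETWORK_IN_NUM);
      stai_network_set_outputs(net, outs, STAI_NETWORK_OUT_NUM);

      /* Step 6: one synchronous inference; results land in out_data */
      if (stai_network_run(net, STAI_MODE_SYNC) != STAI_SUCCESS) return -1;

      /* Step 7: tear down */
      stai_network_deinit(net);
      stai_runtime_deinit();
      return 0;
    }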

3. Multi-network support

  • Use the multi-network support of the ST‑AI (stai) embedded client API for projects that embed several models (see the sketch below).
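
A minimal sketch, assuming two models generated with the hypothetical names detector and classifier, so that each generated header exposes name-prefixed functions and macros (for example stai_detector_init() rather than stai_network_init()); the exact prefixing scheme depends on your generator settings, so check the generated headers.

    #include "stai.h"
    #include "detector.h"     /* hypothetical generated headers */
    #include "classifier.h"

    /* One context per network; the macro and size names are assumptions */
    STAI_DETECTOR_CONTEXT_DECLARE(detector_ctx, STAI_DETECTOR_CONTEXT_SIZE);
    STAI_CLASSIFIER_CONTEXT_DECLARE(classifier_ctx, STAI_CLASSIFIER_CONTEXT_SIZE);

    void networks_init(void)
    {
      stai_runtime_init();  /* the runtime is shared: initialize it once */
      stai_detector_init((stai_network *)detector_ctx);
      stai_classifier_init((stai_network *)classifier_ctx);
      /* Bind activations, inputs, and outputs per network as in the
         single-network sketch above, then run each network independently. */
    }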

4. Code and data placement

  • .text/.rodata: Place in internal flash.
  • .data/.bss: Place in SRAM.
  • External memory: Use for large models, either automatically or as configured in the project settings (see the sketch below).
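
Where the defaults are not suitable, buffers can be moved explicitly. A minimal sketch, assuming a GCC-based toolchain and a linker script that defines a .ext_sram output section mapped to external RAM; both the attribute syntax and the section name must match your own toolchain and linker script.

    #include <stdint.h>
    #include "network.h"  /* assumed generated header defining the size macro */

    /* Place the (typically large) activation buffer in external RAM;
       ".ext_sram" is a hypothetical section name from the linker script */
    __attribute__((section(".ext_sram")))
    static uint8_t activations[STAI_NETWORK_ACTIVATION_1_SIZE_BYTES];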

5. Debugging

  • Black box: No internal state introspection.
  • Error API: Use the ST‑AI (stai) error reporting to diagnose integration issues (see the sketch below).
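
A minimal sketch of return-code checking; stai_network_get_error() is an assumption based on the stai embedded client API (expected to return the last error recorded on a network), so verify the exact error-query call in the ST Edge AI Core reference.

    #include "stai.h"

    /* net is a handle initialized as in the integration sketch above */
    static int run_checked(stai_network *net)
    {
      stai_return_code rc = stai_network_run(net, STAI_MODE_SYNC);
      if (rc != STAI_SUCCESS) {
        /* Query the last error on this network (assumed API call) */
        stai_return_code err = stai_network_get_error(net);
        return (int)err;  /* map to an application fault code as needed */
      }
      return 0;
    }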

6. Related ST Edge AI Core documentation

The embedded inference (stai) API is documented in the ST Edge AI Core documentation; refer to it for the complete function reference.

7. Next steps