1. Common Issues
1.1. How do I update only the exported neural network (NN) library?
- Reopen the project in STM32Cube AI Studio, update the model, and regenerate the project.
1.2. Can I use the generated library in a non-STM32CubeMX project?
- Yes. Copy the AI sub-folder and required files into your project and update build settings.
1.3. Validation on target fails: "Invalid firmware"
- Ensure the correct firmware is programmed and the board is connected. Restart the board if needed.
1.4. Validation on target fails: "Read I/O timeout"
- An issue occurred during inference; possible causes include a bad clock configuration, a memory overlap, or an invalid memory access. Verify your project configuration.
1.5. "network_name is not a valid network" error?
- The expected C model is not available on the connected board.
- Ensure the correct model is selected and programmed.
- Check the UI log console (Output window) for more details.
1.6. "The embedded STM32 model does not match the C model" error?
- The signature of the generated C model does not match the expected model.
- Check for mismatches in RAM/ROM size, MACC, number of nodes, or tool versions.
- Rebuild and reprogram the firmware to ensure consistency.
- Execute with the "--no-check" option to ignore the error.
1.7. Validation metrics seem off or accuracy is low?
- Ensure the validation dataset matches the preprocessing pipeline of the original model.
- For classifiers, use one-hot encoding for output data.
- If using quantization or compression, validate the impact on accuracy using the provided metrics (ACC, RMSE, MAE, L2r).
- If using external memories, verify they are enabled and configured correctly.
1.8. Why is my model too large for the selected MCU?
- Apply quantization to reduce model size.
- Consider using a Discovery Kit with external RAM/flash.
- Offload weights/activations to external RAM/flash if available.
- Consider simplifying your model architecture.
1.9. How do I use external memory for weights or activations?
- In the "Advanced Settings", select "Use external RAM" or "Use external Flash".
- For external flash, use "split-weights" mode to map weights across internal and external flash.
- Use the "Propose placement" button to optimize memory allocation between internal and external memories.
1.10. How do I update only the neural network library in my project?
- Open an existing project in STM32Cube AI Studio.
- Upload the new or updated model.
- Regenerate the project.
- Only the AI sub-folder and related files need to be updated in your source tree.
1.11. How do I measure inference time and CPU cycles?
- The validation application reports the inference duration, CPU cycle count, and cycles/MACC.
1.12. How do I handle multi-network projects?
- Use the multi-network support of the ST‑AI (stai) embedded client API.
- Ensure only one inference runs at a time if sharing activation buffers.
1.13. How do I place code and data in specific memory regions?
- Use linker scripts to place .text and .rodata in internal flash, and .data and .bss in SRAM.
- For external memory, configure it in Advanced Settings and ensure the board support package (BSP) initializes the memory.
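A minimal GNU ld fragment illustrating such placement is sketched below; the memory region names, sizes, and the external-flash weights section are assumptions and must match your device's actual linker script.

```
MEMORY
{
  FLASH    (rx)  : ORIGIN = 0x08000000, LENGTH = 1024K /* internal flash */
  RAM      (rwx) : ORIGIN = 0x20000000, LENGTH = 192K  /* internal SRAM  */
  EXTFLASH (rx)  : ORIGIN = 0x90000000, LENGTH = 8192K /* memory-mapped
                                                          external flash
                                                          (assumed)      */
}

SECTIONS
{
  .text   : { *(.text*) }   > FLASH
  .rodata : { *(.rodata*) } > FLASH
  .data   : { *(.data*) }   > RAM AT> FLASH
  .bss    : { *(.bss*) }    > RAM

  /* Hypothetical section for NN weights placed in external flash. */
  .nn_weights : { *(.nn_weights*) } > EXTFLASH
}
```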
1.14. How do I debug the generated library?
- The library is a black box; use the ST‑AI (stai) error reporting mechanisms.
- For integration issues, check the log files and ensure all dependencies are linked.
- Refer to the ST Edge AI Core documentation.
2. Related ST Edge AI Core Documentation
3. Next Steps