How to add AI model to OpenMV ecosystem
Contents
- 1 STM32Cube.AI enabled OpenMV firmware
- 2 Documentation of microPython STM32Cube.AI wrapper
1 STM32Cube.AI enabled OpenMV firmware
This tutorial walks you through the process of integrating your own neural network into the OpenMV environment.
The OpenMV open-source project provides the source code for compiling the OpenMV H7 firmware with STM32Cube.AI enabled.
The process for using STM32Cube.AI with OpenMV is shown in the following figure.
[Figure: Process to use STM32Cube.AI with OpenMV]
- Train your neural network using your favorite deep learning framework.
- Convert your trained network to optimized C code using the STM32Cube.AI tool.
- Download the OpenMV firmware source code.
- Add the generated files to the firmware source code.
- Compile with the GCC toolchain.
- Flash the board using OpenMV IDE.
- Program the board with microPython and perform inference.
Info: License information: X-CUBE-AI is delivered under the Mix Ultimate Liberty+OSS+3rd-party V1 software license agreement SLA0048.
1.1 Prerequisites
This article assumes a Linux environment is used (tested with Ubuntu 18.04).
Info: For Windows users, it is strongly recommended to install the Windows Subsystem for Linux (WSL) Ubuntu 18.04, which provides an Ubuntu Linux environment. Please note: this tutorial has only been tested with WSL1.
Once the installation is done, the WSL Ubuntu file system can be accessed from the Windows File Explorer at the following location: C:\Users\<username>\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs
All commands shown with the following prompt should be executed in a Linux console:
PC $> <mycommand>
1.2 Requirements
1.2.1 Check that your environment is up-to-date
PC $> sudo apt update
PC $> sudo apt upgrade
PC $> sudo apt install git zip make build-essential tree
1.2.2 Create your workspace directory
PC $> mkdir $HOME/openmv_workspace
Info: This is just a suggested directory organization. All following command lines will refer to this directory.
1.2.3 Install the stm32ai command line to generate the optimized code
- Download the latest version of X-CUBE-AI for Linux from the ST website into your openmv_workspace directory.
- Extract the archive:
PC $> cd $HOME/openmv_workspace
PC $> chmod 644 en.en.x-cube-ai-v6-0-0-linux.zip
PC $> unzip en.en.x-cube-ai-v6-0-0-linux.zip
PC $> mv STMicroelectronics.X-CUBE-AI.6.0.0.pack STMicroelectronics.X-CUBE-AI.6.0.0.zip
PC $> unzip STMicroelectronics.X-CUBE-AI.6.0.0.zip -d X-CUBE-AI.6.0.0
PC $> unzip stm32ai-linux-6.0.0.zip -d X-CUBE-AI.6.0.0/Utilities
- Add the stm32ai command line to your PATH.
PC $> export PATH=$HOME/openmv_workspace/X-CUBE-AI.6.0.0/Utilities/linux:$PATH
- You can verify that the stm32ai command line is properly installed:
PC $> stm32ai --version
stm32ai - Neural Network Tools for STM32AI v1.4.1 (STM.ai v6.0.0-RC6)
1.2.4 Install the GNU Arm toolchain version 7-2018-q2 to compile the firmware
PC $> sudo apt remove gcc-arm-none-eabi
PC $> sudo apt autoremove
PC $> sudo -E add-apt-repository ppa:team-gcc-arm-embedded/ppa
PC $> sudo apt update
PC $> sudo -E apt install gcc-arm-embedded
Alternatively, you can download the toolchain directly from Arm:
PC $> wget https://armkeil.blob.core.windows.net/developer/Files/downloads/gnu-rm/7-2018q2/gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2
PC $> tar xf gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2
and add the path to gcc-arm-none-eabi-7-2018-q2-update/bin/ to your PATH environment variable.
You can verify that the GNU Arm toolchain is properly installed:
PC $> arm-none-eabi-gcc --version
arm-none-eabi-gcc (GNU Tools for Arm Embedded Processors 7-2018-q2-update) 7.3.1 20180622 (release) [ARM/embedded-7-branch revision 261907]
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Warning: If you do not use this specific version of the GCC compiler, the compilation is likely to fail.
1.2.5 Check Python version
Make sure that typing python --version in your shell reports Python 3.x.x. If not, create the symbolic link:
PC $> sudo ln -s /usr/bin/python3.8 /usr/bin/python
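You can also confirm the version from within Python itself; a minimal sketch:

```python
import sys

# The build scripts expect Python 3 when invoked as `python`
print(sys.version_info.major)  # prints 3 on a correctly configured system
assert sys.version_info.major == 3, "the `python` command must point to Python 3"
```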
1.2.6 Install the OpenMV IDE
Download OpenMV IDE from the OpenMV website.
OpenMV IDE is used to develop microPython scripts and to flash the board.
1.3 Step 1 - Download and prepare the OpenMV project
In this section we clone the OpenMV project and check out a known working version.
The recursive clone also initializes the git submodules, which pulls in the OpenMV dependencies, such as microPython.
1.3.1 Clone the OpenMV project
PC $> cd $HOME/openmv_workspace
PC $> git clone --recursive https://github.com/openmv/openmv.git
Info: Check that there are no spaces in the path of the openmv directory, otherwise the compilation will fail. You can check by running the pwd command inside the openmv directory; if there are spaces, move the directory to a path without spaces.
1.3.2 Checkout a known working version
PC $> cd openmv
PC $> git checkout v3.9.4
1.4 Step 2 - Add the STM32Cube.AI library to OpenMV
Now that the OpenMV firmware is downloaded, we need to copy the STM32Cube.AI runtime library and header files into the OpenMV project.
PC $> cd $HOME/openmv_workspace/openmv/src/stm32cubeai
PC $> mkdir -p AI/{Inc,Lib}
PC $> mkdir data
Then copy the files from STM32Cube.AI to the AI directory:
PC $> cp $HOME/openmv_workspace/X-CUBE-AI.6.0.0/Middlewares/ST/AI/Inc/* AI/Inc/
PC $> cp $HOME/openmv_workspace/X-CUBE-AI.6.0.0/Middlewares/ST/AI/Lib/GCC/STM32H7/NetworkRuntime*_CM7_GCC.a AI/Lib/NetworkRuntime_CM7_GCC.a
After this operation, the AI directory should look like this:
AI/
├── Inc
│   ├── ai_common_config.h
│   ├── ai_datatypes_defines.h
│   ├── ai_datatypes_format.h
│   ├── ai_datatypes_internal.h
│   ├── ai_log.h
│   ├── ai_math_helpers.h
│   ├── ai_network_inspector.h
│   ├── ai_platform.h
│   ├── ...
├── Lib
│   └── NetworkRuntime_CM7_GCC.a
└── LICENSE
1.5 Step 3 - Generate the code for a NN model
In this section, we train a convolutional neural network to recognize hand-written digits.
Then we generate STM32-optimized C code for this network using STM32Cube.AI.
These files will be added to the OpenMV firmware source code.
1.5.1 Train a convolutional neural network
Info: Alternatively, you can skip this step and use the pre-trained mnist_cnn.h5 file provided (see next chapter).
The convolutional neural network for digit classification (MNIST) from Keras will be used as an example. If you want to train the network, you need to have Keras installed.
To train the network and save the model to the disk, run the following commands:
PC $> cd $HOME/openmv_workspace/openmv/src/stm32cubeai/example
PC $> python3 mnist_cnn.py
1.5.2 STM32 optimized code generation
To generate the STM32 optimized code, use the stm32ai command line tool as follows:
PC $> cd $HOME/openmv_workspace/openmv/src/stm32cubeai
PC $> stm32ai generate -m example/mnist_cnn.h5 -o data/
The following files are generated in $HOME/openmv_workspace/openmv/src/stm32cubeai/data:
- network.h
- network.c
- network_data.h
- network_data.c
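If you script the generation step, a quick sanity check (a helper of our own, not part of the stm32ai tool) can confirm that all four files were produced:

```python
import os

def check_generated(data_dir):
    """Return the list of expected stm32ai output files missing from data_dir."""
    expected = ["network.h", "network.c", "network_data.h", "network_data.c"]
    return [f for f in expected if not os.path.isfile(os.path.join(data_dir, f))]

# "data" is the output directory passed to `stm32ai generate -o data/` above
missing = check_generated("data")
if missing:
    print("Missing generated files:", missing)
else:
    print("All generated files present.")
```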
1.5.3 Preprocessing
If you need to do some special preprocessing before running the inference, you must modify the function ai_transform_input located in src/stm32cubeai/nn_st.c. By default, the code does the following:
- Simple resizing (subsampling)
- Conversion from unsigned char to float
- Scaling pixels from [0,255] to [0, 1.0]
The provided example might just work out of the box for your application, but you may want to take a look at this function.
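As a rough illustration of what the default behavior amounts to, here is a NumPy sketch of the three steps listed above (this is our own illustration, not the actual C code from nn_st.c):

```python
import numpy as np

def transform_input(img, out_h, out_w):
    """Sketch of the default preprocessing: subsample, cast to float, scale to [0, 1].

    img: 2-D uint8 array (grayscale frame), values in [0, 255].
    """
    in_h, in_w = img.shape
    # Simple resizing by subsampling (pick the nearest source pixel along each axis)
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    resized = img[rows[:, None], cols]
    # Convert from unsigned char to float and scale [0, 255] -> [0.0, 1.0]
    return resized.astype(np.float32) / 255.0

demo = transform_input(np.zeros((60, 80), dtype=np.uint8), 28, 28)
print(demo.shape)  # (28, 28)
```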
1.6 Step 4 - Compile
1.6.1 Build MicroPython cross-compiler
The MicroPython cross-compiler is used to pre-compile Python scripts to .mpy files, which can then be included (frozen) into the firmware/executable for a port. To build mpy-cross:
PC $> cd $HOME/openmv_workspace/openmv/src/micropython/mpy-cross
PC $> make
1.6.2 Build the firmware
- Lower the heap section in RAM to allow more space for the neural network activation buffers:
  - For OpenMV H7: edit src/omv/boards/OPENMV4/omv_boardconfig.h, find OMV_HEAP_SIZE and set it to 230K.
  - For OpenMV H7 Plus: edit src/omv/boards/OPENMV4P/omv_boardconfig.h, find OMV_HEAP_SIZE and set it to 230K.
- Execute the following command to compile:
  - For OpenMV H7 (the default target board, no need to define TARGET):
PC $> cd $HOME/openmv_workspace/openmv/src/
PC $> make clean
PC $> make CUBEAI=1
  - For OpenMV H7 Plus, add TARGET=OPENMV4P to make:
PC $> make TARGET=OPENMV4P CUBEAI=1
Info: This may take a while. You can speed up the process by adding -j4 or more (depending on your CPU) to the make command.
Info: If the compilation fails with a message saying that the .heap section overflows RAM1, edit src/omv/boards/OPENMV4/omv_boardconfig.h or src/omv/boards/OPENMV4P/omv_boardconfig.h, further lower OMV_HEAP_SIZE by a few kilobytes, and build again. Do not forget to run make clean between builds.
1.7 Step 5 - Flash the firmware
- Plug the OpenMV camera into the computer using a micro-USB to USB cable.
- Open OpenMV IDE.
- From the toolbar, select Tools > Run Bootloader.
- Select the firmware file (located in openmv/src/build/bin/firmware.bin) and follow the instructions.
Info: For WSL users, the firmware is located in Windows at C:\Users\<username>\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\home\<username>\openmv_workspace\openmv\src\build\bin\firmware.bin
- Once this is done, you can click the Connect button located at the bottom left of the IDE window
1.8 Step 6 - Program with microPython
- Open OpenMV IDE, and click the Connect button located at the bottom left of the IDE window
- Create a new microPython script File > New File
- You can start from this example script running the MNIST neural network we have embedded in the firmware
'''
Copyright (c) 2019 STMicroelectronics
This work is licensed under the MIT license
'''
# STM32Cube.AI on OpenMV MNIST Example
import sensor, image, time, nn_st

sensor.reset()                          # Reset and initialize the sensor.
sensor.set_contrast(3)
sensor.set_brightness(0)
sensor.set_auto_gain(True)
sensor.set_auto_exposure(True)
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to grayscale
sensor.set_framesize(sensor.QQQVGA)     # Set frame size to 80x60
sensor.skip_frames(time=2000)           # Wait for settings to take effect.
clock = time.clock()                    # Create a clock object to track the FPS.

# [STM32Cube.AI] Initialize the network
net = nn_st.loadnnst('network')
nn_input_sz = 28  # The NN input is 28x28

while True:
    clock.tick()             # Update the FPS clock.
    img = sensor.snapshot()  # Take a picture and return the image.
    # Crop in the middle (avoids vignetting)
    img.crop((img.width()//2 - nn_input_sz//2,
              img.height()//2 - nn_input_sz//2,
              nn_input_sz,
              nn_input_sz))
    # Binarize the image
    img.midpoint(2, bias=0.5, threshold=True, offset=5, invert=True)
    # [STM32Cube.AI] Run the inference
    out = net.predict(img)
    print('Network argmax output: {}'.format(out.index(max(out))))
    img.draw_string(0, 0, str(out.index(max(out))))
    print('FPS {}'.format(clock.fps()))  # Note: the OpenMV Cam runs about half as fast when connected to the IDE
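The centered crop in the script above is plain arithmetic; as a standalone sketch (the helper name is ours, not part of the OpenMV API):

```python
def center_crop_box(frame_w, frame_h, size):
    """Return the (x, y, w, h) rectangle of a size x size crop centered in the frame."""
    x = frame_w // 2 - size // 2
    y = frame_h // 2 - size // 2
    return (x, y, size, size)

# For the 80x60 QQQVGA frame and the 28x28 network input used above:
print(center_crop_box(80, 60, 28))  # (26, 16, 28, 28)
```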
Take a white sheet of paper, draw numbers on it with a black pen, and point the camera towards the paper. The script should yield output like the following:
[Figure: Output from camera]
2 Documentation of microPython STM32Cube.AI wrapper
This section documents the two microPython functions added to the OpenMV microPython framework to initialize and run STM32Cube.AI-optimized neural network inference.
2.1 loadnnst
nn_st.loadnnst(network_name)
Initializes the network named network_name.
Arguments:
- network_name : String, usually 'network'
Returns:
- A network object, used to make predictions
Example:
import nn_st
net = nn_st.loadnnst('network')
2.2 predict
out = net.predict(img)
Runs a network prediction with img as input.
Arguments:
- img : Image object from the image module, usually taken from sensor.snapshot()
Returns:
- Network predictions as a Python list
Example:
'''
Copyright (c) 2019 STMicroelectronics
This work is licensed under the MIT license
'''
import sensor, image, nn_st
# Init the sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
# Init the network
net = nn_st.loadnnst('network')
# Capture a frame
img = sensor.snapshot()
# Do the prediction
output = net.predict(img)
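Because predict() returns a plain Python list, post-processing is ordinary list handling. For instance, picking the winning class from a made-up 10-class output (the scores below are illustrative, not real inference results):

```python
# Hypothetical output of net.predict(img) for a 10-class MNIST model
output = [0.01, 0.02, 0.05, 0.01, 0.80, 0.03, 0.02, 0.03, 0.02, 0.01]

predicted_class = output.index(max(output))  # argmax over the list
confidence = max(output)
print('Predicted class: {} (score {:.2f})'.format(predicted_class, confidence))
```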