NanoEdge AI Studio

NanoEdgeAI logo rectangle.png

1. What is NanoEdge AI Library?

NanoEdge™ AI Library is an Artificial Intelligence (AI) static library originally developed by Cartesiam, for embedded C software running on Arm® Cortex® microcontrollers (MCUs). It comes in the form of a precompiled .a file that provides building blocks to implement smart features into any C code.

When embedded on microcontrollers, the NanoEdge AI Library gives them the ability to "understand" sensor patterns automatically, by themselves, without the need for the user to have additional skills in Mathematics, Machine Learning, or data science.

Each NanoEdge AI static library contains an AI model designed to bring Machine Learning capabilities to any C code, in the form of easily implementable functions, e.g., for learning signal patterns, detecting anomalies, classifying signals, or extrapolating data.

There are 4 different types of NanoEdge AI Libraries, corresponding to the 4 types of projects that can be created in NanoEdge AI Studio:

  • Anomaly Detection (AD) libraries are used to detect abnormal behaviors on a machine, after an initial in-situ training phase, using a dynamic model that learns patterns incrementally.
  • n-class Classification (nCC) libraries are used to distinguish and recognize different types of behaviors, anomalous or not, and classify them into pre-established categories, using a static model.
  • 1-class Classification (1CC) libraries are used to detect abnormal behaviors on a machine, using a static model, without providing any context about the possible anomalies to be expected.
  • Extrapolation (EX) libraries are used to estimate an unknown target value from other known parameters, using a static (regression) model.
Info white.png Information
For more information about their uses and specificities, see their respective documentations:

XXXXX CHECK LINKS - URLS MAY HAVE CHANGED XXXXX

Here are the most important features of the NanoEdge AI Libraries:

  • ultra optimized to run on MCUs (any Arm® Cortex®-M)
  • ultra memory efficient (1-20 Kbytes of RAM/Flash memory)
  • ultra fast (1-20 ms inference on Cortex®-M4 at 80 MHz)
  • inherently independent from the cloud
  • run directly within the microcontroller
  • can be integrated into existing code / hardware
  • consume very little energy
  • preserve the stack (static allocation only)
  • transmit or save no data
  • require no Machine Learning expertise to be deployed

All NanoEdge AI Libraries are created by using NanoEdge AI Studio.

2. Purpose of NanoEdge AI Studio

2.1. What the Studio can do

NanoEdge AI Libraries contain a range of Machine Learning models, and each of these models can be optimized by tuning a wide range of hyperparameters. This results in a very large number of potential combinations, each one tailored for a specific use case (one static library per combination). Therefore, a tool is needed to find the best possible library for each project.

NanoEdge AI Studio (NanoEdgeAIStudio), also referred to as the Studio,

  • is a search engine for AI Libraries
  • is built for embedded developers
  • abstracts away all aspects of Machine Learning and data science
  • enables the quick and easy development of Machine Learning capabilities into any C code
  • uses minimal amounts of input data compared to traditional Machine Learning approaches

Its purpose is to find the best possible NanoEdge AI static library for a given hardware application, where the only requirements in terms of user knowledge are embedded development (software/hardware), C coding, and basic signal sampling notions.

NanoEdge AI Studio takes as input project parameters (such as MCU type, RAM and sensor type) and some signal examples, and outputs the most relevant NanoEdge AI Library. This library can either be untrained (it will only start learning after it is embedded in the microcontroller) or pre-trained in the Studio. In all cases, the NanoEdge AI Library will be able to infer (detect, classify, extrapolate...) directly from the target microcontroller.

The resulting NanoEdge AI Library is a combination of 3 elementary software bricks:

  1. signal pre-processing algorithm (e.g. FFT, PCA, normalization, reframing...),
  2. Machine Learning model (e.g. kNN, SVM, neural networks, Cartesiam-proprietary ML algorithms...),
  3. optimal hyperparametrization for the ML model.

Each NanoEdge AI static library is the result of the benchmark of virtually all possible AI libraries (combinations of signal treatment, ML model and tuned hyperparameters), tested against the minimal data given by the user. It therefore contains the best possible model, for a given use case, given the signal examples provided as input.

Using NanoEdge AI Studio is an iterative process by design: users can import signals, run a benchmark, find a library, and start testing it in under an hour.
Then, depending on the results obtained, changes are made to the input data (quality and/or quantity of signals), and the process restarted, to obtain a better iteration of the NanoEdge AI Library.

2.2. What the Studio cannot do

In a nutshell, NanoEdge AI Studio takes user data as input (in the form of sensor signal examples), and produces a static library (.a) file as output. This is a straightforward and relatively quick iterative procedure.

However, the Studio does not provide any input data. The user needs to have qualified data (of sufficient quality and quantity) in order to obtain satisfactory results from the Studio. This data can either be raw sensor signals or pre-treated signals, and needs to be formatted properly (see below). For example, for anomaly detection on a machine, the user needs to collect signal examples representing "normal" behaviors on this machine, as well as a few examples of possible "anomalies". This data collection process is crucial, and can be tedious, as some expertise is needed to design the correct signal acquisition and sampling methodology, which can vary dramatically from one project to another.

Additionally, NanoEdge AI Studio does not provide any ready-to-use C code to implement in your final project. This code, which includes some of the NanoEdge AI Library smart functions (e.g., init, learn, detect, classification, extrapolation), needs to be written and compiled by the user. The user is free to call these functions as needed, and implement all the smart features imaginable.

In summary, the static (.a) library file, outputted by the Studio from user-generated input data, must be linked to some C code written by the user, and compiled/flashed by the user on the target microcontroller.
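To give a concrete idea of the integration work left to the user, here is a minimal sketch of user C code driving an anomaly detection library. It assumes the naming pattern of the header generated by the Studio (NanoEdgeAI.h); the exact function names, constants, and signatures should be checked against the files shipped with your library, and fill_buffer_from_sensor() is a hypothetical user-provided acquisition routine.

#include <stdint.h>
#include "NanoEdgeAI.h" /* header shipped with the downloaded .a library (assumed name) */

/* One signal example: buffer size x number of axes (constants assumed from the generated header) */
static float input_buffer[DATA_INPUT_USER * AXIS_NUMBER];

extern void fill_buffer_from_sensor(float buf[]); /* hypothetical user acquisition code */

void monitor(void)
{
    uint8_t similarity = 0;

    neai_anomalydetection_init(); /* initialize the model (no knowledge yet) */

    for (int i = 0; i < 100; i++) { /* in-situ learning phase (100 is an arbitrary example) */
        fill_buffer_from_sensor(input_buffer);
        neai_anomalydetection_learn(input_buffer);
    }

    for (;;) { /* inference phase */
        fill_buffer_from_sensor(input_buffer);
        neai_anomalydetection_detect(input_buffer, &similarity);
        if (similarity < 90) {
            /* signal is too dissimilar from the learned knowledge: treat as an anomaly */
        }
    }
}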

3. Getting started

NanoEdge AI Studio can be used to generate ML libraries for different project types, using data coming from one or more sensors, possibly of different types. It is therefore crucial to understand which project type to create for a given use case, and which sensor type will be most relevant to use.

3.1. Defining important concepts

Some vocabulary used in this documentation may be interpreted in different ways depending on the context. Here are some clarifications:

  • "axis/axes" and "variable(s)": here these two terms will be used interchangeably.
  • "sample": this refers to the instantaneous output of a sensor, and contains as many numerical values as the sensor has axes (or variables). For example, a 3-axis accelerometer outputs 3 numerical values per sample, while a current sensor (1-axis) outputs only 1 numerical value per sample.
  • "signal", "signal example", or "learning example": used interchangeably, these refer to a collection of several samples, which has an associated temporal length (which depends on the sampling frequency used). The term "line" will also be used to refer to a signal example, because in the input files for the Studio, each line represents an independent signal example (exception: Multi-sensor, see below).
  • "buffer size", or "buffer length"; this is the number of samples per signal. It should be a power of 2. For example, a 3-axis signal with buffer length 256 will be represented by 768 (256*3) numerical values.

3.2. Types of projects in NanoEdge AI Studio

The 4 different types of projects that can be created using the Studio, along with their characteristics, outputs, and possible use cases, are outlined below:

Anomaly Detection (AD):

  • Use case: detecting anomalies in data using a dynamic model.
  • User input: signal examples representing both nominal states and abnormal states (used for library selection only).
  • Studio output: untrained anomaly detection library that will learn incrementally, directly on the target microcontroller.
Warning DB.png Important

Anomaly detection is the only project type that will output a NanoEdge AI Library capable of learning signal examples in situ, after it is embedded into the microcontroller. All other library types will only infer in the microcontroller.

This feature gives anomaly detection libraries great adaptability, since the same library, deployed on different devices (possibly monitoring slightly different machines, or machines that operate in different environmental conditions, or that are susceptible to perturbations) will be able to train differently (i.e. learn different knowledge) to adapt itself to the specific behavior of its target machine.

This also means that anomaly detection libraries can learn incrementally on the go; knowledge can be erased, but it can also be enriched at any moment, simply by learning additional signal examples representing the new behaviors to learn (e.g., signals representing new nominal regimes, possibly due to a change in operating conditions).
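As an illustration (same assumed API as in the sketch above), enriching or erasing knowledge at run time could look like the following:

/* Enrich the existing knowledge with a signal from a new nominal regime */
void learn_new_regime(float new_signal[])
{
    neai_anomalydetection_learn(new_signal);
}

/* Re-initializing the model erases all the knowledge learned so far */
void reset_knowledge(void)
{
    neai_anomalydetection_init();
}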


1-class Classification (1CC):

  • Use case: detecting anomalies in data using a static model.
  • User input: signal examples representing normal states only (used for both library selection and model training).
  • Studio output: pre-trained outlier detection library that will infer directly on the target microcontroller.
Info white.png Information

1-class classification libraries are especially useful when the types of anomalies that may happen on a target machine are difficult to predict, or when no signal example representing possible anomalies can be provided.


n-class Classification (nCC):

  • Use case: distinguishing among n different states using a static model.
  • User input: signal examples representing all the different states (classes) to be expected (used for both library selection and model training).
  • Studio output: pre-trained classification library that will infer directly on the target microcontroller.
Info white.png Information

n-class classification libraries may be used, for example, to determine which kind of anomaly is happening on a machine (out of many possible predetermined anomalies), or to determine the current regime / behavior type of a piece of equipment that has different modes of operation.
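For illustration, once deployed, a pre-trained classification library may be called as sketched below. The names (NanoEdgeAI.h, knowledge.h, CLASS_NUMBER, neai_classification_*) follow the pattern of the files generated by the Studio and must be verified against the headers actually shipped with the library.

#include <stdint.h>
#include "NanoEdgeAI.h"
#include "knowledge.h" /* static knowledge learned in the Studio (assumed name) */

static float input_buffer[DATA_INPUT_USER * AXIS_NUMBER];
static float class_probabilities[CLASS_NUMBER];

extern void fill_buffer_from_sensor(float buf[]); /* hypothetical acquisition code */

void setup(void)
{
    neai_classification_init(knowledge); /* load the pre-trained knowledge once */
}

uint16_t classify_one_signal(void)
{
    uint16_t class_id = 0;
    fill_buffer_from_sensor(input_buffer);
    neai_classification(input_buffer, class_probabilities, &class_id);
    return class_id; /* index of the most probable class */
}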


Extrapolation (EX):

  • Use case: estimating an unknown target value from other known parameters, using a static model.
  • User input: signal examples associating the known parameters to their target values (used for both library selection and model training).
  • Studio output: pre-trained regression library that will infer directly on the target microcontroller.
Info white.png Information

Extrapolation is the only project type that outputs an ML library capable of evaluating a number (e.g., predicting the value of a continuous variable) using a mathematical regression model. All other library types output discrete classes only.

3.3. Types of sensors in NanoEdge AI Studio

NanoEdge AI Studio and its output NanoEdge AI Libraries are compatible with any sensor type; they are sensor-agnostic. For example, users may use data coming from an accelerometer, a magnetometer, a microphone, a gas sensor, a time-of-flight sensor, or a combination of any of these (non-exhaustive list).

The Studio is designed to be able to work with raw sensor data, that hasn't necessarily been pre-processed. However, in cases where users already have some knowledge and expertise about their signals, pre-processed signals can be imported instead.

Depending on the user's use case, the Studio needs to understand which data format to expect in the imported input files. There are 2 main categories of sensors selectable in the Studio:

  1. Generic (n axes) sensor, which is a generalization of some other sensor types, e.g., accelerometer (3 axes), magnetometer (1 axis), microphone (1 axis), current (1 axis), and so on. This sensor covers most typical use cases, and will be selected most of the time.
  2. Multi-sensor (n variables), referred to as "Multi-sensor", which is specific to anomaly detection projects, and designed for niche use cases, for which the expected format is entirely different.

The Generic n-axis sensor (and all others except "Multi-sensor") expects a buffer of data as input, in other words, a signal example represented by a succession of instantaneous sensor samples. As a result, this "signal example" will have an associated temporal length, which depends on the sampling frequency (output data rate of the sensor) and on the number of instantaneous samples composing the signal example (referred to as buffer size, or buffer length).

Warning DB.png Important

This Generic sensor approach is the main approach, and should be used by default, since it covers all possible sensor types (and combinations), all project types, and most use cases.

It is especially relevant when the physical phenomena sampled evolve "quickly" (e.g. accelerometer, current sensor, and so on), with output data rates above a few hertz (for example, 10 Hz to 20000 Hz).

The Multi-sensor, on the other hand, expects instantaneous samples of data. In other words, it uses a single sensor sample as input, as opposed to a temporal signal example composed of many samples.

Warning DB.png Important

This Multi-sensor, only available in anomaly detection projects, is typically used when the physical phenomena sampled evolve "slowly" over time (e.g. temperature, pressure...), or when they do not explicitly depend on time. A typical use case for this sensor is the monitoring of artificial "machine states" represented by signals forming higher-level features, resulting from the aggregation of multiple variables, possibly coming from multiple sensors. Such signals therefore represent instantaneous states rather than time-evolving signals.

In the remainder of this documentation, except when explicitly stated otherwise, the "Generic" sensor approach (using signal examples / buffers as input, rather than single samples) will be used by default.

3.4. Designing a relevant sampling methodology

Compared to traditional machine learning approaches, which may require hundreds of thousands of signal examples to build a model, NanoEdge AI Studio requires minimal input datasets (as little as 50-100 signal examples, depending on the use case).

However, this data needs to be qualified, which means that it must contain relevant information about the physical phenomena to be monitored. For this reason, it is absolutely crucial to design the proper sampling methodology, in order to make sure that all the desired characteristics from the physical phenomena to be sampled are correctly extracted and translated into meaningful data.

To prepare input data for the Studio, the user must choose the most adequate sampling frequency.

The sampling frequency corresponds to the number of samples measured per second. For some sensors, the sampling frequency can be directly set by the user (e.g. digital sensors), but in other cases (e.g. analog sensors), a timer needs to be set up for constant time intervals between each sample.

The speed at which the samples are taken must allow the signal to be accurately described, or "reconstructed"; the sampling frequency must be high enough to account for the rapid variations of the signal. The question of choosing the sampling frequency therefore naturally arises:

  • If the sampling frequency is too low, the readings are too far apart; if the signal contains relevant features between two samples, they are lost.
  • If the sampling frequency is too high, it may negatively impact the costs, in terms of processing power, transmission capacity, or storage space for example.
Warning DB.png Important

To choose the sampling frequency, prior knowledge of the signal is useful in order to know its maximum frequency component. Indeed, to accurately reconstruct an output signal from an input signal, the sampling frequency must be at least twice as high as the maximum frequency that you wish to detect within the input signal.

Without any prior knowledge of the signal, it is recommended to test several sampling frequencies and refine them according to the results obtained via NanoEdge AI Studio / Library (such as 200 Hz, 500 Hz, 1000 Hz, or others).

The issues related to the choice of sampling frequency and the number of samples are illustrated below:

  • Case 1: the sampling frequency and the number of samples make it possible to reproduce the variations of the signal.
NanoEdgeAI sampling freq 1.png
  • Case 2: the sampling frequency is not sufficient to reproduce the variations of the signal.
NanoEdgeAI sampling freq 2.png
  • Case 3: the sampling frequency is sufficient but the number of samples is not sufficient to reproduce the entire signal (meaning that only part of the input signal is reproduced).
NanoEdgeAI sampling freq 3.png

The buffer size corresponds to the total number of samples recorded per signal, per axis. Together, the sampling frequency and the buffer size put a constraint on the effective signal temporal length.

Warning DB.png Important

In summary, there are 3 important parameters to consider:

  • n: buffer size
  • f: sampling frequency
  • L: signal length

They are linked together via: n = f * L. In other words, by choosing two (according to your use case), the third one is constrained.

Info white.png Information

For Multi-sensor, the concept of "buffer size" is not relevant, since there are only individual samples in the input files imported in the Studio, instead of full signal examples made of a collection of samples.

Here are general recommendations. Make sure that:

  • the sampling frequency is high enough to catch all desired signal features. To sample a 1000 Hz phenomenon, you must sample at twice that frequency at least (in this case, at 2000 Hz or more).
  • your signal is long (or short) enough to be coherent with the phenomenon to be sampled. For example, if you want your signals to be 0.25 seconds long (L), you must have n / f = 0.25. For example, choose a buffer size of 256 with a frequency of 1024 Hz, or a buffer of 1024 with a frequency of 4096 Hz, and so on.
Info white.png Information
For best performance, always use a buffer size n that is a power of two (for instance 128, 512 or others).
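As a sanity check before collecting data, the relation between these parameters and the power-of-two constraint can be verified with a few lines of plain C (a standalone sketch, independent of the NanoEdge AI API):

#include <stdbool.h>
#include <stdio.h>

static bool is_power_of_two(unsigned n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
    unsigned n = 256;     /* buffer size (samples per signal, per axis) */
    float f = 1024.0f;    /* sampling frequency, Hz */
    float f_max = 500.0f; /* highest frequency to capture, Hz */

    float L = n / f;      /* resulting signal length, seconds (n = f * L) */

    printf("signal length L = %.3f s\n", L);                        /* 0.250 s here */
    printf("power-of-two buffer: %s\n", is_power_of_two(n) ? "yes" : "no");
    printf("Nyquist satisfied:   %s\n", f >= 2.0f * f_max ? "yes" : "no");
    return 0;
}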

3.5. Preparing signal files

During the library selection process, NanoEdge AI Studio uses user data (input files containing signal examples) to test and benchmark many signal preprocessing algorithms, Machine Learning models and parameters. The way these input files are structured and formatted, and the way the signals were recorded, are therefore very important.

Here are general considerations for input file format, which apply to all cases. The Studio expects:

  • .txt / .csv files
  • numerical values only (not counting separators), and no headers
  • uniform separators throughout the whole file: either a single space, a tab, a comma (,), or a semicolon (;)
  • decimal values formatted using a period (.), not a comma (,)
  • more than 1 sample per line (exception: Multi-sensor)
  • fewer than 16384 (2^14) values per line
  • the same number of numerical values on each line
  • a bare minimum of 20 lines per sensor axis (e.g. for a 3-axis accelerometer: 60 lines is a bare minimum)
  • fewer than ~100000 lines in total (generally, 50-1000 are more than enough)
  • file size lower than ~1 GB
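As an illustration of these rules, here is a minimal sketch of firmware-side logging code that prints one signal example per line, with single-space separators and periods as decimal marks (printf is assumed to be retargeted to a serial port; the buffer size and axis count are example values):

#include <stdio.h>

#define BUFFER_SIZE 256 /* samples per signal, per axis (power of two) */
#define AXES        3   /* e.g., a 3-axis accelerometer */

/* buf holds interleaved samples: x0 y0 z0 x1 y1 z1 ... */
void print_signal_line(const float buf[])
{
    for (int i = 0; i < BUFFER_SIZE * AXES; i++) {
        printf(i == 0 ? "%.4f" : " %.4f", buf[i]);
    }
    printf("\n"); /* one line = one independent signal example */
}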

Then, there are some specific formatting rules, mainly depending on the type of project created in the Studio:

  1. Anomaly detection, 1-class classification, and n-class classification projects all follow the same general rules.
  2. Extrapolation projects have a particularity: they need to incorporate the target values from which to extrapolate.
  3. Multi-sensor is a big exception, and applies only to niche cases in anomaly detection projects.

3.5.1. General rules

The following applies to anomaly detection (exception: Multi-sensor), 1-class classification, and n-class classification projects.

In NanoEdge AI Studio, lines are taken into account independently, iteratively, so they must represent a meaningful snapshot in time of the signal to be processed. It is therefore crucial to set a coherent sampling frequency and a proper buffer size.

Info white.png Information

The Studio considers each line independently, so the lines in each input file may be shuffled without affecting the results. Exception: Multi-sensor.

The Studio expects:

  • each line to represent a single, independent signal example, made of many samples
  • the buffer size of this signal to be a power of two, and to stay constant throughout the project
  • the sampling frequency of this signal to stay constant throughout the project
  • all signal examples corresponding to a given "class" to be combined in the same input file (for anomaly detection, this generally means that all "nominal" regimes should be concatenated into a single "nominal" input file, and all "abnormal" regimes into a single "anomalies" input file)
Info white.png Information

When using more than one sensor at once with a given NanoEdge AI library (e.g. both a 3-axis accelerometer and a 1-axis current sensor), all variables are combined together, as if one were using only one single multi-axis sensor (in this example, a generic 4-axis sensor combining both acceleration and current). Therefore all sensor data is combined into a single input file.

Example:

I am using a 3-axis accelerometer. I want to monitor a piece of equipment that vibrates. I will collect a total of 100 learning examples to represent the vibration behavior of my equipment. I estimate the highest-frequency component of this vibration to be below 500 Hz, therefore I choose a sampling frequency of 1000 Hz for my sensor. I decide that my learning examples for this vibration should represent about 1/4 of a second (250 ms). To achieve this, I choose a buffer size of 256 samples. This means my 256 samples will represent a signal of 256/1000 = 0.256 s, that is, approximately 256 ms.

Therefore, in my input file, each signal will be composed of 256 3-value samples. This means each of the 100 lines in my input file will be composed of 768 numerical values (256*3).

NanoEdgeAI input example.png
Info white.png Information

Depending on project constraints, buffer sizes, signal lengths, and sampling frequencies vary.
For example, a buffer size of 256 could correspond to:

  • the capture of 0.25-second signals, with a sampling frequency of 1 kHz (256/1000 = 0.256 s),
  • much shorter signals of 64 ms, when sampling at a higher frequency of 4 kHz (256/4000 = 0.064 s).

3.5.2. Variant: Extrapolation projects

The following applies to extrapolation projects only.

The mathematical models used in NanoEdge AI Extrapolation libraries are regression models (not necessarily linear). Therefore, all general information and rules governing regression apply here.

NanoEdge AI Studio uses input files provided in Extrapolation projects both to find the best possible NanoEdge AI Extrapolation library, and also to train it. It means that the learning examples (lines) provided in the input files should not only contain the signal buffer itself, but also the target values associated to this signal buffer.

The point is to learn a model that will correlate each signal buffer to a target value, so that after training, when it is embedded into the microcontroller, the extrapolation library is able to read a signal buffer, and infer the missing (unknown) target value.

Warning DB.png Important

Here, the buffer refers to the combination of all known parameters or features associated to a target value. The target value refers to the variable or feature that the user is trying to extrapolate / infer / evaluate / predict. This target value is known during training (hence provided in the input files provided in the Studio), but unknown during inference (hence absent from the input files used later on to test the extrapolation library obtained).

The input file format for extrapolation differs only slightly from the general guidelines presented in the previous section. The difference is that the signal buffer (each line) should be preceded by a single numerical value, representing the target to evaluate, as shown here:
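For illustration (hypothetical, space-separated values), a line carrying the target value 12.5 followed by a small 2-variable buffer of size 4 could read:

12.5 0.11 0.52 0.09 0.48 0.12 0.55 0.10 0.51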

These target values are provided in the Studio for library selection / training, but omitted during testing.
Therefore, during testing (e.g. using the NanoEdge AI Emulator), the input file format is exactly the same as the one described in the previous section, titled "General rules".

Example:

I'm trying to evaluate / predict / extrapolate my running speed (which is my target value) from raw 3-axis accelerometer data.

  • I choose a sampling frequency of 500 Hz on my accelerometer (which I believe will be sufficient to capture all vibratory characteristics of my "running signature"), and a buffer size of 1024 (because at 500 Hz it will represent a temporal signal segment of approximately 2 seconds, which I estimate will contain sufficient information to extrapolate a speed).
  • I also need a way to measure my running speed (target value), in order to train the model later on. For instance, I can use a (GPS) speedometer, or simply run a known distance and record my time.
  • Then, I can start collecting data.
    I will walk / run several times while carrying my speedometer and accelerometer, to collect both accelerometer buffers and the associated speeds.
    For instance, I will walk / run 6 times, at 6 noticeably different speeds, each time for 1 minute at constant speed. Therefore for each run, I will get one known speed value, and many (approximately 30) two-second accelerometer buffers composed of 1024*3 = 3072 values each.
  • Finally, I compile this data in a single file that I will use as input in the Studio.
    • This file contains approximately 180 lines (6 runs with 30 buffers each), each representing an individual training example.
    • Each line is composed of 3073 numerical values: 1 speed value followed by 1024*3 accelerometer values, all separated (for example) by commas.
    • The first 30 lines all start with the same speed value, but have different associated accelerometer buffers (1st run). The 30 next lines all start with another speed value, and have their associated accelerometer buffers (2nd run), and so on.
  • After my model is trained, I am able to evaluate my running speed, just by providing the best NanoEdge AI library found by the Studio with some accelerometer buffers of size 1024 sampled at 500 Hz (of course, without providing the speed value). The data provided for inference (or testing) therefore contains 3072 values only (no speed), since speed is what I'm trying to estimate.
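A minimal sketch of the corresponding inference call follows; the function names are modeled on the headers generated by the Studio and must be verified against the actual NanoEdgeAI.h and knowledge.h:

#include "NanoEdgeAI.h"
#include "knowledge.h" /* regression model trained in the Studio (assumed name) */

static float accel_buffer[3072]; /* 1024 samples x 3 axes, no speed value */

extern void fill_buffer_from_accelerometer(float buf[]); /* hypothetical acquisition code */

void setup(void)
{
    neai_extrapolation_init(knowledge); /* load the trained model once at startup */
}

float estimate_speed(void)
{
    float speed = 0.0f;
    fill_buffer_from_accelerometer(accel_buffer);
    neai_extrapolation(accel_buffer, &speed); /* the target value is an output here */
    return speed;
}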

3.5.3. Exception: Multi-sensor

In anomaly detection projects (only), the Multi-sensor sensor is used to monitor machine "states" that typically evolve slowly in time. These states may be represented by variables coming from distinct sensor sources, and/or result from the aggregation of signal buffers into artificial, higher level features.

Here, the input format is different:

  • Each line represents a single sample (possibly multi-variable) instead of a full signal.
  • The number of values per line (equal to the number of variables per sample) does not have to be a power of two.
  • The lines are not independent, so the ordering does matter (lines should not be shuffled).
  • Typically, there are many more lines in the input file compared to the "normal" case (not "Multi-sensor"), since we now have only one sample per line, instead of many samples per line.

Example:

I want to monitor the state of a machine, represented by a combination of sensors: a 3-axis magnetometer, a temperature sensor (1 axis), and a pressure sensor (1 axis). Temperature and pressure, if they vary slowly, can be read directly, but magnetometer data needs to be summarized using (for example) average values across a 50-millisecond window along all 3 axes (we do not use instantaneous values). This would result in 3 extracted magnetic features, followed by temperature, followed by pressure, to represent a 5-variable state.

NanoEdgeAI input example multi1.png

We could also imagine building a more complex state from our 50-millisecond magnetometer buffer, including not only average magnetometer values, but also minima and maxima, for all 3 axes. This would result in 3*3 = 9 extracted magnetometer values (3 each for average, minimum, maximum), followed by temperature and pressure, to represent an 11-variable state.

NanoEdgeAI input example multi2.png
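A minimal sketch of the aggregation code for the 5-variable state above (plain C; the window length is an example, e.g., 50 samples for a 1 kHz magnetometer over 50 ms):

#include <stdio.h>

#define WINDOW 50 /* magnetometer samples in the aggregation window */

void print_state_line(const float mag[WINDOW][3], float temperature, float pressure)
{
    float mean[3] = {0.0f, 0.0f, 0.0f};

    for (int axis = 0; axis < 3; axis++) {
        for (int i = 0; i < WINDOW; i++) {
            mean[axis] += mag[i][axis];
        }
        mean[axis] /= WINDOW;
    }
    /* one line = one instantaneous state: 3 magnetic features + temperature + pressure */
    printf("%.4f %.4f %.4f %.4f %.4f\n", mean[0], mean[1], mean[2], temperature, pressure);
}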


4. Using NanoEdge AI Studio

4.1. Running NanoEdge AI Studio for the first time

When running NanoEdge AI Studio for the first time, you are prompted for:

  • Your proxy settings: if you are using a proxy, use the settings below, otherwise, click NO.
Here are the IP addresses that need to be authorized:
  • Licensing API: 54.147.158.222, 54.147.163.93, 54.144.124.187, 54.144.46.201 (or via URL: https://api.cryptlex.com:443)
  • Cartesiam API for library compilation: 52.178.13.227 (or via URL: https://api.nanoedgeaistudio.net, alternatively https://api.cartesiam.net)
  • The port you want to use.
It can be changed to any port available on your machine (port 5000 by default).
  • Your license key.
If you do not know your license key, log in to the Cryptlex licensing platform to retrieve it.
If you have lost your login credentials, reset your password using the email address used to download NanoEdge AI Studio.
Info white.png Information
If you do not have an Internet connection, offline activation is available:
  1. Choose Offline activation and enter your license key.
  2. Copy the long string of characters that appears.
  3. Log in to the Cryptlex licensing platform.
  4. Reset your password using the email address provided when downloading NanoEdge AI Studio.
  5. Log into your Cryptlex dashboard using your new password.
  6. Click on your license key, then Activations and Offline activation.
  7. Click ACTIVATION, then paste the string of characters copied in step 2, and click Download response.
  8. In NanoEdge AI Studio, click Import file and open the downloaded .dat file.

4.2. Studio's home screen

The Studio main (home) screen comprises 4 main elements:

  1. The project creation bar (top)
  2. The existing projects window (left side)
  3. The "inspiration" window (right side)
  4. The toolbar (far left)

The project creation bar is used to create a new project (anomaly detection, 1-class classification, n-class classification, or extrapolation), or to create a data logger (specific to STEVAL-STWINKT1B) in order to quickly start gathering data using a wide range of sensors, and easily import it into the Studio.

The existing projects window is used to load, import/export, or search existing NanoEdge AI projects.

The inspiration window provides links to the Use Case Explorer data portal, where datasets corresponding to a wide range of interesting use cases are publicly available for download. This data portal also contains summaries of the performances obtained with NanoEdge AI Studio using these datasets.

The toolbar provides quick access to:

  • the Studio settings (port, workspace folder path, license information, and proxy settings)
  • the NanoEdge AI documentation
  • NanoEdge AI's license agreement
  • CLI (command line interface client) download
  • Studio log files (for troubleshooting)
  • the Studio workspace folder

4.3. Creating a new project

On the main screen, select your desired project type on the project creation bar, and click CREATE NEW PROJECT.

Each project is divided into 5 successive steps:

  1. Global settings, to set the general project parameters
  2. Signals, to import signal examples that will be used for library selection.
    Note: this step is divided into 2 substeps in anomaly detection projects (Regular signals / Abnormal signals)
  3. Optimize & benchmark, where the best NanoEdge AI Library is automatically selected and optimized
  4. Emulator, to test the candidate libraries before embedding them into the microcontroller
  5. Deploy, to compile and download the best library and its associated header files, ready to be linked to any C code.

XXXXX PLACEHOLDER PROGRESS BAR STEPS 12345 XXXXX

XXXXX ADD LINKS TO SUBSECTIONS BELOW

Info white.png Information

NanoEdge AI Studio comes in two versions: Trial and Full. The difference is the number / type of boards where NanoEdge AI Libraries can be deployed. Apart from library compilation and download, both versions offer exactly the same features.
The Full version is compatible with any STM32 board and Arm® Cortex®-M microcontroller.
The Trial version is limited to a subset of STM32 evaluation boards (see list in the next section).


4.3.1. Global settings

The first step in any project is Global settings.

XXXXX ADD IMAGE

Here, the following parameters are set:

  • Project name
  • Description (optional)
  • Max RAM: this is the maximum amount of RAM memory to be allocated to the AI library. It doesn't take into consideration the space taken by the sensor buffers.
  • Limit Flash / No Flash limit: this is the maximum amount of Flash memory to be allocated to the AI library.
Warning white.png Warning

Restricting the amount of RAM/Flash available restricts the search space during the benchmark, which may cause potentially better but more memory-hungry libraries to be ignored.

  • Sensor type: the type of sensor used to gather data in the project, and the number of axes / variables when using a "Generic" sensor or "Multi-sensor".
  • Target: this is the type of board / microcontroller on which the final NanoEdge AI Library will be deployed.
    The trial version of NanoEdge AI Studio is limited to the following boards:
    • STEVAL-STWINKT1B
    • Disco-B-U585I-IOT02A
    • Disco-F413ZH
    • Disco-F746NG / F769NI
    • Disco-L4R9I / L4S5I
    • Disco-L562E
    • Nucleo-F401RE / F411RE
    • Nucleo-G071RB
    • Nucleo-G474RE
    • Nucleo-H743ZI2
    • Nucleo-L432KC / L433RC-P / L476RG
    • Nucleo-WB55RG / WL55JC
Warning DB.png Important

When combining different sensor types together, 3 distinct approaches may be used:

1. Using the Generic sensor:

  • The Generic sensor may be used to combine multiple sensor types together into a single, unified signal buffer that will be treated by the library as one multi-variable input.
    The Machine Learning algorithms therefore build a model based on the combination of these inputs.
  • All signal sources must have the same output data rate (sampling frequency).
  • Example: combining accelerometer (3 axes) + gyroscope (3 axes) + current (1 axis) signals, into a unified 7-axis signal.
    • The Generic sensor must be selected, with 7 axes.
    • The buffers in the input files are formatted just like those of a generic 3-axis accelerometer (see the formatting section XXXXXXXXXXXXXXXX PLACEHOLDER), but each sample now has 7 variables.
      Instead of the 3 linear accelerations [X Y Z], the 7-axis sample adds 3 angular accelerations [Gx Gy Gz] from the gyroscope, and 1 current value [C] from the current sensor.
    • This results in 7-axis samples [X Y Z Gx Gy Gz C], meaning that for a buffer size of 256, each line would be composed of 1792 numerical values (7*256).
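A minimal sketch of how such a 7-axis sample stream could be assembled in C (buffer size and helper names are hypothetical):

#define BUFFER_SIZE 256
#define AXES        7

static float signal_buffer[BUFFER_SIZE * AXES]; /* one line of the input file */

/* Store sample i as [X Y Z Gx Gy Gz C], interleaved per sample */
void store_sample(int i, const float acc[3], const float gyr[3], float current)
{
    float *s = &signal_buffer[i * AXES];
    s[0] = acc[0]; s[1] = acc[1]; s[2] = acc[2];
    s[3] = gyr[0]; s[4] = gyr[1]; s[5] = gyr[2];
    s[6] = current;
}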

2. Using the Multi-sensor:

  • In the same way, Multi-sensor enables the combination of multiple variables into the same library, to be treated as a single, unified input.
  • All the restrictions related to Multi-sensor regarding input file format apply, see this section XXXXXXXXXXXXXXXX PLACEHOLDER.

3. Using the Multi-library feature (selectable on the Studio's Deploy screen):

  • This approach is radically different, and consists in separating the different sensor types, to create a separate library for each one.
  • Each signal is decoupled and treated on its own by a different library, that will run concurrently in the same microcontroller. See the Multi-library section. XXXXXXX PLACEHOLDER.
  • Here, the output data rates of the different sensors may be different.

4.3.2. Signals

4.3.2.1. How to import signals

The input files, containing all the signal examples to be used by the Studio to select the best possible AI library, may be imported from 3 sources:

  1. From a file (in .txt / .csv format)
  2. From the serial port (USB) of a live datalogger
  3. From the SD card of a datalogger (in .dat format), obtained via the Datalogger feature of the Studio. XXXXX PLACEHOLDER DATALOGGER LINK XXXXX

XXXXX PLACEHOLDER IMPORT SIGNALS XXXXX

1. From file:

  • Click SELECT FILES, and select the input file to import.
  • Rename the input file if needed.
  • Repeat the operation to import more files.
  • Click CONTINUE.

XXXXX PLACEHOLDER FILE 1 XXXXX

2. From serial port:

  • Select the COM Port where your datalogger is connected, and select the correct Baudrate.
  • If needed, tick the checkbox and enter a maximum number of lines to be imported.
  • Click START/STOP to record the desired number of signal examples from your datalogger.
  • Rename your input file if needed.
  • Click CONTINUE.

XXXXX PLACEHOLDER SERIAL 1 XXXXX

Info white.png Information

A USB data logger is required for this. It should be able to log data and output it to the serial port in real time.

3. From SD card (combine this option with the Studio's Datalogger feature XXXXX PLACEHOLDER DATALOGGER LINK XXXXX):

  • Select the SD card directory (obtained using the Datalogger feature of the Studio), which contains all the sensor data in .dat format.
  • Select a Signal length, i.e. the desired buffer length for your signals (should be a power of 2).
  • Select the sensor from which the data should be imported.
  • Click CONTINUE.

XXXXX PLACEHOLDER SDCARD 1 XXXXX

Then:

  • Select the correct delimiter (it should be automatically detected).
  • Make sure the file looks correct in the preview. Otherwise, delete problematic lines, or edit your input file to put it in the correct format.
  • Click IMPORT.

The Signals screen shows various information about the imported signals:

XXXXX PLACEHOLDER SIGNALS SCREEN XXXXX

4.3.2.2. Which signals should be imported

1. Anomaly detection:

For anomaly detection, the general guideline is to concatenate all signal examples corresponding to the same category into the same file (like "nominal").
As a result, anomaly detection benchmarks will be started using only 2 input files: one for all regular signals, one for all abnormal signals.

  • The Regular signals correspond to nominal machine behavior, corresponding to data acquired by sensors during normal use, when everything is functioning as expected.

Include data corresponding to all the different regimes, or behaviors, that you wish to consider as "nominal". For example, when monitoring a fan, you may need to log vibration data corresponding to different speeds, possibly including the transients.

  • The Abnormal signals correspond to abnormal machine behavior, corresponding to data acquired by sensors during a phase of anomaly.

The anomalies do not have to be exhaustive. In practice, it would be impossible to predict (and include) all the different kinds of anomalies that could happen on your machine. Just include examples of some anomalies that you've already encountered, or that you suspect could happen. If needed, do not hesitate to create "anomalies" manually.
However, if the library is expected to be sensitive enough to detect very "subtle anomalies", it is recommended that the data provided as abnormal signals includes at least some examples of subtle anomalies as well, and not only very gross, obvious ones.

Warning DB.png Important

These signal examples are only necessary to give the benchmark algorithms some context, in order to select the best library possible.

At this stage, for anomaly detection, no learning is taking place yet. After the optimal library is selected, compiled, and downloaded, it will be completely fresh, brand new, untrained, and have no learned knowledge.

The learning process that is then performed, either via NanoEdge AI Emulator, or in your embedded hardware application, is unsupervised.

Example:

I want to detect anomalies on a 3-speed fan by monitoring its vibration patterns using an accelerometer. I recorded many signals corresponding to different behaviors, both "nominal" and "abnormal". I have the following signal examples (numbers are arbitrary):

  • 30 examples for "Speed 1", which I consider nominal,
  • 25 examples for "Speed 2", which I consider nominal,
  • 35 examples for "Speed 3", which I consider nominal,
  • 30 examples for "Fan turned off", which I also consider nominal,
  • Some of these signals contain "transients", like fan speeding up, or slowing down.
  • 30 examples for "fan air flow obstructed at speed 1", which I consider abnormal,
  • 35 examples for "fan orientation tilted by 90 degrees", which I consider abnormal,
  • 25 examples for "tapping on the fan with my finger", which I consider abnormal,
  • 25 examples for "touching the rotating fan with my finger", which I consider abnormal.

Here, I create

  • Only 1 nominal input file containing all 120 signal examples (30+25+35+30) covering 4 nominal regimes + transients.
  • Only 1 abnormal input file containing all 115 signal examples (30+35+25+25) covering 4 abnormal regimes.

And start a benchmark using only this couple of input files.

Warning DB.png Important
  • Note that not all speeds are necessarily represented in the "abnormal behaviors".
  • This is not a problem. Later on, unseen anomalies can still be detected, because the learning happens in-situ, and not in the Studio.

XXXXX THIS DIV WILL NOT BE DISPLAYED SINCE THE FEATURE IS NOT IMPLEMENTED IN STUDIO v3 FOR THE MOMENT XXXXX

Info white.png Information

For anomaly detection, the Studio gives the possibility to add several signal couples, which seems contrary to the instructions above. In fact, adding signal couples is used when creating a general AI library that adapts to different types of machines.

Example:

I want to detect anomalies on industrial pumps of different brands / types. My detection algorithms need to be adaptable, instead of specialized. I recorded different nominal behaviors (such as pump running at max capacity or pump running at half capacity) on three different pumps (Pump A, Pump B and Pump C). I also recorded one type of anomaly (such as minor leak) for each of the 3 pump types, so I have 3 batches of abnormal signals.
Therefore I:
  • Concatenate all nominal behaviors for Pump A into one nominal file "Nominal A",
  • Concatenate all nominal behaviors for Pump B into a separate nominal file "Nominal B",
  • Concatenate all nominal behaviors for Pump C into a separate nominal file "Nominal C",
  • Also import my anomalies into 3 separate files, "Abnormal A", "Abnormal B" and "Abnormal C".
And start a benchmark using 3 couples of signal files:
  • "Nominal A" + "Abnormal A"
  • "Nominal B" + "Abnormal B"
  • "Nominal C" + "Abnormal C"


2. 1-class Classification:

For 1-class classification, the guideline is to generate a single file containing all signal examples corresponding to the unique class to be learned.
If this single class contains distinct behaviors or regimes, they should all be concatenated into 1 input file.

As a result, 1-class classification benchmarks will be started using 1 single input file.


3. n-class Classification:

For n-class classification, all signal examples corresponding to one given class should be gathered into the same input file.
If any class contains distinct behaviors or regimes, they should all be concatenated into 1 input file for that class.

As a result, n-class classification benchmarks will be started using one input file per class.

Example:

For the identification of types of failures on a motor, 5 classes can be considered, each corresponding to a behavior, such as:
  1. normal behavior
  2. misalignment
  3. imbalance
  4. bearing failure
  5. excessive vibration
This would result in the creation of 5 distinct classes (import one .txt / .csv file for each), each containing a minimum of 20-50 signal examples of said behavior.


4. Extrapolation:

For extrapolation, all signal examples should be gathered into the same input file.
This file should contain all target values to be used for learning, along with their associated buffers of data (representing the known parameters).

As a result, extrapolation benchmarks will be started using 1 single input file.


4.3.2.3. How to use the "Datalogger" feature

This section explains how to configure the STEVAL-STWINKT1B for datalogging, using the Studio's Datalogger feature.
Using the STWIN, you will be able to log signal examples into a Studio-compatible format, directly to an SD card, and import them easily into the Studio.

Warning white.png Warning

Make sure that you're using a STWINKT1B (rev. B), as the STEVAL-STWINKT1 is not compatible.

On the Studio's Home screen, click Datalogger (the last icon on the project creation bar at the top).

XXXXX PLACEHOLDER DATALOGGER 1 XXXXX

On the first screen (Connect and Flash), follow the instructions given on the right side:

  • Download the HSDatalog firmware for the STWIN
  • Connect the STWIN to the computer using STLINK via USB
  • Flash the datalogging firmware to the STWIN

Then, the STWIN datalogger is ready to be configured. Click the second icon Configure datalogger on the top bar.

XXXXX PLACEHOLDER DATALOGGER 1 XXXXX

Here, the STWIN sensors will be configured:

  1. Select which sensors should be activated.
    In this example, we have the IIS3DWB accelerometer, and the HTS221 temperature and humidity sensors.
  2. Select the Full Scale parameters and Output Data Rate to be used for each sensor.
  3. Then, follow the instructions on the right side of the screen:
  • Click DOWNLOAD CONFIGURATION to get a .json configuration file (DeviceConfig.json)
  • Copy / paste this configuration file at the root of your SD card.
  • Insert the SD card into the STWIN

Finally, to start logging data:

  1. Simply switch on the board (PWR button)
  2. Start logging data by pressing the USR button
  3. Stop the logging process by pressing the USR button again

All data logged in this way is now available in the SD card, with one .dat file for each sensor.
The whole SD card folder is ready to be imported in NanoEdge AI Studio, see this section XXXXX LINK PLACEHOLDER XXXXX.

4.3.2.4. Signal summary screen

The Signals screen contains a summary of all information related to the imported signals:

  1. List of imported input files
  2. Information about the input file selected, and basic checks
  3. Signal previews
  4. Optional: frequency filtering for the signals

XXXXX PLACEHOLDER SIGNAL SCREEN NUMBERED 1-4 XXXXX

  • Imported files: in this example (n-class classification project) we have imported a total of 7 input files, each corresponding to one of the 7 classes to distinguish on the system (here, a multispeed USB fan).
  • File information: The selected file ("speed_1") contains 100 lines (or signal examples), each composed of 768 numerical values.
    • The Check for RAM and the next 5 checks are blocking, meaning that any error in the input file must be fixed before proceeding further.
      Here, all checks were successfully passed (green icon). However, if a check returns an error, a red icon will be displayed.
    • Click "Run optional checks" to scan your input file and run additional checks (e.g., search for duplicate signals, equal consecutive values, random values, outliers...).
      Failing these additional checks gives warnings that suggest possible modifications on your input files. Click any warning for more information and suggestions.
  • Signal previews: these graphs show a summary of the data contained in each signal example within the input file. There are as many graphs as sensor axes.
    • The graph's x-axis corresponds to the columns in the input file.
    • The y-values indicate the mean value of each column (across all lines, or signals), along with their min-max values and standard deviation.
    • Optionally, FFT (Fast Fourier Transform) plots can be displayed to transpose each signal from time domain to frequency domain.
  • Frequency filtering: this is used to alter the imported signals by filtering out unwanted frequencies.
    • Click FILTER SETTINGS above the signal preview plots
    • Toggle "filter activated / deactivated" as required
    • Input the sampling frequency (output data rate) of the sensor used for signal acquisition.
    • Select the low and high cutoff frequencies you wish to use for the signals (maximum: half the sampling frequency).
      Within the input signals, only the frequencies that fall between these two boundaries will be kept; all frequencies outside the window will be ignored.

XXXXX PLACEHOLDER FILTER SETTINGS XXXXX

Warning DB.png Important

It is only possible to filter out the frequencies lower than half the sampling frequency used to acquire input signals.

Warning white.png Warning

Once frequency filtering is activated in a project, it automatically applies to all signals within the current project.
This option is taken into account during benchmarking, and needs to be disabled manually.


4.3.3. Optimize and benchmark

During the benchmarking process, NanoEdge AI Studio uses the signal examples imported in the previous step to automatically search for the best possible NanoEdge AI Library.

XXXXX PLACEHOLDER AD BENCHMARK SCREEN XXXXX

The benchmark screen, summarizing the benchmark process, contains the following sections:

  1. List of benchmarks
  2. Benchmark results graph
  3. Search information window
  4. Performance evolution graph

To start a benchmark:

  1. Click RUN NEW BENCHMARK
  2. Select which input files (signal examples) to use
  3. Optional: change the number of CPU cores to use
  4. Click START.
Info white.png Information
  • Benchmarks may take a long time (several hours) to complete, i.e. to find a fully optimized library. However, the bulk of the optimization process is typically carried out within the first 30-60 minutes. Therefore, it is recommended, when doing exploratory work or running quick tests, to start testing your candidate libraries (Emulator) without waiting several hours for full completion (unless trying to refine previous results).
  • Benchmarks can be paused / resumed, or stopped at any time, without cancelling the process (the best library found will not be lost).
  • Useful information can be found in the project bar at the top (under the button for Optimize and benchmark), such as:
    • Total number of benchmarks run in the current project.
    • Number of libraries tested in total for the current benchmark.
    • Time elapsed for the current benchmark.
  • Benchmark progress in % is displayed on the left side of the screen, next to the name / ID of the benchmark, in the benchmark list under the RUN NEW BENCHMARK button.
4.3.3.1. Benchmarking process

Each candidate library is composed of a signal preprocessing algorithm, a machine learning model, and some hyperparameters. Each of these 3 elements can come in many different forms, and use different methods or mathematical tools, depending on the use case. This results in a very large number of possible libraries (many hundreds of thousands), which need to be tested, to find the most relevant one (i.e. the one that gives the best results) given the signal examples provided by the user.

In a nutshell, the Studio automatically:

  1. divides all the imported signals into random subsets (same data, cut in different ways),
  2. uses these smaller random datasets to train, cross-validate, and test one single candidate library many times,
  3. takes the worst results obtained from step #2 to rank this candidate library, then moves on to the next one,
  4. repeats the whole process until convergence (when no better candidate library can be found).

Therefore, at any point during benchmark, only the performances of the best candidate library found so far are displayed (and for a given library, the score shown is the worst result obtained on the randomized input data).

Warning DB.png Important

Remember that, while classification and extrapolation models are trained (and their knowledge learned) in the Studio during this process, the anomaly detection libraries are not.
During benchmark, the best anomaly detection library is selected, but it is untrained. Training only happens later on, inside the microcontroller, when the user runs iterations of the learn() function.

4.3.3.2. Performance indicators

During benchmark, all libraries are ranked based on 4 criteria, depending on the project type, in order of importance:

  1. Balanced accuracy / Recall / Accuracy / SMAPE
  2. Confidence / R-SQUARED
  3. RAM requirement
  4. Flash requirement

1. Primary indicator (~90% of the total weight):

Balanced accuracy (anomaly detection)
This is the library's ability to correctly identify regular signals as regular, and abnormal signals as abnormal. It takes the number of signals per class (and potential imbalances) into consideration.
100% balanced accuracy means that all signals are correctly identified.
Recall (1-class classification)
This metric quantifies the number of correct positive predictions made, out of all positive predictions that could have been made.
Accuracy (n-class classification)
This is the ability of the library to attribute each signal to the correct class (a signal is attributed to the class that has the highest probability).
SMAPE (extrapolation)
This is the Symmetric Mean Absolute Percentage Error of the extrapolations.
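For reference, these indicators follow their usual definitions: for two classes, balanced accuracy is the mean of the per-class recalls, (TP/(TP+FN) + TN/(TN+FP)) / 2, and SMAPE is commonly computed as (100/n) * Σ |estimate − target| / ((|estimate| + |target|) / 2), summed over the n test signals.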

2. Secondary indicator (~9% of the total weight):

Confidence (anomaly detection, 1-class classification, n-class classification)
This metric represents the ability of the library to put mathematical distance between the signals pertaining to a given class, and those pertaining to another class, with respect to the decision boundary separating them.
100% means that all classes are perfectly separated, with no overlap / ambiguity.
R-SQUARED (extrapolation)
This is the coefficient of determination, which provides a measure of how well the observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.

3. Other indicators (~1% of the total weight):

RAM (all projects)
This is the maximum amount of RAM memory used by the library (and its dynamic knowledge) when it is integrated into the target microcontroller.
Flash (all projects)
This is the maximum amount of Flash memory used by the library (and its static knowledge) when it is integrated into the target microcontroller.
4.3.3.3. Benchmark progress

Along with the 4 performance indicators, a graph shows, in real time, the position of the imported signal examples (data points).
The type of graph depends on the type of project:

XXXXX PLACEHOLDER AD GRAPH + CLASSIF GRAPH XXXXX

The anomaly detection plot (left side) shows the similarity score (%) vs. the signal number. The threshold (decision boundary between the two classes, "nominal" and "anomaly"), set at 90% similarity, is shown as a gray dashed line.

The n-class classification plot (right side) shows the probability percentage of the signal (the % certainty associated to the class detected) vs. the signal number.

XXXXX PLACEHOLDER 1CC GRAPH + EXTRAPOL GRAPH XXXXX

The 1-class classification plot (left side) shows a 2D projection of the decision boundary separating regular signals from outliers. The outliers are the few (~3-5%) signal examples, among all the signals imported as "regular", that appear most different from the rest (~95-97%).

The extrapolation plot (right side) shows the extrapolated value (estimated target) vs. the real value which was provided in the input files.

XXXXX ADD INFO ABOUT PERFORMANCE INDICATOR EVOLUTION GRAPH + SEARCH INFORMATION WINDOW XXXXX

4.3.3.4. Benchmark results

When the benchmark is complete, a summary of the benchmark information appears:

NanoEdgeAI benchmark lib results.png


Only the best library is shown. However, several "candidates" are saved for each benchmark.
You may select a different library by clicking "N libraries" (see above, "16 libraries"). This feature is useful if you want to use a library that has better performance in terms of a secondary indicator (for instance if you want to prioritize low RAM or high Confidence).

NanoEdgeAI benchmark change lib.png


Just select a different library by clicking the crown icon, under "Lib selected", and validate your change by clicking OK.

[Anomaly detection only]: After the benchmark is complete, a plot of the library learning behavior is shown:

NanoEdgeAI 44 screen4 plotiteration.png


This graph shows the number of learning iterations needed to obtain optimal performance from the library, when it is embedded in your final hardware application. In this particular example, NanoEdge AI Studio recommends calling learn() at least 70 times.

Warning white.png Warning
  • Never use fewer iterations than the recommended number, but feel free to use more (for example 3 to 10 times more).
  • This iteration number corresponds to the number of lines to use in your input file, as a bare minimum.
  • These iterations must include the whole range of all kinds of nominal behaviors that you want to consider on your machine.
4.3.3.5. Possible causes of poor benchmark results

If you keep getting poor benchmark results, you may try the following:

  • Increase the "Max RAM" or "Max Flash" parameters (for example to 32 Kbytes or more).
  • Adjust your sampling frequency; make sure it is coherent with the phenomenon you want to capture.
  • Change your buffer size (and hence, signal length); make sure it is coherent with the phenomenon to sample.
  • Make sure your buffer size (number of values per line) is a power of two (except for multi-sensor).
  • If using a multi-axis sensor, treat each axis individually by running several benchmarks with a single-axis sensor.
  • Include more signal examples (lines) in your input files.
  • Check the quality of your signal examples; make sure they contain the relevant features / characteristics.
  • Check that your input files do not contain (too many) parasite signals (for instance no anomalous signals in the nominal file, for anomaly detection, and no signals belonging to another class, for classification).
  • Increase the variety of your signal examples (more nominal regimes, more anomalies, or more classes).
  • Decrease the variety of your signal examples (fewer nominal regimes, fewer anomalies, or fewer classes).
  • Check that the sampling methodology and sensor parameters are kept constant throughout the project for all signal examples recorded (in all input files; nominal, abnormal or class files).
  • Check that your signals are not too noisy, too low intensity, too similar, or unrepeatable.
  • Remember that microcontrollers are resource-constrained (audio/video, image, and voice recognition are not supported).

Low confidence scores are not necessarily an indication of poor benchmark performance, if the (balanced) accuracy is sufficiently high (> 80-90%). Always use the associated Emulator to determine the performance of a library, preferably using data that has not been used before (for the benchmark).

Warning DB.png Important

Signal confirmation procedure:

Even with lower (balanced) accuracy scores, detection results can often be greatly improved by implementing a simple confirmation mechanism in the final algorithm / C code. This approach may prove extremely useful, depending on the use case, to limit the number of false positives (or false negatives).

In practice, it consists in validating anomalies before raising alerts, instead of taking the detection results directly (see the sketch after this note). For example, anomalies may be counted as "true anomalies" only after N successive validations using consecutive (distinct) data buffers. The same approach can of course be used to confirm "nominal" signals. Validations can be made using counters, or any statistical tool such as means, modes, or others.

The same approach can be used to confirm that a signal pertains to the correct class, in classification projects. This is useful to minimize classification errors, or eliminate transient regimes.

[Classification only]: in classification projects, this confirmation feature is available natively when using the Emulator to test libraries with serial data (see this section).
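
As an illustration, here is a minimal sketch of such a confirmation mechanism for anomaly detection, in C. It assumes that detect() returns the similarity score as a percentage (as described in the Emulator section below); raise_alert() is a hypothetical application hook, and the threshold and counter values are examples only.

#include <stdint.h>
#include "NanoEdgeAI.h"                         /* NanoEdge AI Library header */

#define N_CONFIRM       3     /* consecutive abnormal buffers required */
#define SIMILARITY_MIN  90    /* similarity (%) below which a buffer is considered abnormal */

extern void raise_alert(void);                  /* hypothetical hook: LED, alarm, report... */

static uint8_t anomaly_streak = 0;

/* Call once per freshly sampled buffer, after the learning phase. */
void check_buffer(float input_buffer[])
{
    uint8_t similarity = detect(input_buffer);  /* assumed to return a similarity % */

    if (similarity < SIMILARITY_MIN) {
        if (++anomaly_streak >= N_CONFIRM) {
            raise_alert();                      /* only alert after N_CONFIRM confirmations */
            anomaly_streak = 0;
        }
    } else {
        anomaly_streak = 0;                     /* any nominal buffer resets the count */
    }
}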

4.3.4. Emulator

Here (Step: Emulator), you are able to test the library that was selected during the benchmark process (Step: Optimize and Benchmark) using NanoEdge AI Emulator.

NanoEdgeAI 5 emulator top.png

NanoEdge AI Emulator is a clone of the library that emulates its behavior, and is directly usable within the Studio interface. There is no need to embed a library in order to test its performance with real, "unseen" data. Therefore, each library, among hundreds of thousands of possibilities, comes with its own Emulator.

The Emulator can also be downloaded as a standalone .exe (Windows®) or .deb (Linux®) to be used in the terminal through the command line interface.

Info white.png Information
Refer to the documentation before using the Emulator through the CLI (Emulator for anomaly detection or Emulator for classification).


This screen gives a summary of the selected benchmark (progress, performance, input files used).

Select the benchmark to use, on the left side of the screen, to load the associated emulator.

When you are ready to start testing, click Initialize Emulator.

4.3.4.1. Anomaly detection

Functions:

Here are the functions of the anomaly detection library that are available through its Emulator:

initialize() run first, before any learning or detection, or to reset the knowledge of the library/emulator
set_sensitivity() adjust the pre-set internal detection sensitivity (it does not affect learning, only the returned similarity scores)
learn() run a number of learning iterations (to establish an initial knowledge base, or to enrich an existing one)
detect() run a number of detection iterations (inference), once a minimum knowledge base has been established

For more information, see the Emulator and Library documentations for anomaly detection.

The testing procedure goes as follows:

NanoEdgeAI ad emu functions.png
Warning DB.png Important

When building a smart device, the final features heavily depend on the way those functions are called. It is entirely up to the developer to design relevant learning and detection strategies, depending on the project specificities and constraints.

NanoEdgeAI ild.png

For example, for a hypothetical machine, one strategy could be the following (sketched in code below):

  • initialize the model
  • establish an initial knowledge base by calling learn() every minute for 24 hours on that machine
  • switch to inference mode by calling detect() 10 times every hour (and averaging the returned scores), each day
  • blink an LED and ring alarms whenever detect() returns an anomaly (average score < 90%)
  • run another learning cycle to enrich the existing knowledge, if temperature rises above 60°C (and the machine is still OK)
  • send a daily report (average number of anomalies per hour, with date, time, machine ID) using Bluetooth® or LoRa®

In summary, those smart functions can be triggered by external data (from sensors or buttons, to account for and adapt to environment changes).
The scores returned by the smart functions can trigger all kinds of behaviors on your device.
The possibilities are endless.
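
To make this more concrete, here is a rough C sketch of the first four bullets of that strategy. This is an illustration only, not the library's reference usage: fill_buffer_from_sensor(), wait_minutes() and raise_alarm() are hypothetical application hooks, DATA_INPUT_USER is assumed to be the buffer-size macro from the library header, and detect() is assumed to return a similarity percentage.

#include <stdint.h>
#include "NanoEdgeAI.h"                                /* NanoEdge AI Library header */

extern void fill_buffer_from_sensor(float buffer[]);   /* hypothetical sampling routine */
extern void wait_minutes(uint32_t minutes);            /* hypothetical delay */
extern void raise_alarm(void);                         /* hypothetical alert (LED, buzzer...) */

#define LEARN_ITERATIONS (24 * 60)    /* one learn() per minute, for 24 hours */

void monitor_machine(void)
{
    float buffer[DATA_INPUT_USER];    /* assumed buffer-size macro from the header */

    initialize();                     /* reset the knowledge */

    /* Initial in-situ training phase */
    for (uint32_t i = 0; i < LEARN_ITERATIONS; i++) {
        fill_buffer_from_sensor(buffer);
        learn(buffer);
        wait_minutes(1);
    }

    /* Inference phase: 10 detections every hour, averaged */
    for (;;) {
        uint32_t sum = 0;
        for (uint8_t i = 0; i < 10; i++) {
            fill_buffer_from_sensor(buffer);
            sum += detect(buffer);    /* assumed to return a similarity % */
        }
        if ((sum / 10) < 90) {        /* average similarity below the 90% threshold */
            raise_alarm();
        }
        wait_minutes(60);
    }
}

The remaining bullets (knowledge enrichment on temperature events, daily reports) would follow the same pattern: extra learn() cycles or communication calls, triggered by application events.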

Learning:

After initialization, no knowledge base exists yet. It needs to be acquired in-situ, using real signals. Your library is not pre-trained with the signals imported before the benchmark, in Steps 2 and 3. Therefore, you need to learn some signals.

Warning white.png Warning
A learning phase corresponds to several iterations of the learn() function. You must use at least the minimum number of iterations recommended in the benchmark summary from Step 4. This learning is incremental and unsupervised.

To learn some signals from a file, click Select file and open the file containing your training data.

To learn some signals "live" from your Serial port, using your own data logger, click Serial data. Then, select your Serial / COM port (refresh if needed), choose your preferred baudrate, and Start recording by clicking the red button.

As soon as some signals are learned, the number of learned signals is indicated.

Click Go to detection after all relevant signals (nominal, by definition) have been learned.

Detection:

Once a first knowledge base has been established, you can run Detection on any signals, to check whether they would be classified as nominal or anomaly by the library, and to make sure this library performs as intended.

As usual, the signals to use for detection can be imported from file, or from Serial port using a data logger.

Select the signals that you wish to use, and adjust the sensitivity if needed. A pie chart summarizes the detection results.

When detecting using live data from the Serial port, a graph shows how the detection performance (similarity percentage) evolves in real time.

Info white.png Information
All details of all learning and detection iterations (such as similarity and signal status) are available on the terminal window embedded on the right side of the screen.
NanoEdgeAI ad emu terminal output.png
Info white.png Information
Feel free to repeat as many times as needed, adjusting the sensitivity or running additional Learning cycles in the process.
  • If the results obtained are satisfactory, move on to the next step, and Deploy your library on your microcontroller.
  • Otherwise, it is time to review your data logging procedure (sampling frequency, buffer size, signal length), import other sets of signals, and start a new benchmark. Also see the next section, Possible causes of poor emulator results.

You will probably not land your ideal library the first time. Using NanoEdge AI Studio is an iterative process. Try, learn, adjust, and repeat!

4.3.4.2. Classification

Here are the functions of the classification library that are available through its Emulator:

knowledge_init() run first to initialize the knowledge
classifier() run an inference iteration (detect which class the input signal belongs to)

For more information, see the Emulator and Library documentations for classification.

Just like in anomaly detection (see "Important" section above), the classifier function can be called dynamically whenever needed. It can be triggered by external data (for example from sensors, buttons, to account for and adapt to environment changes), and the class / probabilities returned can trigger all kinds of behaviors on your device.
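
As an illustration, such a classification loop could be structured as follows in C. This is a sketch only: the exact classifier() signature may differ from what is assumed here (refer to the classification library documentation), DATA_INPUT_USER is assumed to be the buffer-size macro from the library header, and fill_buffer_from_sensor() and handle_class() are hypothetical application hooks.

#include <stdint.h>
#include "NanoEdgeAI.h"                                /* NanoEdge AI Library header */

extern void fill_buffer_from_sensor(float buffer[]);   /* hypothetical sampling routine */
extern void handle_class(uint16_t class_id);           /* hypothetical hook: LED, report, actuator... */

void classify_forever(void)
{
    float buffer[DATA_INPUT_USER];    /* assumed buffer-size macro from the header */

    knowledge_init();                 /* load the static model knowledge, once */

    for (;;) {
        fill_buffer_from_sensor(buffer);
        uint16_t detected = classifier(buffer);        /* assumed to return the class index */
        handle_class(detected);
    }
}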

To classify signals from a file, click Select file and open the file containing the signal examples to classify. You can see a pie chart summarizing the classification.

The image below shows data from a 3-speed fan:

  • 3 signals were detected at "speed 1"
  • 7 at "speed 2"
  • 17 at "speed 3"
  • 6 when the fan air flow was obstructed
  • and so on
NanoEdgeAI class emu signals file.png


When classifying signals live from the Serial port, you also see a pie chart summarizing the classification, as well as a graph showing the probabilities associated with each classification iteration (the image below shows data from a 3-speed fan, with "speed 1" currently being detected).

NanoEdgeAI class emu signals live.png
Warning DB.png Important

To minimize classification errors, or eliminate transient regimes during your detections, you may choose to validate signals by increasing the number of consecutive confirmations.

In the example above, the number of confirmations is set to 2, meaning that a signal is only validated as pertaining to a given class after 2 consecutive data buffers have been successfully classified.

In this example, out of a total of 96 signals seen, 40 verified classifications were counted (8+8+5+5+4+5+5), out of a possible maximum of 96/2 = 48.

See also this note about Signal confirmation procedure.

Info white.png Information
All details of all classification iterations (such as class IDs and class probabilities) are available on the terminal window embedded on the right side of the screen.
NanoEdgeAI class emu terminal output.png
4.3.4.3. Possible causes of poor emulator results

Here are possible reasons for poor anomaly detection or classification results:

  • The data used for library selection (benchmark) is not coherent with the one you are using for testing via Emulator/Library. The regular/abnormal or class signals imported in the Studio must correspond to the same machine behaviors, regimes, and physical phenomena as the ones used for testing.
  • Your (balanced) accuracy score was well below 90% or your confidence score was too low to provide sufficient data separation.
  • You used an insufficient number of signals in either regular/abnormal or class signal files. Make sure that you used enough lines in your input files (minimum 20-50). For anomaly detection, make sure that you use at least the minimum number recommended by the Studio, and possibly more.
  • The sampling method is inadequate for the physical phenomena studied, in terms of frequency, buffer size, or duration for instance.
  • The sampling method has changed between Benchmark and Emulator tests. The same parameters (frequency, signal lengths, buffer sizes) must be kept constant throughout the whole project.
  • [Anomaly detection]: you have not run enough learning iterations (your Machine Learning model is not rich enough), or the learned data is not representative of the signal examples used for the benchmark. Do not hesitate to run several learning cycles, as long as they all use nominal data as input (only normal, expected behavior should be learned).
  • [Classification]: the machine status or working conditions have drifted between Benchmark and Emulator tests, and classes are not recognized anymore. In that case, update the imported "class" files, and start a new benchmark.

4.3.5. Deploy

This feature is only available:

  • in the Trial version of NanoEdge AI Studio, limited to the featured boards which can be selected during project creation
  • in the Paid version of NanoEdge AI Studio
4.3.5.1. General case

In this step (Step: Deploy), the library is compiled and downloaded, ready to be used on your microcontroller for your embedded application.

NanoEdgeAI 6 deploy top.png

Before compiling the library, several compilation flags are available:

NanoEdgeAI compilation flags.png

If you ran several benchmarks, make sure that the correct benchmark is selected. Then, when you are ready to download the NanoEdge AI Library, click Compile.

NanoEdgeAI compile.png

Select Development version to get a library intended for testing and prototyping. If you would like to start production of a device integrating the NanoEdge AI Library, contact STMicroelectronics for more details and to get the proper library version.

After a short delay, a .zip file is downloaded to your computer.

NanoEdgeAI 6 zip file.png

It contains all relevant documentation, the NanoEdge AI Emulator (both Windows® and Linux® versions), the NanoEdge AI header file (C and C++), a .json file containing some library details, and the model knowledge (for classification only).

You can also re-download any previously compiled library, via the archived libraries list:

NanoEdgeAI 6 archived libraries.png
4.3.5.2. Multi-library

In this final step (Step: Deploy), you can also add a suffix to the library you are about to compile and download.

NanoEdgeAI multi library.png

This is useful to integrate multiple libraries into the same device / code, when there is a need to:

  • monitor several signal sources coming from different sensor types, concurrently and independently,
  • train Machine Learning models and gather knowledge from these different input sources,
  • take decisions based on the outputs of the Machine Learning algorithms for each signal type.

For instance, one library can be created for 3-axis vibration analysis, and suffixed vibration:

NanoEdgeAI multilib vibration.png

Later on, a second library can be created for 1-axis electric current analysis, and suffixed current:

NanoEdgeAI multilib current.png

All the NanoEdge AI functions in the corresponding libraries (as well as the header files, variables, and knowledge files if any) are suffixed appropriately, and are usable independently in your code. See below the header files and the suffixed functions and variables corresponding to this example:

NanoEdgeAI multi lib suffixes.png
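
For illustration, the two libraries of this example could then be used side by side along these lines. The suffixed names below are assumptions based on the vibration / current suffixes above (always use the actual names shown in the generated header files), and the buffer sizes are placeholders.

#include <stdint.h>
#include "NanoEdgeAI_vibration.h"     /* assumed suffixed header names */
#include "NanoEdgeAI_current.h"

void monitor_both(void)
{
    float vib_buffer[256];            /* placeholder buffer sizes */
    float cur_buffer[128];

    initialize_vibration();           /* each library keeps its own, independent knowledge */
    initialize_current();

    /* ... separate learning phases for each model ... */

    uint8_t vib_similarity = detect_vibration(vib_buffer);
    uint8_t cur_similarity = detect_current(cur_buffer);

    if (vib_similarity < 90 || cur_similarity < 90) {
        /* take decisions based on either output, independently */
    }
}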

Congratulations! You can now use your NanoEdge AI Library!
It is ready to be linked to your C code using your favorite IDE, and embedded in your microcontroller.

For more info, check the library documentation (AD library, CL library), as well as the code snippets on the right side of the screen, which provide general guidelines about how your code could be structured, and how the NanoEdge AI Library functions must be called.

5. Resources

Documentation
All NanoEdge AI Studio documentation is available here.

Tutorials
Step-by-step tutorials, to use NanoEdge AI Studio to build a smart device from A to Z.