Implementing accelerated machine-learning applications with an advanced MCU – Embedded
Historically, Artificial Intelligence (AI) depended on GPUs, CPUs, or DSPs. More recently, however, AI has been moving into data-acquisition systems, integrated into constrained applications running on small microcontrollers (MCUs). This trend is driven largely by the Internet of Things (IoT) market, in which Silicon Labs is a major player.
To address this new IoT trend, Silicon Labs has announced the EFR32xG24, a wireless MCU designed with an integrated Matrix Vector Processor (MVP) that performs hardware-accelerated AI operations.
In this article, I will first cover some AI basics to highlight the use cases the MVP was built for, and then show how to use the EFR32xG24 to design an AI-enabled IoT application.
Artificial Intelligence, Machine Learning and Edge Computing in a nutshell
AI is a system that tries to mimic human behavior. More specifically, it is an electronic and/or mechanical system that responds to an input much as a human would. Although the terms AI and Machine Learning (ML) are often used interchangeably, they represent two different methodologies: AI is the broader concept, while ML is a subset of AI.
Using Machine Learning, a system can make predictions and improve (or train) itself through repeated use of what is called a model. A model is a trained algorithm that is eventually used to emulate decision making. It can be trained on newly collected data or on existing datasets. When the system applies its trained model to newly acquired data to make decisions, we call this Machine Learning inference.
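The distinction between training and inference can be illustrated with a deliberately tiny example (a least-squares line fit; the data and values here are made up for illustration and have nothing to do with the EFR32xG24 workflow). "Training" produces the model's parameters from collected data; "inference" applies those frozen parameters to a new input:

```python
import numpy as np

# "Training": learn the parameters of y = w*x + b from collected data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])            # underlying rule: y = 2x + 1
A = np.vstack([x, np.ones_like(x)]).T
w, b = np.linalg.lstsq(A, y, rcond=None)[0]   # the learned "model"

# "Inference": apply the trained parameters to newly acquired data.
new_sample = 10.0
prediction = w * new_sample + b               # ≈ 21.0
```

A real keyword-spotting model has thousands of parameters instead of two, but the split is the same: training happens once (usually on a PC), while inference runs repeatedly on the device.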
As mentioned previously, inference needs computational power that was usually supplied by high-end computers. However, we are now able to run inference on more constrained devices that do not need to be connected to such computers; this is called Edge Computing.
Running inference on an MCU is therefore a form of Edge Computing, which means running data-processing algorithms as close as possible to the point where the data is acquired. Edge devices are usually simple, constrained devices such as sensors or basic actuators (lightbulbs, thermostats, door sensors, electricity meters, and so on), typically running on low-power, Arm Cortex-M class MCUs.
Performing Edge Computing has many benefits. Arguably the most valuable is that an edge device does not depend on an external entity: it can "make its own decisions" locally, which in practice also tends to mean lower latency, reduced network traffic, and better data privacy.
Silicon Labs’ EFR32xG24 for Edge Computing
The EFR32xG24 is a secure wireless MCU that supports several 2.4 GHz IoT protocols (Bluetooth Low Energy, Zigbee, OpenThread, and Matter). It also includes Secure Vault, an enhanced security feature set common to all Silicon Labs Series 2 platforms.
In addition to improved security and connectivity, this MCU uniquely includes a hardware accelerator for Machine Learning model inference (among other accelerations), called the Matrix Vector Processor (MVP).
The MVP runs Machine Learning inference more efficiently, with up to 6x lower power and 2-4x faster speed compared to an Arm Cortex-M core without hardware acceleration (the actual improvement depends on the model and application).
The MVP is designed to offload the CPU by handling intensive floating-point operations, especially complex matrix floating-point multiplications and additions.
The MVP consists of a dedicated hardware arithmetic logic unit (ALU), a load/store unit (LSU), and a sequencer.
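To make the workload concrete, here is a sketch of the kind of operation the MVP is built to offload: a dense matrix-vector multiply-accumulate, as found in a fully-connected neural-network layer. (The dimensions below are arbitrary; on the device this math is dispatched to the MVP hardware rather than written out like this.)

```python
import numpy as np

# A fully-connected NN layer y = W @ x + b is, at its core, a long run of
# floating-point multiply-accumulate operations -- exactly what the MVP's
# ALU/LSU/sequencer pipeline is designed to stream through efficiently.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)).astype(np.float32)   # layer weights
x = rng.standard_normal(16).astype(np.float32)        # input activations
b = np.zeros(8, dtype=np.float32)                     # biases

y = W @ x + b   # 8 x 16 = 128 multiply-accumulates for this one layer
```

Even a small keyword-spotting model chains many such layers, so moving these inner loops off the Cortex-M core is where the power and speed gains come from.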
As a result, the MVP helps accelerate processing and save power across a wide variety of applications, such as Angle-of-Arrival (AoA) and MUSIC algorithm computations, Machine Learning linear-algebra kernels (e.g., Eigen or Basic Linear Algebra Subprograms, BLAS), and so on.
Because this device is a simple MCU, it cannot address every use case AI/ML can cover. It is designed to address four main categories of use cases, each with real-life applications.
To help address these, Silicon Labs delivers dedicated sample applications based on an AI/ML framework called TensorFlow.
TensorFlow is an end-to-end open-source platform for machine learning from Google. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications.
The TensorFlow project also has a variant optimized for embedded hardware, called TensorFlow Lite for Microcontrollers (TFLM). It is an open-source project to which most of the code is contributed by community engineers, including Silicon Labs and other silicon vendors. At the moment, it is the only framework delivered with the Silicon Labs Gecko SDK software suite for creating AI/ML applications.
Available AI/ML examples from Silicon Labs are:
To start developing an application based on any of these examples, you can have very little experience or be an expert: Silicon Labs provides multiple Machine Learning development tools to choose from, depending on your level of ML expertise.
First-time ML developers can start from one of our examples or try one of our third-party partners. Our third-party ML partners support the full end-to-end workflow with richly featured, easy-to-use GUIs for building the optimal Machine Learning model for our chips.
For ML experts who want to work directly with the Keras/TensorFlow platform, Silicon Labs offers a self-serve, self-support reference package that organizes the model-development workflow into one tailored to building ML models for Silicon Labs chips.
Developing an ML-Enabled Application Example: Voice-Controlled Zigbee Switch with EFR32xG24
To create an ML-enabled application, two main steps are necessary. The first step is to create a wireless application, using Zigbee, BLE, Matter, or any proprietary 2.4 GHz protocol; it can even be a non-connected application. The second step is to build an ML model and integrate it with the application.
As mentioned above, Silicon Labs provides several options for creating an ML application for its MCUs. The approach chosen here uses an existing sample application with a predefined model. In this example, the model is trained to detect two voice commands: "on" and "off".
Getting Started with an EFR32xG24 Application
To get started, get the EFR32MG24 developer's kit, BRD2601A.
This development kit is a compact board embedding several sensors (IMU, temperature, relative humidity, and more), LEDs, and stereo I2S microphones. This project uses the I2S microphones.
These kits might not be as scarce as GPUs, but if you cannot get hold of one, you can also use an older Series 1 devkit, the "Thunderboard Sense 2" (Ref. SLTB004A). However, its MCU does not have an MVP and will perform all inference on the main core, without acceleration.
Next, you need Silicon Labs' IDE, Simplicity Studio, to create the ML project. It provides a simple way to download the Silicon Labs Gecko SDK software suite, which contains the libraries and drivers the application requires.
The IDE also provides tools to further analyze your application's power consumption or networking operations.
Creating the Zigbee 3.0 Switch Project with MVP Enabled
Silicon Labs provides a ready-to-use sample application, Z3SwitchWithVoice, which you can create and build. The application already comes with an ML model, so you do not need to create one.
After it is created, note that a Simplicity Studio project is made of source files brought in by components: GUI entities that simplify the integration of complex software on Silicon Labs MCUs. In this case, you can see that MVP support and the Zigbee networking stack are installed by default.
The main application code is in the app.c source file.
On the networking side, the application can join any existing Zigbee 3.0 network with a simple button press, a process known as "network steering". Once on a network, the MCU will look for a compatible, pairable lighting device, a process known as "binding".
When the networking part of the application is up and running, the MCU will periodically poll samples of microphone data and run inference on them. This code is located in keyword_detection.c.
Upon detection of a keyword, the handler in app.c will send the corresponding Zigbee On/Off command.
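The dispatch logic is simple: map each detected keyword to the matching On/Off cluster command and send it to the bound lighting device. The sketch below illustrates only that control flow; the real handler lives in app.c and calls the Gecko SDK Zigbee APIs, so the function names here are hypothetical stand-ins:

```python
def send_zigbee_on_off(command):
    """Hypothetical stand-in for sending a Zigbee On/Off cluster
    command to the devices bound during network steering."""
    return f"sent {command} to bound lights"

def on_keyword_detected(keyword):
    """Dispatch a recognized keyword to the matching Zigbee command."""
    if keyword == "on":
        return send_zigbee_on_off("On")
    if keyword == "off":
        return send_zigbee_on_off("Off")
    return None  # unknown keyword, or below the confidence threshold
```

In the real application, inference output below a confidence threshold is discarded the same way, so background noise does not toggle the lights.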
At this point, you have a hardware accelerated inference running on a wireless MCU for edge computing.
Customizing the TensorFlow Model to Use Different Command Words
As mentioned before, the model was already integrated into this application and was not modified further. However, if you were integrating a model yourself, you would do so with the following steps:
These steps must be followed no matter how familiar you are with Machine Learning. The difference, though, lies in how you build the model:
Within Simplicity Studio, the latter is the simplest. To change the model in Simplicity Studio, copy your .tflite model file into the config/tflite folder of your project. The project configurator provides a tool that automatically converts .tflite files into sl_ml_model source and header files. The full documentation for this tool is available under "Flatbuffer Conversion".
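If you are producing the .tflite file yourself via the Keras/TensorFlow path, the conversion step looks roughly like this. The model architecture below is a made-up placeholder (a real keyword-spotting model would be trained on audio features first); only the TFLiteConverter usage is the point:

```python
import tensorflow as tf

# Hypothetical tiny model standing in for a trained keyword-spotting
# network; in practice you would train it before converting.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(490,)),            # e.g. flattened audio features
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"), # "on", "off", background
])

# Convert the Keras model to the .tflite flatbuffer that the project
# configurator picks up from config/tflite/.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for MCU flash
tflite_model = converter.convert()

with open("keyword_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Dropping the resulting file into config/tflite then triggers the flatbuffer-conversion tool on the next project generation.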
[Note: All images and code are courtesy of Silicon Labs.]