ARM’s Zach Shelby introduced the use of microcontrollers for machine learning and artificial intelligence at the ECF19 event in Helsinki last Friday. The talk showed that artificial intelligence and machine learning can be applied to small embedded devices in addition to the cloud-based model. In particular, artificial intelligence is well suited to the devices of the Internet of Things. Using machine learning in IoT also makes sense from an energy-efficiency point of view when unnecessary power-consuming communication can be avoided (for example, local keyword detection before sending voice data to the cloud for more detailed analysis).
According to Shelby, we are now moving to a third wave of IoT, one that comes with comprehensive device security and voice control. In this model, machine learning techniques are one new application that can be added to previous work done on IoT.
To use machine learning successfully in small embedded devices, the problem to be solved must have reasonably little incoming information and a very limited number of possible outcomes. An ARM Cortex-M4 processor equipped with a DSP unit is powerful enough for simple handwriting decoding or for detecting a few spoken words with a machine learning model. In the examples shown, the machine learning models needed less than 100 kilobytes of memory.
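As a rough illustration of why such models fit in under 100 kilobytes, the footprint can be estimated by counting parameters. The layer sizes below are hypothetical, chosen only to show the arithmetic; they are not from the talk.

```python
# Rough, illustrative estimate of the flash footprint of a small
# keyword-spotting network. Layer sizes are hypothetical examples.

def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

# Hypothetical tiny network: a 49x40 spectrogram input, one hidden
# layer, and four output classes.
params = (
    dense_params(49 * 40, 48) +   # input -> hidden
    dense_params(48, 4)           # hidden -> 4 output classes
)

bytes_int8 = params * 1   # 8-bit quantized weights: 1 byte each
bytes_f32 = params * 4    # float32 weights: 4 bytes each

print(f"parameters:   {params}")
print(f"int8 size:    {bytes_int8 / 1024:.1f} KiB")
print(f"float32 size: {bytes_f32 / 1024:.1f} KiB")
```

With these made-up sizes, the 8-bit quantized model lands at roughly 92 KiB, consistent with the sub-100-kilobyte figure above, while the unquantized float32 version would be about four times larger.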
The presentation can now be viewed on YouTube:
Important tools and projects mentioned in the presentation:
uTensor (ARM MicroTensor)
Articles about the presentation:
https://www.uusiteknologia.fi/2019/05/20/ecf19-koneoppiminen-mahtuu-mikro-ohjaimeen/
http://www.etn.fi/index.php/72-ecf/9495-koneoppiminen-mullistaa-sulautetun-tekniikan
Tomi Engdahl says:
ESP32-Cam on your water meter with “AI-on-the-edge” — also for gas and power meters
https://www.youtube.com/watch?v=iUgxwbfkIqU
0:00 Intro
0:50 Requirements
1:39 3D-Model and camera focus
3:05 Firmware installation
4:21 Wiring of the ESP32-cam
6:25 Hardware modification
9:44 Installation on the water meter
10:43 Initial setup
13:07 MQTT, Node-Red, Grafana, …
14:00 Outro
https://github.com/jomjol/AI-on-the-edge-device/
Tomi Engdahl says:
Run Machine Learning Code in an Embedded IoT Node to Easily Identify Objects
https://www.digikey.com/en/articles/run-machine-learning-code-in-an-embedded-iot-node?dclid=CLG_joKc0fQCFeEHogMdwQIGyg
Internet of Things (IoT) networks operating in dynamic environments are being expanded beyond object detection to include visual object identification in applications such as security, environmental monitoring, safety, and Industrial IoT (IIoT). As object identification is adaptive and involves using machine learning (ML) models, it is a complex field that can be difficult to learn from scratch and implement efficiently.
The difficulty stems from the fact that an ML model is only as good as its data set, and once the correct data is acquired, the system must be properly trained to act upon it in order to be practical.
This article will show developers how to implement Google’s TensorFlow Lite for Microcontrollers ML model into a Microchip Technology microcontroller. It will then explain how to use the image classification and object detection learning data sets with TensorFlow Lite to easily identify objects with a minimum of custom coding.
It will then introduce a TensorFlow Lite ML starter kit from Adafruit Industries that can familiarize developers with the basics of ML.
Tomi Engdahl says:
TensorFlow Lite for MCUs is AI on the Edge
https://www.mouser.com/empowering-innovation/more-topics/ai?utm_source=endeavor&utm_medium=display&utm_campaign=ed-personifai-eit-ai-#article1-ai
TensorFlow, a Google-led effort, is a set of open-source software libraries that enable developers to easily integrate complex numerical computation algorithms and machine learning (ML) into their projects (Figure 1). According to Google, these libraries provide stable application programming interfaces for Python (Python 3.7+ across all platforms) and C. Also, they provide APIs without backward compatibility guarantees for C++, Go, Java and JavaScript. Additionally, an alpha release is available for Apple’s Swift language.
TensorFlow offers so-called end-to-end machine learning support for the development and use of deep neural networks (DNNs). DNNs are an implementation of ML that is particularly adept at pattern recognition and at object detection and classification. TensorFlow libraries support both phases of the machine-learning process: training and inferencing. The first is the training of deep neural networks, which requires significant computing horsepower, typically found in server-grade hardware and graphics processing units (GPUs). More recently, application-specific integrated circuits known as Tensor Processing Units (TPUs) have been developed to support training efforts. The second phase, inferencing, uses the trained DNNs in the real world to respond to new inputs and make recommendations based on the analysis of those inputs against the trained models. This is the phase that should be of keen interest to embedded product developers.
The release of TensorFlow Lite for Microcontrollers (a subset of the TensorFlow libraries) is specifically geared for performing inferencing on memory-constrained devices typically found in most embedded systems applications. It does not allow you to train new networks. That still requires the higher-end hardware.
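The training/inferencing split can be made concrete with a toy sketch. The "model" below is just a pair of hard-coded weights standing in for a real trained network, not a TensorFlow model; the point is that inference on a frozen model is only a forward pass over fixed constants, which is what a microcontroller has to do.

```python
# Minimal illustration of inferencing on a frozen model: the weights
# are fixed constants (a toy two-input logistic "model", not a real
# TensorFlow network). The device only evaluates them on new inputs.
import math

# Pretend these came out of a training run on server-grade hardware.
WEIGHTS = [0.8, -0.5]
BIAS = 0.1

def infer(x):
    """One forward pass: weighted sum followed by a sigmoid."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

# A new, unseen input: only inference happens on the device.
score = infer([1.0, 2.0])
print(f"score = {score:.3f}")
```

No gradients, optimizers, or training data appear anywhere in the inference path, which is why the memory and compute demands are so much lower than in the training phase.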
The Workflow
According to the documentation provided for TensorFlow Lite for Microcontrollers, the developer workflow can be broken down into five key steps (Figure 2). These steps are:
Create or Obtain a TensorFlow Model: The model must be small enough to fit on your target device after conversion, and it can only use supported operations. If you want to use operations that are not currently supported, you can provide your custom implementation.
Convert the Model to a TensorFlow Lite FlatBuffer: You will convert your model into the standard TensorFlow Lite format using the TensorFlow Lite converter. You might wish to output a quantized model since these are smaller in size and more efficient to execute.
Convert the FlatBuffer to a C byte array: Models are kept in read-only program memory and provided in the form of a simple C file. Standard tools can be used to convert the FlatBuffer into a C array.
Integrate the TensorFlow Lite for Microcontrollers C++ Library: Write your microcontroller code to collect data, perform inference using the C++ library, and make use of the results.
Deploy to your Device: Build and deploy the program to your device.
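The third step above can be done with the standard `xxd -i` tool or a few lines of script. Here is a minimal sketch of the FlatBuffer-to-C-array conversion; the input bytes are dummy data standing in for a real `.tflite` file, and the symbol name `g_model` is just a conventional placeholder.

```python
# Convert a binary model file into a C byte array, the same job that
# `xxd -i model.tflite` performs. Dummy bytes stand in for a real
# FlatBuffer here.

def to_c_array(data: bytes, name: str = "g_model") -> str:
    lines = []
    for i in range(0, len(data), 12):          # 12 bytes per line
        chunk = data[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (
        f"const unsigned char {name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# In real use: data = open("model.tflite", "rb").read()
dummy = bytes(range(16))
print(to_c_array(dummy))
```

The resulting C file is compiled into the firmware, which is how the model ends up in read-only program memory as the workflow describes.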
Some caveats that a developer should be aware of when selecting a compatible embedded platform for use with TensorFlow Lite libraries include:
A 32-bit architecture is required, such as Arm Cortex-M processors and ESP32-based systems.
It can run on systems where memory size is measured in the tens of kilobytes.
TensorFlow Lite for Microcontrollers is written in C++ 11.
TensorFlow Lite for Microcontrollers is available as an Arduino library. The framework can also generate projects for other development environments such as Mbed.
It requires no operating system support, dynamic memory allocation, or any of the C/C++ standard libraries.
Next Steps
Google offers four pre-trained models as examples that can be run on embedded platforms. With a few slight modifications, they can be used on various development boards. Examples include:
Hello World: Demonstrates the absolute basics of using TensorFlow Lite for Microcontrollers.
Micro-Speech: Captures audio with a microphone to detect the words “yes” and “no”.
Person Detection: Captures camera data with an image sensor to detect the presence or absence of a person.
Magic Wand: Captures accelerometer data to classify three different physical gestures.
Tomi Engdahl says:
AI on a microcontroller without special expertise
https://etn.fi/index.php?option=com_content&view=article&id=12943&via=n&datum=2021-12-10_14:47:09&mottagare=31202
At the moment, artificial intelligence is rapidly moving from the cloud to the network edge. In practice, this means that controller chips must be able to compute machine learning models in embedded devices, which demands ever more of the designer. STMicroelectronics has introduced a tool with which an embedded software designer can make use of AI without a data scientist's expertise.
The tool in question is NanoEdge AI Studio. ST has now released version V3 of the software, the first major update to the tool since ST acquired Cartesiam, the tool's original developer, earlier this year.
The new version of NanoEdge AI Studio starts out with four algorithm families: anomaly detection, regression, single-class classification, and multi-class classification. More algorithms will be added to these families, and the number of algorithm families will also grow.
In practice, the tools search for suitable algorithms. Once the best one is found for the desired process, it can be brought into the controller's C code. The library can be exported to the STWIN board with a datalogger function. Deployed on a microcontroller, these models can both learn from what they experience and make inferences independently, without a connection to a cloud service.
Tomi Engdahl says:
With AIfES (Artificial Intelligence for Embedded Systems) from Fraunhofer IMS, it’s possible to run, and even train, artificial neural networks on almost any hardware — including the 8-bit Arduino Uno. In this tinyML Foundation Talk, AIfES inventor and product manager Pierre Gembaczka introduces the framework and shows off an on-device demo.
tinyML Talks Germany: AIfES – an open-source standalone AI framework for almost any hardware
https://m.youtube.com/watch?v=dOUZdm0GagU&t=1108s
In the last few years there have been more and more solutions for running machine learning (ML) on microcontrollers. Some of the most popular are scaled-down versions of frameworks designed for servers, but those are only suitable for fairly powerful MCUs.
This is now a thing of the past. With the open-source solution AIfES (Artificial Intelligence for Embedded Systems) from the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) it’s possible to run, and even train, artificial neural networks (ANN) on almost any hardware, including 8-bit microcontrollers. To make AIfES easy to try directly, it has been released as a library for the amazing Arduino IDE and can be used with any Arduino or Arduino-compatible board.
AIfES inventor and product manager Pierre Gembaczka will introduce the framework and will also show an on-device training live demo. The new features of the upcoming update will also be shown.
Tomi Engdahl says:
This Device Uses Machine Learning at the Edge to Detect Wildfires Early
https://www.hackster.io/news/this-device-uses-machine-learning-at-the-edge-to-detect-wildfires-early-120b62e705c1
By integrating edge ML with a solar-powered microcontroller, this device can quickly send alerts only when there’s a wildfire nearby.
Tomi Engdahl says:
This system classifies different types of clouds using tinyML
https://blog.arduino.cc/2021/12/06/this-system-classifies-different-types-of-clouds-using-tinyml/
Tomi Engdahl says:
This Arduino device can detect which language is being spoken using tinyML
https://blog.arduino.cc/2021/12/08/this-arduino-device-can-detect-which-language-is-being-spoken/
Tomi Engdahl says:
Picovoice Launches Completely Free Usage Tier for Offline Voice Recognition — for Up to Three Users
Supporting up to three active users a month, the new free tier requires no credit card — and you can even use it commercially.
https://www.hackster.io/news/picovoice-launches-completely-free-usage-tier-for-offline-voice-recognition-for-up-to-three-users-e1eafbc97bb0
Tomi Engdahl says:
Fabio Antonini created a device that uses an Edge Impulse model on a Nano 33 BLE Sense to classify how a bicycle is being ridden along with the conditions.
This Arduino device knows how a bike is being ridden using tinyML
https://blog.arduino.cc/2021/12/28/this-arduino-device-knows-how-a-bike-is-being-ridden-using-tinyml/
Fabio Antonini loves to ride his bike, and while nearly all bike computers offer information such as cadence, distance, speed, and elevation, they lack the ability to tell if the cyclist is sitting or standing at any given time. So, after doing some research, he came across an example project that utilized Edge Impulse and an Arduino Nano 33 BLE Sense’s onboard accelerometer to distinguish between various kinds of movements. Based on this previous work, he opted to create his own ML device using the same general framework.
Machine Learning and bike with Edge Impulse Studio
How ML can help when riding a bike
https://medium.com/@fabio.antonini.1969/bike-riding-by-machine-learning-12779a80383b
Tomi Engdahl says:
Tips and tricks for deploying TinyML
A typical TinyML deployment has many software and hardware requirements, and there are best practices that developers should be aware of to help simplify this complicated process.
https://www.techtarget.com/searchenterpriseai/feature/Tips-and-tricks-for-deploying-TinyML
Tomi Engdahl says:
Motion Classifier with Edge Impulse
https://pelinbalci.com/tinyml/2022/01/01/Motion-Classifier.html
Tomi Engdahl says:
Run Machine Learning Code in an Embedded IoT Node to Easily Identify Objects
https://www.digikey.com/en/articles/run-machine-learning-code-in-an-embedded-iot-node?dclid=CN66o8vHl_UCFRVuGAod3MwONQ
Tomi Engdahl says:
Failing Faster in TinyML
CFU Playground offers a framework to rapidly iterate on hardware machine learning accelerator designs.
https://www.hackster.io/news/failing-faster-in-tinyml-f29c15d75891
Tomi Engdahl says:
https://blog.arduino.cc/2022/01/17/aifes-releases-exciting-new-version-of-tinyml-library-for-arduino/
Tomi Engdahl says:
https://wiki.seeedstudio.com/Wio-Terminal-TinyML/
Tomi Engdahl says:
ESP32-CAM: TinyML Image Classification – Fruits vs Veggies
Learning Image Classification on embedded devices (ESP32-CAM)
https://www.hackster.io/mjrobot/esp32-cam-tinyml-image-classification-fruits-vs-veggies-4ab970
Tomi Engdahl says:
Meta AI has detailed what it claims is the “first high-performance self-supervised [machine learning] algorithm” capable of operating with speech, vision, and text: data2vec.
Meta AI Releases Data2vec, an ML Algorithm That Works Across Text, Images, and Speech
https://www.hackster.io/news/meta-ai-releases-data2vec-an-ml-algorithm-that-works-across-text-images-and-speech-e74bb7b1147f
Designed around a teacher-student model, data2vec claims to outperform rivals — despite working across three different modalities of data.
Meta AI claims data2vec offers simplified training — yet matches or outperforms modality-specific rivals.
Tomi Engdahl says:
Google Unveils the Coral Dev Board Micro, Its First Microcontroller-Based TinyML Edge AI Board
The latest entry in the Coral range of low-power edge AI development boards is also Google’s first microcontroller board — “coming soon.”
https://www.hackster.io/news/google-unveils-the-coral-dev-board-micro-its-first-microcontroller-based-tinyml-edge-ai-board-31364ab0db63
Tomi Engdahl says:
Silicon Labs Announces BG24, MG24 TinyML 2.4GHz SoCs — and Boasts of a Fourfold Performance Gain
Company makes some bold performance gains for its new parts, which include native TensorFlow support.
https://www.hackster.io/news/silicon-labs-announces-bg24-mg24-tinyml-2-4ghz-socs-and-boasts-of-a-fourfold-performance-gain-0bdadcfd345e
Tomi Engdahl says:
Phil Caridi’s MetalDetector uses an Edge Impulse tinyML model on a Nano 33 BLE Sense to determine how metal your music is.
Instead of sensing the presence of metal, this tinyML device detects rock (music)
https://blog.arduino.cc/2022/01/29/instead-of-sensing-the-presence-of-metal-this-tinyml-device-detects-rock-music/
Tomi Engdahl says:
https://www.edn.com/wireless-socs-integrate-ai-ml-accelerators/
Tomi Engdahl says:
https://blog.arduino.cc/2022/01/29/this-contactless-system-combines-embedded-ml-and-sensors-to-improve-elevator-safety/
Tomi Engdahl says:
Based on an Arduino Portenta H7 and Vision Shield, TinySewer is a low-power device that uses embedded ML to automate the pipe inspection process.
Tomi Engdahl says:
Machines are getting better at writing their own code. But human-level is ‘light years away’
https://www.cnbc.com/2022/02/08/deepmind-openai-machines-better-at-writing-their-own-code.html#Echobox=1644304469
DeepMind announced on Wednesday that it has created a piece of software called AlphaCode that can code just as well as an average human programmer.
The London-headquartered firm tested AlphaCode’s abilities in a coding competition on CodeForces.
But computer scientist Dzmitry Bahdanau wrote on Twitter that human level coding is “still light years away.”
Tomi Engdahl says:
TinyML is faster, more real-time, more power-efficient, and more privacy-friendly than any other form of edge analytics. Wevolver.com takes a look at how Arduino Pro is making it possible for everyone: https://blog.arduino.cc/2022/02/10/from-embedded-sensors-to-advanced-intelligence-driving-industry-4-0-innovation-with-tinyml/
Tomi Engdahl says:
How can we make TinyML secure? (And why we need to)
https://staceyoniot.com/how-can-we-make-tinyml-secure-and-why-we-need-to/
Tomi Engdahl says:
With a bit of Edge Impulse machine learning and a microphone, this gauge can sense whether or not the music being played is metal enough.
This Clever Device Detects the Amount of Metal in Music
https://www.hackster.io/news/this-clever-device-detects-the-amount-of-metal-in-music-d08509982627
As the basis of his device, Caridi’s metal detector uses an Arduino Nano 33 BLE Sense due to its onboard microphone, adequate processing power, and tight integration with Edge Impulse’s data ingestion service. In order to move the needle back and forth as the amount of metal changes, a micro servo is positioned at its base. And finally, a single 9V battery supplies power to everything via a 5V linear regulator.
Once trained on this now-processed data, the Keras neural network was able to achieve an impressive accuracy of 88.2%. Caridi then built and exported this model as an Arduino library for use in his sketch. His Edge Impulse project can be found here.
https://studio.edgeimpulse.com/public/65438/latest
Tomi Engdahl says:
https://www.philcaridi.com/metaldetector/
Phil Caridi had the idea to build a metal detector, but rather than using a coil of wire to sense eddy currents, his device would use a microphone to determine if metal music is playing nearby.
Tomi Engdahl says:
Weather Station Predicts Air Quality
https://hackaday.com/2022/02/13/weather-station-predicts-air-quality/
Measuring air quality at any particular location isn’t too complicated. Just a sensor or two and a small microcontroller is generally all that’s needed. Predicting the upcoming air quality is a little more complicated, though, since so many factors determine how safe it will be to breathe the air outside. Luckily, though, we don’t need to know all of these factors and their complex interactions in order to predict air quality. We can train a computer to do that for us as [kutluhan_aktar] demonstrates with a machine learning-capable air quality meter.
O3 & BLE Weather Station Predicting Air Quality
https://hackaday.io/project/183935-o3-ble-weather-station-predicting-air-quality
Via Nano 33 BLE, collate local weather data, build and train a TensorFlow neural network model, and run the model to predict air quality.
Tomi Engdahl says:
Designed to let high-power digital parts stay asleep, this power-saving tinyML chip does its work in the analog domain.
Aspinity Launches AML100 Analog TinyML Chip, Promises 95 Percent Power Saving
https://www.hackster.io/news/aspinity-launches-aml100-analog-tinyml-chip-promises-95-percent-power-saving-76cea7b268d7
“We’ve long realized that reducing the power of each individual chip within an always-on system provides only incremental improvements to battery life,” claims Tom Doyle, founder and chief executive of Aspinity. “That’s not good enough for manufacturers who need revolutionary power improvements. The AML100 reduces always-on system power to under 100μA, and that unlocks the potential of thousands of new kinds of applications running on battery.”
The AML100 is made up of an array of independent configurable analog blocks (CABs), each of which can be turned to a variety of tasks including sensor interfacing and machine learning. Key to its capabilities is its operation in the analog, rather than digital, domain, which Aspinity claims allows the system to intelligently reduce the volume of data being shuffled around a hundredfold.
That, in fact, is how the chip delivers on its claimed 95 percent power savings: By figuring out what data are important before anything is converted to digital, the chip can keep higher-power digital components like microcontrollers asleep — dramatically boosting battery life for always-on tinyML projects.
Tomi Engdahl says:
Using an Arduino Nano 33 BLE Sense and an ESP32, Redditor ‘Summit AI ML’ created a tinyML-based tamperproof system to transport agricultural products: reddit.com/r/diyelectronics/comments/st15li/arec_agricultural_records_on_electronic_contracts
Blockchain, IoT, machine learning, this project has it all!
Tomi Engdahl says:
https://etn.fi/index.php/13-news/13199-st-tuo-tekoaelyn-mems-anturille
Tomi Engdahl says:
https://www.edgeimpulse.com/blog/analyze-power-consumption-in-embedded-ml-solutions
Tomi Engdahl says:
https://www.edn.com/ai-tools-pair-with-dual-core-mcu-for-iot-edge-design/
Tomi Engdahl says:
Tiny MIT Chip Aims to Protect TinyML and Edge AI Data From Side-Channel Power Analysis Attacks
Designed for protecting data on edge devices from power analysis attacks, this tiny chip is small enough for embedded use.
https://www.hackster.io/news/tiny-mit-chip-aims-to-protect-tinyml-and-edge-ai-data-from-side-channel-power-analysis-attacks-dcc2b7c8465f
Tomi Engdahl says:
TinyML is expected to bring about a revolution in the Internet of Things (IoT). The abbreviation ML comes from the words machine learning, while the prefix "tiny" hints that TinyML software is meant to run on very modest hardware. But what are the use cases for the technology?
https://www.dna.fi/yrityksille/blogi/-/blogs/kuinka-tehda-pienista-antureista-alykkaita-telenorin-tutkimusyksikko-tyostaa-seuraavaa-vallankumouksellista-iot-innovaatiota
Tomi Engdahl says:
Classify Music Genre with Arduino Nano 33 BLE Sense © MIT
This project can automatically classify three different musical genres (i.e., classical, metal, and reggae) from device-playing music files.
https://create.arduino.cc/projecthub/tronixlabph/classify-music-genre-with-arduino-nano-33-ble-sense-fb2903
Tomi Engdahl says:
Kutluhan Aktar developed a low-cost, tinyML-driven device to collect water quality data from various sources and predict pollution levels based on oxidation-reduction potential, pH, total dissolved solids, and turbidity measurements.
GSM & SMS Enabled AI-driven (TinyML) Water Pollution Monitor © CC BY
https://create.arduino.cc/projecthub/kutluhan-aktar/gsm-sms-enabled-ai-driven-tinyml-water-pollution-monitor-4a06e6
Via MKR GSM 1400, collate water quality data from resources over GPRS to train a Neuton model, run the model, and transmit results via SMS.
Tomi Engdahl says:
AnalogLamb’s TinyML Maple Eye ESP32-S3 Is a Dual-Screen Alternative to Espressif’s ESP32-S3-EYE
Equipped with a 2MP camera, dual displays, 8MB of PSRAM, microSD, Wi-Fi, Bluetooth, and more, this is a feature-packed edge AI dev board.
https://www.hackster.io/news/analoglamb-s-tinyml-maple-eye-esp32-s3-is-a-dual-screen-alternative-to-espressif-s-esp32-s3-eye-1e3f389ed5de
Beijing-based AnalogLamb has announced a tinyML and edge AI development board built with computer vision in mind, pairing an Espressif ESP32-S3 with a two-megapixel camera sensor, a microphone, and two compact on-board color displays: the Maple Eye ESP32-S3.
“The Maple Eye ESP32-S3 is a small-sized AI development board produced by AnalogLamb,” the company writes of its board design, also known as the Maple Eye Alef. “It is based on the ESP32-S3 SoC and ESP-WHO, Espressif’s AI development framework.”
Tomi Engdahl says:
FOMO is a novel machine learning architecture that brings real-time object detection, tracking and counting to the smallest devices, including microcontrollers or DSPs.
Try it now: edgeimpulse.com/fomo
Tomi Engdahl says:
30 FPS object detection on the Nicla Vision? What the F… OMO?!
https://docs.edgeimpulse.com/docs/tutorials/fomo-object-detection-for-constrained-devices
(via Edge Impulse)
Tomi Engdahl says:
$8 LU-ASR01 offline speech recognition board features “TW-ASR ONE” chip
https://www.cnx-software.com/2022/04/15/8-lu-asr01-offline-speech-recognition-board-features-tw-asr-one-chip/
LU-ASR01 is a board capable of offline speech recognition with a built-in microphone, a speaker connector, twelve through holes for GPIOs and a temperature sensor interface for DHT11/DS18B20, plus a USB Type-C port for power and programming.
Tomi Engdahl says:
Industry 4.0 : Predictive Maintenance © GPL3+
A TinyML model using Arduino Portenta and Edge Impulse to predict anomalous operation in industrial machinery such as pumps, valves, and fans
https://create.arduino.cc/projecthub/manivannan/industry-4-0-predictive-maintenance-3bb415
Tomi Engdahl says:
Man Makes His Own X-Ray Machine After Hospital Charges Him $70,000
https://www.iflscience.com/technology/man-makes-his-own-xray-machine-after-hospital-charges-him-70000/
A man has built his own X-ray machine after receiving a hospital bill of $69,210.32.
In a video, YouTuber William Osman starts picking out which of his possessions he’s about to start selling to pay the debt. Thankfully, he’ll only have to pay around $2,500 thanks to his “great insurance”, but as he explains in the video, many millions of Americans don’t have the same plan. The bill apparently got him thinking: could he make his own X-ray machine for cheaper than what he was charged?
Tomi Engdahl says:
Here’s a tutorial that shows you how to use the Arduino Nicla Vision to detect the presence and position of objects in a camera image: https://docs.arduino.cc/tutorials/nicla-vision/blob-detection
Tomi Engdahl says:
Run Machine Learning Code in an Embedded IoT Node to Easily Identify Objects
https://www.digikey.com/en/articles/run-machine-learning-code-in-an-embedded-iot-node?dclid=CNztkbrXtvcCFQ2DmgodNTgJfg
ML for embedded vision systems
ML in a broad sense gives a computer or an embedded system pattern recognition capabilities similar to a human's. From a human sensory standpoint, this means using sensors such as microphones and cameras to mimic the human senses of hearing and sight. While sensors make it easy to capture audio and visual data, once the data is digitized and stored it must be processed so it can be matched against stored patterns in memory that represent known sounds or objects. The challenge is that the image data captured by a camera for a visual object, for example, will not exactly match the stored data in memory for that object. An ML application that needs to visually identify the object must process the data so that it can accurately and efficiently match the pattern captured by the camera to a pattern stored in memory.
There are different libraries or engines used to match the data captured by the sensors. TensorFlow is an open-source code library that is used to match patterns. The TensorFlow Lite for Microcontrollers code library is specifically designed to be run on a microcontroller, and as a consequence has reduced memory and CPU requirements to run on more limited hardware. Specifically, it requires a 32-bit microcontroller and uses less than 25 kilobytes (Kbytes) of flash memory.
However, while TensorFlow Lite for Microcontrollers is the ML engine, the system still needs a learning data set of the patterns it is to identify. Regardless of how good the ML engine is, the system is only as good as its learning data set, and for visual objects some of the learning data sets can require multiple gigabytes of data for many large models. More data requires higher CPU performance to quickly find an accurate match, which is why these types of applications normally run on powerful computers or high-end laptops.
For an embedded systems application, it should only be necessary to store those specific models in a learning data set that are necessary for the application. If a system is supposed to recognize tools and hardware, then models representing fruit and toys can be removed. This reduces the size of the learning data set, which in turn lowers the memory needs of the embedded system, thus improving performance while reducing costs.
An ML microcontroller
To run TensorFlow Lite for Microcontrollers, Microchip Technology is targeting machine learning in microcontrollers with the Arm® Cortex®-M4F-based ATSAMD51J19A-AFT microcontroller (Figure 1). It has 512 Kbytes of flash memory with 192 Kbytes of SRAM memory and runs at 120 megahertz (MHz). The ATSAMD51J19A-AFT is part of the Microchip Technology ATSAMD51 ML microcontroller family. It is compliant with automotive AEC-Q100 Grade 1 quality standards and operates over -40°C to +125°C, making it applicable for the harshest IoT and IIoT environments. It is a low-voltage microcontroller and operates from 1.71 to 3.63 volts when running at 120 MHz.
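A quick fit-check against the ATSAMD51J19A figures quoted above (512 Kbytes of flash, 192 Kbytes of SRAM) shows how a developer might budget for a model. The firmware-reserve and tensor-arena numbers below are illustrative assumptions for the sketch, not Microchip specifications.

```python
# Sanity check: will a model fit on the ATSAMD51J19A
# (512 KB flash, 192 KB SRAM, per the figures above)?
# The firmware reserve and tensor-arena sizes are illustrative
# assumptions, not Microchip specifications.

FLASH_BYTES = 512 * 1024
SRAM_BYTES = 192 * 1024

def model_fits(model_bytes, arena_bytes, firmware_bytes=128 * 1024):
    """Return True if the model plus assumed overheads fit the part."""
    flash_ok = model_bytes + firmware_bytes <= FLASH_BYTES  # code + weights
    sram_ok = arena_bytes <= SRAM_BYTES                     # runtime buffers
    return flash_ok and sram_ok

# A 300 KB int8 model with a 64 KB tensor arena fits...
print(model_fits(300 * 1024, 64 * 1024))   # True
# ...but a 450 KB model would not leave room for the firmware.
print(model_fits(450 * 1024, 64 * 1024))   # False
```

This kind of back-of-the-envelope budgeting is also why trimming unneeded classes from the learning data set, as described above, translates directly into lower-cost parts.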
Tomi Engdahl says:
https://hackaday.com/2022/04/30/training-doppler-radar-with-smart-watch-imus-data-for-activity-recognition/
Tomi Engdahl says:
https://hackaday.com/2022/04/30/learn-sign-language-using-machine-vision/
Tomi Engdahl says:
Edge Impulse Announces Official Espressif ESP32 Support, Releases Open Source ESP-EYE Firmware
Initial firmware offers a 3-4x speedup over stock TensorFlow Lite for Microcontrollers, with optimizations to follow.
https://www.hackster.io/news/edge-impulse-announces-official-espressif-esp32-support-releases-open-source-esp-eye-firmware-b626af54d66e