Machine learning possible on microcontrollers

ARM’s Zach Shelby introduced the use of microcontrollers for machine learning and artificial intelligence at the ECF19 event in Helsinki last Friday. The talk showed that artificial intelligence and machine learning can be applied to small embedded devices in addition to the cloud-based model. In particular, artificial intelligence is well suited to Internet of Things devices. Machine learning in IoT also makes sense from an energy-efficiency point of view when it avoids unnecessary power-hungry communication, for example by running local keyword detection before sending voice data to the cloud for more detailed analysis.

According to Shelby, we are now moving to a third wave of IoT that comes with comprehensive device security and voice control. In this model, machine learning techniques are one new application that can be added to previous work done on IoT.

To use machine learning successfully on small embedded devices, the problem to be solved must have reasonably little incoming information and a very limited number of possible outcomes. An ARM Cortex-M4 processor equipped with a DSP unit is powerful enough for simple handwriting decoding or for detecting a few spoken words with a machine learning model. In the examples shown, the machine learning models needed less than 100 kilobytes of memory.
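To make this concrete, below is a minimal sketch of the TensorFlow Lite for Microcontrollers setup-and-invoke pattern such models typically use on a Cortex-M4. This is an illustration, not code from the talk: g_model stands in for a model converted to a C array, and API details vary between library versions.

    // Minimal TensorFlow Lite for Microcontrollers skeleton (Arduino-style).
    #include "tensorflow/lite/micro/all_ops_resolver.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    extern const unsigned char g_model[];   // placeholder: your converted model

    constexpr int kArenaSize = 20 * 1024;   // well under the ~100 KB budget above
    static uint8_t tensor_arena[kArenaSize];
    static tflite::MicroInterpreter* interpreter = nullptr;

    void setup() {
      static tflite::MicroErrorReporter error_reporter;
      static tflite::AllOpsResolver resolver;   // in production, register only the ops you use
      const tflite::Model* model = tflite::GetModel(g_model);
      static tflite::MicroInterpreter static_interpreter(
          model, resolver, tensor_arena, kArenaSize, &error_reporter);
      interpreter = &static_interpreter;
      interpreter->AllocateTensors();           // carve the tensors out of the arena
    }

    void loop() {
      TfLiteTensor* input = interpreter->input(0);
      input->data.f[0] = 0.0f;                  // fill with real sensor features here
      interpreter->Invoke();                    // run one inference
      float score = interpreter->output(0)->data.f[0];
      (void)score;                              // act on the score (e.g. keyword detected)
    }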


The presentation can now be viewed on YouTube:

Important tools and projects mentioned in the presentation:

TinyML

TensorFlow Lite

uTensor (ARM MicroTensor)

TensorFlow Lite Micro

Articles on the presentation:

https://www.uusiteknologia.fi/2019/05/20/ecf19-koneoppiminen-mahtuu-mikro-ohjaimeen/

http://www.etn.fi/index.php/72-ecf/9495-koneoppiminen-mullistaa-sulautetun-tekniikan


420 Comments

  1. Tomi Engdahl says:

    Like a Life Alert for robots! Train an Edge Impulse tinyML model to recognize the difference between a fall and a sudden, normal movement.

    Robot Fall Detection with Edge Impulse
    https://www.hackster.io/gatoninja236/robot-fall-detection-with-edge-impulse-94e512

    How do you tell the difference between a fall and a sudden, normal movement? Train a machine learning model to detect when a fall occurs.

  2. Tomi Engdahl says:

    “The latest trend in machine learning has developers thinking very, very small – and that’s huge.”

    The upcoming #ArmDevSummit is filled with hands-on tinyML talks, including a session with Massimo! https://bit.ly/3hEZX8J

    TinyML Enables AI in Smallest Endpoint Devices
    The latest trend in machine learning has developers thinking very, very small – and that’s huge.
    https://www.arm.com/blogs/blueprint/tinyml

  3. Tomi Engdahl says:

    tinyML just got even tinier! Edge Impulse’s new EON Compiler runs neural networks in significantly less RAM and ROM, while retaining the same accuracy and speed: https://bit.ly/3j0TqXh

  4. Tomi Engdahl says:

    Run a tinyML model on a Nano 33 BLE Sense to know when your autonomous robot has fallen and can’t get up.

    Robot Fall Detection with Edge Impulse © GPL3+
    https://create.arduino.cc/projecthub/gatoninja236/robot-fall-detection-with-edge-impulse-94e512

    How do you tell the difference between a fall and a sudden, normal movement? Train a machine learning model to detect when a fall occurs.

  5. Tomi Engdahl says:

    Skoltech Scientists Combine a Glove, Machine Learning, and a Drone for Gesture-Based Light Painting
    Using an Arduino Uno-based wearable controller and a machine learning base station, the drone can turn gestures into art.
    https://www.hackster.io/news/skoltech-scientists-combine-a-glove-machine-learning-and-a-drone-for-gesture-based-light-painting-b4bc96332379

  6. Tomi Engdahl says:

    Adafruit Launches BrainCraft HAT for Raspberry Pi as an All-in-One TensorFlow Lite Dev Platform
    https://www.hackster.io/news/adafruit-launches-braincraft-hat-for-raspberry-pi-as-an-all-in-one-tensorflow-lite-dev-platform-6c18259279c0

    In development for a year now, the BrainCraft HAT is finally available — and it has some great design tweaks, including a cooling fan.

  7. Tomi Engdahl says:

    “As an example, the Bonsai algorithm developed by Microsoft can be as small as 2 KB but can have even better performance than a typical 40 MB kNN algorithm, or a 4 MB neural network. This result may not sound important, but the same accuracy on a model 1/10,000th of the size is quite impressive. A model this small can be run on an Arduino Uno, which has 2 KB RAM available.”

    Tiny Machine Learning: The Next AI Revolution
    The bigger model is not always the better model
    https://towardsdatascience.com/tiny-machine-learning-the-next-ai-revolution-495c26463868

  8. Tomi Engdahl says:

    AttendNets is a highly compact, low-precision deep neural network architecture designed for visual perception in tinyML applications.

    It’s Close Enough
    https://www.hackster.io/news/it-s-close-enough-d58cff097354

    Visual attention condensers bring deep learning visual perception models to TinyML applications.

    Sometimes close is good enough. That is the idea behind AttendNets, a highly compact, low-precision deep neural network architecture designed for visual perception in TinyML applications.

    Deep learning has provided a seemingly endless stream of breakthroughs in computer vision in recent years. Generally, accuracy is valued more highly in deep learning applications than optimizing the model for minimal complexity. Accordingly, the complexity of these models makes deploying them on low-power, highly resource-constrained devices a major challenge.

    AttendNets adapts deep learning models to resource-constrained devices by introducing the concept of visual attention condensers and by tailoring models specifically to the type of hardware that they will run on. Building upon the attention condenser, which is a self-attention mechanism that provides condensed embeddings, the team added optimizations for working with images. The resulting visual attention condensers reduce model complexity through better handling of the high channel dimensionality of image data. An iterative generative synthesis approach is taken to generate the final architectural design of the network. This yields an optimal balance between image recognition accuracy and performance on a constrained edge device.

  9. Tomi Engdahl says:

    “Suddenly the same exact ML application code runs 15 times faster on our Nano BLE Sense board. Thanks to Arm’s dramatic optimizations, users of Arduino TensorFlow library were able to benefit from these gains immediately — making even cooler machine learning applications possible.”

    Join Massimo Banzi tomorrow at Arm DevSummit 2020 to learn more about a low-code approach to AIoT development and watch him build an end-to-end tinyML IoT project on a Portenta board with the Arduino IoT Cloud.

    Massimo Banzi: Arduino Is For Everyone
    https://www.arm.com/blogs/blueprint/arduino-massimo-banzi

    Ahead of his presentation at Arm DevSummit 2020, Arduino co-founder Massimo Banzi talks about the new world of TinyML, the low-code approach and the democratization of technology

  10. Tomi Engdahl says:

    Big tinyML news! Just announced at Arm DevSummit, the new Arduino Portenta Vision Shield includes a low-power camera, twin microphones, and Ethernet or LoRa connectivity.

    Arduino Launches Portenta H7 Vision Shield Add-On for Edge Computer Vision, Voice Work
    https://www.hackster.io/news/arduino-launches-portenta-h7-vision-shield-add-on-for-edge-computer-vision-voice-work-c5c166d206da

    Ultra-low-power motion-sensing camera, two beam-forming microphones, and your choice of Ethernet or LoRa connectivity — plus a microSD slot.

  11. Tomi Engdahl says:

    In addition to yesterday’s launch of the Portenta Vision Shield, we’ve teamed up with OpenMV to offer you a free license to their IDE — an easy way into computer vision using MicroPython as a programming paradigm.

    Embedded machine vision goes pro with the new Portenta Vision Shield
    https://blog.arduino.cc/2020/10/06/embedded-machine-vision-goes-pro-with-the-new-portenta-vision-shield/

    the launch of the Arduino Portenta Vision Shield, a production-ready expansion for the powerful Arduino Portenta H7 that adds a low-power camera, two microphones, and connectivity — everything you need for the rapid creation of edge ML applications.

  12. Tomi Engdahl says:

    Inspired by yesterday’s Arm DevSummit tinyML workshop with Pete Warden and Fredrik Knutsson? Here’s how to get started with TensorFlow Lite for Microcontrollers on your new Nano 33 BLE Sense: http://bit.ly/2oLIfuN

  13. Tomi Engdahl says:

    With our new Portenta Vision Shield and the OpenMV IDE, you can start running machine vision examples in just minutes! bit.ly/30Dabkh

  14. Tomi Engdahl says:

    Embedded machine vision goes pro with the new Portenta Vision Shield
    https://blog.arduino.cc/2020/10/06/embedded-machine-vision-goes-pro-with-the-new-portenta-vision-shield/

    We’re excited to announce the launch of the Arduino Portenta Vision Shield, a production-ready expansion for the powerful Arduino Portenta H7 that adds a low-power camera, two microphones, and connectivity — everything you need for the rapid creation of edge ML applications.

    Always-on machine vision
    The Portenta Vision Shield comes with an ultra-low-power Himax camera. The camera module autonomously detects motion while the Portenta H7 is in standby, only waking up the microcontroller when needed.

    The Portenta Vision Shield features two ultra-compact and omnidirectional MP34DT06JTR microphones, bringing voice recognition and audio event detection. Both the video and audio data can be stored on an SD card and transmitted through Ethernet or LoRa® modules (plus the option of Wi-Fi or BLE on the Portenta H7 module).

  15. Tomi Engdahl says:

    Launched with the claim of performing edge AI tasks in one percent of the power envelope required by rivals, Maxim Integrated’s MAX78000 impresses.

    Maxim Launches Edge AI MAX78000 SoC with Neural Network Accelerator, RISC-V Coprocessor
    https://www.hackster.io/news/maxim-launches-edge-ai-max78000-soc-with-neural-network-accelerator-risc-v-coprocessor-6781b3e72c0d

    Launched with the claim of performing edge AI tasks in one percent of the power envelope required by rivals, the MAX78000 impresses.

    Maxim Integrated has announced the launch of a new chip for the IoT, the MAX78000, claiming to accelerate edge AI tasks for a hundredth of the power required by rival platforms.

    “We’ve cut the power cord for AI at the edge,”

    The MAX78000 system-on-chip (SoC) is built around a dual-core Arm Cortex-M4 processor, with floating-point unit, running at up to 100MHz, with 512kB of flash memory and 128kB of static RAM (SRAM) plus a performance-boosting 16kB instruction cache. It also includes a low-power 60MHz coprocessor based on the free and open source RISC-V instruction set architecture – the same approach as taken by rival Espressif for its recently-launched ESP32-S2.

    Full details on the part are available on the Maxim website, though pricing is only “on request;” an evaluation kit is also available, priced at $168.

    https://www.maximintegrated.com/en/products/microcontrollers/MAX78000.html?utm_source=Maxim&utm_medium=press-rels&utm_content=MAX78000&utm_campaign=FY21_Q2_2020_OCT_MSS-LPMicros_WW_AICampaign_EN&utm_term=WF7093

    MAX78000
    Ultra-Low-Power Arm Cortex-M4 Processor with FPU-Based Microcontroller with Convolutional Neural Network Accelerator
    A New Breed of AI Micro Built to Enable Neural Networks to Execute at Ultra-Low Power

  16. Tomi Engdahl says:

    This tutorial walks through using the new Portenta Vision Shield with OpenMV to detect the presence and position of objects in a camera image: https://bit.ly/33V19Bl

  17. Tomi Engdahl says:

    A project with M&M’s and Arduino, what’s not to love? Create a candy color classifier using tinyML on your Nano 33 BLE Sense.

    Classify Candy in Free Fall Using TinyML © CC BY
    https://create.arduino.cc/projecthub/8bitkick/classify-candy-in-free-fall-using-tinyml-2836bf

    Using the Arduino KNN library to classify the color of M&Ms we throw at it.
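    For a flavor of how little code this takes, here is a simplified sketch in the spirit of the project, assuming the Arduino_KNN library API and the Nano 33 BLE Sense’s onboard APDS-9960 color sensor. The class labels and sample counts are assumptions for illustration, not the project’s exact code.

        #include <Arduino_APDS9960.h>
        #include <Arduino_KNN.h>

        KNNClassifier knn(3);              // 3 inputs: normalized R, G, B ratios

        void readColorRatios(float out[3]) {
          int r, g, b;
          while (!APDS.colorAvailable()) delay(5);
          APDS.readColor(r, g, b);
          float sum = r + g + b;
          if (sum <= 0) sum = 1;           // avoid dividing by zero in the dark
          out[0] = r / sum; out[1] = g / sum; out[2] = b / sum;
        }

        void setup() {
          Serial.begin(9600);
          while (!Serial);
          APDS.begin();
          float sample[3];
          for (int c = 0; c < 2; c++) {    // train: class 0 = red, class 1 = green (assumed)
            Serial.print("Show candy for class "); Serial.println(c);
            readColorRatios(sample);
            knn.addExample(sample, c);
          }
        }

        void loop() {
          float sample[3];
          readColorRatios(sample);                 // sample a falling candy
          Serial.println(knn.classify(sample, 1)); // k = 1 nearest neighbor
        }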

  18. Tomi Engdahl says:

    Simone Salerno updated his word classification model on the Nano 33 BLE Sense. Now with ready-made sketches for training and testing!

    https://eloquentarduino.github.io/2020/08/better-word-classification-with-arduino-33-ble-sense-and-machine-learning/

  19. Tomi Engdahl says:

    If you’re tired of the filters on Instagram, you can now create one yourself! Learn how to build a MicroPython application with the OpenMV IDE that uses the new Portenta Vision Shield to detect faces and overlay them with a custom bitmap image: https://bit.ly/2Hg9VRa

  20. Tomi Engdahl says:

    Eric Lin’s tinyML application utilizes a Nano 33 BLE with a 5MP camera to detect whether someone is wearing a face mask.

    TinyML Facemask Detection
    https://m.youtube.com/watch?feature=youtu.be&v=fHipLt2VpqY

  21. Tomi Engdahl says:

    “The only way to scale up to the kinds of hundreds of billions or trillions of devices we’re expecting to emerge into the world in the next few years is if we take people out of the care and maintenance loop.”

    Great article on tinyML from ZDNet featuring Pete Warden!

    Google AI executive sees a world of trillions of devices untethered from human care
    https://www.zdnet.com/article/google-ai-executive-sees-a-world-of-trillions-of-devices-untethered-from-human-care/

    Google says hardware in embedded devices needs to improve to make possible a world of peel-and-stick sensors free of wall power and human maintenance.

  22. Tomi Engdahl says:

    ESP32-CAM Face Recognition Door Lock System
    http://mag.breadboard.pk/esp32-cam-face-recognition-door-lock-system/

    In this tutorial we build a Face ID-controlled digital door lock system using the ESP32-CAM.

    The AI-Thinker ESP32-CAM module is a low-cost development board with a very small OV2640 camera and a micro SD card slot. It has an ESP32-S chip with built-in Wi-Fi and Bluetooth connectivity and two high-performance 32-bit LX6 CPUs with a 7-stage pipeline architecture. We have previously explained the ESP32-CAM in detail and used it to build a Wi-Fi video doorbell. This time we will use the ESP32-CAM to build a face recognition based door lock system using a relay module and solenoid lock.
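    The lock-control side of such a build is simple, and the sketch below shows only that part. Here faceMatched() is a hypothetical stand-in for the ESP32-CAM recognition result (the camera and recognition pipeline come from the ESP32 camera examples), and the pin number and active-high relay wiring are assumptions.

        // Relay/solenoid control for a face-recognition door lock (illustration only).
        const int RELAY_PIN = 4;                // assumed wiring; relay drives the solenoid
        const unsigned long UNLOCK_MS = 5000;   // keep the door unlocked for 5 seconds

        // Hypothetical stand-in: in the real project this comes from the
        // ESP32-CAM face recognition pipeline when an enrolled face is matched.
        bool faceMatched() {
          return false;  // placeholder
        }

        void setup() {
          pinMode(RELAY_PIN, OUTPUT);
          digitalWrite(RELAY_PIN, LOW);         // locked by default
        }

        void loop() {
          if (faceMatched()) {
            digitalWrite(RELAY_PIN, HIGH);      // energize relay: unlock solenoid
            delay(UNLOCK_MS);
            digitalWrite(RELAY_PIN, LOW);       // relock
          }
        }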

  23. Tomi Engdahl says:

    Speech Recognition using Arduino
    http://mag.breadboard.pk/speech-recognition-using-arduino/

    Speech recognition technology is very useful in automation: it not only gives you hands-free control over devices but also adds security to the system. Apart from making voice-controlled gadgets, speech recognition also provides significant help to people with various disabilities.

    In this project, we are going to use machine learning to train a speech recognition model in Edge Impulse Studio with three commands: ‘light on’, ‘light off’, and ‘noise’. Edge Impulse is an online machine learning platform that enables developers to create the next generation of intelligent device solutions with embedded machine learning. We used Edge Impulse Studio previously to differentiate cough and noise sounds.

    Creating the Dataset for Arduino Speech Recognition
    Here Edge Impulse Studio is used to train our speech recognition model. Training a model on Edge Impulse Studio is similar to training machine learning models on other frameworks. The first step is to collect a dataset with samples of the data we want the model to recognize.

    We will create a dataset with three classes: ‘light on’, ‘light off’ and ‘noise’. To create a dataset, create an Edge Impulse account, verify your account and then start a new project. You can load the samples using your mobile phone or your Arduino board, or you can import an existing dataset into your Edge Impulse account. The easiest way to load the samples into your account is with your mobile phone; for that, connect the phone to Edge Impulse.

    After uploading the samples for the first class, change the label and collect the samples for the ‘light off’ and ‘noise’ classes.

    Test data should be at least 30% of the training data, so collect 4 samples of ‘noise’ and 4 to 5 samples each for ‘light on’ and ‘light off’.

    Training the Model
    As our dataset is ready, we can now create an impulse for the data. Go to the ‘Create impulse’ page and change the default 1000 ms window size to 1200 ms and the 500 ms window increase to 50 ms. This means our data will be processed 1.2 s at a time, starting every 50 ms.

    Now on ‘Create impulse’ page click on ‘Add a processing block’. In the next window select the Audio (MFCC) block. After that click on ‘Add a learning block’ and select the Neural Network (Keras) block. Then click on ‘Save Impulse’.

    In the next step go to the MFCC page and then click on ‘Generate Features’. It will generate MFCC blocks for all of our windows of audio.

    After that go to the ‘NN Classifier’ page and click on the three dots on the upper right corner of the ‘Neural Network settings’ and select ‘Switch to Keras (expert) mode’.

    Replace the original with the following code and change the ‘Minimum confidence rating’ to ‘0.70’. Then click on the ‘Start training’ button. It will start training your model.

    After training, it will show the training performance. For me, the accuracy was 81.1% and the loss was 0.45, which is not ideal performance, but we can proceed with it. You can increase your model’s performance by creating a larger dataset.

    Now that our speech recognition model is ready, we will deploy it as an Arduino library. Before downloading the model as a library, you can test its performance on the ‘Live Classification’ page. The Live Classification feature allows you to test the model both with the existing testing data that came with the dataset and by streaming audio data from your mobile phone.

    To download the model as an Arduino library, go to the ‘Deployment’ page and select ‘Arduino Library’. Scroll down and click ‘Build’ to start the process. This will build an Arduino library for your project.

    Now add the library to your Arduino IDE: open the IDE and click Sketch > Include Library > Add .ZIP Library.

    Then, load an example by going to File > Examples > Your project name – Edge Impulse > nano_ble33_sense_microphone

    To control the LED, we save all the command probabilities in three different variables so that we can put conditional statements on them. According to the new code, if the probability of the ‘light on’ command is more than 0.50, it will turn on the LED, and if the probability of the ‘light off’ command is more than 0.50, it will turn off the LED.
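    A minimal sketch of that conditional step is below. It assumes the result structure generated by Edge Impulse’s Arduino export (run_classifier() filling an ei_impulse_result_t with label/value pairs) and slots into the nano_ble33_sense_microphone example; the label strings and 0.50 thresholds follow the tutorial text, while the LED pin is an assumption.

        // Inside the generated nano_ble33_sense_microphone sketch, after
        // run_classifier(&signal, &result, false) has filled `result`
        // (assumes pinMode(LED_BUILTIN, OUTPUT) was done in setup()):
        float lightOn = 0.0, lightOff = 0.0;
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
          if (strcmp(result.classification[ix].label, "light on") == 0) {
            lightOn = result.classification[ix].value;    // probability of "light on"
          } else if (strcmp(result.classification[ix].label, "light off") == 0) {
            lightOff = result.classification[ix].value;   // probability of "light off"
          }
        }
        if (lightOn > 0.50)  digitalWrite(LED_BUILTIN, HIGH);  // confident "light on"
        if (lightOff > 0.50) digitalWrite(LED_BUILTIN, LOW);   // confident "light off"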

  24. Tomi Engdahl says:

    Simone Salerno shares an even easier way to train and deploy TensorFlow Lite RNN models on an Arduino Nano 33 BLE Sense for micro speech recognition.

    EloquentTinyML: Easier Voice Classifier on Nano 33 BLE Sense
    https://www.hackster.io/alankrantas/eloquenttinyml-easier-voice-classifier-on-nano-33-ble-sense-ebb81e

    An easier way to train and deploy Tensorflow Lite RNN model on Arduino Nano 33 BLE Sense – for micro speech recognition.

  25. Tomi Engdahl says:

    Alan Wang explores an easier way to train and deploy TensorFlow Lite RNN models on a Nano 33 BLE Sense for micro speech recognition.

    EloquentTinyML: Easier Voice Classifier on Nano 33 BLE Sense © CC BY-NC-SA
    https://create.arduino.cc/projecthub/alankrantas/eloquenttinyml-easier-voice-classifier-on-nano-33-ble-sense-ebb81e

    An easier way to train and deploy Tensorflow Lite RNN model on Arduino Nano 33 BLE Sense – for micro speech recognition.

    Just open any example from the Arduino_TensorflowLite library and you’ll see why deploying models this way can be daunting. The TF Lite C++ APIs are not that well documented, either.

    Thankfully, a smart guy named Simone Salerno (@EloquentArduino) has written a library for the Arduino IDE, EloquentTinyML (a wrapped-up version of TF Lite), and a Python tool package, tinymlgen. With both of them you can build and upload a TF Lite model to your board in a much, much simpler way.

    https://github.com/eloquentarduino/EloquentTinyML
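    For comparison, a complete EloquentTinyML sketch can be this short. This is a sketch assuming the library’s documented TfLite wrapper; model_data (and its header file name) stand in for the byte array exported by tinymlgen.

        #include <EloquentTinyML.h>
        #include "model_data.h"        // byte array exported by tinymlgen (assumed name)

        #define N_INPUTS 1
        #define N_OUTPUTS 1
        #define ARENA_SIZE (2 * 1024)  // working memory for the interpreter

        Eloquent::TinyML::TfLite<N_INPUTS, N_OUTPUTS, ARENA_SIZE> ml;

        void setup() {
          Serial.begin(115200);
          ml.begin(model_data);        // load the exported TF Lite model
        }

        void loop() {
          float x = 3.14f * random(100) / 100.0f;
          float input[1] = { x };
          float y = ml.predict(input); // single-output models return a float
          Serial.print(x); Serial.print(" -> "); Serial.println(y);
          delay(1000);
        }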

  26. Tomi Engdahl says:

    “TinyML makes AI ubiquitous and accessible to consumers. It will bring intelligence to millions of devices that we use on a daily basis.”

    (via Forbes)
    How TinyML Makes Artificial Intelligence Ubiquitous
    https://www.forbes.com/sites/janakirammsv/2020/11/03/how-tinyml-makes-artificial-intelligence-ubiquitous/?sh=566b73757622

    The rise of TinyML marks a significant shift in how end-users consume AI. Vendors from the hardware and software industries are collaborating to bring AI models to the microcontrollers. 

    The ability to run sophisticated deep learning models embedded within an electronic device opens up many avenues. TinyML doesn’t need an edge, cloud, or Internet connectivity. It runs locally on the same microcontroller, which has the logic to manage the connected sensors and actuators.

    Phase 1 – AI in the Cloud

    Phase 2 – AI at the Edge

    Phase 3 – AI in the Microcontroller

    Though TinyML is in its infancy, there is a vibrant ecosystem in the making. Electronic chip and IoT kit makers such as Adafruit, Mediatek, Arduino and STM are supporting TinyML in their devices. Microsoft’s Azure Sphere, the secure microcontroller, can also run TinyML models. TensorFlow Lite, a variation of the popular open source deep learning framework, can be ported to supported devices. Another open source machine learning compiler and runtime, Apache TVM, can also be used to convert models into TinyML. 

    TinyML makes AI ubiquitous and accessible to consumers. It will bring intelligence to millions of devices that we use on a daily basis.

  27. Tomi Engdahl says:

    This tutorial shows how to use the new Portenta Vision Shield with OpenMV IDE’s built-in blob detection algorithm to detect the location of objects in an image: https://bit.ly/33V19Bl

  28. Tomi Engdahl says:

    Simone Salerno walks you through training a CNN in TensorFlow and deploying it onto your Nano 33 BLE Sense with his EloquentTinyML library.

    https://eloquentarduino.github.io/2020/11/tinyml-on-arduino-and-stm32-cnn-convolutional-neural-network-example/

  29. Tomi Engdahl says:

    “The Arduino Nano 33 BLE Sense is the suggested hardware for deploying machine learning models on edge.”

    An Introduction to TinyML
    Machine Learning meets Embedded Systems
    https://towardsdatascience.com/an-introduction-to-tinyml-4617f314aa79

  30. Tomi Engdahl says:

    This Adafruit BrainCraft Device Uses Machine Learning to Audibly Announce What It Sees
    https://www.hackster.io/news/this-adafruit-braincraft-device-uses-machine-learning-to-audibly-announce-what-it-sees-5b895db9a76e

    Adafruit has a guide that will walk you through how to use the new BrainCraft HAT to build a camera that performs object recognition.

  31. Tomi Engdahl says:

    In case you haven’t heard, OpenMV is now an official Arduino partner — supporting the Portenta H7 with computer vision functionality! They’ll be working closely with us to improve the performance of the OpenMV library, including firmware support for things like Ethernet, SDIO WiFi, DisplayPort, and more.

    https://openmv.io/blogs/news/arduino-partnership

  32. Tomi Engdahl says:

    Fresh off the heels of his tinyML Remoticon workshop, Shawn Hymel has added an Arduino tutorial to his keyword spotting repo. Follow along to perform speech recognition on the Nano 33 BLE Sense! https://bit.ly/32Vznnn

  33. Tomi Engdahl says:

    Think of this Portenta Vision Shield project as creating your own camera filter that puts a smile on every face it detects! https://bit.ly/2Hg9VRa

  34. Tomi Engdahl says:

    Join Element14 Community’s upcoming webinar with Edge Impulse for an introduction to tinyML on Arduino. Plus, attendees will have a chance to win an Oura Ring and receive a Nano 33 BLE Sense board!

    Introduction to TinyML by Edge Impulse (Register to Win an Oura Ring or an Arduino Nano Sense 33!)
    https://www.element14.com/community/events/5674/l/introduction-to-tinyml-by-edge-impulse-register-to-win-an-oura-ring-or-an-arduino-nano-sense-33

  35. Tomi Engdahl says:

    See the Inner Workings of a Convolutional Neural Network with This PCB Business Card
    This business card made by Paul Klinger is able to classify digits by running a CNN and show each layer’s state.
    https://www.hackster.io/news/see-the-inner-workings-of-a-convolutional-neural-network-with-this-pcb-business-card-ac92186dc15a

  36. Tomi Engdahl says:

    Train a tinyML model to recognize certain keywords and control an RGB light strip using a Nano 33 BLE Sense.

    TinyML Keyword Detection for Controlling RGB Lights © GPL3+
    https://create.arduino.cc/projecthub/gatoninja236/tinyml-keyword-detection-for-controlling-rgb-lights-9f51e9

    Train a TensorFlow model to recognize certain keywords and control an RGB light strip using an Arduino Nano 33 BLE Sense.

    Machine learning at the edge is extremely useful for creating devices that can accomplish “intelligent” tasks with far less programming and logical flow charts compared to traditional code. That’s why I wanted to incorporate at-the-edge keyword detection that can recognize certain words and then perform a task based on what was said.

  37. Tomi Engdahl says:

    How To Run TensorFlow Lite on Raspberry Pi for Object Detection
    https://m.youtube.com/watch?feature=youtu.be&v=aimSGOAUI8Y

    Published Nov 12, 2019
    TensorFlow Lite is a framework for running lightweight machine learning models, and it’s perfect for low-power devices like the Raspberry Pi! This video shows how to set up TensorFlow Lite on the Raspberry Pi for running object detection models to locate and identify objects in real-time webcam feeds, videos, or images.

  38. Tomi Engdahl says:

    TinyML Packs a Punch!
    Improve your boxing technique with a pair of Arduinos and TensorFlow Lite.
    https://www.hackster.io/news/tinyml-packs-a-punch-ccb2e9a086a3

  39. Tomi Engdahl says:

    M-See-U! Jeremy Ellis’ image classification example uses Edge Impulse, our new Portenta Vision Shield, and the OpenMV IDE to recognize microcontrollers.

    For more Arduino/OpenMV machine vision tutorials: arduino.cc/pro

    https://m.youtube.com/watch?feature=youtu.be&v=WexgCijWfvY

  40. Tomi Engdahl says:

    After experiencing Zoom fatigue, Brandon Cohen jokingly built an AI device that uses a Nano 33 BLE Sense and an Edge Impulse tinyML model to listen for his name and alert him during meetings.

    Using TinyML and Arduino to alert when your name is called on a conference call
    https://www.brandonfcohen.com/name-alert/

  41. Tomi Engdahl says:

    Arduino and Harvard’s edX Democratize ML with Tiny Machine Learning Kit and Professional Certificate
    Everything you need to get started with tinyML in one small box!
    https://www.hackster.io/news/arduino-and-harvard-s-edx-democratize-ml-with-tiny-machine-learning-kit-and-professional-certificate-17967020459a

  42. Tomi Engdahl says:

    TinyML Keyword Detection for Controlling RGB Lights © GPL3+
    Train a TensorFlow model to recognize certain keywords and control an RGB light strip using an Arduino Nano 33 BLE Sense.
    https://create.arduino.cc/projecthub/gatoninja236/tinyml-keyword-detection-for-controlling-rgb-lights-9f51e9

  43. Tomi Engdahl says:

    Washing machine learning?

    Follow along with Shawn Hymel’s latest tinyML project, which uses an Arduino Nano 33 BLE Sense and Edge Impulse to detect an overloaded or off-balance dryer: https://bit.ly/3hQLXdQ

  44. Tomi Engdahl says:

    Running RNNs in 2KB of RAM on Arduino! Shiftry is an automatic compiler from high-level floating-point ML models to fixed-point C-programs with 8-bit and 16-bit integers, which have significantly lower memory requirements: bit.ly/2LVmYKK
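    The core trick behind such compilers, sketched generically below, is to store weights as 8-bit integers with an implicit power-of-two scale and do the arithmetic in integer registers. This is a generic illustration of fixed-point quantization, not Shiftry’s actual output.

        #include <stdint.h>

        // 8-bit fixed-point with scale 2^7: a float w in [-1, 1) is stored
        // as w * 128, so each weight takes 1 byte instead of 4.
        const int SHIFT = 7;

        int8_t quantize(float w) {
          return (int8_t)(w * (1 << SHIFT));
        }

        // Multiply two fixed-point values: widen to 16 bits, then shift the
        // extra scale factor back out.
        int16_t fixed_mul(int8_t a, int8_t b) {
          return ((int16_t)a * (int16_t)b) >> SHIFT;
        }

        // Example: 0.5 * 0.25 = 0.125  ->  64 * 32 = 2048, >> 7 = 16 = 0.125 * 128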

  45. Tomi Engdahl says:

    Simone Salerno puts the Arduino Portenta H7, Teensy 4.0, and STMicroelectronics NV STM32 Nucleo H743ZI2 through their paces for on-device machine learning.

    Eloquent Arduino Pits Three Popular Dev Boards Head-to-Head to See Which Comes Out on Top for TinyML
    https://www.hackster.io/news/eloquent-arduino-pits-three-popular-dev-boards-head-to-head-to-see-which-comes-out-on-top-for-tinyml-5ee9183baebc?1980fa3bbff704c8bec1b7196cf7dfbc

    Simone Salerno puts the Arduino Portenta H7, Teensy 4.0, and STM32 Nucleo H743ZI2 through their paces for on-device machine learning.

  46. Tomi Engdahl says:

    Eloquent Arduino Pits Three Popular Dev Boards Head-to-Head to See Which Comes Out on Top for TinyML
    https://www.hackster.io/news/eloquent-arduino-pits-three-popular-dev-boards-head-to-head-to-see-which-comes-out-on-top-for-tinyml-5ee9183baebc

    Simone Salerno puts the Arduino Portenta H7, Teensy 4.0, and STM32 Nucleo H743ZI2 through their paces for on-device machine learning.

  47. Tomi Engdahl says:

    We’ve partnered with Edge Impulse to add support for the Portenta H7 and Portenta Vision Shield, enabling embedded developers to collect image data from the field, quickly build classifiers to interpret the world, and deploy models back to the production-ready hardware. From predictive maintenance and industrial automation to wildlife monitoring, the possibilities are endless!

    Make your embedded device see the world with the Arduino Portenta H7
    https://www.edgeimpulse.com/blog/computer-vision-portenta-h7
