The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.
Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues like broken connections, latency, and security.
Those issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.
Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, the electronics in modern cars, and Internet of Things systems fall into this category. This implies that a different approach is necessary.
Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards that woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products are becoming moot. Leaders with embedded systems development lifecycle management solutions speak out on new approaches available today for developing advanced products and systems.
Traditional approaches
C/C++
Traditionally embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you’re going to need to learn C. Very many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still used today instead of C++ when maximum efficiency is needed.
C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a real concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.
And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; take the Arduino system as an example of that: you’re writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language” and your “.ino” files are three lines away from being standard C++.
Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example in an embedded Linux system that runs a C++ application. When you write C or C++ programs for modern embedded Linux, you typically use the GCC compiler toolchain for compilation and a makefile to manage the build process.
Most organizations put considerable focus on software quality, but software security is different. While security is a much-talked-about topic in today’s embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that’s inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today’s Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other methods.
Java
Java is a general-purpose computer programming language that is concurrent, class-based and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VOIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.
Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.
Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).
My feeling about Java is that if your embedded systems platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.
Increasing trends
Databases
Embedded databases are coming more and more to embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with application software that requires access to stored data, such that the database system is “hidden” from the application’s end-user and requires little or no ongoing maintenance.
There are many possible databases. The first choice is what kind of database you need. The main choices are SQL databases and simpler key/value stores (also called NoSQL).
SQLite is the database chosen by virtually all mobile operating systems. For example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of lines of data).
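As a small sketch of how an embedded application might use SQLite (shown here via Python’s built-in sqlite3 module; the table and sensor values are made up for illustration):

```python
import sqlite3

# In-memory database for illustration; an embedded device would
# typically use a file on flash, e.g. sqlite3.connect("/data/app.db").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("temp", 21.5), ("temp", 22.0), ("humidity", 40.0)],
)
conn.commit()

# Standard SQL queries work just as they would against a server database.
(avg_temp,) = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'temp'"
).fetchone()
print(avg_temp)  # 21.75
conn.close()
```

Because the whole database lives in a single file (or in memory), there is no separate database server process to administer on the device.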
If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
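The key/value model itself is very simple. As a sketch (using Python’s standard dbm module as a stand-in rather than Berkeley DB itself, since the BDB bindings vary by platform; the keys and path are invented for illustration):

```python
import dbm
import os
import tempfile

# A key/value store persists arbitrary key/data pairs, much as
# Berkeley DB does; dbm is a simple stdlib stand-in for the idea.
path = os.path.join(tempfile.mkdtemp(), "settings.db")
with dbm.open(path, "c") as db:     # "c" = create if missing
    db["device_id"] = "node-42"
    db["interval_s"] = "60"

# Reopen later (e.g. after a reboot) and read the values back;
# values come back as raw bytes, just like BDB's byte arrays.
with dbm.open(path, "r") as db:
    device_id = db["device_id"].decode()
print(device_id)  # node-42
```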
RTA (Run Time Access) gives easy runtime access to your program’s internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (PostgreSQL bindings for C and PHP work, as does the command line tool psql), but instead of a normal database file they are actually accessing the internals of your software.
Software quality
Building quality into embedded software doesn’t happen by accident; quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.
Safety
Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches to safety. Embedded Software Can Kill But Are We Designing Safely?
IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group’s recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not – and 11 percent did not even know if they were designing to a standard or not.
If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious, and quite frankly, this is quite disturbing.
Security
The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues like broken connections, latency, and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that’s inherently insecure, and quality assurance does little to ensure security.
The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:
MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY
Some techniques for building security into embedded systems:
Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
Flood of new languages
Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). So there is a huge push to use interpreted and scripting languages also in embedded systems. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example the web user interface). Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. There are also many other relevant languages.
One workable solution, especially for embedded Linux systems, is to implement part of the functionality in C and the rest with scripting languages. This makes it possible to change the operation simply by editing the script files, without needing to rebuild the whole system software. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of embedded system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages are suitable even for quite small embedded environments.
I have successfully used, among others, the Bash, AWK, PHP, Python, and Lua scripting languages with embedded systems. They work really well and make it really easy to create custom code quickly. They don’t require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used. High-level, dynamically typed languages such as Python, Ruby, and JavaScript are easy – and even fun – to use, and they lend themselves to code that can easily be reused and maintained.
There are some things that need to be considered when using scripting languages. The lack of static checking compared to a regular compiler can sometimes cause problems to surface at run time, but you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn’t.
Bash and other shell scripting
Shell commands are the native language of any Linux system. With the thousands of commands available to the command line user, how can you remember them all? The answer is, you don’t. The real power of the computer is its ability to do the work for you – and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell also provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.). Your scripts can be truly powerful. Creating a script is extremely straightforward: it can be created in a separate editor, or through a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example starting and stopping different Linux services in the right way).
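As a minimal sketch of such a script (the directory path and the names are made up for illustration), showing commands stored in a file together with the familiar loop and if constructs:

```shell
#!/bin/sh
# A tiny shell script: the same commands you would type at the
# keyboard, stored in a file, plus basic programming constructs.

LOG_DIR="/tmp/demo_logs"        # illustrative path
mkdir -p "$LOG_DIR"

# A for loop and an if test - features familiar from other languages.
for name in sensors network app; do
    file="$LOG_DIR/$name.log"
    if [ ! -f "$file" ]; then
        echo "starting $name" > "$file"
    fi
done

echo "log files created in $LOG_DIR"
```

You would make the file executable with chmod +x and then run it directly, exactly as if you had typed each command at the prompt.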
The most common type of shell script is a bash script. Bash is a commonly used scripting language for shell scripts. In BASH scripts (shell scripts written in BASH) users can use more than just BASH to write the script. There are commands that allow users to embed other scripting languages into a BASH script.
There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android and FreeBSD. BusyBox has become the de facto standard core user space toolset for embedded Linux devices and Linux distribution installers.
Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.
Lua
Lua is a lightweight cross-platform multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
Lua has found many uses in many fields. For example in video game development, Lua is widely used as a scripting language by game programmers. Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.
There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it to be a very nice system.
PHP
PHP is a server-side HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites, but it can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for the applications they are good at – such as building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.
Python
Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.
Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.
Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.
Python was designed to be highly extensible. Libraries like NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.
For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also brought Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-sourced language: Python.
Python on embedded platforms? It is quick to develop and debug apps – really easy to make custom code quickly. Sometimes the lack of static checking compared to a regular compiler can cause problems to surface at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool that will catch quite a lot of common errors. The only downsides for embedded work are that Python can sometimes be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.
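As a small sketch of the kind of string-manipulation-and-dictionary task that study refers to (the input text is made up for illustration), together with the “strong testing” style of runtime assertion that substitutes for strong typing:

```python
# Count word frequencies in a text and collect them in a dictionary -
# the kind of task where a dynamic language is very concise.
def word_frequencies(text):
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,!?")
        counts[word] = counts.get(word, 0) + 1
    return counts

freqs = word_frequencies("Embedded Linux, embedded Java. Embedded Python!")
print(freqs["embedded"])  # 3

# "Strong testing" in place of strong typing: simple runtime assertions
# catch the kinds of errors a static compiler would have flagged.
assert freqs == {"embedded": 3, "linux": 1, "java": 1, "python": 1}
```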
JavaScript and node.js
JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it’s so incredibly easy to learn. JavaScript’s reputation for providing users with beautiful, interactive websites isn’t where its usefulness ends. Nowadays, it’s also used to create mobile applications and cross-platform desktop software, and thanks to Node.js, it’s even capable of creating and running servers and databases! There is a huge community of developers.
Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.
Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.
It doesn’t require a complicated IDE; all you really need is a terminal.
JavaScript is a high-level language. While this usually means that it’s more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. Being slower definitely means that it may not be suitable for situations where timing and speed are critical.
JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!
JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We for example see a surprising amount of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google’s V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.
JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!
Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it’s standardized, event-driven, and has very high productivity; it’s dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.
Future trends
According to New approaches to dominate in embedded development article there will be several camps of embedded development in the future:
One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require the fine tuning. These are most likely to be high-performance, low-volume systems or else fixed-function, high-volume systems where cost is everything.
Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.
The third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.
The transition will take time. The environment is different from the computer and mobile world. There are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.
Sources
Most important information sources:
New approaches to dominate in embedded development
A New Approach for Distributed Computing in Embedded Systems
New Approaches to Systems Engineering and Embedded Software Development
Embracing Java for the Internet of Things
Embedded Linux – Shell Scripting 101
Embedded Linux – Shell Scripting 102
Embedding Other Languages in BASH Scripts
PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog
JavaScript: The Perfect Language for the Internet of Things (IoT)
Anyone using Python for embedded projects?
MICROCONTROLLERS AND NODE.JS, NATURALLY
Node.JS Appliances on Embedded Linux Devices
The smartest way to program smart things: Node.js
Embedded Software Can Kill But Are We Designing Safely?
DEVELOPING SECURE EMBEDDED SOFTWARE
1,667 Comments
Tomi Engdahl says:
Making ISO 26262 Traceability Practical
March 11, 2022
Increase system quality and accelerate functional-safety assessments by identifying and fixing the traceability gaps between disparate systems.
https://www.electronicdesign.com/technologies/embedded-revolution/article/21235207/arteris-ip-making-iso-26262-traceability-practical?utm_source=EG+ED+Auto+Electronics&utm_medium=email&utm_campaign=CPS220314014&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R
What you’ll learn:
How to bridge the gaps between requirements, architecture, design, verification, and validation.
How to use requirements management to support ISO 26262 standards.
The ISO 26262 standard states that functional-safety assessors should consider if requirements management, including bidirectional traceability, is adequately implemented. The standard doesn’t specify how an assessor should go about accomplishing this task. However, it’s reasonable to assume that a limited subset of connections between requirements and implementation probably doesn’t rise to the expectation.
An apparently simple requirement from a customer, such as “The software interface of the device should be compliant in all respects with specification ABCD-123, except where explicitly noted elsewhere in these requirements,” expands into a very complex set of requirements when fully elaborated.
How can an architect and design team effectively manage this level of traceability amid a mountain of specifications and requirements lists? How can they ensure that what they have built and tested at each step of the development cycle ties back to the original requirements (Fig. 1)? This is especially challenging since handoffs between requirements, architecture, design, verification, and validation depend on human interpretation to bridge the gaps between these steps.
The Default Approach to Traceability
The most obvious way to implement traceability is through a matrix, whether implemented in a dedicated tool like Jama Connect or an Excel spreadsheet. One requirement per line, maybe hierarchically organized, with ownership, source reference, implementation reference, status and so on. This matrix is a bidirectional reference.
Matrices can work well when they’re relatively small. Perhaps the architect will split these up into sub-teams and assign the responsibility for checking correspondence between sub-matrices and the system matrix to an integrator.
Matrices become unmanageable, though, when the number of requirements moves into the thousands. A matrix provides a disciplined way to organize data, but it doesn’t provide automation. Ensuring correspondence between requirements and implementation is still a manual task.
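For a sense of what automating even the first sanity checks looks like, the correspondence check a spreadsheet leaves to humans can be scripted once requirements and artifacts are machine-readable. A minimal sketch in Python (the data model and all names below are invented for illustration, not taken from any tool mentioned above):

```python
# Toy traceability check: flag requirements with no implementation or test
# reference, and implementation artifacts that trace back to no requirement.
# The data model and file names are invented for illustration.

requirements = {
    "REQ-001": {"impl": ["uart_driver.c"], "tests": ["test_uart.py"]},
    "REQ-002": {"impl": [], "tests": ["test_timer.py"]},   # no implementation
    "REQ-003": {"impl": ["dma_ctrl.c"], "tests": []},      # no test
}
artifacts = {"uart_driver.c", "dma_ctrl.c", "legacy_spi.c"}

def trace_gaps(reqs, artifacts):
    missing_impl = [r for r, v in reqs.items() if not v["impl"]]
    missing_test = [r for r, v in reqs.items() if not v["tests"]]
    traced = {f for v in reqs.values() for f in v["impl"]}
    orphans = sorted(artifacts - traced)   # code with no requirement behind it
    return missing_impl, missing_test, orphans
```

Even this toy version makes bidirectional gaps (requirement with no code, code with no requirement) a mechanical query rather than a manual review.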
First Steps to Automation
The core problem is connecting domains that speak very different languages: requirements management, documentation, chip assembly, verification, and hardware/software interface (HSI) descriptions. One approach common in software and mechanical systems is through application-lifecycle-management (ALM) or product-lifecycle-management (PLM) solutions, where all development tools are offered under a common umbrella by a single provider. With that level of control, an ALM or PLM could manage traceability in data between these domains.
However, it’s difficult to see how that kind of integration could work with electronic-design-automation (EDA) tool flows, where the overriding priority is to stay current with leading technologies and complexities. System-on-chip (SoC) development teams demand best-in-class flows and are unlikely to settle for solutions offering traceability at the expense of lower capability.
A second approach expands the scope of requirements-management tools by connecting to data assets within implementation files. Requirements tools naturally have no semantic understanding of these foreign objects, so all responsibility to capture, check, and maintain these links still rests entirely with the SoC development team. Tracing is simplified under a common interface, but the approach doesn’t significantly reduce manual effort or opportunity for errors in creation or maintenance.
Tomi Engdahl says:
Deploying AI in Advanced Embedded Systems
March 14, 2022
Today’s advanced products, from consumer wearables to smart EVs, are starting to leverage the power of AI to increase performance and functionality. However, those solutions require the appropriate hardware to run on.
https://www.electronicdesign.com/technologies/systems/video/21235475/electronic-design-deploying-ai-in-advanced-embedded-systems?utm_source=EG+ED+Connected+Solutions&utm_medium=email&utm_campaign=CPS220315064&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R
Tomi Engdahl says:
How 3D Tech Advances Will Impact Robotics Vision Systems in 2022
March 14, 2022
Demand for advanced imaging technology is affecting robotics in scores of industries. What innovations in 2022 will further sculpt this fast-moving space?
https://www.electronicdesign.com/technologies/embedded-revolution/article/21236082/orbbec-how-3d-tech-advances-will-impact-robotics-vision-systems-in-2022?utm_source=EG+ED+Connected+Solutions&utm_medium=email&utm_campaign=CPS220315064&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R
Tomi Engdahl says:
https://hackaday.com/2022/03/21/heroic-efforts-give-smallest-arm-mcu-a-breakout-open-debugger/
Tomi Engdahl says:
Hacked GDB Dashboard Puts It All On Display
https://hackaday.com/2022/03/22/hacked-gdb-dashboard-puts-it-all-on-display/
Tomi Engdahl says:
Web Serial Terminal Means It’s Always Hacking Time
https://hackaday.com/2022/03/21/web-serial-terminal-means-its-always-hacking-time/
Arguably one of the most important pieces of software to have in your hardware hacking arsenal is a nice serial terminal emulator. There’s plenty of choice out there, from classic command line tools to flashier graphical options, which all ultimately do the same thing: let you easily communicate with gadgets using UART. But now you’ve got a new choice — instead of installing a serial terminal emulator, you can simply point your browser to the aptly-named serialterminal.com.
Well, maybe. As of this writing it only works on Chrome/Chromium (and by extension, Microsoft Edge), so Firefox fans will be left out in the cold unless Mozilla changes their stance on the whole Web Serial API concept. But assuming you are running the appropriate browser, you’ll be able to connect with your serial gadgets with a simple interface that should be familiar to anyone who’s worked with more traditional terminal software. In a quick test here at the Hackaday Command Center, we were able to bring up the Bus Pirate UI with no problems using Chrome on Linux.
https://www.serialterminal.com/
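For comparison, the core of a traditional terminal emulator is small. A minimal read-and-display loop sketched in Python with pyserial (the port name and baud rate are assumptions, and the hex-dump helper is just one common way to present raw UART data):

```python
# Minimal serial terminal sketch: read bytes from a UART and show them both
# as text and as a hex dump, the way most terminal emulators can.
# The port name and baud rate below are assumptions; adjust for your device.

def hexdump(data: bytes, width: int = 16) -> str:
    """Render bytes as an offset + hex + ASCII dump, one line per `width` bytes."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

if __name__ == "__main__":
    import serial  # pyserial; third-party, so imported only when actually run
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        while True:
            data = port.read(64)
            if data:
                print(hexdump(data))
```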
Tomi Engdahl says:
Run Machine Learning Code in an Embedded IoT Node to Easily Identify Objects
https://www.digikey.com/en/articles/run-machine-learning-code-in-an-embedded-iot-node?dclid=CJGi_Pa06PYCFQVbwgodxiEIfw
Internet of Things (IoT) networks operating in dynamic environments are being expanded beyond object detection to include visual object identification in applications such as security, environmental monitoring, safety, and Industrial IoT (IIoT). As object identification is adaptive and involves using machine learning (ML) models, it is a complex field that can be difficult to learn from scratch and implement efficiently.
The difficulty stems from the fact that an ML model is only as good as its data set, and once the correct data is acquired, the system must be properly trained to act upon it in order to be practical.
This article will show developers how to implement Google’s TensorFlow Lite for Microcontrollers ML model into a Microchip Technology microcontroller. It will then explain how to use the image classification and object detection learning data sets with TensorFlow Lite to easily identify objects with a minimum of custom coding.
The Co-Processor Architecture: An Embedded System Architecture for Rapid Prototyping
https://www.digikey.com/en/articles/the-co-processor-architecture-an-embedded-system-architecture-for-rapid-prototyping?dclid=CO2et_W06PYCFetGHgId6pkJXQ
The embedded systems designer finds themselves at a juncture of design constraints, performance expectations, and schedule and budgetary concerns. Indeed, even the contradictions in modern project management buzzwords and phrases further underscore the precarious nature of this role: “fail fast”; “be agile”; “future-proof it”; and “be disruptive!”. The acrobatics involved in even trying to satisfy these expectations can be harrowing, and yet, they have been spoken and continue to be reinforced throughout the market. What is needed is a design approach, which allows for an evolutionary iterative process to be implemented, and just like with most embedded systems, it begins with the hardware architecture.
Tomi Engdahl says:
https://www.digikey.com/en/articles/specification-and-use-of-muting-functions-on-light-curtains?dclid=CJKh7_K06PYCFdASGAodJhcNMQ
Tomi Engdahl says:
3 Immutable Operating Systems: Bottlerocket, Flatcar and Talos Linux
https://thenewstack.io/3-immutable-operating-systems-bottlerocket-flatcar-and-talos-linux/
For those that don’t know, immutable operating systems have been increasing in popularity recently. An immutable operating system is one in which some or all of the operating system file systems are read-only and cannot be changed.
Immutable operating systems have a lot of advantages. They are inherently more secure, because many attacks and exploits depend on writing or changing files. Also, even if an exploit is found, bad actors cannot change the operating system on disk (which in itself will thwart attacks that depend on writing to the filesystem), so a reboot will clear any memory-resident malware and recover back to a non-exploited state.
Immutable systems are also easier to manage and update: the operating system images are not patched or updated but replaced atomically (in one operation that is guaranteed to fully complete or fully fail — no partial upgrades!)
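That "fully complete or fully fail" update is the same guarantee an atomic rename gives on POSIX filesystems. A small sketch of the pattern in Python (file names are illustrative):

```python
# Sketch of an atomic update: write the new content to a temporary file,
# then rename it over the old one. Readers see either the old version or
# the new one, never a partially written file. (os.replace is atomic on
# POSIX when source and destination are on the same filesystem.)
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure data hits disk before the rename
        os.replace(tmp, path)      # the atomic step: all or nothing
    except BaseException:
        os.unlink(tmp)
        raise
```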
Immutable systems also can claim to be more stable than traditional operating systems, simply by virtue of eliminating many of the vectors that introduce instability into a system — most of which are human. No sysadmins can “just change this one setting to fix things” — with unforeseen impacts that aren’t found until hours later. (I’ve been that sysadmin.) No partially complete terraform or puppet runs that leave systems in odd states…
Tomi Engdahl says:
The concept of streaming data from firmware and displaying it on an #oscilloscope can be a powerful tool and can speed up your signal processing firmware #debugging.
Read the full article: http://arw.li/6180KQVEp
Analog debugging on bare-metal systems
https://www.edn.com/analog-debugging-on-bare-metal-systems/?utm_source=edn_facebook&utm_medium=social&utm_campaign=Articles
Tomi Engdahl says:
R&D Efficiency
5 common risks in device development projects
https://www.etteplan.com/stories/5-common-risks-device-development-projects?utm_campaign=unspecified&utm_content=unspecified&utm_medium=email&utm_source=apsis-anp-3&pe_data=D43445A477046455B45724541514B71%7C30700027
Developing a modern device with embedded electronics and a nice user interface is a complicated task. Success requires a wide array of skills, great teamwork, experience, and luck. Many challenges need to be tackled along the way to reach the goal. In worst case scenarios, the project is badly delayed, development costs shoot through the roof, or the project is put on hold indefinitely. The management needs to be aware of all possible risks.
1. Poorly defined requirements
2. Clash of software and hardware
3. Immature technology
4. Poor manufacturability
5. Price tag of human resources
How much will device development cost? Because modern devices depend so much on software, budgeting is more complicated than before. Software development takes human resources and putting an exact price tag for them can be hard. Basically, everything comes back to requirements. Sometimes, just one change in them requires doubling the size of the software team. Cost estimates can be made only after the requirements are ready.
“In the end, companies need to understand that a modern device is never really ready. However, they must get to the market fast to avoid burning the market potential. A minimum viable product is enough to validate whether the device has real potential and to improve market understanding. This leads to a continuum to keep on developing the product,” Juha Nieminen says.
Tomi Engdahl says:
Lengthen the life cycle of your connected device
https://www.etteplan.com/stories/lengthen-life-cycle-your-connected-device?utm_campaign=unspecified&utm_content=unspecified&utm_medium=email&utm_source=apsis-anp-3&pe_data=D43445A477046455B45724541514B71%7C30700027
There are many examples of devices having a shortened life cycle due to a power consumption issue, which leads to the battery of the device behaving unexpectedly. The frustration of users and claims of liability are a bad look and can cause damage to a company’s reputation. Fortunately, there is an answer to this problem many device manufacturers are facing: including automated power consumption measurements in the continuous integration of your software development cycle.
Devices with a long lifecycle need a power consumption profile to match
Power consumption should be measured over a long period of time to better monitor the life cycle of the device. This is important to do, not only during development, but before every major release of firmware. After all, the life cycle of your device’s battery correlates with the life cycle of your device. Especially when it is not easily replaceable.
For example, sensor nodes that measure the water level at the top of a buoy in the middle of the sea are designed to last for a long time without maintenance. In this scenario, the battery needs to last as long as possible so the power consumption needs to be known.
Measure power consumption during product development
It’s important to measure the power consumption of your device during the product development process to avoid any unpleasant surprises. If the power consumption levels increase unexpectedly between software versions, the device might need software fixes which can lengthen the time to market and cause additional costs in the development.
Software updates can cause an increase in power consumption
Modern devices require periodical software updates. Due to these updates, the power consumption of the device can change unexpectedly. Hence, it should be a standard operating practice to measure the power consumption during all software development activities.
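Folded into continuous integration, such measurements reduce to a regression gate: compare the new build's current draw per power state against a stored baseline and fail the build on drift. A toy sketch (all states, currents, and the tolerance are invented):

```python
# Toy CI gate for power regressions: compare average currents per power
# state of a new firmware build against a stored baseline and flag any
# figure that grew more than `tolerance` (fractional). Numbers invented.

def power_regressions(baseline_ma, measured_ma, tolerance=0.10):
    """Return the power states whose current grew more than `tolerance`."""
    return [
        state
        for state, base in baseline_ma.items()
        if measured_ma[state] > base * (1 + tolerance)
    ]

baseline = {"sleep": 0.012, "idle": 1.8, "radio_tx": 42.0}          # mA
release_candidate = {"sleep": 0.019, "idle": 1.85, "radio_tx": 41.0}
# Sleep current rose ~58%: exactly the kind of slow battery-life
# regression a routine software update can introduce unnoticed.
```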
Tomi Engdahl says:
https://www.etteplan.com/stories/lengthen-life-cycle-your-connected-device?utm_campaign=newsletter-3-2022&utm_content=newsletter&utm_medium=email&utm_source=apsis-anp-3&pe_data=D444350457342435846784742504471%7C30724798
https://www.etteplan.com/stories/5-common-risks-device-development-projects?utm_campaign=newsletter-3-2022&utm_content=newsletter&utm_medium=email&utm_source=apsis-anp-3&pe_data=D444350457342435846784742504471%7C30724798
Tomi Engdahl says:
Using Statistics Instead Of Sensors
https://hackaday.com/2022/04/03/using-statistics-instead-of-sensors/
Statistics often gets a bad rap in mathematics circles for being less than concrete at best, and being downright misleading at worst. While these sentiments might ring true for things like political polling, it hides the fact that statistical methods can be put to good use in engineering systems with fantastic results. [Mark Smith], for example, has been working on an espresso machine which can make the perfect shot of coffee, and turned to one of the tools in the statistics toolbox in order to solve a problem rather than adding another sensor to his complex coffee-brewing machine.
Using R-squared to Detect Espresso Shot Volume With a Water Tank Sensor
https://surfncircuits.com/2022/03/19/using-r-squared-to-detect-espresso-shot-volume-with-a-water-tank-sensor/
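The statistic in question is the coefficient of determination (R²) of a linear fit: while water is flowing, the tank level falls roughly linearly with time, so a sliding-window fit with R² near 1 signals a shot in progress. A stripped-down sketch of that idea (the threshold and sample data are invented, not taken from the linked project):

```python
# Detect "water is flowing" from noisy tank-level samples by fitting a line
# over a window and checking the R-squared of the fit. A steadily draining
# tank fits a line well (R^2 near 1); a static, noisy level does not.

def r_squared(xs, ys):
    """R^2 of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0 or syy == 0:
        return 0.0
    return (sxy * sxy) / (sxx * syy)

def flow_detected(levels, threshold=0.95):
    xs = list(range(len(levels)))   # evenly spaced samples in time
    return r_squared(xs, levels) >= threshold

draining = [100.0, 98.1, 96.0, 94.2, 92.1, 90.0]   # steady drop: flowing
static = [100.0, 100.2, 99.9, 100.1, 99.8, 100.0]  # noise only: no flow
```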
Tomi Engdahl says:
https://hackaday.com/2022/04/05/hacker-dictionary-rs-485-will-go-the-distance/
Tomi Engdahl says:
PyModbus – A Python Modbus Stack
https://pymodbus.readthedocs.io/en/3.0.0/readme.html
Pymodbus is a full Modbus protocol implementation using twisted/tornado/asyncio for its asynchronous communications core. It can also be used without any third party dependencies (aside from pyserial) if a more lightweight project is needed. Furthermore, it should work fine under any Python version >= 3.7.
Features
Client Features
Full read/write protocol on discrete and register
Most of the extended protocol (diagnostic/file/pipe/setting/information)
TCP, UDP, Serial ASCII, Serial RTU, and Serial Binary
asynchronous (powered by twisted/tornado/asyncio) and synchronous versions
Payload builder/decoder utilities
Pymodbus REPL for quick tests
Server Features
Can function as a fully implemented Modbus server
TCP, UDP, Serial ASCII, Serial RTU, and Serial Binary
asynchronous (powered by twisted) and synchronous versions
Full server control context (device information, counters, etc)
A number of backing contexts (database, redis, sqlite, a slave device)
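One reason the payload builder/decoder utilities exist is that Modbus itself only moves 16-bit registers, so wider values span register pairs. A pure-Python sketch of decoding a 32-bit float from two holding registers; big-endian word order is an assumption here, since the Modbus specification does not fix it and devices vary:

```python
# Decode a 32-bit IEEE-754 float carried in two 16-bit Modbus holding
# registers. Word order (which register holds the high word) is not fixed
# by the Modbus spec and varies between devices; big-endian words are
# assumed here. Pymodbus ships payload decoder utilities for the same job.
import struct

def registers_to_float(high_word: int, low_word: int) -> float:
    raw = struct.pack(">HH", high_word, low_word)   # two big-endian uint16
    return struct.unpack(">f", raw)[0]              # reinterpret as float32

# Example: registers 0x42F6, 0xE979 hold the float32 encoding of ~123.456
```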
Tomi Engdahl says:
https://github.com/smarmengol/Modbus-Master-Slave-for-Arduino
Tomi Engdahl says:
Real-time C++ applications for Arm chips
https://etn.fi/index.php/13-news/13403-reaaliaikaisia-c-sovelluksia-arm-piireille
German company SEGGER is known for its high-quality microcontroller tools for embedded development. The company has now added support for real-time memory management to its Embedded Studio for Arm-based designs.
This improves efficiency and response time when allocating and freeing memory, which makes it possible to run hard real-time applications written in C++. The extension brings SEGGER's Arm tools to the same level as the recently announced Embedded Studio version 6 for RISC-V.
Tomi Engdahl says:
Voice of the Modern Developer: Insights From 400+ Developers https://www.tromzo.com/blog/voice-of-the-modern-developer
Instead of propagating the blame game between Dev and AppSec teams, we believe it is more productive to better understand the challenges developers face, how they feel about security, and what organizations can do to bake security into the development process. To that end, we commissioned a survey of over 400 AppSec professionals for our first annual State of Modern Application Security Report. [Key findings:] 42% of developers push vulnerable code once per month, Developers fix only 32% of vulnerabilities, A third of vulnerabilities are noise, 33% believe that developers and security are siloed.
Tomi Engdahl says:
Google Teams Up With GitHub for Supply Chain Security
https://www.securityweek.com/google-teams-github-supply-chain-security
Google has teamed up with GitHub for a solution that should help prevent software supply chain attacks such as the ones that affected SolarWinds and Codecov.
Google’s open source security team explained that in the SolarWinds attack hackers gained control of a build server and injected malicious artifacts into a build platform. In the Codecov attack, threat actors bypassed trusted builders to upload their artifacts.
“Each of these attacks could have been prevented if there were a way to detect that the delivered artifacts diverged from the expected origin of the software,” Google explained. “But until now, generating verifiable information that described where, when, and how software artifacts were produced (information known as provenance) was difficult. This information allows users to trace artifacts verifiably back to the source and develop risk-based policies around what they consume.”
Google and GitHub now propose a new method for generating what they describe as “non-forgeable provenance.” The method leverages GitHub Actions workflows for isolation and Sigstore signing tools for authenticity.
The goal is to help projects building on GitHub runners achieve a high SLSA level, which reassures consumers that their artifacts are trustworthy and authentic.
SLSA (Supply-chain Levels for Software Artifacts) is a framework designed for improving the integrity of a project by enabling users to trace software from the final version back to its source code. In this case, the goal is to achieve SLSA level 3 out of a total of four levels.
Improving software supply chain security with tamper-proof builds
April 7, 2022
https://security.googleblog.com/2022/04/improving-software-supply-chain.html
Many of the recent high-profile software attacks that have alarmed open-source users globally were consequences of supply chain integrity vulnerabilities: attackers gained control of a build server to use malicious source files, inject malicious artifacts into a compromised build platform, and bypass trusted builders to upload malicious artifacts. Each of these attacks could have been prevented if there were a way to detect that the delivered artifacts diverged from the expected origin of the software. But until now, generating verifiable information that described where, when, and how software artifacts were produced (information known as provenance) was difficult. This information allows users to trace artifacts verifiably back to the source and develop risk-based policies around what they consume. Currently, provenance generation is not widely supported, and solutions that do exist may require migrating build processes to services like Tekton Chains.
This blog post describes a new method of generating non-forgeable provenance using GitHub Actions workflows for isolation and Sigstore’s signing tools for authenticity. Using this approach, projects building on GitHub runners can achieve SLSA 3 (the third of four progressive SLSA “levels”), which affirms to consumers that your artifacts are authentic and trustworthy.
SLSA (“Supply-chain Levels for Software Artifacts”) is a framework to help improve the integrity of your project throughout its development cycle, allowing consumers to trace the final piece of software you release all the way back to the source. Achieving a high SLSA level helps to improve the trust that your artifacts are what you say they are.
https://slsa.dev/
Supply chain Levels for Software Artifacts, or SLSA (salsa).
It’s a security framework, a check-list of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure in your projects, businesses or enterprises. It’s how you get from safe enough to being as resilient as possible, at any link in the chain.
Tomi Engdahl says:
Wired is more reliable, has fewer moving parts, and can solve the “how do I get power to these things” problem. It’s intrinsically simpler: no radios, just serial data running as voltage over wires. But nobody likes running cable, and there’s just so much more demo code out there for an ESP solution. There’s an undeniable ease of development and cross-device compatibility with WiFi. Your devices can speak directly to a computer, or to the whole Internet. And that’s been the death of wired.
https://hackaday.com/2022/04/09/the-virtue-of-wires-in-the-age-of-wireless/
Hacker Dictionary: RS-485 Will Go The Distance
https://hackaday.com/2022/04/05/hacker-dictionary-rs-485-will-go-the-distance/
Tomi Engdahl says:
You can now explore the OnePlus 10 Pro's source code
https://etn.fi/index.php/13-news/13435-nyt-voit-tutustua-oneplus-10-pron-laehdekoodiin
At the end of March, OnePlus launched its new 10 Pro model in Europe as well. The company has now published the phone's source code on GitHub, so anyone who wishes can see what kind of code a top-of-the-line Android smartphone consists of.
The source code is published first and foremost to help application developers. It also makes it possible to build your own open version of the operating system.
GitHub's statistics show that 98.3 percent of the OnePlus 10 Pro code is C. Assembly accounts for one percent, Python for 0.1 percent, and Perl for 0.1 percent.
https://github.com/OnePlusOSS/android_kernel_msm-5.10_oneplus_sm8450
Tomi Engdahl says:
Planning Ahead: Data-Streaming Basics
April 13, 2022
Developing systems that handle streaming data like audio and video can be challenging. It means continuous data processing in real-time.
https://www.electronicdesign.com/technologies/communications/video/21238305/electronic-design-planning-ahead-datastreaming-basics
Processing streaming data from sensors, such as cameras for video or microphones for audio, is common practice. These days, high-speed data from multiple sources is often the case for applications where filters and machine-learning models are applied in real-time. Mapping data to software data structures and combining them with application code can be a challenge.
This video (at the top of the article) is the first in a series I’m doing with Heather Meloy Gorr, Senior Product Manager at MathWorks, talking about the challenges, methodologies, and solutions associated with high-speed communications and data processing.
Tomi Engdahl says:
Evaluating Different Development and Prototyping Boards for Wearable Applications
https://www.digikey.com/en/articles/evaluating-different-development-and-prototyping-boards-for-wearable-applications?dclid=CNCt09iRsfcCFR5HHgIdk7MKgQ
The open source Arduino concept has proved to be tremendously successful among hobbyists and makers. It has also been embraced by professional designers for early development and prototyping, and more recently for full-on designs. With the emergence of applications such as wearables and health monitoring, both types of users require higher performance and more functionality in ever smaller board form factors.
This article briefly discusses how Arduino boards have evolved to meet the needs of makers and professionals for high performance and functionality in low-power, space-constrained applications. It then introduces and shows how to get started with a recent addition to the Arduino family, the Seeeduino XIAO from Seeed Technology Co.
Tomi Engdahl says:
How to Select the Right RTOS and Microcontroller Platform for the IoT
https://www.digikey.com/en/articles/how-to-select-the-right-rtos-and-microcontroller-platform-for-the-iot?dclid=CKjkw9eRsfcCFelJHgId2kgNFQ
Developing an Internet of Things (IoT) device can be more challenging than many developers or companies realize. The very act of connecting an embedded system to the cloud dramatically increases the timing complexity for the system. An increase in timing complexity means that developers need a better way to manage how their software will decide what code should be running when. The best way to avoid writing custom schedulers or dealing with the timing at the bare metal level is to instead use a real-time operating system (RTOS) to help us manage the timing complexities.
One challenge with using an RTOS today is that many developers are coming from a bare metal environment without an operating system (OS), and selecting the right RTOS for a given application can be challenging. A quick survey of the RTOS market online would find that there are over 100 RTOSs available that developers can use that vary from open source to certified commercial RTOSs. So how does one go about selecting an RTOS and get started?
In this article, we are going to walk through how to evaluate which RTOS is best for your application and then examine development platforms from STMicroelectronics and Renesas that can be used to get started.
Factors to consider when selecting an RTOS
Real-time operating systems form the foundation on which developers build their application code. Selecting the right RTOS is critical to ensuring that the application is being built upon a sturdy and proven foundation. However, it’s often the case that RTOS selection is based on just a single parameter: cost.
While cost is an important factor to consider, it should not be the only one. A development team could easily spend ten times the cost of a commercial RTOS if they struggle to port, implement, or lack support for the RTOS that they select, not to mention the time that could be lost on a project. In general, there are eight different categories that a development team should consider when selecting an RTOS for their application. These include:
Legal liability and exposure
Performance
Features
Cost
Ecosystem
Middleware
RTOS vendor
Engineering preference
Within each category there may be several criteria that should be evaluated for each RTOS. For example, in the legal liability category, teams may want to consider the following:
RTOS infringement liability
Indemnification
Warranty
The need to have the RTOS reviewed from a legal standpoint
In the performance category, developers might consider the following:
Executable memory footprint
RAM footprint
Highest degree of determinism
Run-time efficiency
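One way to act on these categories is a weighted decision matrix: score each candidate RTOS per category, weight the categories by project priorities, and rank the totals. A toy sketch (the weights, candidates, and scores are all invented):

```python
# Toy weighted decision matrix for RTOS selection: score each candidate
# 1-10 per category, weight the categories by project priorities, and
# rank by weighted total. All weights and scores below are invented.

weights = {"performance": 0.30, "cost": 0.20, "ecosystem": 0.25,
           "middleware": 0.15, "legal": 0.10}

scores = {
    "RTOS-A": {"performance": 9, "cost": 4, "ecosystem": 8,
               "middleware": 7, "legal": 9},
    "RTOS-B": {"performance": 7, "cost": 9, "ecosystem": 6,
               "middleware": 5, "legal": 6},
}

def rank(scores, weights):
    """Return (name, weighted total) pairs, best candidate first."""
    totals = {name: sum(weights[c] * s[c] for c in weights)
              for name, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

The point of the exercise is not the arithmetic but that it forces the team to state its priorities explicitly instead of defaulting to cost alone.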
Tomi Engdahl says:
8-bit refuses to die
https://etn.fi/index.php?option=com_content&view=article&id=13492&via=n&datum=2022-04-28_15:02:32&mottagare=30929
Many analysts have predicted the demise of 8-bit microcontrollers, but after 50 years on the market the technology refuses to die. Microchip underlines this by introducing no fewer than five new PIC and AVR families with a total of 65 new controllers.
Presenting the chips, Microchip's product marketing director Öyvind Ström noted that all other manufacturers have given up on developing 8-bit parts. Microchip holds a third of the roughly five-billion-dollar 8-bit market, and over the next year the company is investing a billion dollars in controller production at its three fabs.
- In fact, the 8-bit market is growing steadily. The chips are manufactured in a 130-nanometer process, which has proven to be an ideal combination of performance and cost, Ström says.
Microchip's 8-bit range includes more than 1,500 part numbers, and some of the chips have been in production for over 20 years. Four new PIC families and one AVR family are now being added. - Demand for 8-bit parts is driven by the distribution of intelligence. The controllers are gaining more analog functions, and the usefulness of CIP blocks, i.e. core-independent peripherals, is boosting their popularity, Ström says.
Core-independent peripherals are one of the technologies driving the popularity of 8-bit parts. Exploiting them has its own challenge, according to Ström. - A hardware designer understands the advantages of the technique more readily; for a software engineer writing C code it is a bigger challenge, Ström says. Both, however, can see the benefits the technique offers their own customers.
https://page.microchip.com/8Bit-Mega-Launch.html
Tomi Engdahl says:
The Co-Processor Architecture: An Embedded System Architecture for Rapid Prototyping
https://www.digikey.com/en/articles/the-co-processor-architecture-an-embedded-system-architecture-for-rapid-prototyping?dclid=CNKZ07jXtvcCFQvjmgodgi4Bwg
Editor’s Note — Although well known for its digital processing performance and throughput, the co-processor architecture provides the embedded systems designer opportunities to implement project management strategies, which improve both development costs and time to market. This article, focused specifically upon the combination of a discrete microcontroller (MCU) and a discrete field programmable gate array (FPGA), showcases how this architecture lends itself to an efficient and iterative design process. Leveraging researched sources, empirical findings, and case studies, the benefits of this architecture are explored and exemplary applications are provided. Upon this article’s conclusion, the embedded systems designer will have a better understanding of when and how to implement this versatile hardware architecture.
Introduction
The embedded systems designer finds themselves at a juncture of design constraints, performance expectations, and schedule and budgetary concerns. Indeed, even the contradictions in modern project management buzzwords and phrases further underscore the precarious nature of this role: “fail fast”; “be agile”; “future-proof it”; and “be disruptive!”. The acrobatics involved in even trying to satisfy these expectations can be harrowing, and yet, they have been spoken and continue to be reinforced throughout the market. What is needed is a design approach, which allows for an evolutionary iterative process to be implemented, and just like with most embedded systems, it begins with the hardware architecture.
The co-processor architecture, a hardware architecture known for combining the strengths of both microcontroller unit (MCU) and field programmable gate array (FPGA) technologies, can offer the embedded designer a process capable of meeting even the most demanding requirements, and yet it allows for the flexibility necessary to address both known and unknown challenges. By providing hardware capable of iteratively adapting, the designer can demonstrate progress, hit critical milestones, and take full advantage of the rapid prototyping process.
Within this process are key project milestones, each with their own unique value to add to the development effort. Throughout this article, these will be referred to by the following terms: The Digital Signal Processing with the Microcontroller milestone, the System Management with the Microcontroller milestone, and the Product Deployment milestone.
By the conclusion of this article, it will be demonstrated that a flexible hardware architecture can be better suited to modern embedded systems design than a more rigid approach. Furthermore, this approach can result in improvements to both project cost and time to market. Arguments, provided examples, and case studies will be used to defend this position. By observing the value of each milestone within the design flexibility that this architecture provides, it becomes clear that an adaptive hardware architecture is a powerful driver in pushing embedded systems design forward.
Tomi Engdahl says:
Run Machine Learning Code in an Embedded IoT Node to Easily Identify Objects
https://www.digikey.com/en/articles/run-machine-learning-code-in-an-embedded-iot-node?dclid=CNztkbrXtvcCFQ2DmgodNTgJfg
Internet of Things (IoT) networks operating in dynamic environments are being expanded beyond object detection to include visual object identification in applications such as security, environmental monitoring, safety, and Industrial IoT (IIoT). As object identification is adaptive and involves using machine learning (ML) models, it is a complex field that can be difficult to learn from scratch and implement efficiently.
The difficulty stems from the fact that an ML model is only as good as its data set, and once the correct data is acquired, the system must be properly trained to act upon it in order to be practical.
This article will show developers how to implement Google’s TensorFlow Lite for Microcontrollers ML model into a Microchip Technology microcontroller. It will then explain how to use the image classification and object detection learning data sets with TensorFlow Lite to easily identify objects with a minimum of custom coding.
It will then introduce a TensorFlow Lite ML starter kit from Adafruit Industries that can familiarize developers with the basics of ML.
ML for embedded vision systems
In a broad sense, ML gives a computer or an embedded system pattern recognition capabilities similar to those of a human. From a human sensory standpoint, this means using sensors such as microphones and cameras to mimic the human perceptions of hearing and seeing. While sensors make it easy to capture audio and visual data, once the data is digitized and stored it must be processed so it can be matched against stored patterns in memory that represent known sounds or objects. The challenge is that the image data a camera captures for a visual object will not exactly match the data stored in memory for that object. An ML application that needs to visually identify the object must process the data so that it can accurately and efficiently match the pattern captured by the camera to a pattern stored in memory.
There are different libraries or engines used to match the data captured by the sensors. TensorFlow is an open-source code library that is used to match patterns. The TensorFlow Lite for Microcontrollers code library is specifically designed to be run on a microcontroller, and as a consequence has reduced memory and CPU requirements to run on more limited hardware. Specifically, it requires a 32-bit microcontroller and uses less than 25 kilobytes (Kbytes) of flash memory.
However, while TensorFlow Lite for Microcontrollers is the ML engine, the system still needs a learning data set of the patterns it is to identify. Regardless of how good the ML engine is, the system is only as good as its learning data set, and for visual objects a learning data set with many large models can require multiple gigabytes of data. More data requires higher CPU performance to find an accurate match quickly, which is why these types of applications normally run on powerful computers or high-end laptops.
For an embedded systems application, it should only be necessary to store those specific models in a learning data set that are necessary for the application. If a system is supposed to recognize tools and hardware, then models representing fruit and toys can be removed. This reduces the size of the learning data set, which in turn lowers the memory needs of the embedded system, thus improving performance while reducing costs.
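The pruning idea above can be sketched in a few lines. The label names and per-class sizes below are invented for illustration; real learning data sets are not organized as a simple dictionary, but the arithmetic is the same: drop the classes the application does not need and the remaining data has to fit in the microcontroller's flash.

```python
# Hypothetical per-class model sizes, in Kbytes, for a learning data set.
model_sizes_kb = {
    "screwdriver": 120, "wrench": 110, "drill": 150,   # classes we need
    "apple": 95, "banana": 90, "toy_car": 130,         # classes we do not
}

needed = {"screwdriver", "wrench", "drill"}

# Keep only the classes the application must recognize.
pruned = {label: kb for label, kb in model_sizes_kb.items() if label in needed}

full_kb = sum(model_sizes_kb.values())
pruned_kb = sum(pruned.values())
print(f"full: {full_kb} Kbytes, pruned: {pruned_kb} Kbytes")

# The pruned set must fit alongside the code in the ATSAMD51J19A's
# 512 Kbytes of flash; the full set would not.
assert pruned_kb < 512
```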
An ML microcontroller
To run TensorFlow Lite for Microcontrollers, Microchip Technology is targeting machine learning in microcontrollers with the Arm® Cortex®-M4F-based ATSAMD51J19A-AFT microcontroller (Figure 1). It has 512 Kbytes of flash memory with 192 Kbytes of SRAM memory and runs at 120 megahertz (MHz). The ATSAMD51J19A-AFT is part of the Microchip Technology ATSAMD51 ML microcontroller family. It is compliant with automotive AEC-Q100 Grade 1 quality standards and operates over -40°C to +125°C, making it applicable for the harshest IoT and IIoT environments. It is a low-voltage microcontroller and operates from 1.71 to 3.63 volts when running at 120 MHz.
Tomi Engdahl says:
https://www.digikey.com/en/articles/specification-and-use-of-muting-functions-on-light-curtains?dclid=CNa3o7bXtvcCFZafmgodlqoLDg
Tomi Engdahl says:
Yocto or Buildroot? Which to Use when Building your Custom Embedded Systems
https://www.incredibuild.com/blog/yocto-or-buildroot-which-to-use-when-building-your-custom-embedded-systems?utm_medium=email
So, you’re building the next big smart microwave with an integrated tablet. That’s great! Except now you need to build a custom operating system to help manage everything without burning your food (and hopefully your house down). You need a project that can help you create an effective Linux build, and it has to be fast. We’ve got good news! There are some great projects to help you do it. The real question is, which one should you go with?
There are quite a few options to choose from, but for this article, we’ll focus on the two most popular frameworks out there: Yocto and Buildroot. These two projects have become go-to’s for teams looking to build embedded systems for quite different reasons, and they offer unique benefits (and some drawbacks) that make them worth a look. So, let’s break them down to see which is the better choice for you.
Tomi Engdahl says:
OPC Unified Architecture
https://en.wikipedia.org/wiki/OPC_Unified_Architecture
IEC61850 companion specification for electrical substation automation systems
https://opcfoundation.org/markets-collaboration/iec61850/
Tomi Engdahl says:
https://ldra.com/ldra-blog/software-requirements-change-management-case-study/?utm_medium=email
Tomi Engdahl says:
The open source repository of OPC UA is now available on GitHub at http://github.com/opcfoundation. Open source is a very important strategy to eliminate roadblocks to adoption of the technology.
https://opcfoundation.org/news/opc-foundation-news/opc-foundation-announces-opc-ua-open-source-availability/
There are a number of OPC open-source initiatives already available from suppliers, research institutes and academia for OPC UA. The open source repository from the OPC Foundation is intended to supplement these other open-source initiatives, providing additional value. The OPC Foundation has committed resources to moderate/maintain and extend the technology to keep pace with technology changes in the industry as well as the extensions to the OPC UA architecture and corresponding companion specification.
The OPC Foundation has always followed the model of validating the OPC specifications by requiring reference implementations prior to releasing any OPC specifications. This strategy guarantees that the OPC specifications are not just academic, as the reference implementation validates that the technology solves real-world problems and can be adopted into real products. The availability of the OPC UA technology as open-source allows OPC to be leveraged in many additional markets providing information integration and interoperability from the embedded world to the cloud.
Tomi Engdahl says:
List of Open Source OPC UA Implementations – open62541/open62541 Wiki
https://github-wiki-see.page/m/open62541/open62541/wiki/List-of-Open-Source-OPC-UA-Implementations
open62541 is an open source C (C99) implementation of OPC UA licensed under the Mozilla Public License v2.0.
https://open62541.org/
Tomi Engdahl says:
Take Control of Your RISC-V Codebase
Jan. 25, 2022
Delivering more complex software at an ever-increasing pace raises the risks of software errors, which can affect product quality as well as cause security issues. This becomes even more of a reality with the relatively new RISC-V codebase.
https://www.electronicdesign.com/technologies/embedded-revolution/article/21215008/iar-systems-take-control-of-your-riscv-codebase
What you’ll learn:
Reusing your codebase for other projects.
Why code quality is such a big issue in your RISC-V codebase.
What are the main coding standards that could be applied to your code build?
Why code-analysis tools offer the fastest ways to better code.
When we talk about taking control of your RISC-V codebase, there are really two aspects to it. The first is reusing your codebase for future projects. The second is that poor code quality is actually a widespread problem—there’s quite a bit of evidence to support the claim that bad coding practices lead directly to vulnerabilities.
Clearly, then, every developer and company must improve code quality so that the software stands the test of time. In other words, it needs to be defect-free, or as close to defect-free as possible.
Tomi Engdahl says:
Arduino And Git: Two Views
https://hackaday.com/2022/04/29/arduino-and-git-two-views/
You can’t do much development without running into Git, the version control management system. Part of that is because so much code lives on GitHub which uses Git, although you don’t need to know anything about that if all you want to do is download code. [Dr. Torq] has a good primer on using Git with the Arduino IDE, if you need to get your toes wet.
You might think if you develop by yourself you don’t need something like Git. However, using a version control system is a great convenience, especially if you use it correctly. There’s a bug out in the field? What version of the firmware? You can immediately get a copy of the source code at that point in time using Git. A feature is broken? It is very easy to see exactly what changed. So even if you don’t work in a team, there are advantages to having source code under control.
Tutorial: Git an Arduino IDE Workflow
https://thenewstack.io/tutorial-git-an-arduino-ide-workflow/
GitHub Tutorial without using the Command Line
https://www.youtube.com/watch?v=tCuPbW31vAw&t=1s
Tomi Engdahl says:
https://hackaday.com/2022/05/03/linux-fu-the-infinite-serial-port/
Tomi Engdahl says:
https://etn.fi/index.php/tekniset-artikkelit/13520-suorituskyky-ja-tehonkulutus-tasapainoon
Tomi Engdahl says:
https://www.geocene.com/tech/hardware/2022/05/03/Coding-Up-an-IoT-PCB-Design.html
Tomi Engdahl says:
Module Type Package (MTP)
Offering process functions as a service for modular automation solutions with plug-and-produce capabilities is the future of the process industry.
https://new.abb.com/control-systems/modular-automation/module-type-package
Tomi Engdahl says:
Creating An Image Format For Embedded Hardware
https://hackaday.com/2022/05/08/creating-an-image-format-for-embedded-hardware/
Tomi Engdahl says:
PIF – «Portable Image File» Format
https://github.com/gfcwfzkm/PIF-Image-Format
The Portable Image Format (PIF) is a basic, bitmap-like image format focused on ease of implementation and small size for embedded applications. The file format not only offers special, reduced color sets to cut size where 24-bit color is not required (or cannot be rendered by the display), but also features variable-sized color tables to achieve good-looking custom images at a reduced bits-per-pixel size. To further reduce the size of the image data, a simple RLE compression can be used without losing too many cycles on decompression. Thanks to support for various bits-per-pixel formats, RGB565 and RGB332 data can be written directly to LCD displays that support them, with no additional image data conversion needed.
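The run-length encoding idea mentioned above is simple enough to sketch. PIF defines its own exact byte-level RLE scheme (see the repository); the generic (count, value) version below only illustrates why RLE suits images with long runs of identical palette indices and why decompression costs so few cycles.

```python
def rle_encode(pixels):
    """Run-length encode a flat list of palette indices as (count, value) pairs."""
    if not pixels:
        return []
    runs = []
    count, current = 1, pixels[0]
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((count, current))
            count, current = 1, p
    runs.append((count, current))
    return runs

def rle_decode(runs):
    """Expand (count, value) pairs back into the flat pixel list."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

# A tiny 1-bpp image row: long runs compress well.
row = [0] * 10 + [1] * 3 + [0] * 10
encoded = rle_encode(row)
print(encoded)                      # three runs instead of 23 pixels
assert rle_decode(encoded) == row   # lossless round trip
```

Decoding is a single pass with no lookahead, which is why an MCU can unpack such data on the fly while pushing pixels to a display.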
Tomi Engdahl says:
Putting DevOps to use – safer code for embedded systems
https://etn.fi/index.php/tekniset-artikkelit/13534-devops-kaeyttoeoen-turvallisempaa-koodia-sulautettuihin
If you develop embedded software or IoT devices, you have probably heard of DevOps. It is a set of software development practices focused on short development cycles, frequent releases, and fast feedback, in which development and operations are closely intertwined. It is time to get the full benefit out of it.
DevOps has been a real shot in the arm for almost all other software development over the past decade, especially in the cloud, mobile apps, and games. So far, however, adoption has been slow among embedded software and IoT device developers.
First, one thing must be acknowledged: every piece of software has bugs. There is academic research showing that for every thousand lines of delivered code, 50–100 bugs are produced during development. About 5 percent of them are still present in the deployed software. Acknowledging this and preparing for problems in deployment is a cornerstone of DevOps. It is achieved by systematically monitoring the software during deployment.
The primary purpose of Internet of Things (IoT) devices is to provide data and insights to improve business, for example through predictive maintenance of machines. We can apply the same thinking to firmware to learn how it behaves in the field. This requires a so-called Device Feedback Loop, which means DevOps-style monitoring for firmware: it provides the ability to report all kinds of runtime problems occurring on devices back to the developers, together with diagnostic information explaining what happened.
Reporting errors, especially crashes, is probably the first thing that comes to mind. Crashes reported by customers can be hellishly difficult to analyze, because the feedback received often lacks enough detail for developers to reproduce the problem. A device feedback loop can raise an alert when a software fault occurs, without any separate user action, and give the product team the information needed to pinpoint the problem. It also makes it possible to improve testing so that similar problems can be avoided in the future.
Tomi Engdahl says:
A deck of cards to help software development
https://www.uusiteknologia.fi/2022/05/09/korttipakasta-apua-ohjelmistokehitykseen/
The University of Jyväskylä has developed a new software development method for startup companies. The solution is built around 17 information cards containing proven practices and ways of working. The card deck focuses on requirements engineering on startups’ terms.
Doctoral thesis: Improving Software Development in Early-Stage Startups (link, PDF, 27 MB).
http://urn.fi/URN:ISBN:978-951-39-9133-3
Tomi Engdahl says:
IIoT is revolutionizing the control cabinet
https://etn.fi/index.php/tekniset-artikkelit/13535-iiot-mullistaa-ohjauskaapin
In all its simplicity, the control cabinet is the cornerstone of industrial machine installations. Cabinets originally meant for just a few relays, fuses, switchgear, and simple controllers are now in transition.
With the IIoT (the industrial internet), Industry 4.0, and other operational efficiency improvements, all the new automation components and systems are now being placed in control cabinets. Machines, however, often need more space than is available.
With floor space limited, technological advances are needed to make the most efficient possible use of the limited cabinet space.
Tomi Engdahl says:
Dissertation finds coding methods suited to startups
https://etn.fi/index.php/13-news/13537-vaeitoes-loeysi-startupeille-sopivat-koodausmenetelmaet
The University of Jyväskylä has developed a software development method suited to startup companies. In his doctoral dissertation in information systems science, Kai-Kristian Kemell developed a method that helps early-stage startups, among other things, find potential users for their software.
Startups are a globally significant economic phenomenon; in Europe alone, tens of billions of euros have been invested in them. Startups are also seen as pioneers of innovation who manage to challenge markets with new ideas.
Startups differ from other companies in various ways. Because of these differences, researchers have tried to understand which existing research findings also apply to startups, and which existing software development methods suit the specific needs of startups.
In an ordinary company, requirements engineering – that is, the goals and requirements of a software project – is handled in cooperation with the company’s customer. An early-stage startup, however, rarely has a clear customer or end user for the software in sight.
“The worst possible situation is that a startup develops some product among themselves that they want to build, but in practice they do not know whether anyone else will ultimately want to use it. The idea of the cards is to start finding out, in various ways and at an early stage, whether the whole thing is even worth doing,” Kemell says, summing up the benefits of the research.
Improving Software Development in Early-Stage Startups
https://jyx.jyu.fi/handle/123456789/80758
Tomi Engdahl says:
https://hackaday.com/2022/05/10/data-alignment-across-architectures-the-good-the-bad-and-the-ugly/
Tomi Engdahl says:
Easy Network Config For IoT Devices With RGBeacon
https://hackaday.com/2022/05/11/easy-network-config-for-iot-devices-with-rgbeacon/
When you’re hooking up hardware to a network, it can sometimes be a pain to figure out what IP address the device has ended up with. [Bas Pijls] often saw this problem occurring in the classroom, and set about creating a simple method for small devices to communicate their IP address and other data with a minimum of fuss.
[Bas] specifically wanted a way to do this without adding a display to the hardware, as this would add a lot of complexity and expense to simple IoT devices. Instead, RGBeacon was created, wherein a microcontroller flashes out network information with the aid of a single RGB WS2812B LED.
In fact, all three colors of the RGB LED are used to send information to a computer via a webcam. The red channel flashes out a clock signal, the green channel represents the beginning of a byte, and the blue channel flashes to indicate bits that are high. With a little signal processing, a computer running a JavaScript app in a web browser can receive information from a microcontroller flashing its LEDs via a webcam.
https://fabacademy.org/2022/labs/waag/students/bas-pijls/blog/week14/
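The three-channel scheme described above can be simulated end to end in a few lines. This is an illustrative model, not the exact RGBeacon implementation: the frame layout, bit order, and sampling rule below are assumptions chosen to show how a clock channel, a byte-start marker, and a data channel combine into a byte stream.

```python
# Illustrative RGBeacon-style codec: red toggles as a clock, green marks the
# first bit of each byte, blue carries the data bit. Bit order (MSB first)
# and per-bit framing are assumptions for illustration only.

def encode(data):
    """Turn bytes into a list of (red, green, blue) on/off frames, one per bit."""
    frames = []
    clock = False
    for byte in data:
        for i in range(8):
            clock = not clock                    # toggle red each bit period
            green = (i == 0)                     # green marks a byte start
            blue = bool((byte >> (7 - i)) & 1)   # data bit, MSB first (assumed)
            frames.append((clock, green, blue))
    return frames

def decode(frames):
    """Recover bytes by sampling blue on every red-clock transition."""
    data, bits, last_clock = [], [], None
    for red, green, blue in frames:
        if red != last_clock:                    # clock transition = sample point
            if green and len(bits) == 8:         # previous byte is complete
                data.append(int("".join(bits), 2))
                bits = []
            bits.append("1" if blue else "0")
            last_clock = red
    if len(bits) == 8:                           # flush the final byte
        data.append(int("".join(bits), 2))
    return bytes(data)

msg = b"\xc0\xa8\x01\x2a"                        # e.g. the IP address 192.168.1.42
assert decode(encode(msg)) == msg                # lossless round trip
```

A webcam-based receiver would do the same sampling on per-frame average brightness of each color channel instead of booleans.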
Tomi Engdahl says:
https://hackaday.com/2022/05/11/bare-metal-stm32-using-the-i2c-bus-in-master-transceiver-mode/
Tomi Engdahl says:
Is ESP8266 5 V Tolerant? This Curve Tracer Says Yes!
https://hackaday.com/2022/05/12/is-esp8266-5-v-tolerant-this-curve-tracer-says-yes/
Some people state that ESP8266 is tolerant of 5 V logic levels on its GPIOs, while others vehemently disagree, pointing at the datasheet-stated 3.6 V maximum. Datasheets aren’t source code for compiling the chip, however, and aren’t universally correct and complete either. [Avian] decided to dig deeper into the claims, conduct an experiment with an actual ESP8266 chip, then share the results for all of us.
For the experiment, he used a curve tracer – a device capable of producing a wide range of voltages and measuring the current being consumed, then plotting the voltage-to-current relationship. This helps characterize all sorts of variables, from diode breakdown voltages to transistor characteristics. The curve tracer he uses is a capable and professional-looking DIY build of his, and arguably, deserves a separate write-up!
A footnote about ESP8266 5V tolerance
https://www.tablix.org/~avian/blog/archives/2022/05/a_footnote_about_esp8266_5v_tolerance/
It never occurred to me that the GPIO pins of the ESP8266 would be able to work with 5V levels directly. I’ve done a few projects with this IC and always considered it a purely 3.3V device that requires a level shifter to work with 5V logic. Normally, 5V tolerance of pins is advertised in some prime real estate in a microcontroller datasheet. The ESP8266’s datasheet, on the other hand, doesn’t mention it explicitly. Even the specification of voltage levels for digital IO is kind of vague about it:
The table says that the maximum input voltage for a high logic level on a digital pin is 3.6V. However, the paragraph below it also talks about pins having some protection against overvoltage. My understanding is that there’s a circuit that shunts current from a pin to ground when the pin voltage reaches 6V, similar to a Zener diode. Snap-back might be a mistranslation of fold-back. In any case, the difference between the 6V breakdown and the 5.8V hold voltage doesn’t seem that significant. What good is this protection if the maximum allowed voltage is only 3.6V?
After searching the web a bit I came across a 2016 post on Hackaday. The article and comments give a good overview of what’s known about this. In short, official documentation is vague, experience on the topic is mixed but many people believe that pins are indeed 5V tolerant.
I wanted to quickly check this. As people discussed in the Hackaday comments, a digital pin that only accepts voltages up to the supply voltage will normally have an ESD protection diode connected from the pin to the supply. Such a pin will start sinking current when voltage on it exceeds supply plus a diode voltage drop. That is where the typical 3.6V specification for VIH maximum comes from: a 3.3V supply plus 0.3V Schottky diode drop. On the other hand, a pin that can tolerate higher voltages will instead have a Zener diode to ground, or some other circuit that performs a similar function. In that case, the pin will only sink current at higher voltages.
I connected a spare ESP-01 module to my curve tracer. I measured the I-V curve of the GPIO2 pin since that one is not connected to any external pull-up or pull-down resistors on this module. The module was powered from a development board with a 3.3V supply voltage. I used a simple Arduino program that just called pinMode(2, INPUT). This ensured that the pin was set to high impedance and not sourcing or sinking current, except for any protection circuit on it.
In conclusion, it seems that indeed the digital pins on the ESP8266 have provisions for tolerating voltages higher than the supply. This doesn’t necessarily mean that they will also work reliably at those voltages. Even if they will, I would still want to avoid using 5V levels directly. The ESP8266 bootloader likes to put various pins in output mode. A 5V level on a 3.3V-powered output pin is certainly not healthy, regardless of whether the pin tolerates such a level in input mode.
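The inference [Avian] makes from the I-V curve can be condensed into a toy model. The component values below are the ones quoted in the post (3.3V supply, 0.3V Schottky drop, 6V breakdown); this illustrates the reasoning, not a simulation of the real chip.

```python
# Toy model of the two pin-protection schemes discussed above. A pin with an
# ESD diode clamped to the supply starts sinking current just above
# VDD + one diode drop; a Zener-style clamp to ground conducts only at its
# breakdown voltage. Values are those quoted in the post.

VDD = 3.3               # supply voltage, volts
SCHOTTKY_DROP = 0.3     # drop of an ESD diode to the supply rail
ZENER_BREAKDOWN = 6.0   # breakdown of a protection clamp to ground

def onset_voltage(scheme):
    """Pin voltage at which the protection circuit starts sinking current."""
    if scheme == "diode_to_supply":
        return VDD + SCHOTTKY_DROP    # classic non-tolerant pin: ~3.6V
    if scheme == "zener_to_ground":
        return ZENER_BREAKDOWN        # tolerant pin: conducts only near 6V
    raise ValueError(scheme)

def tolerates_5v(scheme):
    """A pin tolerates a 5V input if its protection stays off at 5V."""
    return onset_voltage(scheme) > 5.0

# The 3.6V VIH maximum matches the diode-to-supply scheme; the measured
# ESP8266 curve, conducting only near 6V, matches the clamp-to-ground scheme.
assert not tolerates_5v("diode_to_supply")
assert tolerates_5v("zener_to_ground")
```

The model also makes the caveat in the conclusion concrete: input-mode tolerance says nothing about a pin that the bootloader switches to output mode, where a 5V level would fight the pin driver directly.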