The idea for this post started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.
Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues like broken connections, latency and security.
Those issues are especially central in the development of popular Internet of Things devices and in adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.
Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example computer networks, modern industrial control systems, the electronics in modern cars, and Internet of Things systems fall into this category. This implies that a different approach is necessary.
Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards designed to woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products become moot. Leaders in embedded systems development lifecycle management solutions are speaking out on new approaches available today for developing advanced products and systems.
Traditional approaches
C/C++
Traditionally embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still chosen today over C++ when maximum efficiency is wanted.
C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a real concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.
And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors – let the Arduino system be an example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language” and your “.ino” files are three lines away from being standard C++.
C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example an embedded Linux system that runs a C++ application. When you write C or C++ programs for modern embedded Linux, you typically use the GCC compiler toolchain to do the compilation and a makefile to manage the compilation process.
Most organizations put considerable focus on software quality, but software security is different. While security is a much talked about topic in today's embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other methods.
Java
Java is a general-purpose computer programming language that is concurrent, class-based and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VOIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.
Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.
Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).
My feeling with Java is that if your embedded systems platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.
Increasing trends
Databases
Embedded databases are coming more and more to embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with application software that requires access to stored data, such that the database system is “hidden” from the application's end-user and requires little or no ongoing maintenance.
There are many possible databases. The first choice is what kind of database you need. The main choices are SQL databases and simpler key-value stores (also called NoSQL databases).
SQLite is the database chosen by virtually all mobile operating systems: for example Android and iOS ship with SQLite. It is also built into, for example, the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of lines of data).
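As a small sketch of using SQLite (here through Python's built-in sqlite3 module, which wraps the same engine the C API exposes; assumes python3 is installed; the table and data are made up):

```shell
#!/bin/sh
# Create an in-memory SQLite database, insert a row and query it back.
# An embedded system would use a file on flash instead of ":memory:".
python3 - <<'EOF'
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('temp', 21.5)")
for sensor, value in conn.execute("SELECT sensor, value FROM readings"):
    print(sensor, value)   # prints "temp 21.5"
conn.close()
EOF
```

The same SQL would work unchanged against the SQLite C API from a C/C++ application.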
If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command line tool psql), but instead of a normal database file you are actually accessing the internals of your software.
Software quality
Building quality into embedded software doesn't happen by accident. Quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.
Safety
Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches for safety. Embedded Software Can Kill But Are We Designing Safely?
IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not, and 11 percent did not even know if they were designing to a standard or not.
If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious; quite frankly, this is quite disturbing.
Security
The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues like broken connections, latency and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.
The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:
MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY
Some techniques for building security into embedded systems:
Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
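As one small, hedged example of the last point: verify an update image before installing it. A bare SHA-256 digest, as sketched below, only checks integrity; a real system would verify a PKI signature (e.g. with openssl dgst -verify) so the digest itself cannot be forged. The file names are hypothetical.

```shell
#!/bin/sh
# Record a digest when the update image is built, then check it on the
# device before flashing. A tampered image must be rejected.

echo "firmware v1" > update.img
sha256sum update.img > update.img.sha256   # recorded at build time

# ... image and digest file are transferred to the device ...

# At install time, verify before flashing
if sha256sum -c update.img.sha256 >/dev/null 2>&1; then
    echo "update verified"
else
    echo "update rejected"
fi

# Simulate tampering: the check must now fail
echo "evil" >> update.img
if sha256sum -c update.img.sha256 >/dev/null 2>&1; then
    echo "tampered image accepted"
else
    echo "tampered image rejected"
fi
```

This prints "update verified" and then "tampered image rejected".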
Flood of new languages
Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). So there is a huge push to use interpreted and scripting languages also in embedded systems. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example a web user interface). Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. There are also many other relevant languages.
One workable solution, especially for embedded Linux systems, is to implement part of the functionality as C programs and part with scripting languages. This makes it possible to change the operation simply by editing the script files, without the need to rebuild the whole system software. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of embedded system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages work well even in quite small embedded environments.
I have successfully used, among others, the Bash, AWK, PHP, Python and Lua scripting languages with embedded systems. They work really well, and it is really easy to make custom code quickly. It doesn't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used. High-level, dynamically typed languages such as Python, Ruby and JavaScript are easy – and even fun – to use. They lend themselves to code that can easily be reused and maintained.
There are some things that need to be considered when using scripting languages. The lack of static checking compared to a regular compiler can cause problems to be thrown at run time, but you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.
Bash and other shell scripting
Shell commands are the native language of any Linux system. With the thousands of commands available to the command line user, how can you remember them all? The answer is, you don't. The real power of the computer is its ability to do the work for you – and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell also provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.). Your scripts can be truly powerful. Creating a script is extremely straightforward: it can be done in a separate graphical editor or through a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example starting and stopping different Linux services in the right way).
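A minimal sketch of those programming features in a script (the service names below are made up for illustration):

```shell
#!/bin/bash
# Demonstrates variables, a for loop and an if test in the style of a
# service start script. A real init script would invoke /etc/init.d/$svc.

services="network logger webui"

for svc in $services; do
    if [ "$svc" = "webui" ]; then
        echo "starting $svc last"
    else
        echo "starting $svc"
    fi
done
```

Save it to a file, mark it executable with chmod +x, and the shell runs the commands just as if you had typed them.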
The most common type of shell script is a bash script. Bash is a commonly used scripting language for shell scripts. In BASH scripts (shell scripts written in BASH) users can use more than just BASH to write the script: there are commands that allow users to embed other scripting languages into a BASH script.
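For example, an AWK program can be embedded directly in a bash script to process text (the sensor data below is made up for illustration):

```shell
#!/bin/bash
# A bash script with an embedded AWK program that averages the "temp"
# readings in some sample data.

data="temp 21.5
temp 20.5
humidity 40"

echo "$data" | awk '
$1 == "temp" { sum += $2; n++ }
END { printf "average temp: %.1f\n", sum / n }
'   # prints "average temp: 21.0"
```

The same pattern works for embedding Python, Perl or sed fragments inside a larger bash script.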
There are also other shells. For example many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android and FreeBSD. BusyBox has become the de facto standard core user space toolset for embedded Linux devices and Linux distribution installers.
Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.
Lua
Lua is a lightweight cross-platform multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.
There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.
PHP
PHP is a server-side HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites but can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for what they are good at – building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.
Python
Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.
Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.
Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.
Python was designed to be highly extensible. Libraries like NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.
For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-sourced language: Python.
Python on embedded platforms? It is quick to develop apps and quick to debug – really easy to make custom code quickly. Sometimes the lack of static checking vs. a regular compiler can cause problems to be thrown at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool that will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.
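As a small illustration of that kind of string-plus-dictionary task, run from a shell (assumes python3 is installed on the device; the input line is made up):

```shell
#!/bin/sh
# Count word frequencies with a Python dictionary -- string manipulation
# plus dictionary search, the task type the cited study measured.
python3 - <<'EOF'
line = "led on led off led on"

counts = {}
for word in line.split():
    counts[word] = counts.get(word, 0) + 1

print(counts["led"], counts["on"], counts["off"])   # prints "3 2 1"
EOF
```

The equivalent C would need manual hash-table and string handling, which is exactly where the productivity gap shows up.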
JavaScript and node.js
JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications, cross-platform desktop software, and, thanks to Node.js, it's even capable of creating and running servers and databases! There is a huge community of developers.
Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.
Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.
It doesn’t require a complicated IDE; all you really need is a terminal.
JavaScript is a high-level language. While this usually means that it's more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. This means it may not be suitable for situations where timing and speed are critical.
JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript. The Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager. The Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!
JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We for example see a surprising amount of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.
JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!
Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event driven, and has very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that can map GET method specific URLs to your software function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.
Future trends
According to the New approaches to dominate in embedded development article, there will be several camps of embedded development in the future:
One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems or else fixed-function, high-volume systems where cost is everything.
Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.
A third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.
The transition will take time. The environment is different from the computer and mobile worlds. There are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.
Sources
Most important information sources:
New approaches to dominate in embedded development
A New Approach for Distributed Computing in Embedded Systems
New Approaches to Systems Engineering and Embedded Software Development
Embracing Java for the Internet of Things
Embedded Linux – Shell Scripting 101
Embedded Linux – Shell Scripting 102
Embedding Other Languages in BASH Scripts
PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog
JavaScript: The Perfect Language for the Internet of Things (IoT)
Anyone using Python for embedded projects?
MICROCONTROLLERS AND NODE.JS, NATURALLY
Node.JS Appliances on Embedded Linux Devices
The smartest way to program smart things: Node.js
Embedded Software Can Kill But Are We Designing Safely?
DEVELOPING SECURE EMBEDDED SOFTWARE
1,687 Comments
Tomi Engdahl says:
https://hackaday.com/2019/01/12/preventing-embedded-fails-with-watchdogs/
Tomi Engdahl says:
Taming Concurrency
https://semiengineering.com/taming-concurrency/
What hoops will designers have to jump through to avoid concurrency bugs?
Tomi Engdahl says:
5 Technologies Embedded System Engineers Should Master in 2019
https://www.designnews.com/electronics-test/5-technologies-embedded-system-engineers-should-master-2019/53720324260073?ADTRK=UBM&elq_mid=7141&elq_cid=876648
Here are the technologies that have the greatest impact on the way we design and develop embedded systems.
Technology #1 – Defect Management
In 2018, I spent a lot of time talking about debugging techniques that developers can use to minimize the defects that are in their systems. The fact is, debugging techniques are the last resort to remove defects from an embedded system. The processes that are put in place during the design and development are far more important in minimizing defects. There have been several advances in the last few years that many embedded developers have not been taking advantage of. These include:
Continuous integration servers
Hardware-in-loop testing
Unit testing
Automated testing
There is a lot that developers can do in these areas to reduce the time spent debugging. In many cases developers tell themselves that they will look into these items or implement them after the next delivery when there is more time.
Technology #2 – Cloud Connectivity
Many “traditional” embedded systems are, or were, disconnected systems that had no access to the Internet. With the big push for the IoT, many systems are now adding wireless or wired connectivity and streaming loads of data up to the cloud for processing and storage. The traditional embedded software developer in general doesn’t have much experience with setting up cloud services, working with MQTT, or the many other technologies that are required for use with the cloud. There are several activities that developers should put into their calendars this year in order to become more familiar with cloud connectivity. These activities include:
Set up a cloud service provider such as Amazon Web Services, Google Cloud, etc.
Set up private and public keys along with a device certificate.
Write a device policy for devices connecting to the cloud service
Connect an embedded system to the cloud service
Transmit and receive information to the cloud
Build a basic dashboard to examine data in the cloud and control the device
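As a small step toward the "transmit and receive" activities above, here is a sketch in C of formatting a telemetry topic and JSON payload before handing them to an MQTT client. The topic scheme, field name, and function name are invented for illustration; real cloud services such as AWS IoT Core define their own conventions.

```c
#include <stdio.h>

/* Build an MQTT-style topic string and a JSON payload for one sensor
 * reading. Returns 0 on success, -1 if either buffer is too small.
 * The "devices/<id>/telemetry" scheme is hypothetical. */
int build_telemetry(char *topic, size_t tlen, char *payload, size_t plen,
                    const char *device_id, double temperature)
{
    int n = snprintf(topic, tlen, "devices/%s/telemetry", device_id);
    if (n < 0 || (size_t)n >= tlen)
        return -1;                      /* topic truncated */

    n = snprintf(payload, plen, "{\"temperature\":%.1f}", temperature);
    if (n < 0 || (size_t)n >= plen)
        return -1;                      /* payload truncated */

    return 0;
}
```

Checking the snprintf return value matters on embedded targets: a silently truncated topic would publish data to the wrong place rather than fail visibly.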
Technology #3 – Security
With many devices now connecting to the cloud, a major concern facing developers is how to secure their systems. There are several new technologies, more than I could list in this post, that will be impacting how developers design their systems. These technologies range from security processors and Arm TrustZone to multi-core microcontrollers that partition secure and non-secure application code. While there are several hardware technology sets available, the available software solutions have been expanding at an extraordinary rate. Many of these technologies are just being introduced, and 2019 is an excellent year to focus on and master security concepts and apply them to your embedded systems.
Technology #4 – Machine Learning
A major theme that we are going to hear about nearly non-stop in 2019 is moving machine learning from the cloud to the edge. Machine learning has been a force to reckon with in the cloud, and the ability to move it to microcontroller-based systems is going to be a game changer.
Technology #5 – Low Power Design
Embedded designers have always had to contend with battery-operated devices, but with more IoT connected devices and sensor nodes, low power design is becoming a crucial design criterion that can dramatically affect the operating costs of a company.
Developers working with battery-operated devices need to stay up to date in several key areas:
Wireless radio technologies
Hardware energy monitoring
Software energy consumption monitoring
Battery architectures
Power regulators
While we should be looking to master these technologies, each area could itself require years to master. It’s important that developers select at least one technology to work at mastering, and then at least keep abreast of the basics and advancements in the other areas.
Tomi Engdahl says:
Making Embedded Software Safe and Secure
https://www.eeweb.com/profile/jayabraham/articles/making-embedded-software-safe-and-secure
Traditional verification techniques of code review and testing may be inadequate to find bugs or to prove that the code is safe and secure
Code review: If we manually inspect this code, we will quickly identify that line 21 may result in a safety concern. On this line of code, we are performing the operation x / (x – y). A divide operation is being performed with integers, and if the variables x and y are equal, a divide-by-zero will occur. Manual inspection of the code prior to this operation is not trivial, and we cannot say with ease that x will never be equal to y. Hours of additional analysis may be needed to check this condition. We could create a guard condition, but this would add inefficiency to the code, plus further requirements and design work to determine what action must be taken if x == y.
Testing approach: If you consider testing, you would need only two test cases for full modified condition/decision coverage (MCDC). These test cases would execute all branches of the code, but they do not sweep the full complement of inputs to the function. Is this sufficient to confirm robustness and safety? An exhaustive test would consider all possible values for each of the inputs, but exhaustive testing for the full combination of two 32-bit integers is not technically feasible.
Although code review and testing reveal some facets of our software, we are still left with the possibility that the code may contain a bug that results in a divide-by-zero program crash. Let’s consider static analysis, including augmenting this approach with formal methods.
Static analysis: Static analysis is often used to supplement test cases. Static analysis, which automates many manual verification tasks such as enforcing coding standards and style guides, can also find defects based on heuristics. For this example, a static analysis tool could broaden test coverage by checking many possible values for the variables. However, as a side effect, it will produce many false warnings for values that will never occur.
Formal methods: Static analysis tools that are based on formal methods verify run-time behavior and prove the absence of errors.
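The example discussed above can be sketched in C. The first function reproduces the risky expression from the article; the second shows the guard condition the author mentions, with function names of my own choosing:

```c
#include <stdbool.h>

/* Unsafe: if x == y, the divisor (x - y) is zero and the program
 * crashes with a divide-by-zero. A formal-methods tool can flag this
 * by proving that nothing constrains x != y. */
int scale_unsafe(int x, int y)
{
    return x / (x - y);
}

/* Guarded version: the failure mode becomes explicit, at the cost of
 * extra code and a new design question (what should the caller do
 * when x == y?). Here we simply report failure. */
bool scale_safe(int x, int y, int *result)
{
    if (x == y)
        return false;           /* divide-by-zero would occur */
    *result = x / (x - y);
    return true;
}
```

This illustrates the trade-off the article describes: the guard removes the crash, but it adds a branch and forces the requirements to specify the x == y case.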
Tomi Engdahl says:
Crash your code – Lessons Learned From Debugging Things That Should Never Happen™
https://hackaday.com/2019/01/22/crash-your-code-lessons-learned-from-debugging-things-that-should-never-happen/
Let’s be honest, no one likes to see their program crash. It’s a clear sign that something is wrong with our code, and that’s a truth we don’t like to see. We try our best to avoid such a situation, and we’ve seen how compiler warnings and other static code analysis tools can help us to detect and prevent possible flaws in our code, which could otherwise lead to its demise. But what if I told you that crashing your program is actually a great way to improve its overall quality? Now, this obviously sounds a bit counterintuitive, after all we are talking about preventing our code from misbehaving, so why would we want to purposely break it?
When Things Go Wrong
Crash Where Crashing Is Due
The thing is, by the time we can tell that our data isn’t as expected, it’s already too late. By working around the symptoms, we’re not only introducing unnecessary complexity (which we most likely have to drag along to every other place the data is passed on to), but are also covering up the real problem hiding underneath. That hidden problem won’t disappear by ignoring it, and sooner or later it will cause real consequences that force us to debug it for good. Except, by that time, we may have obscured its path so well that it takes a lot more effort to work our way back to the origin of the problem.
Worst case, we never get there, and instead, we keep on implementing workaround after workaround, spinning in circles, with the next bug just waiting to happen. We tiptoe around the issue for the sake of keeping the program running, and ignore how futile that is as a long-term solution. We might as well give up and abort right here and now — and I say, you should do exactly that.
Sure, crashing our program is no long-term solution either, but it also isn’t meant to be one. It is meant as indicator that we ended up in a situation that we didn’t anticipate, and our code is therefore not prepared to properly handle it. What led us there, and whether we are dealing with an actual bug or simply flawed logic in our implementation is a different story, and for us to find out.
Assertions Are Optional
By design, assertions are meant as a debugging tool during development, and while the libc documentation advises against it, it is common practice to disable them for a release build. But what if we don’t catch a problem during development, and it shows up in the wild one day? Chances are, that’s exactly what’s going to happen. Without the assertion code, the check that would prevent the problem is never performed, nor do we get any information about it.
Okay, we are talking about purposely crashing our code here, so we could just make it a habit to always leave the assertions enabled, regardless of debug or release build. But that still leaves one other problem.
Assertion Messages Are Useless
If we take a look at the output from a failed assertion, we will know which assert() call exactly failed: the one that made sure the value is in valid range. So we also know that we are dealing with an invalid value. What we don’t know is the actual value that failed the assertion.
Sure, if we happen to get a core dump, and the executable contains debug information, we can use gdb to find out more about that. But unfortunately, we don’t often have that luxury outside of our own development environment, and we have to work with error logs and other debug output instead. In that case, we are left with output of very little value.
Crashing Better
If you ever find yourself in a situation where you have a myriad of reports of the exact same issue, and you are lucky enough to have an error log available for each individual incident, you will learn how frustratingly helpless it feels to know a certain condition failed, but to have zero information what exactly made it fail. It is when you realize and learn the hard way how useless, and almost counterproductive, error messages in the form of “expected situation is not true, period” without any further details really are.
Consider the following two error messages:
Assertion `data->value <= 234' failed
data->value (at 0x564d379681fc) is 255, expected <= 234
The first tells us only that some check failed; the second also reports the offending value itself.
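One way to act on this, in the spirit of the article, is a custom assertion macro that reports the actual offending value and stays enabled in release builds. This is a minimal sketch of the idea, with names of my own invention, not the article's own code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Range check kept as a separate helper so it can be exercised in
 * tests without triggering abort(). */
static int value_in_range(long val, long max)
{
    return val <= max;
}

/* Like assert(), but prints the failing expression together with the
 * actual value and its limit, and is never compiled out. */
#define ASSERT_MAX(val, max)                                              \
    do {                                                                  \
        if (!value_in_range((long)(val), (long)(max))) {                  \
            fprintf(stderr, "%s:%d: %s is %ld, expected <= %ld\n",        \
                    __FILE__, __LINE__, #val, (long)(val), (long)(max));  \
            abort();                                                      \
        }                                                                 \
    } while (0)
```

With such a macro, the error log from a failed check in the field contains the offending value itself, which is exactly the information the article says a bare assert() message lacks.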
As programmers, we grow up being indoctrinated on the importance of error handling, but in our early years, we rarely learn how to properly utilize it, and we might fail to see any actual benefit or even use for it at all.
Tomi Engdahl says:
Can You Trust Your C Compiler?
https://hackaday.com/2019/01/24/can-you-trust-your-c-compiler/
If you are writing a hello world program, you probably aren’t too concerned about how the compiler translates your source code to machine code. However, if your code runs on something that people’s lives depend on, you will want to be a bit pickier and use something like the CompCert compiler. It’s a formally verified compiler, meaning there is a mathematical proof that what you write in C will be correctly translated to machine code. The compiler can generate code for PowerPC, ARM, RISC-V, and x86, accepting a subset of ISO C 99 with a few extensions. While it doesn’t produce code that runs as fast as gcc’s, you can be sure the generated code will do what you asked it to do.
CompCert
http://compcert.inria.fr/
The CompCert project investigates the formal verification of realistic compilers usable for critical embedded software. Such verified compilers come with a mathematical, machine-checked proof that the generated executable code behaves exactly as prescribed by the semantics of the source program. By ruling out the possibility of compiler-introduced bugs, verified compilers strengthen the guarantees that can be obtained by applying formal methods to source programs.
The main result of the project is the CompCert C verified compiler, a high-assurance compiler for almost all of the C language (ISO C99), generating efficient code for the PowerPC, ARM, RISC-V and x86 processors.
The latest release of the CompCert C compiler is version 3.4, released in September 2018.
Tomi Engdahl says:
Cool Tools: A Little Filesystem that Keeps Your Bits on Lock
https://hackaday.com/2019/01/24/cool-tools-a-little-filesystem-that-keeps-your-bits-on-lock/
Filesystems for computers are not the best bet for embedded systems. Even those who know this fragment of truth still fall into the trap and pay for it later on while surrounded by the rubble that once was a functioning project. Here’s how it happens.
Tomi Engdahl says:
Solving the Year 2038 problem in the Linux kernel
How the quest to prevent time from running out led to all corners of the Linux kernel.
https://opensource.com/article/19/1/year2038-problem-linux-kernel
Because of the way time is represented in Linux, a signed 32-bit number can’t represent times after 03:14:07 UTC on January 19, 2038. This Year 2038 (Y2038 or Y2K38) problem is about the time data type representation. The solution is to use 64-bit timestamps.
The problem also involves user space, C library, POSIX, and C standards. I found that the problem is really about interfaces between layers.
Solving one problem in the kernel rarely involves just one thing; it also involves the complexity of interrelated things in the kernel (there is always one more cleanup needed before the change) and interactions with the community (especially true as a newcomer).
The problem: The in-kernel representation of inode timestamps was in struct timespec, which is not Y2038 safe. The proposed solution: Change the representation to struct timespec64, which is Y2038 safe.
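The limit itself is easy to check in C. This sketch is mine, not from the article, and it assumes a platform with a 64-bit time_t (any modern 64-bit Linux); the truncation helper shows what a legacy 32-bit interface does to a timestamp:

```c
#include <stdint.h>
#include <time.h>

/* Returns the UTC calendar year in which a signed 32-bit counter of
 * seconds since 1970 overflows. INT32_MAX seconds after the epoch is
 * 03:14:07 UTC on 2038-01-19. */
int y2038_limit_year(void)
{
    time_t limit = INT32_MAX;         /* requires a 64-bit time_t */
    struct tm *utc = gmtime(&limit);
    return utc->tm_year + 1900;
}

/* What a legacy 32-bit interface does to a 64-bit timestamp: values
 * past INT32_MAX wrap (the exact result of the narrowing conversion
 * is implementation-defined, but wraps on common compilers). */
int32_t truncate_timestamp(int64_t t64)
{
    return (int32_t)t64;
}
```

This is why the kernel work described above moves inode timestamps from struct timespec to struct timespec64: the wider type pushes the overflow out by hundreds of billions of years.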
In January 2016, I posted the first request for comments (RFC) for this, asking if there was any opposition to the approach described above.
I posted another series (actually three) for solving the problem in three separate ways.
But we had to get rid of some old time interfaces before we could do the change. When I posted a series of this, Linus Torvalds did not like one of the interfaces (current_fs_time(sb)) because it took the superblock as an argument to access timestamp granularity. But the timestamps are really a feature of the inode, not the superblock. So, we got rid of this API.
By the end of this whole exercise, we got rid of three in-kernel APIs, rearranged some of the filesystem timestamp handling, handled print formats to support larger timestamps, analyzed 32-bit architecture object dumps, and rewrote at least five versions of the series from scratch. And this was just one of the problems we solved for the kernel. But Y2038 has been one of my favorite projects yet.
Tomi Engdahl says:
2019 Will Be the Year of Open Source
https://www.designnews.com/electronics-test/2019-will-be-year-open-source/174144007660005?ADTRK=UBM&elq_mid=7086&elq_cid=876648
From software and even hardware, we saw more activity in open source than ever before in 2018. And the momentum isn’t likely to slow down in 2019.
Tomi Engdahl says:
Get Ready for Intelligent Real-Time Systems
https://www.designnews.com/electronics-test/get-ready-intelligent-real-time-systems/17528569560076?ADTRK=UBM&elq_mid=7283&elq_cid=876648
Intelligence is quickly making its way from the cloud to the edge. Now is the time to start understanding Intelligent Real-Time Systems.
Intelligent Real-Time Systems (IRS) are microcontroller-based devices that have the ability to learn from data by running a resident artificial intelligence (AI) algorithm.
There have always been two different ways that teams could leverage artificial intelligence in their products. The first, and the most realistic for the last decade, has been to execute the AI algorithms in the cloud. The cloud has provided a unique platform where processing power seems limitless when compared to the processing available on a microcontroller. Machine learning (ML) algorithms could be provided with data and trained to recognize patterns that would otherwise have been nearly impossible for a developer to program (think handwriting character recognition).
Systems that use machine learning in the cloud can still use a real-time embedded system to collect data, but that data is then sent to the cloud for processing, and any response is then relayed back to the embedded system. As the reader can imagine, this is hardly a real-time or deterministic operation. Using the cloud, though, has worked and will continue to work in applications for the foreseeable future.
The second approach, which has generally been out of reach for most systems, is to process the data and execute the machine learning algorithm on the microcontroller. This is a far more interesting solution because it removes latency that would otherwise exist if the data needs to be processed in the cloud. The potential for businesses here is huge for several different reasons such as:
No longer requiring an internet connection, which could reduce bill of material (BOM) costs and system complexity
Decrease in operating costs for cloud services and data processing plans
Offline product differentiation
Reduction in processing latencies and energy consumption
Improved product reliability and potentially security
The use of machine learning in deterministic, real-time systems
First, ARM has released CMSIS-NN, which is a C library designed for running low-level, optimized neural network algorithms on a Cortex-M processor. This allows developers to design and train their high-level machine learning algorithms and then deploy them onto a microcontroller. This can be considered the required foundation in order to run machine learning in an efficient manner, locally without the cloud.
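CMSIS-NN kernels operate on q7 (8-bit fixed-point) data. The following is a plain-C sketch of the core idea only: multiply-accumulate in a wide accumulator, then shift and saturate back to the q7 range. It is not CMSIS-NN's actual API or code, and the function name is invented.

```c
#include <stdint.h>

/* Dot product of two q7 vectors, the building block of a
 * fully-connected neural network layer. Accumulation happens in
 * 32 bits so intermediate sums do not overflow; the result is then
 * requantized by an arithmetic right shift (as on common compilers)
 * and saturated back into the q7 range [-128, 127]. */
int8_t q7_dot(const int8_t *a, const int8_t *b, int len, int shift)
{
    int32_t acc = 0;
    for (int i = 0; i < len; i++)
        acc += (int32_t)a[i] * b[i];   /* wide accumulate */

    acc >>= shift;                      /* requantize */

    if (acc > 127)  acc = 127;          /* saturate high */
    if (acc < -128) acc = -128;         /* saturate low */
    return (int8_t)acc;
}
```

Libraries like CMSIS-NN implement the same pattern with SIMD instructions on Cortex-M cores, which is what makes local inference on a microcontroller practical.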
A great example is the OpenMV, a camera module based on the STM32 that provides local processing for capabilities such as:
Face detection
Eye detection
Color tracking
Video recording
Etc
Machine vision is a leading intelligence capability that many real-time embedded systems will require.
Tomi Engdahl says:
Notes on Build Hardening
https://blog.erratasec.com/2018/12/notes-on-build-hardening.html
What is build safety?
Modern languages (Java, C#, Go, Rust, JavaScript, Python, etc.) are inherently “safe”, meaning they don’t have “buffer-overflows” or related problems.
However, C/C++ is “unsafe”, and is the most popular language for building stuff that interacts with the network. In other cases, while the language itself may be safe, it’ll use underlying infrastructure (“libraries”) written in C/C++. When we talk about hardening builds, making them safe or secure, we are talking about C/C++.
In the last two decades, we’ve improved both hardware and operating-systems around C/C++ in order to impose safety on it from the outside. We do this with options when the software is built (compiled and linked), and then when the software is run.
That’s what the paper below looks at: how consumer devices are built using these options, thereby measuring the security of these devices.
In particular, we are talking about the Linux operating system here and the GNU compiler gcc. Consumer products almost always use Linux these days
How software is built
Software is first compiled then linked. Compiling means translating the human-readable source code into machine code. Linking means combining multiple compiled files into a single executable.
Build Safety of Software in 28 Popular Home Routers
https://cyber-itl.org/assets/papers/2018/build_safety_of_software_in_28_popular_home_routers.pdf
For many, wireless access points and home routers are just commodity appliances. However, an insecure access point or router can be as damaging to a user’s security and privacy as an insecure web browser or document suite.
Tomi Engdahl says:
Implementing an IoT Edge Device While Minimizing NRE
https://www.mentor.com/tannereda/resources/overview/implementing-an-iot-edge-device-while-minimizing-nre-486621a5-f399-4641-8e3a-5a92a7ec9258?uuid=486621a5-f399-4641-8e3a-5a92a7ec9258&clp=1&contactid=1&PC=L&c=2019_01_29_ic_tanner_iotedge_device_nre_w
One designer in a garage, a small startup, small to mid-size companies, and even small groups within large companies with a “startup attitude” are designing IoT edge devices. These designers need to keep non-recurring expense (NRE) down by using affordable design tools that are easy to use to quickly produce results and by minimizing IP and fabrication costs. Their goal is to deliver a functioning device to their stakeholders while spending as little money as possible to get there.
Tomi Engdahl says:
Shifting the Burden of Tool Safety Compliance from Users to Vendor
https://semiengineering.com/shifting-the-burden-of-tool-safety-compliance-from-users-to-vendor/
An overview of the safety-compliance strategies used for tool classification and qualification in safety-critical hardware projects, focusing on ISO 26262, IEC 61508, and EN 50128.
Tomi Engdahl says:
Smoothsort Demystified
http://www.keithschwarz.com/smoothsort/
A few years ago I heard about an interesting sorting algorithm (invented by the legendary Edsger Dijkstra) called smoothsort with great memory and runtime guarantees.
Tomi Engdahl says:
GitHub Helps Developers Keep Dependencies Secure via Dependabot
https://www.securityweek.com/github-helps-developers-keep-dependencies-secure-dependabot
Microsoft-owned GitHub informed developers on Thursday that they can easily ensure that the dependencies used by their applications are always secure and up to date through an integration of its Security Advisory API with Dependabot.
Created by London-based developer Grey Baker, Dependabot is a management tool that helps GitHub users keep their dependencies up to date. The tool checks a user’s dependency files every day and creates pull requests when an update is available. Users can manually review and merge the requests, or they can configure Dependabot to merge automatically based on certain criteria.
https://github.com/marketplace/dependabot
Tomi Engdahl says:
Using Memory Differently
https://semiengineering.com/using-memory-differently/
Optimizing complex chips requires decisions about overall system architecture, and memory is a key variable.
What’s changed
But with so much memory on the chip—in some cases it accounts for half the area of large SoCs—little changes can have a big impact. This is particularly true in AI applications, where small memories are often scattered around the chips with highly specific processing elements in order to process massive amounts of data more quickly. The challenge now is to minimize the movement of that data.
“It has been through the emergence of the AI market and AI architectures that the idea of near-memory computing or in-memory computing has found its revival,” Macián said. “Once the idea was back on the table, people started looking at ways to use that same concept for things like RISC-V processors. Obviously, all processors have memories, and it has long been very important to select the right memories to obtain a good implementation of the processor in any die. We now have processor developers asking for memory that would perform some operation on the addresses before actually retrieving the data from the memory proper in order to simplify the circuitry around the memory. We are talking here about doing huge amounts of computation in the memory or near the memory, but with key elements, key operations that simplify greatly the logic around it.”
What differentiates AI chips from other architectures is a relatively simple main computing element, the multiply accumulate (MAC) block. This block is parallelized and repeated thousands of times in an ASIC. Because of this replication, any small improvement made in area or power, for example, has a huge overall effect on the chip, Macián explained.
Tomi Engdahl says:
Data Protection Laws Will Change How Electronics Systems are Designed
https://www.eeweb.com/profile/loucovey/articles/data-protection-laws-will-change-how-electronics-systems-are-designed
The handwriting is on the wall about what data breaches will cost in the next decade; it’s time for the hardware industry to get serious about dealing with this issue
The advent of 5G cellular service is upon us (see “The 5G Future Begins Now!”). This is great news for the chip and electronic system industries and — possibly — outstanding news for the digital security industry.
The potential for designing secure mobile services is tremendous. Much of the most effective technology is available today. The question is whether providers are willing to make the additional expense. So far, the answer is “No!” Communications providers are loath to make their services secure because doing so hasn’t been cheap, and it is much easier to pass the blame onto others — specifically, the users for their lax security. That changed in May of last year with the launch of the General Data Protection Regulation (GDPR) in the European Union, and things will become more expensive in California when the California Consumer Protection Act (CCPA) takes effect in 2020. Both laws are targeted at the usual suspects in the data collection domain — like social media and online retailers — so the hardware and software industries are not really thinking about it. They should be.
Tomi Engdahl says:
Updating your safety critical product – a nightmare waiting to happen?
https://www.mentor.com/embedded-software/resources/overview/updating-your-safety-critical-product-a-nightmare-waiting-to-happen–662fa66b-718a-4b79-b5b5-2a8633c76a28?uuid=662fa66b-718a-4b79-b5b5-2a8633c76a28&clp=1&contactid=1&PC=L&c=2019_02_14_esd_updating_safety_product_wp
Almost all modern products include embedded software. Many of these products are targeted at safety critical applications, such as automotive, aerospace, and medical. The ability to update the embedded software in such products after shipment has significantly extended product life expectations. This in turn places increased requirements for long term software maintenance on the manufacturer.
Tomi Engdahl says:
https://semiengineering.com/week-in-review-design-low-power-29/
AdaCore says it is working with NVIDIA to implement Ada and Spark programming languages in some of NVIDIA’s security-critical firmware used in safety- and security-critical applications, such as automated and autonomous driving. NVIDIA is migrating some of its SoCs to the open-source RISC-V instruction set architecture and rewriting some of its security firmware from C to Ada and Spark.
ANSYS announced its ANSYS Cloud to give its customers on-demand simulation and high-performance computing. Engineers can get high-fidelity simulation results and save time when evaluating design variations by using ANSYS’s Cloud. “ANSYS Cloud puts the power of on-demand hardware and software delivered from the cloud in the hands of ANSYS customers to tackle their largest simulation models and provide unprecedented insights into product designs,”
Tomi Engdahl says:
The Biggest Embedded Software Issue Is …
Too many developers are writing software code without considering what could go wrong.
https://www.designnews.com/electronics-test/biggest-embedded-software-issue/4835890360082?ADTRK=UBM&elq_mid=7413&elq_cid=876648
There are many different problems and challenges that embedded software developers are facing today. One of the biggest and least spoken about issues that I have encountered is that developers are writing their software for success. Writing for success sounds great, except that what I mean is that developers are writing their software assuming that nothing will ever go wrong! What they are writing is functional prototype code that executes in a controlled, lab environment without issues. Don’t believe me? Let’s look at a publicly available example that I’ve recently encountered before discussing the failure mindset developers should be adopting.
Tomi Engdahl says:
Warnings Are Your Friend – A Code Quality Primer
https://hackaday.com/2018/11/06/warnings-are-your-friend-a-code-quality-primer/
Tomi Engdahl says:
Software: It Is All In The Details
https://hackaday.com/2019/01/06/software-it-is-all-in-the-details/
Who’s the better programmer? The guy that knows 10 different languages, or someone who knows just one? It depends. Programming is akin to math, or perhaps it is that we treat some topics differently than others which leads to misconceptions about what makes a good programmer, mathematician, or engineer. We submit that to be a great programmer is less about the languages you know and more about the algorithms and data structures you understand.
If you know how to solve the problem, mapping it to a particular computer language should be almost an afterthought. While there are many places that you can learn those things, there is a lot more focus on how to write the languages, C++ or Java or Python or whatever
Tomi Engdahl says:
Warnings On Steroids – Static Code Analysis Tools
https://hackaday.com/2018/12/12/warnings-on-steroids-static-code-analysis-tools/
Tomi Engdahl says:
https://hackaday.com/2018/12/17/a-deep-dive-into-low-power-wifi-microcontrollers/
Tomi Engdahl says:
A subset of the Scheme dialect of Lisp running on CircuitPython. How meta.
https://learn.adafruit.com/scheme-in-circuitpython?view=all
Tomi Engdahl says:
There is very little good code (“Hyvää koodia on todella vähän”)
http://www.etn.fi/index.php/13-news/9149-hyvaa-koodia-on-todella-vahan
Tomi Engdahl says:
Can Debug Be Tamed?
Can machine learning bring debug back under control?
https://semiengineering.com/bigger-debug-challenges-ahead/
Debug consumes more time than any other aspect of the chip design and verification process, and it adds uncertainty and risk to semiconductor development because there are always lingering questions about whether enough bugs were caught in the allotted amount of time.
Recent figures suggest that the problem is getting worse, too, as complexity and demand for reliability continue to rise. The big question now is whether new tool developments and different approaches can stem the undesirable trajectory of debug cost.
Time spent in debug has been tracked by Mentor, a Siemens company, and the Wilson Research Group over several years. The latest numbers, shown in Figure 1, reveal that the largest amount of an IC and ASIC verification engineer’s time—44%—is spent in debugging.
Debug
The removal of bugs from a design
https://semiengineering.com/knowledge_centers/eda-design/verification/debug/
Tomi Engdahl says:
The Problem With Post-Silicon Debug
https://semiengineering.com/the-problem-with-post-silicon-debug/
Rising costs, tighter market windows and more heterogeneous designs are forcing chipmakers to rethink fundamental design approaches.
Tomi Engdahl says:
Open source software breaches surge in the past 12 months
A simple lack of time is blamed for a lack of security governance in open-source projects.
https://www.zdnet.com/article/open-source-software-breaches-surge-in-the-past-12-months/
Security breaches related to open-source security projects are on the rise and a lack of time being made available to developers to resolve vulnerabilities is believed to be to blame.
According to Sonatype’s DevSecOps Community Survey, in which over 5,500 IT professionals were asked to give their opinion on today’s open-source projects and the community’s security stance, open-source breaches have increased by 71 percent over the last five years.
Tomi Engdahl says:
A Brief History of Formal Verification
https://www.eeweb.com/profile/adarbari/articles/a-brief-history-of-formal-verification
As conventional simulation-based testing has increasingly struggled to cope with design complexity, strategies centered around formal verification have quietly evolved
Tomi Engdahl says:
Boeing’s B737 Max and Automotive ‘Autopilot’
https://www.eetimes.com/author.asp?section_id=36&doc_id=1334444
Why the catastrophic plane crashes of Indonesia’s Lion Air last October and another by Ethiopian Airlines last week should be setting off alarms in the automotive industry.
Should the catastrophic plane crashes of Indonesia’s Lion Air last October and another by Ethiopian Airlines last week set off alarms in the automotive industry?
Absolutely.
Tomi Engdahl says:
Debunking the Top Myths About Unsupported Linux for Embedded Development
https://www.eetimes.com/document.asp?doc_id=1334334
Embedded solution developers are often attracted to the “free” aspect of Linux and choose the RYO development route. However, an unsupported distribution might leave developers vulnerable to a wide range of hidden costs, risks, and time-consuming activities. This paper explores the top questions and myths about unsupported Linux for embedded applications.
Tomi Engdahl says:
MicroPython May Be Powering Your Next Embedded Device
MicroPython has announced that their pyboard D-series modules are now available.
https://www.designnews.com/electronics-test/micropython-may-be-powering-your-next-embedded-device/164173310860457?ADTRK=UBM&elq_mid=7879&elq_cid=876648
MicroPython has been an interesting project to watch over the last few years. If you’ve not heard of it, MicroPython is an open source project to port Python to run in a real-time, microcontroller-based environment. The ports are typically for ARM Cortex-M processors, but there are several ports that run on other architectures from Microchip Technology Inc. and other vendors. There are several advantages to using MicroPython over a traditional programming language like C, such as:
Easy to learn (I’ve seen elementary students write Python code)
It is object-oriented.
It is an interpreted scripting language, which removes the compilation step
Supported by a robust community including many add-on libraries which minimizes re-inventing the wheel
Includes error handling (something that C didn’t get the memo on)
Easily extensible
The Pyboard D-Series Module
As of this week, MicroPython has announced that their pyboard D-series modules are now available. These modules are particularly interesting because they provide a MicroPython-compatible microcontroller along with built-in Wi-Fi and Bluetooth that can be connected to a carrier board through a mezzanine connector, as shown below in Figure 1. Because the pyboard D-series is a module, it overcomes a challenge developers have faced when using MicroPython in a production environment: having to spin their own MicroPython-compatible boards.
The first is the standard model, which uses an STM32F722 microcontroller from STMicroelectronics to provide 256 KB of RAM and 512 KB of internal flash.
The second option is the pyboard D-series high-performance module. This module is based on the STM32F767, which provides 512 KB of RAM and 2 MB of internal flash for application scripts.
Tomi Engdahl says:
5 Techniques for Accelerating Engineering Development
These techniques could help you get to market faster while reducing costs.
https://www.designnews.com/electronics-test/5-techniques-accelerating-engineering-development/37355547060410?ADTRK=UBM&elq_mid=7879&elq_cid=876648
Whether it’s a parts company, a software supplier, a system integrator, or even a consultant, no one seems immune to the pressure to decrease costs and speed time to market while improving product quality.
Here are my top five techniques for accelerating engineering development. These five techniques are just a few examples of low-hanging fruit that companies and developers can consider when trying to accelerate engineering development.
1.) Master Your Defects
2.) Have the Right Tools for the Job
3.) Focus on Your Value; Outsource the Rest
4.) Leverage Existing Software Platforms
5.) Leverage Existing Hardware Platforms
Tomi Engdahl says:
http://www.etn.fi/index.php/13-news/9245-c-on-ylivoimaisesti-suosituin-kieli-sulautetuissa
Tomi Engdahl says:
The Joy Of Properly Designed Embedded Systems
https://hackaday.com/2019/03/20/the-joy-of-properly-designed-embedded-systems/
The ages-old dream of home automation has never been nearer to reality. Creating an Internet of Things device, or even a building-wide collection of networked embedded devices, is “easy” thanks to cheap building blocks like the ESP8266 WiFi-enabled microcontroller. Yet for any sizable project, it really helps to have a plan before getting started. Even more importantly, if your plan is subject to change as you go along, it is important to plan for flexibility. In practice, this means expansion headers and over-the-air (OTA) firmware upgrades are a must.
I’d like to illustrate this using a project I got involved in a few years ago, called BMaC, which grew in complexity and scope practically every month.
https://github.com/MayaPosch/BMaC
Tomi Engdahl says:
http://www.etn.fi/index.php/13-news/9254-linux-on-ykkonen-sulautetuissa
Tomi Engdahl says:
Accelerating Innovation with Embedded Motion Control
https://www.eeweb.com/profile/trinamic-cb/articles/accelerating-innovation-with-embedded-motion-control
Embedded motion controllers reduce multiple cycles of algorithm development, implementation, and testing to setting a few parameters and exporting the code to your own firmware
What do self-driving cars, advanced 3D printers, and the next generation of “smart” prosthetic limbs have in common? They are all beneficiaries of the emergence of embedded motion-control technologies (see also “Three Trends Driving Embedded Motion Control”). These systems, which pair application-specific motion-control silicon with open hardware and software platforms, are part of the Fourth Industrial Revolution, a trend that is accelerating the rate of innovation for robotics, industrial automation, and even consumer products that use mechatronic technology.
This new class of devices simplifies the development of mechatronic products by “encapsulating” most basic control functions as hardware logic or verified software building blocks that embedded developers can work with using the same rich toolsets and code libraries that they use for conventional applications. In addition to dramatically shortening development cycles, embedded motion controllers make it possible to add new capabilities to existing products while also facilitating the emergence of many new classes of products.
Three Trends Driving Embedded Motion Control
https://www.eeweb.com/profile/trinamic-cb/articles/three-trends-driving-embedded-motion-control
Battery/low-power operation
Living on the IoT
Bringing motion control to consumer and commercial applications
Conclusion
Embedded motion control technology’s lower cost, compact form factor, and versatility is reducing the cost of control automation and other traditional industrial applications. These advantages have also made it possible for embedded motion control technology to address new application spaces in the commercial and industrial sectors, where developers face new demands and requirements such as battery operation, IoT capability, and shorter development cycles.
Tomi Engdahl says:
IoT Is the Top Technology for Developers
https://www.designnews.com/design-hardware-software/iot-top-technology-developers/111730606260481?ADTRK=UBM&elq_mid=7905&elq_cid=876648
Of all the emerging technologies, product developers point to IoT as the most important, though security remains a challenge.
According to a survey of developers by Avnet, IoT is cited most often as the most improved and the most important technology. This is followed by sensor technology, which is an integral part of IoT. Avnet surveyed 1190 members of its Hackster.io and Element 14 communities to find out how they’re focusing their development efforts and what challenges they’ve faced over the past year.
Results from the survey include:
26 percent of developers agree that IoT was the most improved technology over the past year. IoT also tops the list of most important technologies (37 percent), followed by sensors (24 percent).
An overwhelming majority (81 percent) of developers working at startups say IoT security is a major roadblock when launching new products and services.
One in 3 developers have recently looked for partners to help bring products to market. When looking for a partner, 76 percent of developers prefer the flexibility of choosing specialized expertise.
IoT technology was cited by respondents as showing the most significant growth in importance, up 14 percent from last year, followed closely by drones and robotics projects, which were up 8 percent. More than a quarter of developers (26 percent) also noted that IoT is the most improved technology of the past year, followed closely by artificial intelligence at 25 percent.
Tomi Engdahl says:
The 3 least secure programming languages
https://www.techrepublic.com/article/the-3-least-secure-programming-languages/
Here’s how the seven most widely-used coding languages stack up when it comes to the total open source security vulnerabilities per language, according to the report:
C (47%)
PHP (17%)
Java (11%)
JavaScript (10%)
Python (5%)
C++ (5%)
Ruby (4%)
C has the highest number of vulnerabilities out of these seven languages, accounting for nearly 50% of all reported vulnerabilities over the last 10 years, according to the report.
Tomi Engdahl says:
How to Write a Secure Code in C/C++ Programming Languages
https://pentestmag.com/write-secure-code-cc-programming-languages/
Secure coding in C/C++ programming languages is a big deal. The two languages, which are commonly used in a multitude of applications and operating systems, are popular, flexible, and versatile. However, these languages are inherently vulnerable to exploitation.
Sometimes the solution is to code in a safer language like Java, which has bounds-checked arrays and automatic garbage collection. However, this is not always the best alternative, particularly if top performance is required or if C or C++ is preferred for working with legacy code.
Therefore, to avoid shooting yourself in the foot, it is important to learn how to create a bulletproof, un-exploitable code in the C/C++ programming languages.
Tomi Engdahl says:
Is One Programming Language More Secure Than The Rest?
https://resources.whitesourcesoftware.com/blog-whitesource/is-one-language-more-secure
MARCH 19, 2019
AYALA GOLDSTEIN
Want to liven up an open space full of software developers? Ask them what the best programming language is, and why. I think we all know that there is a high chance that lively debate will end with tears, rage, and broken friendships. Coders tend to take their programming languages very personally and in their battle to prove the dominance of their favorite language, the security card is often brought up.
Feeling the right mixture of brave and curious, we decided to address the debate over which programming language is the most secure head-on, and being as we’re in the business of open source security, we decided to write our latest WhiteSource report about how some of the top programming languages measure up when it comes to their security.
ANNUAL REPORT: THE STATE OF OPEN SOURCE VULNERABILITIES
Download Full Report
We dug through our open source vulnerabilities database, which aggregates information on open source vulnerabilities from multiple sources like the National Vulnerability Database (NVD), security advisories, GitHub, and other popular open source projects’ issue trackers, to see if we could clearly crown one of the seven popular programming languages as the most secure.
Searching For The Most Secure Programming Language
First, we needed to decide which languages to take a closer look at. We managed to get through that potentially explosive debate by choosing to focus our attention on some of the most popular languages in use in the open source community over the past few years: C, Java, JavaScript, Python, Ruby, PHP, and C++.
We scoured our database to see the number of known open source security vulnerabilities in each language over the past ten years, as well as the breakdown of these vulnerabilities’ severity over time. In addition, we checked to see which CWEs are most common for each language.
Who Has The Most Vulnerabilities of Them All?
When looking at the total of reported open source vulnerabilities for each of the seven languages over the past 10 years, C took the top spot with nearly 50% of all of the reported vulnerabilities.
This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like OpenSSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.
[Figure: Total reported open source vulnerabilities per language]
Open Source Security Vulnerabilities Per Language Over Time
We then looked at the number of open source vulnerabilities over time. The data showed that every programming language had its own security highs and lows over the past ten years. However, there was one trend that stood out across all of the languages, and that’s the substantial rise in the number of known open source security vulnerabilities across all languages over the past two years. This rise can be explained by the rise in awareness of known security vulnerabilities in open source components, along with the continuously growing popularity of open source. As more resources have been invested in open source security research, the number of issues discovered has increased. The use of automated tools and the growing investment in bug bounty programs have further contributed to the sharp rise in the amount of disclosed open source security vulnerabilities.
[Figure: Open Source Vulnerabilities Over Time, per Language]
Let’s Get Critical: High Severity Open Source Vulnerabilities Over Time
When we took a closer look and focused on high severity open source security vulnerabilities (scores above 7 according to CVSS v2), we saw that the percentage of critical vulnerabilities is declining in most of the languages covered in the report, except for JavaScript and PHP.
The decrease in the percentage of critical vulnerabilities could be a result of the concerted effort from security researchers to use automated tools to discover vulnerabilities in open source components. These tools are usually less capable of finding more complex and critical issues. While many of these tools are doing a good job of discovering vulnerabilities, many of the issues are not critical, and so we see a rise in the number of mostly medium vulnerabilities over the past few years in most of the programming languages that we studied.
[Figure: High Severity Open Source Security Vulnerabilities Over Time, per Language]
What’s the CWE?: Most Common CWEs per Programming Language
In order to learn as much as possible about each programming language’s security strong and weak points, we also looked at the most common CWEs per language. Two CWEs reigned supreme, featuring among the three most common CWEs in 70% of the languages: Cross-Site-Scripting (XSS), also known as CWE-79, and Input Validation, otherwise known as CWE-20.
Nearly all languages also have quite a few top-ten CWEs in common. In addition to XSS and Input Validation, other CWEs that were prominent across most languages are Information Leak/Disclosure (CWE-200), Path Traversal (CWE-22), and Permissions, Privileges, and Access Control (CWE-264), which is replaced in more recent years by its more specific close relative, Improper Access Control (CWE-284).
And the Winner Of Most Secure Programming Language Is…No One and Everyone!
While the game of “my programming language is safer than yours” is certainly a fun way to pass time, and the debate over the most secure programming language will often include some interesting points, finding the answer will probably not help you create the most innovative or secure software out there. Nor will it help you enjoy a secure and agile delivery of said innovation.
Tomi Engdahl says:
Scott Shawcroft Is Squeezing Python Into Microcontrollers
https://spectrum.ieee.org/at-work/tech-careers/scott-shawcroft-is-squeezing-python-into-microcontrollers
As Python’s domination of the desktop and the cloud continues, two camps—MicroPython and CircuitPython—are working on hardware-centered versions of the interpreted language for embedded projects such as microcontroller-based gadgets.
Tomi Engdahl says:
Linus Torvalds “Nothing better than C”
https://m.youtube.com/watch?v=CYvJPra7Ebk
Tomi Engdahl says:
Design for Reliability: How Stress Simulations Can Help
https://www.eetimes.com/author.asp?section_id=36&doc_id=1334463
A method for simulating stress and faults early in the design phase to aid in the “Design for Reliability” of a product.
Reliability of a product is the probability that it will perform its intended functions without failure, under stated conditions, for a stated period of time. Alongside factors such as efficiency and ease of use, this number plays a pivotal role in the success of the product. A product with poor reliability not only degrades the customer’s experience but also damages the company’s reputation, in addition to increasing service and warranty costs.
This blog introduces a method of simulating stress and faults early in the design phase to aid in the “Design for Reliability” of a product. This can be achieved by using an industry-proven multi-domain system design simulation software.
The failure rate curve comprises three parts:
1. Decreasing failure rate early in the life
2. Constant failure rate during mid-life
3. Increasing failure rate at the end of life
Stress analysis
Stress on a component is a measure of the ratio of the actual quantity (voltage/current/power/temperature) applied on it when it is placed in an electrical circuit to its maximum rating. For instance, if the power rating of a resistor is 0.25 watt and the actual power dissipation in it when connected in an electrical circuit is 0.20 watt, the stress on the resistor is 80%. This value is normally estimated based on the experience of the electric circuit and calculations, if available.