The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.
Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in a cloud service – and at the same time you need to worry about communication issues like broken connections, latency, and security.
These issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.
Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, electronics in modern cars, and Internet of Things systems fall into this category. This implies that a different approach is necessary.
Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards that woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products become moot. Leaders in embedded systems development lifecycle management solutions speak out on new approaches available today for developing advanced products and systems.
Traditional approaches
C/C++
Traditionally, embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Very many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still chosen over C++ today when maximum efficiency matters.
C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks, except for its complexity. For years there had been a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which could be a concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.
And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors – the Arduino system is a good example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language” and your “.ino” files are three lines away from being standard C++.
Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example, an embedded Linux system that runs a C++ application. When you write a C or C++ program for modern embedded Linux, you typically use the GCC compiler toolchain to do the compilation and a makefile to manage the compilation process.
Most organizations put considerable focus on software quality, but software security is different. While security is a much-discussed topic in today's embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other language.
Java
Java is a general-purpose computer programming language that is concurrent, class-based, and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VoIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.
Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.
Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, and so on). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).
My feeling about Java is that if your embedded platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.
Increasing trends
Databases
Embedded databases are becoming more and more common in embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with an application software that requires access to stored data, such that the database system is “hidden” from the application's end user and requires little or no ongoing maintenance.
There are many possible databases. The first choice is what kind of database you need. The main choices are SQL databases and simpler key/value stores (also called NoSQL databases).
SQLite is the database chosen by virtually all mobile operating systems. For example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, for example, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows).
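As a small sketch of what using SQLite looks like, here is Python's built-in sqlite3 module (which embeds the same SQLite engine) storing and querying hypothetical sensor readings. The table and column names are made up for illustration; on a real device the database would live as a file on flash storage rather than in memory.

```python
import sqlite3

# Open a database; ":memory:" keeps the illustration self-contained,
# but on an embedded device this would be a file on flash storage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES (?, ?)", ("temp1", 21.5))
conn.execute("INSERT INTO readings VALUES (?, ?)", ("temp1", 22.0))
conn.commit()

# Standard SQL works just as on a full relational database server.
(avg_value,) = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = ?", ("temp1",)
).fetchone()
print(avg_value)  # 21.75
conn.close()
```

The same SQL statements would work unchanged through SQLite's C API or PHP bindings, which is part of why SQLite is such a safe default choice.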
If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
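The key/value model itself is simple enough to sketch in a few lines. Berkeley DB is a C library, but Python's built-in dbm module follows the same idea (byte-string keys mapped to byte-string values, no schema, no SQL layer), so it serves here as a stand-in; the configuration keys below are hypothetical.

```python
import dbm
import os
import tempfile

# A key/value store maps byte-string keys to byte-string values:
# no schema and no SQL layer in between.
path = os.path.join(tempfile.mkdtemp(), "settings.db")

with dbm.open(path, "c") as db:          # "c" = create if missing
    db[b"wifi_ssid"] = b"factory-floor"  # hypothetical config keys
    db[b"log_level"] = b"warning"

# Reopen read-only, as a separate process on the device might.
with dbm.open(path, "r") as db:
    level = db[b"log_level"].decode()

print(level)  # warning
```

Lookups like this are a single hash or B-tree probe, which is why key/value stores can outperform a relational engine when you never need joins or ad hoc queries.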
RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.
Software quality
Building quality into embedded software doesn't happen by accident. Quality must be built in from the beginning. The article Software startup checklist gives quality a head start is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.
Safety
Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches for safety. Embedded Software Can Kill But Are We Designing Safely?
IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not, and 11 percent did not even know if they were designing to a standard or not.
If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious and, quite frankly, I find this disturbing.
Security
The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues such as broken connections, latency, and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.
The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:
MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY
Some techniques for building security into embedded systems:
Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
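One item from the list above, boot-time code authentication, can be illustrated in miniature: before running a firmware image, compare its cryptographic digest against a value recorded at build time. This is only the integrity-check half of the idea – a real chain of trust uses public-key signatures so the expected value itself cannot be forged – and the firmware blob below is purely hypothetical.

```python
import hashlib

def image_digest(image: bytes) -> str:
    """Return the SHA-256 digest of a firmware image as a hex string."""
    return hashlib.sha256(image).hexdigest()

def verify_image(image: bytes, expected: str) -> bool:
    """At boot, refuse to run an image whose digest does not match."""
    return image_digest(image) == expected

# Hypothetical firmware blob and the digest recorded at build time.
firmware = b"\x7fELF...application image..."
trusted_digest = image_digest(firmware)

print(verify_image(firmware, trusted_digest))                # True
print(verify_image(firmware + b"tampered", trusted_digest))  # False
```

Even a one-bit change to the image flips the digest completely, which is what makes this kind of check useful as one link in a chain of trust.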
Flood of new languages
Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.). So there is a huge push to use interpreted and scripting languages in embedded systems as well. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example, the web user interface). Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. There are also many other relevant languages.
One workable solution, especially for embedded Linux systems, is to implement part of the functionality in C and the rest with scripting languages. This makes it possible to change behavior simply by editing script files, without rebuilding the whole system software. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
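To give a feel for the kind of task that study measured, here is string manipulation plus dictionary search in a few lines of Python – tallying words from a hypothetical log line and looking them up. The equivalent C would need manual tokenizing, a hash table implementation, and memory management.

```python
# Count word frequencies in a line of text and look them up:
# the kind of string-and-dictionary task where scripting languages
# were measured as more productive than C or Java.
def word_counts(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

line = "error timeout error retry ok"
counts = word_counts(line)
print(counts["error"])         # 2
print(counts.get("panic", 0))  # 0
```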
Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example, industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages are suitable even for quite small embedded environments.
I have successfully used Bash, AWK, PHP, Python, and Lua scripting languages with embedded systems, among others. They work really well and make it really easy to write custom code quickly. They don't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used. High-level, dynamically typed languages, such as Python, Ruby, and JavaScript, are easy – and even fun – to use, and they lend themselves to code that can easily be reused and maintained.
There are some things that need to be considered when using scripting languages. Sometimes the lack of static checking, compared to a regular compiler, can cause problems to be thrown at run time. But you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.
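A tiny sketch of the “strong testing over strong typing” point: the hypothetical function below accepts any argument types, so a bad call fails only at run time – but a small assertion suite run on every build surfaces the mistake just as a static type checker would.

```python
# A dynamically typed function: nothing stops a caller from passing the
# wrong type, so the mistake surfaces only at run time.
def scale_reading(raw, factor):
    return raw * factor

# "Strong testing": small assertions run on every build catch the same
# mistakes a static type checker would, and more.
assert scale_reading(10, 2.5) == 25.0

caught = False
try:
    scale_reading("10", 2.5)   # wrong type slips past the parser...
except TypeError:
    caught = True              # ...but a test run catches it before shipping
assert caught
```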
Bash and other shell scripting
Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is: you don't. The real power of the computer is its ability to do the work for you – and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command-line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.), so your scripts can be truly powerful. Creating a script is extremely straightforward: it can be done in a separate graphical editor, or through a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example, starting and stopping different Linux services in the right way).
The most common type of shell script is a Bash script. Bash is a commonly used scripting language for shell scripts. In Bash scripts, users can use more than just Bash to write the script: there are commands that allow users to embed other scripting languages into a Bash script.
There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android, and FreeBSD. BusyBox has become the de facto standard core user-space toolset for embedded Linux devices and Linux distribution installers.
Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.
Lua
Lua is a lightweight cross-platform multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.
There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open-source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.
PHP
PHP is a server-side HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites but can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for what they are good at – building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.
Python
Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.
Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.
Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN, and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.
Python was designed to be highly extensible. Libraries like NumPy, SciPy, and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
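The interface mod_wsgi expects from hosted Python code is small enough to show whole: a WSGI application is just a callable taking an environment dictionary and a start_response function. The status-page payload below is hypothetical, and the callable is driven by hand here the way a WSGI server would drive it, which also makes it easy to test without Apache.

```python
# A minimal WSGI application: the interface mod_wsgi (and any other
# WSGI server) expects from the Python code it hosts.
def application(environ, start_response):
    status = "200 OK"
    body = b"device status: ok"  # hypothetical status-page payload
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]

# Drive it by hand the way a WSGI server would:
collected = {}
def start_response(status, headers):
    collected["status"] = status
    collected["headers"] = headers

result = b"".join(application({"REQUEST_METHOD": "GET"}, start_response))
print(collected["status"])  # 200 OK
print(result.decode())      # device status: ok
```

Because the application is a plain callable, the same code runs unchanged under Apache with mod_wsgi, under a standalone WSGI server, or in a unit test.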
Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.
For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python, and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level, open-sourced language: Python.
Python on embedded platforms? It is quick to develop apps and quick to debug – really easy to make custom code quickly. Sometimes the lack of static checking, compared to a regular compiler, can cause problems to be thrown at run time. To avoid those, try to have 100% test coverage. pychecker is also a very useful tool which will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.
JavaScript and node.js
JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications and cross-platform desktop software, and thanks to Node.js it's even capable of creating and running servers and databases! There is a huge community of developers.
Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.
Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.
It doesn’t require a complicated IDE; all you really need is a terminal.
JavaScript is a high-level language. While this usually means that it's more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. That slowness means it may not be suitable for situations where timing and speed are critical.
JavaScript is already on embedded boards. You can run JavaScript on Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript. The Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager. The Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!
JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We see, for example, a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open-source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.
JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!
Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event-driven, and has very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available in Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software's function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.
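The URL-to-function mapping described above is really just a dispatch table. For consistency with the other examples in this post it is sketched here in Python rather than with Node's HTTP module, and stripped of the network layer entirely; the paths and sensor values are hypothetical.

```python
# Mapping GET request paths to handler functions: the routing pattern
# described above, reduced to a plain dispatch table so it can run
# without a network stack.
def get_temperature():
    return "21.5"  # hypothetical sensor read

def get_status():
    return "ok"

routes = {
    "/temperature": get_temperature,
    "/status": get_status,
}

def handle_get(path):
    handler = routes.get(path)
    if handler is None:
        return "404 not found"
    return handler()

print(handle_get("/status"))   # ok
print(handle_get("/missing"))  # 404 not found
```

In Node.js the HTTP module supplies the server loop and you attach exactly this kind of table inside the request callback.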
Future trends
According to the New approaches to dominate in embedded development article, there will be several camps of embedded development in the future:
One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require the fine tuning. These are most likely to be high-performance, low-volume systems or else fixed-function, high-volume systems where cost is everything.
Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.
A third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.
The transition will take time. The environment is different from the computer and mobile worlds. There are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.
Sources
Most important information sources:
New approaches to dominate in embedded development
A New Approach for Distributed Computing in Embedded Systems
New Approaches to Systems Engineering and Embedded Software Development
Embracing Java for the Internet of Things
Embedded Linux – Shell Scripting 101
Embedded Linux – Shell Scripting 102
Embedding Other Languages in BASH Scripts
PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog
JavaScript: The Perfect Language for the Internet of Things (IoT)
Anyone using Python for embedded projects?
MICROCONTROLLERS AND NODE.JS, NATURALLY
Node.JS Appliances on Embedded Linux Devices
The smartest way to program smart things: Node.js
Embedded Software Can Kill But Are We Designing Safely?
DEVELOPING SECURE EMBEDDED SOFTWARE
1,687 Comments
Tomi Engdahl says:
Building Functional Safety and Security into Modern IIoT Enterprises and Ecosystems
https://www.mentor.com/embedded-software/resources/overview/building-functional-safety-and-security-into-modern-iiot-enterprises-and-ecosystems-69c7beee-1358-410c-8d03-a0fa2b2ae1a8
There is no question that safety and security cannot be emphasized enough in today’s world of Industrial IoT (IIoT), but you can build safe and secure RTOS, Linux®, and Android environments. IEC 61508, IEC 62062, ISO 13849, IEC 61511, and ISO 10218 are all safety standards in place today to maximize safety and minimize risk in industrial devices with embedded software. These standards help define a systematic approach to safety management with the incorporation of safety thought-processes in the product development process.
Tomi Engdahl says:
7 Reasons Open Source Software Should Be Avoided
https://www.designnews.com/content/7-reasons-open-source-software-should-be-avoided/100858102757881?ADTRK=UBM&elq_mid=2234&elq_cid=876648
As much potential as open source software can provide, there are several reasons why embedded software developers should avoid it like the plague.
Reason #1 – Lacks a traceable software development life cycle
Open source software usually starts with an ingenious developer working out of their garage or basement who creates something very functional and useful. Eventually multiple developers with spare time on their hands get involved. The software evolves, but it doesn’t really follow a traceable design cycle or even follow best practices.
Reason #2 – Designed for functionality not robustness
Open source software is often written functionally: access and write to an SD card; communicate over USB. The issue here is that while it functions, the code is generally not robust and expects that a wrench will never be thrown into the gears.
Reason #3 – Accidentally exposing confidential intellectual property
Developers often think that all open source software is free and comes with no hooks attached. The problem is that this isn’t the case. There are several different licensing schemes that open source software developers use. Some really do give away the farm; however, there are also licenses that require any modifications or even associated software to be released as open source.
Reason #4 – Lacking automated or manual tests
Yes, this one might be a stickler, since there are so many engineers and clients I know that don’t use automated tests. A formalized testing process, especially automated tests, is critical to ensuring that a code base is robust and has sufficient quality to meet its needs.
Reason #5 – Poor documentation or documentation that is lacking completely
Documentation has been getting better among open source projects that have been around for a long time or that have strong commercial backing. Smaller projects driven by individuals, though, tend to have little to no documentation.
Reason #6 – Real-time support is lacking
There are few things more frustrating than doing everything you can to get something to work or debugged and you just hit the wall. When this happens, the best way to resolve the issue is to get support. The problem with open source is that there is no guarantee that you will get the support you need in a timely manner to resolve any issues.
Reason #7 – Integration is never as easy as it seems
The website was found, the demonstration video was awesome. This is the component to use. Look at how easy it is! The source is downloaded and the integration begins. Months later, integration is still going on. What appeared easy quickly turned complex because the same platform or toolchain wasn’t being used. “Minor” modifications had to be made. The rabbit hole just keeps getting deeper but after this much time has been sunk into the integration, it cannot be for naught.
Conclusions
By no means am I against open source software. It’s been extremely helpful and beneficial in certain circumstances. It’s important though not to just use software because it’s free and open source. Developers need to recognize their requirements, needs, and the robustness level that they require for their product and appropriately develop or source software that meets those needs rather than blindly selecting software because it’s “free.”
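Reason #2 above is easy to illustrate in C. The sketch below is hypothetical (the `write_fn` callback stands in for a real SD/USB driver API): functional code calls a driver once and hopes for the best, while robust code validates its inputs, checks every return value, and bounds its retries.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical driver write callback, standing in for an SD/USB API. */
typedef int (*write_fn)(const uint8_t *buf, size_t len);

/* Robust wrapper: validate inputs, check results, bound retries,
 * and fail loudly instead of silently. */
int robust_write(write_fn write, const uint8_t *buf, size_t len, int max_retries) {
    if (write == NULL || buf == NULL)       /* don't assume valid inputs */
        return -1;
    for (int attempt = 0; attempt <= max_retries; attempt++) {
        if (write(buf, len) == 0)
            return 0;                       /* success */
    }
    return -1;                              /* give up after bounded retries */
}

/* Example flaky driver for demonstration: fails twice, then succeeds. */
static int flaky_calls;
static int flaky_write(const uint8_t *buf, size_t len) {
    (void)buf; (void)len;
    return (++flaky_calls >= 3) ? 0 : -1;
}
```

The point is the checking discipline, not the retry count: purely functional code would call `write()` once, ignore the result, and carry on.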
Tomi Engdahl says:
EMBEDDED SYSTEMS
WHITEPAPER
http://www.mentor.com
PREVENTING ZERO-DAYS WITH SELinux:
How to Stay One Step Ahead of Malicious Software Attacks
http://s3.mentor.com/public_documents/whitepaper/resources/mentorpaper_102376.pdf
Tomi Engdahl says:
CPU or FPGA for image processing: Which is best?
http://www.vision-systems.com/articles/print/volume-22/issue-8/features/cpu-or-fpga-for-image-processing-which-is-best.html?cmpid=enl_vsd_vsd_newsletter_2017-12-07
As more vision systems that include the latest generations of multicore CPUs and powerful FPGAs reach the market, vision system designers need to understand the benefits and trade-offs of using these processing elements. They need to know not only the right algorithms to use on the right target but also the best architectures to serve as the foundations of their designs.
Inline vs. co-processing
Before investigating which types of algorithms are best suited for each processing element, you should understand which types of architectures are best suited for each application. When developing a vision system based on the heterogeneous architecture of a CPU and an FPGA, you need to consider two main use cases: inline and co-processing. With FPGA co-processing, the FPGA and CPU work together to share the processing load. This architecture is most commonly used with GigE Vision and USB3 Vision cameras because their acquisition logic is best implemented using a CPU. You acquire the image using the CPU and then send it to the FPGA via direct memory access (DMA) so the FPGA can perform operations such as filtering or color plane extraction. Then you can send the image back to the CPU for more advanced operations such as optical character recognition (OCR) or pattern matching. In some cases, you can implement all of the processing steps on the FPGA and send only the processing results back to the CPU. This allows the CPU to devote more resources to other operations such as motion control, network communication, and image display.
In an inline FPGA processing architecture, you connect the camera interface directly to the pins of the FPGA so the pixels are passed directly to the FPGA as you send them from the camera. This architecture is commonly used with Camera Link cameras because their acquisition logic is easily implemented using the digital circuitry on the FPGA. This architecture has two main benefits. First, just like with co-processing, you can use inline processing to move some of the work from the CPU to the FPGA by performing preprocessing functions on the FPGA. For example, you can use the FPGA for high-speed preprocessing functions such as filtering or thresholding before sending pixels to the CPU. This also reduces the amount of data that the CPU must process because it implements logic to only capture the pixels from regions of interest, which increases overall system throughput. The second benefit of this architecture is that it allows for high-speed control operations to occur directly within the FPGA without using the CPU. FPGAs are ideal for control applications because they can run extremely fast, highly deterministic loop rates. An example of this is high-speed sorting during which the FPGA sends pulses to an actuator that then ejects or sorts parts as they pass by.
CPU vs. FPGA vision algorithms
With a basic understanding of the different ways to architect heterogeneous vision systems, you can look at the best algorithms to run on the FPGA. First, you should understand how CPUs and FPGAs operate. To illustrate this concept, consider a theoretical algorithm that performs four different operations on an image and examine how each of these operations runs when implemented on a CPU and an FPGA.
CPUs perform operations in sequence, so the first operation must run on the entire image before the second one can start.
Figure 3: Since FPGAs are massively parallel in nature, they can offer significant performance improvements over CPUs.
If you execute this algorithm only on the CPU, it has to complete the convolution step on the entire image before the threshold step can begin and so on. This takes 166.7 ms when using the NI Vision Development Module for LabVIEW and the cRIO-9068 CompactRIO Controller based on a Xilinx Zynq-7020 All Programmable SoC. However, if you run this same algorithm on the FPGA, you can execute every step in parallel as each pixel completes the previous step.
Running the same algorithm on the FPGA takes only 8 ms to complete. Keep in mind that the 8 ms includes the DMA transfer time to send the image from the CPU to the FPGA, as well as time for the algorithm to complete. In some applications, you may need to send the processed image back to the CPU for use in other parts of the application. Factoring in time for that, this entire process takes only 8.5 ms. In total, the FPGA can execute this algorithm nearly 20 times faster than the CPU.
So why not run every algorithm on the FPGA? Though the FPGA has benefits for vision processing over CPUs, those benefits come with trade-offs. For example, consider the raw clock rates of a CPU versus an FPGA. FPGA clock rates are on the order of 100 MHz to 200 MHz. These rates are significantly lower than those of a CPU, which can easily run at 3 GHz or more. Therefore, if an application requires an image processing algorithm that must run iteratively and cannot take advantage of the parallelism of an FPGA, a CPU can process it faster. The previously discussed example algorithm sees a 20X improvement by running on the FPGA.
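The sequential constraint described above can be sketched in C. This is an illustrative toy pipeline (threshold then invert), not the article's actual convolution benchmark: on a CPU each stage must traverse the entire image buffer before the next stage can begin, whereas an FPGA pipeline would push each pixel through all stages as it arrives.

```c
#include <stdint.h>
#include <stddef.h>

/* Stage 1: binarize each pixel against a threshold. */
static void threshold(uint8_t *img, size_t n, uint8_t t) {
    for (size_t i = 0; i < n; i++)
        img[i] = (img[i] >= t) ? 255 : 0;
}

/* Stage 2: invert each pixel. */
static void invert(uint8_t *img, size_t n) {
    for (size_t i = 0; i < n; i++)
        img[i] = 255 - img[i];
}

/* On the CPU the stages run back-to-back over the full buffer:
 * pass 2 cannot start until pass 1 has visited every pixel.
 * On an FPGA, each arriving pixel would flow through both stages
 * in the same clock domain, pipelined in parallel. */
void cpu_pipeline(uint8_t *img, size_t n, uint8_t t) {
    threshold(img, n, t);   /* full pass 1 */
    invert(img, n);         /* full pass 2 */
}
```

With a real multi-stage algorithm and megapixel images, these full-buffer passes are exactly where the CPU's 166.7 ms goes while the FPGA finishes in about 8 ms.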
Overcoming programming complexity
The advantages of an FPGA for image processing depend on each use case, including the specific algorithms applied, latency or jitter requirements, I/O synchronization, and power utilization.
Often using an architecture featuring both an FPGA and a CPU presents the best of both worlds and provides a competitive advantage in terms of performance, cost, and reliability. Unfortunately, one of the biggest challenges to implementing an FPGA-based vision system is overcoming the programming complexity of FPGAs. Vision algorithm development is, by its very nature, an iterative process.
Tomi Engdahl says:
With the adoption of the Industrial IoT comes a wave of new connected devices, many of which are small endpoint devices running real-time operating systems.
System engineers face greater challenges today when developing IIoT-capable, network-connected embedded devices. Besides the usual issues, they must deal with security issues, encryption standards, networking protocols and new technologies.
Source: https://www.techonline.com/electrical-engineers/education-training/tech-papers/4459166/Device-Security-for-the-Industrial-Internet-of-Things
Tomi Engdahl says:
Meeting ISO 26262 Software Standards
https://www.synopsys.com/software-integrity/resources/white-papers/ISO26262-guidelines.html?cmp=em-sig-eloqua&utm_medium=email&utm_source=eloqua&elq_mid=333&elq_cid=166673
The average car today contains up to 100 million lines of code.
Software controls everything from safety critical systems like brakes and power steering, to basic vehicle controls like doors and windows. Yet the average car today may have up to 150,000 bugs, many of which could damage the brand, hurt customer satisfaction and, in the most extreme case, lead to a catastrophic failure. Software development testing is designed to help developers, management and the business easily find and fix quality and security problems early in the software development lifecycle, as the code is being written, without impacting time-to-market, cost or customer satisfaction.
Tomi Engdahl says:
Creating Software Separation for Mixed Criticality Systems
https://www.mentor.com/embedded-software/resources/overview/creating-software-separation-for-mixed-criticality-systems-063e8993-cdc5-4414-a960-fc62db40c18c?contactid=1&PC=L&c=2017_12_19_esd_newsletter_update_v12_december
The continued evolution of powerful embedded processors is enabling more functionality to be consolidated into single heterogeneous multicore devices. Mixed criticality designs, those designs which contain both safety-critical and non-safety-critical processes, can successfully leverage these devices and meet the regulatory requirements of IEC and ISO safety standards.
Tomi Engdahl says:
Building Functional Safety and Security into Medical IoT Devices: IEC 62304 Conformance
https://www.mentor.com/embedded-software/resources/overview/building-functional-safety-and-security-into-medical-iot-devices-iec-62304-conformance-f433c2ce-d04e-4834-b62e-9d352af6a4c5?contactid=1&PC=L&c=2017_12_19_esd_newsletter_update_v12_december
Tomi Engdahl says:
Isolating Safety and Security Features on the Xilinx UltraScale+ MPSoC
https://www.mentor.com/embedded-software/resources/overview/isolating-safety-and-security-features-on-the-xilinx-ultrascale-mpsoc-a0acdb23-4116-4689-be74-f4ddfca02545?contactid=1&PC=L&c=2017_12_19_esd_newsletter_update_v12_december
It’s quickly becoming common practice for embedded system developers to isolate both safety and security features on the same SoC. Many SoCs are exclusively designed to take advantage of this approach and the Xilinx® UltraScale+™ MPSoC is one such chip.
Tomi Engdahl says:
Optimizing Machine Learning Applications for Parallel Hardware
https://www.mentor.com/embedded-software/events/optimizing-machine-learning-applications-for-parallel-hardware?contactid=1&PC=L&c=2017_12_19_esd_newsletter_update_v12_december
Machine learning applications have a voracious appetite for compute cycles, consuming as much compute power as they can possibly scrounge up. As a result, they are invariably run on parallel hardware – often parallel heterogeneous hardware—which creates development challenges of its own. They are quite often developed by a theoretician who views the world from an abstract, Matlab-level view, but must be implemented and optimized by programmers whose view of the world is constrained by real time considerations. Squeezed between the egos of theoreticians, real time constraints, and delivery schedules, these programmers face a daunting task.
Tomi Engdahl says:
Video about a C++ “flaw”
https://www.mentor.com/embedded-software/blog/post/video-about-a-c-flaw–6632107a-c873-4963-a477-0596f7bab100?contactid=1&PC=L&c=2017_12_19_esd_newsletter_update_v12_december
This time I am looking at an interesting aspect of C++, where there seems to be a small design flaw.
Tomi Engdahl says:
Linux Security Primer: SELinux and SMACK Frameworks
https://www.mentor.com/embedded-software/resources/overview/linux-security-primer-selinux-and-smack-frameworks-2d912771-0ebe-467c-a496-f6ec02681f91
Preventing Zero-Days with SELinux: How to Stay One Step Ahead of Malicious Software Attacks
https://www.mentor.com/embedded-software/resources/overview/preventing-zero-days-with-selinux-how-to-stay-one-step-ahead-of-malicious-software-attacks-1377d520-a0f1-4495-bcac-48d2c487d0f4
Tomi Engdahl says:
Creating Software Separation For Mixed Criticality Systems
https://semiengineering.com/creating-software-separation-for-mixed-criticality-systems/
Considerations when designing for systems that involve safety-critical and non-safety-critical processes.
The continued evolution of powerful embedded processors is enabling more functionality to be consolidated into single heterogeneous multicore devices. Mixed criticality designs, those designs which contain both safety-critical and non-safety-critical processes, can successfully leverage these devices and meet the regulatory requirements of IEC and ISO safety standards. This whitepaper describes many important considerations when using an RTOS for mixed criticality systems.
https://www.mentor.com/embedded-software/resources/overview/creating-software-separation-for-mixed-criticality-systems-063e8993-cdc5-4414-a960-fc62db40c18c?cmpid=10168
Tomi Engdahl says:
How to Enable “Soft Monitoring” of SoCs
https://assets.emediausa.com/research/onchip-clock-and-process-monitoring-using-star-hierarchical-systems-measurement-unit-65443?lgid=3441165&mailing_id=3590162&engine_id=1&lsid=1&mailingContentID=108198&tfso=139369&success=true&templateid=34
Functional safety is one of the most critical priorities for system-on-chips (SoCs) that are involved in automotive, aerospace and industrial applications. These requirements are driven by standards such as ISO 26262 and are the backbone of the design and testing of automotive ICs. Synopsys’ STAR Hierarchical System’s Measurement Unit helps ensure the accuracy of on-chip clock frequency and duty cycle measurements for these types of applications. STAR Hierarchical System’s Measurement Unit has clock and process monitoring capabilities that track embedded sensors and monitors and can also record and confirm that measurements meet on-silicon criteria for performance-focused FinFET technology nodes including 16-nm and 7-nm processes.
Tomi Engdahl says:
The Myth of Perfect MISRA Compliance
https://www.techonline.com/electrical-engineers/education-training/tech-papers/4459109/The-Myth-of-Perfect-MISRA-Compliance
One of the principles learned during the development of the MISRA guidelines has been the primary importance of being able to enforce rules with static analysis tools. It has been amply demonstrated that, without the ability to implement automatic enforcement, coding rules are of marginal value. The market for tools which provide enforcement has grown in parallel with the adoption of the MISRA Coding Guidelines.
Tomi Engdahl says:
Solutions Emerge to Tackle Many Facets of Embedded Security
http://intelligentsystemssource.com/solutions-emerge-to-tackle-many-facets-of-embedded-security/
As the awareness and urgency surrounding security increases, technology suppliers are responding with solutions to address complex secure system design challenges.
Crack open today’s top embedded system security issues for defense, and you’ll see a wide range of challenges and corresponding solutions. For military system developers, there’s perhaps no richer topic these days than that of developing secure systems. The problems are multi-faceted: How do you prevent intrusions by hackers? How do you best encrypt data once an intruder gets in? How do you ensure the components themselves haven’t been tampered with—or won’t be tampered with? Over the past 12 months, a myriad of technologies have been implemented at the chip, board, and box level, designed to help system developers build secure applications.
“Over the next 5 to 10 years, we expect third-party validation programs, like Federal Information Processing Standards (FIPS) 140-2 and Commercial Solutions for Classified (CSfC), to become mandatory,” said Bob Lazaravich, Technical Director at Mercury Systems, “We also anticipate the replacement of AES-256 encryption with either new algorithms or larger keys to address growing concerns about vulnerabilities from quantum computing.” Lazaravich also said he expects that new defense-grade storage products will incorporate stronger physical security. That includes security to the drive itself and security integrated through the device’s supply chain and manufacturing location.
At one time, protection for data stored in modern encrypted but unpowered SSDs was a major concern. But those days are past: when implemented in compliance with an appropriate National Information Assurance Partnership (NIAP) protection profile, unpowered SSDs are considered unclassified. Powered-on and authenticated devices still present significant security challenges. For instance, after a password authentication completes, how does a secure SSD determine that the authenticated user is still present?
Cryptography for Embedded Computing
While embedded system technologies and Information Technology (IT) have traditionally operated in separate spheres, in today’s networked, connected world those disciplines are intersecting more and more—and security is among the overlapping points. Certainly, a lot of Information Assurance and Cryptography technology is focused on IT Enterprise kinds of systems. The Trusted Platform Module (TPM) is a good example of an IT Commercial and Enterprise technology that is becoming popular as a way to solve complex authentication and key management issues in military applications.
FPGA-Level Cryptography
Cryptography at the chip level was once primarily the domain of custom, proprietary solutions. Bucking that trend, Microsemi and The Athena Group last month announced that Athena’s TeraFire cryptographic microprocessor is included in Microsemi’s new PolarFire field programmable gate array (FPGA) “S class” family members. As the most advanced cryptographic technology offered in any FPGA, the TeraFire hard core provides Microsemi customers access to advanced security capabilities with high performance and low power consumption.
Tomi Engdahl says:
Embedded Computing Enables Smaller, Smarter UAV Designs
http://intelligentsystemssource.com/embedded-computing-enables-smaller-smarter-uav-designs/
Highly integrated, embedded technologies with low SWaP are giving UAV system developers an expanding set of options with which they can meet upcoming challenges.
The Department of Defense 2017 budget request illustrates that the military shows no signs of backing away from advancing their unmanned vehicle programs. This comes from the success of manned-unmanned defense teaming that has proven these programs to be highly successful in helping military operations become more agile, responsive and safe. From the Air Force and Army to the Marines and Navy, all branches are looking to add more air, sea and land unmanned systems, including smaller and smarter ones to their arsenal.
Tomi Engdahl says:
Which Languages Are Bug Prone
http://www.i-programmer.info/news/98-languages/11184-which-languages-are-bug-prone.html
Tomi Engdahl says:
Don’t Disable SELinux
Developers often recommend disabling security like SELinux support to get software to work. Not a good idea.
http://www.electronicdesign.com/embedded-revolution/don-t-disable-selinux
Tomi Engdahl says:
A Critical Vulnerability Spares Security of Microcontrollers
http://www.electronicdesign.com/embedded-revolution/critical-vulnerability-spares-security-microcontrollers
When Intel revealed that almost all its computer chips were exposed to exploits that could allow hackers to swipe their memory contents, the company denied that it alone was afflicted. The Meltdown vulnerability is specific to Intel, but the company said that it would work with rivals AMD and ARM to resolve another fault that also affects them, called Spectre.
ARM is the company behind an architecture that has been shipped in more than four billion chips installed in everything from smartphones to factories, and which is inside chips vulnerable to the Spectre flaw. The company recently revealed that the exploit does not affect its microcontrollers, but it could pose a threat to higher performance chips.
The firm stressed that the exploits would not work against Cortex-M designs, which are used in microcontrollers for the Internet of Things and which have been shipped in tens of millions of devices. On Wednesday, the company published a chart of vulnerable devices, which include several Cortex-R and Cortex-A products used in smartphones and other chips sold by Nvidia and Samsung.
“All future Arm Cortex processors will be resilient to this style of attack or allow mitigation through kernel patches,” the company said in a statement.
Spectre is harder to exploit than Meltdown, but it is also harder to prevent and could require chips to be redesigned. But the prognosis also appears to have improved.
The United States Computer Emergency Readiness Team (US-CERT), part of the Department of Homeland Security, recently revised a note that had advised companies to replace all hardware vulnerable to the exploits.
Vulnerability Note VU#584653
CPU hardware vulnerable to side-channel attacks
https://www.kb.cert.org/vuls/id/584653
Note: This Vulnerability Note is the product of ongoing analysis and represents our best knowledge as of the most recent revision. As a result, the content may change as our understanding of the issues develops.
Tomi Engdahl says:
Get Ready for a Wealth of Embedded Design Hardware and Software Options for 2018
http://www.electronicdesign.com/embedded-revolution/get-ready-wealth-embedded-design-hardware-and-software-options-2018
Senior Technology Editor Bill Wong examines the future of embedded development with his annual forecast.
Tomi Engdahl says:
The Intel Processor Flaw and Its Impact on Embedded Developers
http://www.electronicdesign.com/embedded-revolution/intel-processor-flaw-and-its-impact-embedded-developers
Intel has a problem with its processors, and from what we’ve found out, embedded applications could suffer a “Meltdown.”
Intel chips dominate the server and PC markets, but they’re also widely used in embedded applications. A serious flaw, called Meltdown, has been found in these chips, and the fix could have significant implications. The details of the flaw and fix are still under wraps. However, we do know some information about the issue and the potential fix. All of this comes on the heels of the Intel Management Engine problem that affected a large number of Intel processors.
The snag appears to be how the memory management unit (MMU) protects memory—a key to implementing a secure system. The issue relates to kernel memory and how it can be examined from a conventional application. The solution is to not include any kernel memory in the application’s virtual-memory (VM) space. Patches for Windows, Linux, and MacOS are in the works, and other operating systems that target the Intel platforms will likely have changes as well.
Developers will need to work with their software suppliers for these changes. Any operating system with virtual-memory or virtual-machine support running on processors with this flaw will require changes to address it.
The Meltdown bug is now documented as CVE-2017-5754. Two other major bugs, collectively known as Spectre, have been reported as well: bounds check bypass (CVE-2017-5753) and branch target injection (CVE-2017-5715). Meltdown is specific to Intel platforms, while Spectre can also affect AMD and ARM Cortex-A platforms.
To Share or Not to Share
The problem is a design tradeoff between keeping the kernel in its own address space and sharing some with an application. Keeping everything in the kernel’s own address space means only the kernel has access to it, but any calls from an application to the kernel now require a major state swap that incurs more overhead. It’s one reason why many microkernel approaches have a hard time challenging monolithic kernels like Linux in terms of performance.
The fix incurs additional overhead, which could potentially impact overall system performance. Numbers ranging from 5% to 30% have been tossed out, but we will have to wait for actual fixes to test those assertions. Even 5% can have an impact on embedded applications where certification, tuning, and other issues would be affected by even a small change. Likewise, changing the operating system would require recertification or testing for many critical applications.
Tomi Engdahl says:
Warp Speed Ahead
What can you do with orders of magnitude performance improvements?
https://semiengineering.com/warp-speed-ahead/
The computing world is on a tear, but not just in one direction. While battery-powered applications are focused on extending the time between charges or battery replacements, there is a whole separate and growing market for massive improvements in speed.
Ultimately, this is where quantum computing will play a role, probably sometime in the late 2020s/early 2030s timeframe, according to multiple industry estimates. Still, although there has been some progress in room-temperature quantum computing, the bulk of that computing initially will be done in extreme cold inside of data centers.
Between these two extremes, there is a growing focus on new architectures, packaging, materials and ever-increasing density to deal with massive amounts of data.
“If you look at anything around big data, all of these systems will become smarter and smarter,” noted Synopsys chairman and co-CEO Aart de Geus. “Over time the desire is not to get 2X performance, but 100X. The only way to get there is not by using faster chips, but by using chips that can only do a single task. In other words, algorithm-specific. By simplifying the problem, you can make things much more efficient.”
And this is where computing is about to take a big leap. In the past, the focus was on how to get more speed out of general-purpose processors, whether those were CPUs, GPUs or MCUs. Increasingly, processors are being designed for specific tasks.
This puts new pressure on big chipmakers. Instead of spending years developing the next rev of a general processor, the future increasingly is about flexibility, choice, and an increasing level of customization. This is why Intel bought Altera, and it helps explain why all processor makers have been ramping up the number of chips they offer.
This is also why companies have begun architecting their own chips. Apple, Amazon, Google, Microsoft, Facebook, and Samsung today are creating chips for specific applications. It’s also why so much attention is being focused on programmability and parallelism, whether that involves embedded FPGAs, DSPs, or hybrid chips that add some level of programmability into ASICs.
Tomi Engdahl says:
Achieving MPU security
https://www.embedded.com/design/safety-and-security/4460244/Achieving-MPU-security
Introduction
Encryption, authentication, and other security methods work fine to protect data and program updates passing through the Internet. That is, unless one end can easily be hacked to steal secret keys and possibly implant malware for future activation. Then, unbeknownst to the system operators, confidential information is being stolen daily and possible major service disruptions lie ahead.
A large number of Cortex-M MCU-based products have been shipped since the Cortex-M architecture was introduced in 2005. Many of these products are connected to the Internet. Many new products are currently under development using Cortex-M MCUs, and due to the financial incentives of the IoT, an even larger percentage of them will be connected to the Internet. In the vast majority of cases, these embedded devices have little or no protection against hacking.
Most Cortex-M MCUs, both in the field and in development, have Memory Protection Units (MPUs). However, because of a combination of tight delivery schedules and the difficulty of using the Cortex-M MPU, these MPUs are either under-used or not used at all. The apparently large waste of memory due to the MPU requirement that regions be powers-of-two in size and aligned on size boundaries has been an additional impediment to adoption in systems with limited memory.
Yet for these MCUs, the MPU and the SVC instruction are the only means of achieving acceptable security.
Nearly all existing Cortex-M embedded systems use the Cortex-v7M architecture. The Cortex-v8M architecture, announced over a year ago, offers better security protection. Unfortunately, it is being adopted slowly by processor vendors, and nearly all new MCUs still use the Cortex-v7M architecture. Hence, the latter will be with us for a long time to come. Consequently, this article presents a step-by-step process for porting existing systems to the Cortex-v7M MPU.
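The power-of-two size and alignment rules mentioned above are simple to encode. A minimal sketch in C, assuming the Armv7-M MPU's architectural minimum region size of 32 bytes (the helper names are hypothetical):

```c
#include <stdint.h>

/* The Armv7-M MPU requires each region to be a power of two in size
 * (minimum 32 bytes) and based at a multiple of its size.
 * Round a requested size up to the smallest legal region size. */
uint32_t mpu_region_size(uint32_t bytes) {
    uint32_t size = 32;         /* architectural minimum */
    while (size < bytes)
        size <<= 1;             /* next power of two */
    return size;
}

/* Check whether a base address is legally aligned for a region size
 * (size must itself be a power of two). */
int mpu_region_aligned(uint32_t base, uint32_t size) {
    return (base & (size - 1)) == 0;
}
```

This rounding is exactly where the memory waste comes from: a 33-byte buffer costs a 64-byte region, and a 1,000-byte buffer costs 1,024 bytes at a 1,024-byte-aligned base.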
Tomi Engdahl says:
These 2017 Embedded Trends Will Thrive in 2018
http://www.electronicdesign.com/embedded-revolution/these-2017-embedded-trends-will-thrive-2018
What trends in 2017 will have staying power? Senior Technology Editor Bill Wong looks at some of the hottest embedded trends that should keep percolating this year.
RISC-V
RISC-V is an instruction-set architecture. That’s important because RISC-V requires a hardware implementation to be usable, but it doesn’t define an implementation. The standard actually defines a set of features that can be combined and implemented in hardware, allowing portability of applications between platforms.
Bolstered by support from the likes of Microsemi and its Mi-V Infrastructure (Fig. 1), RISC-V has become a key player in the FPGA space.
Machine Learning
Ok, machine learning (ML), and deep neural networks (DNNs) in particular, have been hot topics for a couple years.
ADAS, Radar, and LiDAR
Cameras and ML systems will have a major effect on how ADAS works in smart and self-driving cars, but radar and LiDAR will be complementary to visual technology.
Voice Control
Voice-control systems and smart speakers like Amazon’s Echo were a hot item at CES 2017.
Tomi Engdahl says:
Get Ready for a Wealth of Embedded Design Hardware and Software Options for 2018
http://www.electronicdesign.com/embedded-revolution/get-ready-wealth-embedded-design-hardware-and-software-options-2018
Senior Technology Editor Bill Wong examines the future of embedded development with his annual forecast.
There has never been a more exciting, confusing, and challenging time to develop embedded products. The Internet of Things (IoT) is a given, but tools ranging from machine learning (ML) and persistent storage (PS) to mesh networking are changing how developers look at a problem. Approaches that were impractical a few years ago are becoming readily available. That is not to say that these paths are not fraught with peril for the uneducated. Likewise, adopting the latest hardware and software should not mean ignoring other issues like privacy and security. Insecure systems can render the best-intentioned device or service untenable.
Tomi Engdahl says:
C Programming Language ‘Has Completed a Comeback’
https://developers.slashdot.org/story/18/01/07/1941209/c-programming-language-has-completed-a-comeback
InfoWorld reports that “the once-declining C language” has “completed a comeback” — citing its rise to second place in the Tiobe Index of language popularity, the biggest rise of any language in 2017.
Although the language only grew 1.69 percentage points in its rating year over year in the January index, that was enough to beat out runners-up Python (1.21 percent gain) and Erlang (0.98 percent gain).
C completes comeback in programming popularity
The once-faltering language wins Programming Language of the Year award from the Tiobe Index
https://www.infoworld.com/article/3245786/application-development/c-language-completes-comeback-in-programming-popularity.html
Tomi Engdahl says:
Artificial Intelligence and Deep Learning at the Edge
https://www.eeweb.com/profile/max-maxfield/articles/artificial-intelligence-and-deep-learning-at-the-edge
Increasingly, embedded systems are required to perform ‘AI and DL on the Edge’; i.e., the edge of the Internet where sensors and actuators interface with the real world.
Things are progressing apace with regard to artificial intelligence (AI), artificial neural networks (ANNs), and deep learning (DL). Some hot-off-the-press news is that CEVA has just unveiled its NeuPro family of processors for AI/DL at the edge
Like all of CEVA’s hardware offerings, NeuPro processors are presented in the form of intellectual property (IP) that designers can deploy on FPGAs or integrate into their System-on-Chip (SoC) devices.
We start by defining our ANN architecture and capturing it using an appropriate system like Caffe or Google’s TensorFlow. Next, we “train” our network using hundreds of thousands or millions of images. At this stage we need a lot of accuracy, which means we’re typically working with 32-bit floating-point values.
The next step is to convert our 32-bit floating-point network into a 16-bit or 8-bit fixed-point equivalent that is suitable for deployment in an FPGA or on an SoC (fixed-point representations are used to boost performance while lowering power consumption).
Tomi Engdahl says:
Software Design Patterns for Real Hardware
https://hackaday.com/2018/01/12/software-design-patterns-for-real-hardware/
Getting Cozy with the Hardware
The key ideas behind this are three-fold:
isolate behavior that can stand on its own.
hide (with abstraction) details that are unnecessary for the end-user.
don’t repeat yourself; share behaviors by following the first clause.
Polymorphism
With that in mind, let’s write out some requirements. Let’s say that we need to:
isolate device-specific behaviors that are needed to read a particular sensor.
provide a common software interface for reading the temperature regardless of temperature sensor type.
Bridges
What we need to write is a series of classes for our setup that somehow detach the thermocouple from the extra hardware that’s being used to read it. To do so, we need to decide what components are specifically necessary for reading this thermocouple and what components can be replaced without changing the end-to-end system behavior.
Tomi Engdahl says:
Verification Of Functional Safety
https://semiengineering.com/verification-of-functional-safety/
Part 1 of 2: How do you trade off cost and safety within an automobile? Plus, a look at some of the challenges the chip industry is facing.
Tomi Engdahl says:
The Benefits of C and C++ Compiler Qualification
http://www.electronics-know-how.com/article/2595/the-benefits-of-c-and-c-compiler-qualification
In embedded application development, the correct operation of the compilation toolset is critical to the functional safety of the application. Two options are available to build trust in the correct operation of the compiler: either by compiler qualification through testing, or application coverage testing at the machine code level. We argue that the first, compiler qualification, is much more efficient. In addition, separating compiler qualification from application development shortens the critical path to application deployment (time-to-market) because they are then independent of each other. Compiler qualification saves time and money.
Functional Safety standards such as ISO 26262 for the automotive industry describe if and how tools, such as software compilers, must be qualified if they are used in safety critical applications. Tool Qualification is the process that is described by a functional safety standard to develop sufficient confidence in using a tool.
To fulfill the essential requirement that the compiler makes no error in code generation, ISO 26262 requires an important choice to be made: it is necessary to either develop sufficient trust in the compiler through qualification, or develop application test procedures that are strong enough to detect any error in the generated machine code. In this paper, it is shown that the choice for compiler qualification is the more efficient one.
Summary
As a compiler user, you cannot rely on your compiler supplier to qualify the compiler for your specific use case in a functional safety critical domain. Nor can you rely on application testing to prove the compiler correct. Separating application testing from compiler (tool) qualification makes application deployment more efficient and hence, more cost effective. At Solid Sands, we are happy to guide you with the compiler qualification process.
Tomi Engdahl says:
Safety, Security And Open Source In The Automotive Industry
Is innovation outpacing security in automotive software?
https://semiengineering.com/safety-security-and-open-source-in-the-automotive-industry/
Today’s cars are as much defined by the power of their software as the power of their engines.
More and more vehicles are “connected,” equipped with Internet access, often combined with a wireless local area network to share that access with other devices inside as well as outside the vehicle. And whether we’re ready or not, we’ll soon be sharing the roads with autonomous vehicles.
Built on a core of open source
Driving the technology revolution in the automotive industry is software, and that software is built on a core of open source. Open source use is pervasive across every industry vertical, including the automotive industry. When it comes to software, every auto manufacturer wants to spend less time on what are becoming commodities — such as the core operating system and components connecting the various pieces together — and focus on features that will differentiate their brand. The open source model supports that objective by expediting every aspect of agile product development.
But just as lean manufacturing and ISO-9000 practices brought greater agility and quality to the automotive industry, visibility and control over open source will be essential to maintaining the security of automotive software applications.
Innovation may be outpacing security in cars
When you put new technology into cars, you run into security challenges. For example:
When security researchers demonstrated that they could hack a Jeep over the Internet to hijack its brakes and transmission, it posed a security risk serious enough that Chrysler recalled 1.4 million vehicles to fix the bug that enabled the attack.
For nearly half a decade, millions of GM cars and trucks were vulnerable to a remote exploit that was capable of everything from tracking vehicles to engaging their brakes at high speed to disabling brakes altogether.
The Tesla Model S’s infotainment system contained a four-year-old vulnerability that could potentially let an attacker conduct a fully remote hack to start the car or cut the motor.
4 Risks in Connected Cars
https://blog.blackducksoftware.com/4-risks-connected-cars
Tomi Engdahl says:
Secure Development Lifecycle for Hardware Becomes an Imperative
https://www.eetimes.com/author.asp?section_id=36&doc_id=1332962
Given recent events, it's time for chip makers to take a page from the software vendor handbook and step up their game in heading off potentially costly threats.
A Secure Development Lifecycle (SDL) for hardware with appropriate hardware security products could have prevented the recent Meltdown and Spectre vulnerabilities affecting Intel, ARM and AMD processor architectures. An SDL is the process of specifying a security threat model and then designing, developing and verifying against that threat model.
Many in the software domain are familiar with SDL, which is a process invented by Microsoft to improve the security of software. To make this process as efficient as possible, the software domain is filled with widely deployed static and dynamic analysis tools to provide automation around security review for various stages of the development lifecycle.
Tomi Engdahl says:
Secure Development Lifecycle for Hardware Becomes an Imperative
https://www.eetimes.com/author.asp?section_id=36&doc_id=1332962
Hardware SDL is a necessity moving forward to identify and prevent vulnerabilities like Meltdown and Spectre. It will provide upper management with more insight into the associated risks and enable them to make informed business decisions about security.
The SDL can be applied easily to improve the security of modern semiconductor designs and, in general, uses following steps:
1. Specify security requirements
2. Design/Architecture
3. Implementation
4. Verification
5. Release and Response
Tomi Engdahl says:
The Pitfalls of Homegrown Update Mechanisms for Embedded Systems
https://www.eeweb.com/profile/ralphmender/articles/the-pitfalls-of-homegrown-update-mechanisms-for-embedded-systems
Performing secure and robust over-the-air (OTA) wireless updates for remotely-deployed embedded systems requires appropriate expertise and technology.
With the increasing number of embedded systems being connected, one oft-overlooked aspect is the software update mechanism. The focus is on applications and features, which is where developers should be spending their time, but this means that the update mechanism gets a backseat.
Many developers assume that the update mechanism won’t be that difficult; after all, “It’s just copying files over to the target.” The reality, as is often the case — especially in the case of over-the-air (OTA) wireless updates — is much more complex. Unfortunately, this simplified perception of an update mechanism has led many embedded teams into developing their own updater, which takes away from their time actually spent building their product.
Building an OTA update mechanism from scratch should be a remnant of the past, as there are freely available open source options, including the solutions from Mender.io.
Robustness
A common scenario causing devices to brick (i.e., become completely unable to function) is when a loss of power or loss of network occurs during an update. One of the worst possible scenarios is to have one or more devices deployed remotely that — due to an interruption during an update — become unusable and bricked. The resiliency and reliability of the update process should be a chief concern given the dire consequences. Network or power loss is quite common with embedded systems in the field, which means this is a very real risk during an update process.
This is also one of the reasons why atomic installation of an update is required for embedded systems, whereby the update is either fully installed or not at all. Partial installations can create inconsistency in remotely deployed devices. Things can quickly become chaotic when a fleet of devices have different updates and the production devices do not match the test environment. Thus, it is a best practice in embedded systems to avoid non-atomic updates due to the lack of integrity they can produce.
While package-based updates are common in traditional Linux software (e.g., apt or yum), this approach is avoided in embedded Linux due to many issues. For example, there is difficulty managing a consistent set of packages installed across a fleet of devices.
The ability for reliable rollback is another key requirement. It is very common for the output of an embedded Linux CI build to be a complete root filesystem, thus having a dual bank approach is one of the simplest and most reliable ways to ensure the embedded system is robust with rollback to the other root filesystem.
Thus, the dual-root filesystem approach not only makes devices in the wild more resilient, but it also simplifies the build system by building all the packages in a reliable and predictable way.
Security
There are two primary security requirements with regard to the update mechanism. The first is Code signing (cryptographic validation), which ensures tight control over who can reprogram sensitive components on the target device. This is often overlooked
The other requirement with regards to security is ensuring you are using only encrypted communications between the deployment server and the target device. There should be bi-directionally authenticated communication between the client/server to avoid the risk of an update being modified while in transit
Over-the-air software updates for embedded Linux
Mender is an end-to-end open source updater for connected devices and IoT
https://mender.io/
Tomi Engdahl says:
Who Will Regulate Technology?
https://semiengineering.com/who-will-regulate-technology/
Why the whole tech industry needs to start thinking differently about what it creates.
Regardless, what’s at stake here is the tech industry’s ability to set its own agenda and to avoid problems that attract outside regulation, which in the case of complex systems and new technologies will not be anywhere near as informed as if those regulations are developed from within. It’s hard enough for engineers to understand what’s happening inside a chip, let alone explain it to a board of regulators appointed by elected officials. It’s hard enough to explain to different groups within the industry. There is a sharp contrast between how hardware and software engineers view problems, and how analog and digital engineers view problems and solutions.
So what exactly needs to be addressed? Top on the list is security. As more devices are connected, they need to adhere to some standard level of security for interoperability with other systems. This should be a checklist item, almost like UL certification or an EnergyStar rating for devices, and it needs to be managed from within the tech industry. If something doesn’t adhere to known best practices for security, that should be evident to the consumer.
Second, international standards need to be developed for privacy and ethics involving AI and quantum computing. What is acceptable behavior for machines?
Third, there needs to be an infrastructure established to assess new developments and make recommendations as needed. The number of new markets for chips is exploding. It’s no longer just about chips for computers or mobile phones. It’s about ubiquitous technology that is connected to other technology.
Tomi Engdahl says:
Functional Safety: A Way Of Life
https://semiengineering.com/functional-safety-a-way-of-life/
Functional safety starts with cultivating the right culture within a company.
In the same vein, functional safety, which is a crucial piece of the process for semiconductor and IP companies, can be vastly improved with harmony between people, process, and product. Thinking about it in smaller parts helps in grasping some of the concepts, but the right approach is one that brings it all together, and cultivates the right culture within the company. I see people as the mind, thinking and consciously applying practices and learnings from past experiences. Process is the body, which provides the structure, flow, and ability to make progress along those lines. And finally, product is the soul, which is the ultimate goal that needs to be refined and perfected.
The main pitfall with a traditional way of thinking is that it cultivates a culture where each of these three are considered in isolation.
Tomi Engdahl says:
Verification Of Functional Safety
https://semiengineering.com/verification-of-functional-safety-2/
Part 2 of 2: How should companies go about the verification of functional safety and what tools can be used?
Tomi Engdahl says:
Making Sense Of Safety Standards
https://semiengineering.com/making-sense-of-safety-standards/
Why tool safety compliance matters and how vendors can make the process easier.
If you’re involved in the design or verification of safety-critical electronics, you’ve probably heard about some of the standards that apply to such development projects. If not, then you’re probably puzzled when you read about TÜV SÜD certifying that an EDA tool satisfies functional safety standards ISO 26262 (TCL3/ASIL D), IEC 61508 (T2/SIL 3) and EN 50128 (T2/SIL 3). The industry has quite an “alphabet soup” (more accurately, alphanumeric soup) of functional safety standards. In this post, we’ll try to sort it out.
The goal of all these standards is to define a rigorous development process for safety-critical hardware projects and to impose requirements on the robustness of the resulting design. A key part of this is recognizing that engineers use advanced software tools to develop complex hardware. Tools may malfunction, generate erroneous output, and ultimately introduce or fail to detect hardware faults that could cause hazardous events in the field. Functional safety standards demand that this risk be assessed and adequately minimized through tool qualification and other processes.
IEC 61508 is a baseline standard adapted and expanded for specific safety-critical applications. This standard defines off-line tools as those used exclusively for development, and divides them into three categories: T1 tools, for example text editors, do not generate any output that may influence the hardware design. T2 tools, for example coverage measurement tools, may fail to detect design defects. T3 tools, for example synthesis tools, may introduce errors in the hardware design. Verification and analysis tools generally are classified as T2.
Tomi Engdahl says:
Debugging Debug
Can time spent in debug be reduced?
https://semiengineering.com/debugging-debug/
There appears to be an unwritten law about the time spent in debug: it is a constant.
It could be that all gains made by improvements in tools and methodologies are offset by increases in complexity, or that the debug process causes design teams to be more conservative. It could be that no matter how much time spent on debug, the only thing accomplished is to move bugs to places that are less damaging to the final product. Or maybe that the task requires an unusual degree of intuition plus logical thinking.
Regardless of the explanation, data shows that time spent in debug has resisted any reduction in resources expended.
So how much time is really spent on debug?
“It is difficult to be precise, since debugging is pervasive and an integral part of every aspect of the development process,” says Harry Foster, chief scientist for verification at Mentor, a Siemens Business. “The same study showed that design engineers spend about 53% of their time involved in design activities, and about 47% of their time involved in verification activities. From a management perspective, debugging is insidious in that it is unpredictable. The unpredictability of the debugging task can ruin a well-executed project plan. Clearly, when you consider how debugging is required for all tasks involved in a product development life cycle, anything that can be done to optimize the debugging process is a win for the organization.”
It usually takes an innovation or paradigm shift in an area to have an impact.
To dig into the subject, we have to consider dividing the problem into two. “First there is the debug of a deep issue in the design,” says Doug Letcher, CEO of Metrics Technologies. “Perhaps the designer is building a new feature that doesn’t quite work correctly, and they need to debug that in a traditional sense. This may involve stepping through code or looking at a lot of waveforms. The second aspect of the debugging effort is debugging in the large where you are trying to stay on top of breaking changes as you are adding more tests or fixing bugs.”
Catching the bug early
It is better that bugs are never created. “Improvements can be made to debug, but a lot more must be done to avoid errors,” says Sergio Marchese, technical marketing manager for OneSpin Solutions. “When we fail with that, we need to be detecting errors sooner and in simpler contexts. Somewhat ironically, the good news is that safety and security requirements are indirectly forcing companies to take this direction.”
“For example, the number of bugs per 1,000 lines of code,” says Foster. “This metric has remained relatively constant for structured programming languages (in the order of 15 to 50 bugs/1,000 lines of code).
The interesting thing about a bug density metric is that the number of bugs per 1K lines of code is fairly consistent whether you are dealing with an RTL model or a High-Level Synthesis (HLS) model. This is an argument for moving to HLS when it is possible.
If bugs cannot be avoided, then finding them early is helpful. “Once an error has made it through, it is crucial to detect it as soon as possible and within the simplest possible context,”
Companies must have processes in place for catching bugs. “As you add tests to fill coverage holes, you find new bugs and fixing those creates more bugs,” says Letcher.
Expect the unexpected
Continuous integration and frequent regression runs appear to be a methodology in place by advanced organizations. “We are seeing good return on the concept of regression-based debug,”
“Tools can apply big data analytics to aggregate and filter simulation log file data to measure coverage progress against pre-set targets across any number of axes, such as line, branch, FSM, code, and functional,”
A lot of data is available. “We are trying to solve this problem in a way that provides visibility into all of the data that is already there,”
Regressions are not just for functionality. “Power regression has evolved and is seeing increasing adoption,”
Picking the right tool
Not all tools are equally useful for all jobs. “When you move to emulation or prototyping, you shift into a software development mentality,” says Melling. “You are running and debugging software, and that is the focus in that kind of engine. With an emulator, software plays a role but is much more about system-level use-cases and workloads. I want to get a feel for power consumption during a particular workload. It is all about what kind of test I am running and that provides the context.”
What happens when a hardware bug is found in an emulator or prototype?
Vendors are rapidly attempting to bridge the divide between engines. “Selection between FPGA prototyping and emulation is a tradeoff between speed and debugging capabilities,”
New areas for debug
In the past few years, system-level verification has become a lot more visible. Until recently, system-level tests were hand written and orchestrating all of the necessary events in the system was complex and tedious.
But there is hope on the horizon. “The emerging Portable Stimulus was born out of that need to create automation and make it possible to generate that kind of complex test in a correct by construction fashion,”
New tools and technologies are constantly emerging. “A lot of big-data techniques are coming into play in the debug world where you want to analyze across large amounts of data and perform data analytics on top of it,” says Gupta. “Take the human factor out as much as possible.”
Utilizing big data techniques can enable new kinds of problems to be located. “Moving from TCL (Tool Command Language) to an object-oriented language like Python with a distributed database that can access shapes, instances, circuits, and timing paths in a MapReduce system (similar to Hadoop) makes queries sleek and quick,”
Time spent in debug is unlikely to change the next time the survey results come in. There appears to be a natural balance within the system, and new debug tools are kept in balance against new problems that need to be debugged. If anything, debug may be winning.
Tomi Engdahl says:
IoT Eats Embedded with Security, AI
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333000
The Internet of Things is eating the embedded systems market, and it’s hungry for more security and some AI sauce to go with it.
I talked to just three of the 30,000 engineers descending on Nuremberg for Embedded World this week, a small but significant sample. Michael Barr, CTO of the Barr Group, is presenting the results of his 2018 Embedded Systems Safety & Security Survey at the event.
The survey of 1,700 people found that 61% of all embedded designs are now at least occasionally connected to the internet. Surprise: They are not all secure.
The good news is that 67% of respondents said that security is a design consideration, up six points from the 2016 survey. But 22% said that security is not a product requirement; many admitted that they are not using best practices such as conducting regular code reviews — and less than half of all embedded engineers designing for the IoT encrypt their data.
Pressures to shave costs and get products to market fast can put security on the backburner. Even when it’s addressed, security is “a difficult problem because it’s a fragmented market with different operating systems, hardware configurations, and wired and wireless connections — there are a lot of attack surfaces and no one-size-fits-all solution,” said Barr.
His remedy boils down to getting educated, adopting best programming practices, using encryption, and erecting multiple barriers to attacks.
AI is just one dish on the IoT smorgasbord that major vendors are trying to lay out. “It takes many companies to create an IoT solution,” said Samsung’s Stansberry.
Tomi Engdahl says:
Embedded World: What to Look For and Why it Matters
https://www.eetimes.com/author.asp?section_id=36&doc_id=1332997
Here’s why next week’s Embedded World conference in Nuremberg, Germany, has become one of the key events in the electronic engineers’ calendar.
Tomi Engdahl says:
Turning Engineering and Science Students into Active Learners with Gap Analysis and Model-Based Design
https://www.mathworks.com/company/newsletters/articles/turning-engineering-and-science-students-into-active-learners-with-gap-analysis-and-model-based-design.html
Working in teams of three to five, students use Model-Based Design with MATLAB® and Simulink® to model, simulate, and implement real systems using an Arduino® or Raspberry Pi™ processor and repurposed hardware. Past projects have included a balloon for aerial thermography, an energy-neutral tramway shelter, a holonomic robot, and an irrigation system controlled by an autonomous weather station.
A key benefit of using MATLAB and Simulink is that students remain in the same software environment during all phases of their project, which reduces the time spent learning multiple software tools. In addition, the students realize the importance of mathematics and physics to engineering.
Tomi Engdahl says:
Moving from an RTOS to Linux? (Practical Insights Nobody’s Telling You)
https://www.mentor.com/embedded-software/resources/overview/moving-from-an-rtos-to-linux-practical-insights-nobody-s-telling-you–e25c4c82-eeb7-4d2d-8868-7b9520269426?uuid=e25c4c82-eeb7-4d2d-8868-7b9520269426&contactid=1&PC=L&c=2018_02_27_esd_newsletter_update_feb_2018
There is much written about considerations of moving from an RTOS to Linux for embedded projects. Based on pragmatic experience of helping customers through the decision-making process and the actual transition, this paper provides practical information, so developers can be fully aware of the trade-offs of moving to OSS and the often unmentioned hidden costs of managing a Linux distribution.
Tomi Engdahl says:
The Commercial RTOS Business is Doomed
https://www.eeweb.com/profile/mike-barr/articles/the-commercial-rtos-business-is-doomed
Those who depend for their livelihood on operating system licensing fees from designers of embedded systems should start looking for other sources of income.
Nearly two decades ago, I was the moderator of an interesting Embedded Systems Conference (ESC) panel discussion titled, “The Great RTOS Debate: Buy or Roll Your Own?” At that time, near the turn of the century, the market for commercial real-time operating systems (RTOSes) was growing rapidly year over year.
The big trend then was away from custom-written “proprietary” kernels toward commercial RTOSes that typically licensed with a per-unit royalty. From 1997 until their merger in 2000, Wind River and Integrated Systems together dominated this part of the market. According to surveys taken at the time, either VxWorks or pSOS was the operating system of choice for about one in four new embedded system designs.
As embedded Linux entered the market in full and the aforementioned merger took place, the market was divided roughly as follows: 39% no OS, 31% commercial RTOS, 18% proprietary OS, and 12% Linux.
The selection of operating systems by embedded systems designers has changed considerably since then. According to a preliminary analysis of data collected in Barr Group’s 2018 Embedded Systems Safety & Security Survey there are still quite a few new systems that run “no operating system” on their primary processor. However, this is down from 39% to just 22%. Use of proprietary operating systems is also down about half over the intervening 18 years, from 18% to just 8%.
The most popular category of actual operating system is now Linux at 22%,
Tomi Engdahl says:
Finding Faulty Auto Chips
The road to zero defects requires some new tactics.
https://semiengineering.com/finding-faulty-auto-chips/
The next wave of automotive chips for assisted and autonomous driving is fueling the development of new approaches in a critical field called outlier detection.
Outliers, or faulty chips, arise for several reasons, including the advent of latent reliability defects. These defects do not appear when a device is shipped, but they are somehow activated in the field and could end up in a system.
Tomi Engdahl says:
Testing Advanced System Requires Its Own Innovation
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333038
Full validation and test often requires devising its own set of innovations when the system being evaluated is both unique and sophisticated.
Tomi Engdahl says:
Baremetal Rust On the Horizon
https://hackaday.com/2018/03/12/baremetal-rust-on-the-horizon/
The Rust programming language has grown by leaps and bounds since it was announced in 2010 by Mozilla. It has since become a very popular language owing to features such as memory safety and its ownership system. And now, news has arrived of an Embedded Devices Working Group for Rust aiming at improving support for microcontrollers.
Rust is quite similar to C++ in terms of syntax; however, Rust does not allow null or dangling pointers, which makes for more reliable code in the hands of a newbie. With this new initiative, embedded development across different microcontroller architectures could see a more consistent and standardized experience, which will result in code portability out of the box. The proposed improvements include IDE and CLI tools for development and setup code generation. There is also talk of RTOS implementations and protocol stack integration, which would take community involvement to a whole new level.
This is something to be really excited about, because Rust has the potential to be an alternative to C++ for embedded development: Rust code runs with a very minimal runtime. Before Arduino, many were wary of what a simple piece of code might do, but with Rust it would be possible to write memory-safe code without a significant performance hit.
https://internals.rust-lang.org/t/announcing-the-embedded-devices-working-group/6839
Tomi Engdahl says:
Commercial RTOSes Are Alive and Well, Thank You!
https://www.eeweb.com/profile/blamie/articles/commercial-rtoses-are-alive-and-well-thank-you
According to Express Logic, reports of the death of commercial RTOSes continue to be grossly exaggerated.
Tomi Engdahl says:
The FDA reports that 75% of medical devices submitted for regulatory approval fail on the first attempt, resulting in product release delays and costing companies money in redesign and lost opportunity.