The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.
Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in a cloud service – and at the same time you need to worry about communication issues like broken connections, latency and security.
Those issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.
Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, electronics in modern cars, and Internet of Things systems fall into this category. This implies that a different approach is necessary.
Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards that woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products become moot. Leaders with embedded systems development lifecycle management solutions speak out on the new approaches available today for developing advanced products and systems.
Traditional approaches
C/C++
Traditionally embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Very many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still chosen today over C++ when maximum efficiency is wanted.
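As a minimal sketch of the kind of low-level control C gives, toggling a bit in a memory-mapped peripheral register is just a pointer operation. The register address and pin number below are hypothetical, not taken from any specific chip:

#include <stdint.h>

#define LED_PORT_ADDR 0x40020014u                       /* hypothetical GPIO output register address */
#define LED_PORT (*(volatile uint32_t *)LED_PORT_ADDR)  /* treat the address as a hardware register */
#define LED_PIN  (1u << 5)                              /* hypothetical pin number */

static void led_toggle(void)
{
    LED_PORT ^= LED_PIN;   /* read-modify-write of the register */
}

int main(void)
{
    for (;;) {
        led_toggle();
        for (volatile uint32_t i = 0; i < 100000; i++) {
            /* crude busy-wait delay; a real design would use a hardware timer */
        }
    }
}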
C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, so that may have been a concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.
And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; let the Arduino system be an example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language” and your “.ino” files are three lines away from being standard C++.
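For example, the classic blink sketch below is a complete .ino file. Roughly speaking, the Arduino build adds an include of the core header and a main() that calls setup() once and then loop() forever – which is what makes the sketch only a few lines away from standard C++:

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);      // configure the on-board LED pin once at startup
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);   // LED on
  delay(500);
  digitalWrite(LED_BUILTIN, LOW);    // LED off
  delay(500);
}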
Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example in an embedded Linux system that runs a C++ application. When you write C or C++ programs for modern embedded Linux, you typically use the GCC compiler toolchain to do the compilation and a makefile to manage the compilation process.
Most organizations put considerable focus on software quality, but software security is different. While security is a much-talked-about topic in today's embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other language.
Java
Java is a general-purpose computer programming language that is concurrent, class-based and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition to those it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VOIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.
Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.
Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers. It is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).
My feeling with Java is that if your embedded platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.
Increasing trends
Databases
Embedded databases are finding their way into more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with application software that requires access to stored data, such that the database system is “hidden” from the application's end-user and requires little or no ongoing maintenance.
There are many possible databases. The first choice is what kind of database you need. The main options are SQL databases and simpler key-value stores (also called NoSQL).
SQLite is the database chosen by virtually all mobile operating systems. For example, Android and iOS ship with SQLite. It is also built into, for example, the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to manipulate tables with millions of rows).
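A minimal sketch of using the SQLite C API from an embedded Linux application follows; the database path, table and values are made up for illustration:

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("/var/lib/myapp/log.db", &db) != SQLITE_OK) {   /* hypothetical path */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* sqlite3_exec runs one or more semicolon-separated SQL statements */
    sqlite3_exec(db,
                 "CREATE TABLE IF NOT EXISTS log(ts INTEGER, temp REAL);"
                 "INSERT INTO log VALUES (strftime('%s','now'), 21.5);",
                 NULL, NULL, &err);
    if (err != NULL) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}

On a typical embedded Linux system this builds with something like gcc app.c -lsqlite3.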
If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
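A minimal sketch of storing one key/value pair with the Berkeley DB C API; the file name and the stored setting are made up, and error handling is trimmed for brevity:

#include <string.h>
#include <db.h>

int main(void)
{
    DB *dbp;
    DBT key, data;

    db_create(&dbp, NULL, 0);                                        /* allocate a DB handle */
    dbp->open(dbp, NULL, "settings.db", NULL, DB_BTREE, DB_CREATE, 0664);

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = "backlight";  key.size = sizeof("backlight");         /* key/value are plain byte arrays */
    data.data = "80";        data.size = sizeof("80");

    dbp->put(dbp, NULL, &key, &data, 0);                             /* store the pair */
    dbp->close(dbp, 0);
    return 0;
}

Link against the Berkeley DB library (typically -ldb) when building.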
RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command line tool psql), but instead of a normal database file you are actually accessing the internals of your running program.
Software quality
Building quality into embedded software doesn't happen by accident. Quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.
Safety
Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches to safety, which the article Embedded Software Can Kill But Are We Designing Safely? explores.
IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not – and 11 percent did not even know if they were designing to a standard or not.
If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious, and quite frankly, this is quite disturbing.
Security
The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues like broken connections, latency and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.
The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:
MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY
Some techniques for building security into embedded systems:
Use secure communications protocols and VPNs to protect communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
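As one tiny, concrete building block of the boot-time and code-authentication items above: when a loader checks a digest or MAC of an image against an expected value, the comparison should not leak, through timing, how many leading bytes matched. Below is a minimal sketch of such a constant-time comparison; the digest values are placeholders, and in a real design the computed value would come from a signature or hash check done with a proper crypto library:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Compare two buffers in constant time: always touches every byte. */
static int digest_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

int main(void)
{
    /* Placeholder 4-byte "digests"; a real check would use e.g. a 32-byte SHA-256 value. */
    const uint8_t expected[4] = { 0xde, 0xad, 0xbe, 0xef };
    const uint8_t computed[4] = { 0xde, 0xad, 0xbe, 0xef };

    if (digest_equal(expected, computed, sizeof(expected)))
        puts("image accepted");
    else
        puts("image rejected");
    return 0;
}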
Flood of new languages
Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). So there is a huge push to use interpreted and scripting languages also in embedded systems. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example a web user interface). Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. There are also many other relevant languages.
One workable solution, especially for embedded Linux systems, is to implement part of the functionality as C programs and part with scripting languages. This makes it possible to change behavior simply by editing the script files, without the need to rebuild the whole system software again. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
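A minimal sketch of that split: a C program that delegates a configurable step to an external script, so behavior can be changed by editing the script without recompiling the C program. The script path here is hypothetical:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Run a site-specific script; edit the script to change behavior,
       no rebuild of this C program is needed. */
    int status = system("/etc/myapp/on-startup.sh");   /* hypothetical script path */
    if (status != 0) {
        fprintf(stderr, "startup script did not exit cleanly (status %d)\n", status);
        return 1;
    }

    /* ... the time-critical parts of the application stay in C ... */
    return 0;
}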
Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of embedded system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, IoT gateways, etc. Some of these scripting languages scale down well even to quite small embedded environments.
I have successfully used, among others, the Bash, AWK, PHP, Python and Lua scripting languages with embedded systems. They work really well and make it really easy to create custom code quickly. They don't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used. High-level, dynamically typed languages such as Python, Ruby and JavaScript are easy – and even fun – to use, and they lend themselves to code that can easily be reused and maintained.
There are some things that need to be considered when using scripting languages. Sometimes the lack of static checking, compared to a regular compiler, means that problems only show up at run time; but you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.
Bash and other shell scripting
Shell commands are the native language of any Linux system. With the thousands of commands available to the command line user, how can you remember them all? The answer is, you don't. The real power of the computer is its ability to do the work for you – and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition to that, the shell also provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.). Your scripts can be truly powerful. Creating a script is extremely straightforward: it can be created in a separate editor, or you can do it through a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example starting and stopping different Linux services in the right order).
The most common type of shell script is a bash script. Bash is a commonly used scripting language for shell scripts. In BASH scripts (shell scripts written in BASH) users can use more than just BASH to write the script. There are commands that allow users to embed other scripting languages into a BASH script.
There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android and FreeBSD. BusyBox has become the de facto standard core user space toolset for embedded Linux devices and Linux distribution installers.
Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.
Lua
Lua is a lightweight cross-platform multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
Lua has found many uses in many fields. For example in video game development, Lua is widely used as a scripting language by game programmers. Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.
There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.
PHP
PHP is a server-side HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites, but it can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for what they are good at – building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.
Python
Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.
Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.
Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.
Python was designed to be highly extensible. Libraries like NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.
For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-sourced language: Python.
Python on embedded platforms? It is quick to develop apps and quick to debug – it is really easy to make custom code quickly. Sometimes the lack of static checking compared to a regular compiler can cause problems to show up at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool which will catch quite a lot of common errors. The only downsides for embedded work are that Python can sometimes be slow and that it sometimes uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.
JavaScript and node.js
JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications and cross-platform desktop software, and thanks to Node.js it's even capable of creating and running servers and databases. There is a huge community of developers.
Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.
Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.
It doesn’t require a complicated IDE; all you really need is a terminal.
JavaScript is a high-level language. While this usually means that it's more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. Being slower means that it may not be suitable for situations where timing and speed are critical.
JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!
JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We, for example, see a surprising amount of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.
JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!
Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event driven, and has very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available in Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that can map GET requests for specific URLs to your software function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.
Future trends
According to the New approaches to dominate in embedded development article, there will be several camps of embedded development in the future:
One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems or else fixed-function, high-volume systems where cost is everything.
Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.
The third camp – the developers who build applications on top of such platforms – is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.
The transition will take time. The environment is different from the computer and mobile world: there are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.
Sources
Most important information sources:
New approaches to dominate in embedded development
A New Approach for Distributed Computing in Embedded Systems
New Approaches to Systems Engineering and Embedded Software Development
Embracing Java for the Internet of Things
Embedded Linux – Shell Scripting 101
Embedded Linux – Shell Scripting 102
Embedding Other Languages in BASH Scripts
PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog
JavaScript: The Perfect Language for the Internet of Things (IoT)
Anyone using Python for embedded projects?
MICROCONTROLLERS AND NODE.JS, NATURALLY
Node.JS Appliances on Embedded Linux Devices
The smartest way to program smart things: Node.js
Embedded Software Can Kill But Are We Designing Safely?
DEVELOPING SECURE EMBEDDED SOFTWARE
1,687 Comments
Tomi Engdahl says:
Friday Hack Chat: JavaScript on Microcontrollers
http://hackaday.com/2017/08/23/friday-hack-chat-javascript-on-microcontrollers/
Microcontrollers today are much more powerful and much more capable than the 8051s from back in the day. Now, they have awesome peripherals and USB device interfaces. It’s about time a slightly more modern language was used to program these little chips.
During this Friday’s Hack Chat, we’re going to be talking about JavaScript on microcontrollers. [Gordon Williams] will be joining us to talk about Espruino. This is a tiny JavaScript interpreter that runs on the little embedded chips, has a debug interface, and allows you to program your board on any platform without any external programming hardware.
[Gordon] is the key developer of Espruino, and so far he’s launched a full-sized Espruino, and a pico Espruino on Kickstarter, both with amazing success. The software stack has been extremely popular as well — it’s been ported to the ESP8266 and dozens of other microcontrollers that will soon be in the Internet of Things.
https://www.espruino.com/
Tomi Engdahl says:
Training As A Strategic Weapon
For leading-edge designs, success can depend on how a team is trained.
https://semiengineering.com/training-as-a-strategic-weapon/
When you’re operating in this kind of environment, repeatability helps – the reuse of a memory subsystem, for example. However, one cannot count on repeatability as a single strategy for risk reduction. Rather, one must leverage past lessons learned in a way that allows extrapolation to new problem-solving. For example, leveraging SerDes design knowledge to build new chip-to-chip interfaces. This combination of baseline reuse and extrapolated problem-solving creates a very new environment to address HOW to design.
Tomi Engdahl says:
Espruino Hack Chat
https://hackaday.io/event/26310-espruino-hack-chat
Javascript for microcontrollers is here! Wondering how that works? Know tons about this? This is the chat to discuss it.
Tomi Engdahl says:
Linux is becoming more common in embedded systems
Which operating systems are embedded applications based on? This question was addressed in the AspenCore User Survey. According to the results, the popularity of open source solutions is growing at the expense of proprietary solutions such as Microsoft Embedded.
56% of respondents were from North America and 25% from Europe.
22 percent of designers used Linux in their project. 20 percent chose the FreeRTOS operating system and 19 percent their own in-house platform. Other Linux variants as well as Android are also popular.
Windows Embedded 7 was trusted by 8 percent of developers. When asked about choices for upcoming projects, respondents indicated that the popularity of Linux and FreeRTOS will continue to grow, while the position of embedded Windows will continue to decline.
Source: http://etn.fi/index.php?option=com_content&view=article&id=6746&via=n&datum=2017-08-30_15:47:44&mottagare=31202
Tomi Engdahl says:
Is Tomorrow’s Embedded-Systems Programming Language Still C?
https://systemdesign.altera.com/tomorrows-embedded-systems-programming-language-still-c/
What is the best language in which to code your next project? If you are an embedded-system designer, that question has always been a bit silly. You will use C—or, if you are trying to impress management, C disguised as C++. Perhaps a few critical code fragments will be written in assembly language. But according to a recent study by the Barr Group, over 95 percent of embedded-system code today is written in C or C++.
And yet, the world is changing. New coders, new challenges, and new architectures are loosening C’s hold—some would say C’s cold, dead grip—on embedded software. According to one recent study the fastest-growing language for embedded computing is Python, and there are many more candidates in the race as well. These languages still make up a tiny minority of code. But increasingly, the programmer who clings to C/C++ risks sounding like the assembly-code expert of 20 years ago: their way generates faster, more compact, and more reliable code. So why change?
A Wave of Immigration
One major driving force is the flow of programmers into the embedded world from other pursuits. The most obvious of these is the entry of recent graduates. Not long ago, a recent grad would have been introduced to programming in a C course, and would have done most of her projects in C or C++. Not any more. “Now, the majority of computer science curricula use Python as their introductory language,” observes Intel software engineering manager David Stewart. It is possible to graduate in Computer Science with significant experience in Python, Ruby, and several scripting languages but without ever having used C in a serious way.
Other influences are growing as well. Use of Android as a platform for connected or user-friendly embedded designs opened the door to Android’s native language, Java. At the other extreme on the complexity scale, hobby developers migrating in through robotics, drones, or similar small projects often come from an Arduino or Raspberry-Pi background. Their experience may be in highly compact, simple program-generator environments or small-footprint languages like B#.
The pervasiveness of talk about the Internet of Things (IoT) is also having an influence, bringing Web developers into the conversation. If the external interface of an embedded system is a RESTful Web presence, they ask, shouldn’t the programming language be JavaScript, or its server-side relative Node.js? Before snickering, C enthusiasts should observe that node.js, a scalable platform heavily used by the likes of PayPal and Walmart in enterprise-scale development, has the fastest-growing ecosystem of any programming language, according to the tracking site modulecounts.com.
The momentum for a choice like Node.js is partly cultural, but also architectural. IoT thinking distributes an embedded system’s tasks between the client side—attached to the real world, and often employing minimal hardware—and, across the Internet, the server side. It is natural for the client side to look like a Web app supported by a hardware-specific library, and for the server side to look like a server app. Thus to a Web programmer, an IoT system looks like an obvious use for JavaScript and Node.js.
Tomi Engdahl says:
Isolating Safety and Security Features on the Xilinx UltraScale+ MPSoC
https://www.mentor.com/embedded-software/resources/overview/isolating-safety-and-security-features-on-the-xilinx-ultrascale-mpsoc-a0acdb23-4116-4689-be74-f4ddfca02545?contactid=1&PC=L&c=2017_08_31_esd_newsletter_update_v8_august
It’s quickly becoming common practice for embedded system developers to isolate both safety and security features on the same SoC. Many SoCs are exclusively designed to take advantage of this approach and the Xilinx® UltraScale+™ MPSoC is one such chip.
Tomi Engdahl says:
Develop Software Earlier: Panelists Debate How to Accelerate Embedded Software Development
A lively and open discussion explored tools and methodologies to accelerate embedded software development.
http://s3.mentor.com/public_documents/whitepaper/resources/mentorpaper_101868.pdf
Lauro:
What are the technologies to develop software and what are the most effective ways to use them to start and finish software development earlier?
Russ:
When people think about starting to develop software earlier, they often don’t think about emulation. Software can be developed on emulation. Emulation represents the earliest cycle-accurate representation of the design capable of running software. An emulator can be available very early in the design cycle, as soon as the hardware developers have their design to a point where it runs, albeit not fully verified, and not ready to tape-out. At that point, they can load memory images into the design and begin running it. Today’s emulators support powerful debug tools that give much better debug visibility into the system, and make it practical to start software development on emulation, sooner than ever before.
Jason:
There are six techniques for pre-silicon software development: FPGA prototyping; emulation; cycle-accurate/RTL simulation; fast instruction set simulation; fast models and an emulation hybrid; operating system simulation (instruction set abstracted away). Most projects adopt two or three since it is too difficult to learn, set up and maintain all of them.
When you start a project, you write code, and the first thing that occupies your mind is how to execute that code. Software developers are creative about finding ways to execute their software. It could be on the host machine, a model, or a prototype.
Tomi Engdahl says:
Competing on Speed: Bringing Intelligence into the Customer Experience
https://www.aricent.com/tech-vision-2017/overview
Combined, these trends are the basis for creating revolutionary products that learn what their customers want before they know themselves.
Energize The Core: Focus on what matters
Sustainable growth requires investment in core products to keep them relevant and refreshed, as well as in R&D to develop the next generation of products and services that will replace the core.
Digital Services Supercycle: Generate predictable growth from platforms
The digital economy is being built on platforms. The imperative is to keep pace with rapidly evolving platforms and to attract and empower a new generation of digital developers to build your products.
AI Stimulus: Optimal is the new functional
Artificial intelligence will soon be at the core of just about every product and service. The critical question leaders must ask is: “What is the problem we are trying to solve with AI?”
Frictionless Design: Effortless and surprising experiences
Friction is why customers change brands. Good design eliminates friction, opening possibilities for more natural interactions. Designing for security is critical for building trust and reducing friction.
Tomi Engdahl says:
Improving Data Security
Why hardware encryption is so important for embedded storage.
https://semiengineering.com/improving-data-security/
For industrial, military and a multitude of modern business applications, data security is of course incredibly important. While software based encryption often works well for consumer and some enterprise environments, in the context of the embedded systems used in industrial and military applications, something that is of a simpler nature and is intrinsically more robust is usually going to be needed.
Self encrypting drives utilize on-board cryptographic processors to secure data at the drive level. This not only increases drive security automatically, but does so transparently to the user and host operating system. By automatically encrypting data in the background, they thus provide the simple to use, resilient data security that is required by embedded systems.
Embedded vs. enterprise data security
Both embedded and enterprise storage often require strong data security. Depending on the industry sectors involved this is often related to the securing of customer (or possibly patient) privacy, military data or business data. However that is where the similarities end. Embedded storage is often used in completely different ways from enterprise storage, thereby leading to distinctly different approaches to how data security is addressed.
Hardware-based full disk encryption
For embedded applications where access control is far from guaranteed, it is all about securing the data as automatically and transparently as possible. Full disk, hardware based encryption has shown itself to be the best way of achieving this goal.
Full disk encryption (FDE) achieves high degrees of both security and transparency by encrypting everything on a drive automatically. Whereas file based encryption requires users to choose files or folders to encrypt, and also calls for them to provide passwords or keys to decrypt them, FDE works completely transparently. All data written to the drive is encrypted, yet, once authenticated, a user can access the drive as easily as an unencrypted one. This not only makes FDE much easier to use, but also means that it is a more reliable method of encryption, as all data is automatically secured. Files that the user forgets to encrypt or doesn’t have access to (such as hidden files, temporary files and swap space) are all nonetheless automatically secured.
While FDE can be achieved through software techniques, hardware based FDE performs better, and is inherently more secure. Hardware based FDE is implemented at the drive level, in the form of a self encrypting SSD. The SSD controller contains a hardware cryptographic engine, and also stores private keys on the drive itself.
Tomi Engdahl says:
Microcontroller Forth cross compiler
Forth cross compiler for tiny microcontrollers
https://hackaday.io/project/26328-microcontroller-forth-cross-compiler
Forth cross compiler for 8051, AVR, MSP430, PIC, and STM8 microcontrollers.
The compiler is suitable for parts with as little as 1K program memory and 64 bytes RAM. The kernel code occupies 100-500 bytes, and it’s recommended to reserve about 24 bytes for the stacks. At this size, only a bare minimum of Forth words are supported. There is no resident interpreter or compiler.
The assemblers, compiler, and kernel are written in Forth and are all very simple. The user is encouraged to make modifications as they see fit.
Tomi Engdahl says:
MCUs Get Expanded Benchmark
http://www.eetimes.com/document.asp?doc_id=1332268&
The EEMBC group expanded its benchmark for ultra-low power microcontrollers, adding a test for peripherals to its existing benchmark for cores. At press time, the group had more than a dozen results, mainly from Ambiq and STMicroelectronics, that it planned to post on its website.
The ULPMark-PeripheralProfile includes 10 one-second tasks that use a combination of four peripherals — analog-to-digital converters, real-time clocks, serial peripheral interfaces, and pulse-width modulators. All of the peripheral and core tests essentially monitor work per joule.
Many microcontrollers “may use the same ARM cores, but how they are implemented varies greatly,” said Markus Levy, EEMBC’s president.
Though standard benchmarks are based on 3.0-V parts, vendors also can publish low-volt results that, at 1.8 V, show significant improvements.
EEMBC sells the benchmark software for $2,500. For an extra $1,000, users get Arduino and Raspberry Pi test boards as well as a new energy-monitoring board designed by STMicroelectronics that will be available at the end of the month.
Tomi Engdahl says:
CircuitPython – a Python implementation for teaching coding with microcontrollers
https://github.com/adafruit/circuitpython
Adafruit CircuitPython is an open source derivative of MicroPython for use on educational development boards designed and sold by Adafruit.
CircuitPython, a MicroPython derivative, implements Python 3.x on microcontrollers such as the SAMD21 and ESP8266.
Tomi Engdahl says:
Why Hardware Emulation’s OS is Like a Computer System
http://www.eetimes.com/author.asp?section_id=36&doc_id=1332292&
Mentor’s Charley Selvidge has been thinking that the operating system of a hardware emulator is a natural evolution of the way software systems are built for emulators.
All of this comes in handy as he explains the landscape of hardware emulation, something about which he knows a thing or two. In the late 1990s, Charley was a founder of Virtual Machine Works, which was located a stone’s throw from MIT in Cambridge, Massachusetts. VMW, as it was known, was acquired by IKOS Systems in 1998, which subsequently became part of Mentor in 2002.
He moves his analogy to emulators and notes: “They consist of a hardware execution platform at the bottom for running a model of a digital chip and a set of application-oriented tasks to run on the emulator.” These tasks often have high-level objectives, such as characterizing the amount of power a chip is consuming or processing a software application that runs on a processor inside the chip. In either case, the entire chip needs to be considered as part of the task.
Undeniably, he adds, these are high-level and complex tasks routinely performed by an emulator. A set of intermediate services standard to emulation inside of an operating system could insulate high-level tasks from the low-level, machine-specific details associated with emulation.
For this reason, Charley affirms, operating systems are an interesting concept for an emulator.
Hardware and software scalability in hardware emulation
All emulators are based on some kind of modeling component; that is, a device that can model a piece of a chip. A multitude of these modeling components are assembled together in a small, medium, or large number to build systems of various sizes. On top of this underlying hardware is a software compilation system. An emulation compiler reads in a database or model of an integrated circuit and writes out a datastream that configures the array of modeling components in the emulator to form an image of the chip.
Typically, integrated circuits are designed via computer programs that execute a description of the circuit written in one of a few computer languages, generically called hardware description languages (HDLs). The most commonly used HDLs are Verilog, SystemVerilog, and VHDL. The circuit description defines the behavior of the circuit. These descriptions are synthesized into a real integrated circuit and compiled into a model that runs on an emulator.
According to Charley, with a model for a chip, the designer would load it onto the emulator, a machine-specific task performed by the emulator’s OS software.
Tomi Engdahl says:
CPU or FPGA for image processing: Which is best?
http://www.vision-systems.com/articles/print/volume-22/issue-8/features/cpu-or-fpga-for-image-processing-which-is-best.html?cmpid=enl_vsd_vsd_newsletter_2017-09-18
As multicore CPUs and powerful FPGAs proliferate, vision system designers need to understand the benefits and trade-offs of using these processing elements.
This increase in performance means designers can achieve higher data throughput to conduct faster image acquisition, use higher resolution sensors, and take full advantage of some of the latest cameras on the market that offer the highest dynamic ranges. An increase in performance helps designers not only acquire images faster but also process them faster. Preprocessing algorithms such as thresholding and filtering or processing algorithms such as pattern matching can execute much more quickly. This ultimately gives designers the ability to make decisions based on visual data faster than ever.
As more vision systems that include the latest generations of multicore CPUs and powerful FPGAs reach the market, vision system designers need to understand the benefits and trade-offs of using these processing elements. They need to know not only the right algorithms to use on the right target but also the best architectures to serve as the foundations of their designs.
Tomi Engdahl says:
FPGA Clocks for Software Developers (or Anyone)
https://hackaday.com/2017/09/21/fpga-clocks-for-software-developers-or-anyone/
It used to be that designing hardware required schematics and designing software required code. Sure, a lot of people could jump back and forth, but it was clearly a different discipline. Today, a lot of substantial digital design occurs using a hardware description language (HDL) like Verilog or VHDL. These look like software, but as we’ve pointed out many times, it isn’t really the same. [Zipcpu] has a really clear blog post that explains how it is different and why.
[Zipcpu] notes something we’ve seen all too often on the web. Some neophytes will write sequential code using Verilog or VHDL as if it was a conventional programming language. Code like that may even simulate. However, the resulting hardware will — at best — be very inefficient and at worst will not even work.
We did mildly disagree with one statement in the post: “…no digital logic design can work without a clock.”
Clocks for Software Engineers
http://zipcpu.com/blog/2017/09/18/clocks-for-sw-engineers.html
Tomi Engdahl says:
Key Considerations for Software Updates for Embedded Linux and IoT
http://www.linuxjournal.com/content/key-considerations-software-updates-embedded-linux-and-iot
The Mirai botnet attack that enslaved poorly secured connected embedded devices is yet another tangible example of the importance of security before bringing your embedded devices online. A new strain of Mirai has caused network outages to about a million Deutsche Telekom customers due to poorly secured routers. Many of these embedded devices run a variant of embedded Linux; typically, the distribution size is around 16MB today.
Unfortunately, the Linux kernel, although very widely used, is far from immune to critical security vulnerabilities as well. In fact, in a presentation at Linux Security Summit 2016, Kees Cook highlighted two examples of critical security vulnerabilities in the Linux kernel: one being present in kernel versions from 2.6.1 all the way to 3.15, the other from 3.4 to 3.14. He also showed that a myriad of high severity vulnerabilities are continuously being found and addressed—more than 30 in his data set.
Although the processes and practices of development teams clearly have a critical impact on the (in)security of software in embedded products, there is a clear correlation between the size of the software project’s code base and the number of bugs and vulnerabilities as well. Steve McConnell in Code Complete states there are 1–25 bugs and vulnerabilities per 1,000 lines of code.
Seasoned software developers always seek to reduce the size of the code base through refactoring and the reuse of functionality in libraries, but with the never-ending demand for more features and intelligence in every product, it is clear that the amount of software in embedded devices will only grow. This also necessarily implies that there will be more bugs and vulnerabilities as well.
In the first question, we simply asked if software updates are being deployed to their embedded products today and, if so, which tools were used.
45.5% of the respondents said that updates were never being deployed to their products. Their only way to get new software into customers’ hands was to manufacture hardware with the new software and ship the hardware to the customers.
Roughly the other half, 54.5%, said that they did have a way to update their embedded products, but the method was built in-house. This also includes local updates, where a technician would need to go to a device physically and update the software from external media, such as a USB stick. Yet another category was devices enabled for remote updates, but where you could update only one at a time, essentially precluding any mass updates. Finally, some had the capability to deploy mass updates to all devices in a highly automated fashion.
One of the key findings here was that virtually nobody reused a third-party software updater—they all re-invented the wheel!
You can broadly classify embedded updaters into image- or package-based. Image-based updaters will work on the block level and can replace an entire partition or storage device. Package-based updaters work on the file level, and some of them, like RPM/YUM, are well known in Linux desktop and server environments as well.
Image-based updaters have, in general, a clear preference in the embedded space. The main reason for this is that they typically provide atomicity during the update process. Atomicity means that 1) an update is always either fully applied or not at all, and 2) no other component except the software updater can ever see a partial update. This property is very important for embedded updaters
Package-based approaches generally suffer from not being able to implement atomic updates, but they have some advantages as well. The installation time of an update is shorter, and the amount of bandwidth used also can be smaller than for image-based updates.
The Embedded Environment
People familiar with Linux desktop and server systems might ask why we are not just using the same tools and processes that we know from these systems, including package managers (such as rpm, dpkg), VMs and containers to carry out software updates. To understand this, it is important to see in which aspects an embedded device is different with regards to applying software updates.
Unreliable Power
I already touched on this property of an embedded system, and this is a widely known issue: an embedded device can, in general, lose power at any time.
Unreliable Network
Embedded devices typically are connected using some kind of wireless technology. Although Wi-Fi is used in some devices, it is more common to use wireless standards that have longer range but lower data rates, for example 3G, LoRa, Sigfox and protocols based on IEEE 802.15.4 (low-rate wireless personal area networks).
It is tempting to assume that high-speed wireless networks will be generally adopted by embedded devices as technology evolves
Expensive Physical Access
Once a large-scale issue that cannot be fixed remotely occurs, the cost of remediating it is typically very high. The reason is that embedded devices are typically widely distributed geographically.
For example, a manufacturer of smart energy grid devices can install these devices in thousands of homes in several countries. If there is a critical issue with an update to the Linux kernel that cannot be fixed remotely, the cost of either sending a service technician to all those homes or asking customers to send devices back to the vendor can be prohibitive.
Five-to-Ten-Year Device Lifetime
Technology moves very fast, and it’s typical to replace common consumer electronics devices like smartphones and laptops every two to three years.
However, more expensive consumer devices like high-end audio systems and TVs are replaced less frequently. Industrial devices that do not directly interact with humans typically have even longer lifetimes. For example, robots used on factory floors or energy grid devices easily can reach a ten-year lifetime.
In conclusion, in the embedded environment, people need to be very wary of the risk of “bricking” devices.
Key Criteria for Embedded Software Updaters
Robust and Secure
Atomic Updates
Consistent Deployments
Authenticity Checks before Updates
Sanity Checks after Updates
Integration with Existing Development Workflow
Bandwidth
Downtime during Update
Deployment Management
Conclusion
Many design trade-offs need to be considered in order to deploy software updates to IoT devices. Although historically most teams have decided to implement their homegrown updaters, the recent appearance of several open-source software updaters for embedded Linux means that we should be able to stop re-inventing the wheel.
Resources
SWUpdate is a very flexible client-side embedded updater for full image updates, licensed GPL version 2.0+.
Mender, the project the author of this article is involved in, focuses on ease of use and consists of a client updater and management server with a UI and is licensed under Apache License 2.0.
Tomi Engdahl says:
James Somers / The Atlantic:
Experts shed light on how model-driven engineering can help programmers solve problems and reduce software errors more effectively than traditional programming
The Coming Software Apocalypse
A small group of programmers wants to change how we code—before catastrophe strikes
https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/
It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.
“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.”
Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
The software did exactly what it was told to do. The reason it failed is that it was told to do the wrong thing.
Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break.
Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination.
This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.
Like everything else, the car has been computerized to enable new features.
Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.
As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
“Software engineers don’t understand the problem they’re trying to solve, and don’t care to.”
What made programming so difficult was that it required you to think like a computer.
“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.”
“There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it.
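One common defensive technique in safety-critical embedded code, and not necessarily what Toyota's code did, is to store each critical variable twice, with the second copy bit-inverted, and to check the pair on every read so that a single bit flip is detected rather than silently acted on. A minimal sketch in C:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* A critical value stored twice: plain and bit-inverted.
 * A single bit flip in either copy breaks the invariant value == ~shadow. */
typedef struct {
    uint32_t value;
    uint32_t shadow;   /* always kept as ~value */
} critical_u32;

static void critical_set(critical_u32 *c, uint32_t v)
{
    c->value  = v;
    c->shadow = ~v;
}

static uint32_t critical_get(const critical_u32 *c)
{
    if (c->value != (uint32_t)~c->shadow) {
        /* Corruption detected: fail safe instead of acting on a bad value. */
        fprintf(stderr, "memory corruption detected\n");
        exit(EXIT_FAILURE);              /* a real system would enter a safe state */
    }
    return c->value;
}

int main(void)
{
    critical_u32 throttle_target;
    critical_set(&throttle_target, 42);

    throttle_target.value ^= (1u << 7);  /* simulate a single bit flip */

    printf("%u\n", critical_get(&throttle_target));  /* flip is detected, program stops */
    return 0;
}
```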
There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.
Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,”
“Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant.”
Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?
Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it. It was like taking off a blindfold. Granger called the project “Light Table.”
In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
“A lot of those things seemed like misinterpretations of what I was saying,”
Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
“Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”
Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car.
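That elevator rule maps naturally onto the kind of plain, rule-per-branch code a model-based tool might emit. The sketch below is hand-written C for illustration, not actual SCADE output, and the names are made up:

```c
#include <stdbool.h>
#include <stdio.h>

/* Inputs and state of a toy elevator controller. */
typedef struct {
    bool door_open;
    bool lobby_button_pressed;
    bool moving;
    int  target_floor;
} elevator_t;

/* One rule from the model: "when the door is open and someone presses the
 * button for the lobby, close the door and start moving the car." */
static void step(elevator_t *e)
{
    if (e->door_open && e->lobby_button_pressed) {
        e->door_open    = false;   /* close the door */
        e->target_floor = 0;       /* lobby */
        e->moving       = true;    /* start moving */
    }
}

int main(void)
{
    elevator_t e = { .door_open = true, .lobby_button_pressed = true };
    step(&e);
    printf("door_open=%d moving=%d target=%d\n", e.door_open, e.moving, e.target_floor);
    return 0;
}
```

In a model-based flow the designer would draw or state the rule itself; code of roughly this shape would then be generated and kept consistent with the model automatically.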
“The people know how to code. The problem is what to code.”
“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.
Of course, for this approach to succeed, much of the work has to be done well before the project even begins.
The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous. In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”
Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming.
Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,”
Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort.
traditional projects begin with a massive requirements document in English, which specifies everything the software should do
The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.
The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why.
“it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements.
Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way.
Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.
It is a pattern that has played itself out before.
You could do all the testing you wanted and you’d never find all the bugs.
when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time
No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
“Few programmers write even a rough sketch of what their programs will do before they start coding.”
TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer.
The language was invented by Leslie Lamport, a Turing Award–winning computer scientist.
For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.”
“The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems.
Newcombe isn’t so sure that it’s the programmer who is to blame.
Most programmers who took computer science in college have briefly encountered formal methods.
“I needed to change people’s perceptions on what formal methods were,”
Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’”
In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that they could remotely take control of a Jeep Cherokee over the Internet.
“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code.
“There are lots of bugs in cars,” Gerard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.
“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,”
One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
“When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”
Tomi Engdahl says:
We’re Using the Word Firmware Wrong
https://hackaday.com/2017/09/29/were-using-the-word-firmware-wrong/
I had an interesting discussion the other day about code written for an embedded system. I was speaking with Voja Antonic about ‘firmware’. The conversation continued forward but I noticed that he was calling it ‘software’. We later discussed it and Voja told me he thought only the parts of the code directly interacting with the microcontroller were firmware; the rest falls under the more generic term of software. It really had me wondering: where does firmware stop being firmware and become merely software?
My go-to sources are generally the Merriam-Webster and Oxford English dictionaries, and both indicate that firmware is a type of software that is indelible:
Permanent software programmed into a read-only memory.
–Oxford English Dictionary
computer programs contained permanently in a hardware device (such as a read-only memory)
–Merriam-Webster Dictionary
According to this definition, I have never written a single bit of firmware. Everything I have written has been embedded software. But surely this is a term that must change with the times as technology progresses, so I kept digging.
firmware is a type of computer program that provides the low-level program control for the device’s specific hardware.
–Wikipedia
There we are. I’ve still been using the term wrong but this fits what Voja put forth in our conversation.
Tomi Engdahl says:
Linux kernel long term support extended from two to six years
https://www.theregister.co.uk/2017/10/03/linux_kernel_long_term_support_extended_from_two_to_six_years/
Google wants Android devices to survive four OS upgrades, even if LTS releases make Linus a bit grumpy
Long-term-support (LTS) editions of the Linux Kernel will henceforth be supported for six years, up from the current two.
News of the extension emerged at the “Linaro Connect” conference at which Googler Iliyan Malchev announced it, saying he had Linux royalty Greg Kroah-Hartman’s permission to break the news.
In his talk, Malchev explained that silicon-makers have to pick a version of the Linux kernel with which to work, but that commercial necessities mean they may do so knowing that there will be perhaps just a year of support remaining.
“The end result is that LTS cannot cover the device’s lifecycle,” he said. “And LTS is where all the critical bug fixes from upstream trickle down.”
“What Google wants to see is when a device is launched it gets upgraded four times to new versions of Android. That is basically the lifespan of a phone, but you get lucky if you get one of these upgrades.”
A six-year support window, by contrast, will mean years of upgradability for Android devices, because each fifth Linux kernel receives LTS status. With the kernel on an eight-week release cycle, that means a new LTS version about every nine months.
Keynote: Iliyan Malchev (Google) – SFO17-400K1
http://connect.linaro.org/resource/sfo17/sfo17-400k1/
Tomi Engdahl says:
Toward System-Level Test
What’s working in test, what isn’t, and where the holes are.
https://semiengineering.com/toward-system-level-test/
The push toward more complex integration in chips, advanced packaging, and the use of those chips for new applications is turning the test world upside down.
Most people think of test as a single operation that is performed during manufacturing. In reality it is a portfolio of separate operations, and the number of tests required is growing as designs become more heterogeneous and as they are used in markets such as automotive and industrial, where chips are expected to last 10 to 20 years. In fact, testing is being pushed much further forward into the design cycle so that test strategies can be defined early and built into the flow. Testing also is becoming an integral part of post-manufacturing analysis as a way of improving yield and reliability, not just in the chip, but across an entire system in which that chip and other chips are being used.
Under the “test” banner are structural, traffic and functional tests, as well as built-in self-test to constantly monitor components. The problem is that not all of the results are consistent, which is why there is a growing focus on testing at a system level.
“From a system point of view, the focus is on the traffic test,” said Zoe Conroy, test strategy lead at Cisco. “But just putting a sensor in the corner of a die doesn’t measure anything. You need to put it right in the middle of a hotspot. The challenge is understanding where that is because hotspots found during ATE are different than the hotspots found during a traffic test. You also need to understand how memory is used in functional mode because what we’ve found at 28nm is that the whole memory is not being tested.”
Problems are showing up across the test spectrum as existing technologies, flows and expertise are applied to new problems—or at least more complex problems.
“With deep learning and machine learning, Nvidia is selling a lot of boards and systems into the data center,”
“You can’t test everything at the same time,” said Derek Floyd, director of business development for power, analog and controller solutions at Advantest. “No one will pay for it. Tests need to be multi-domain, but that’s very different. ATE is predictably deterministic. It’s a nice clean environment. With system-level test, you’re adding in things like cross-talk, jitter, parametric effects, and you need to do code monitoring. But you don’t necessarily get the access you want inside of the chip, so you look at the limits and what is the most critical thing to isolate in a design.”
Defining system-level test
System-level test is the ability to test a chip, or multiple chips in a package, in the context of how it ultimately will be used. While the term isn’t new, the real-world application of this technology has been limited to a few large chipmakers.
That is beginning to change, and along with that the definition is beginning to evolve. Part of the reason is the growing role that semiconductors are playing in various safety-critical markets, such as automotive, industrial and medical. It is also partly due to the shift away from a single processing element to multiple processor types within a device, including a number of accelerators such as FPGAs, eFPGAs, DSPs and microcontrollers. But even within a variety of mobile devices, the cloud, or in machine learning/AI, understanding the impact of real-world use cases on a chip’s performance—and such physical effects as thermal migration and its effect on electromigration and mean time to failure—are becoming critical metrics for success.
Tomi Engdahl says:
Authentication Flash: Closing the Security Gap Left by Conventional NOR Flash ICs
https://www.eetimes.com/document.asp?doc_id=1332289&
In response to demand from security-conscious OEMs, the manufacturers of modern microcontrollers and systems-on-chip (SoCs) commonly equip their products with a broad range of security capabilities: standard, off-the-shelf 32-bit MCUs for mainstream, non-financial applications will today often feature a hardware cryptographic accelerator, a random number generator (RNG) and secure memory locations.
But serial Flash memory – the location in which much of an OEM’s precious intellectual property (IP) is stored – has traditionally been more vulnerable than the SoC or microcontroller. Security weaknesses in the companion Flash memory to an MCU or SoC expose OEMs to the commercially damaging risk of product theft due to the cloning of reverse engineered PCB designs. This article explains how Authentication Flash can be uniquely and securely paired to an authorized host controller.
Today’s security loophole
A fundamental security requirement for every reputable OEM is to prevent the possibility of theft or cloning of the OEM’s IP, including application code that is stored in external serial NOR Flash.
Of course, much of the value embedded in an electronics end product is not secret. Take the example of a smart home Internet of Things (IoT) thermostat: a painstaking tear-down analysis of the thermostat’s board assembly will enable all the components to be precisely identified and the board layout to be faithfully replicated by any factory that wishes to clone the product. The hardware design is not secret.
The application code is secret – or rather, it ought to be. An electronics system, however, is only as strong as its weakest link. Today, the main SoC or MCU is normally strongly protected by encryption, anti-tampering and secure storage capabilities implemented in hardware and software. So if an attacker wishes to clone the product’s application code, its most likely entry point is an external Flash memory IC.
For this reason, OEMs today commonly ‘protect’ their code storage hardware with a unique identifier (UID) stored in partitioned memory space in the Flash IC. In truth, however, a UID offers only a trivial barrier to attack. Any engineer with some security knowledge will be able to locate and identify its UID and easily disable the pairing between the MCU and code storage hardware. Once the pairing is removed, the OEM’s root of trust is broken. The code stored on the device can be copied, and cloning of the thermostat design can begin in earnest.
The solution: secure, dynamic authentication
The remedy for this problem is easy to design in theory: the UID needs to be different every time the memory is challenged by the host. But the advantage of the fixed UID used today is its ease of implementation: it just needs to be programmed once into the Flash memory, and once into the host controller; then the two values may be simply compared to authenticate the Flash device.
Symmetric encryption of memory ID
This is the problem that Winbond has set out to solve with its W74M family of Authentication Flash ICs (see Figure 1). Winbond is best known for its broad portfolio of serial NOR and NAND Flash memory ICs: it is the world’s top producer of serial Flash, with around a 30% share of the market. In 2016, Winbond shipped 2.1 billion units of its SpiFlash® serial Flash ICs.
The root key is, however, never directly transmitted between host and memory (the ‘challenger’ and ‘responder’). Instead, an encrypted message (a Hash-based Message Authentication Code, or HMAC) is generated by a combination of the root key and a dynamic element such as a random number; this combination is then processed through an encryption algorithm, the SHA-256.
To authenticate the W74M memory, the host controller compares the value of the memory’s HMAC against the value it computes by use of its root key and the same random number processed through SHA-256. If the values match, normal memory operations can proceed.
Because the HMAC is generated in part by a dynamic element, such as a random number, the value of the HMAC is different every time it is generated.
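As a rough illustration of that challenge-response flow, here is a small C sketch. The hmac_sha256() function is a do-nothing placeholder standing in for a real HMAC-SHA256 primitive from a crypto library, and the key and challenge bytes are made-up example values; only the protocol shape matters: a shared root key that never crosses the bus, a fresh random challenge for every authentication, and a comparison of the two independently computed responses.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HMAC_LEN 32   /* SHA-256 output size in bytes */

/* Placeholder standing in for a real HMAC-SHA256 primitive.
 * NOT a real HMAC, illustration of the data flow only. */
static void hmac_sha256(const uint8_t *key, size_t key_len,
                        const uint8_t *msg, size_t msg_len,
                        uint8_t out[HMAC_LEN])
{
    memset(out, 0, HMAC_LEN);
    for (size_t i = 0; i < key_len; i++) out[i % HMAC_LEN] ^= key[i];
    for (size_t i = 0; i < msg_len; i++) out[i % HMAC_LEN] ^= msg[i];
}

/* Made-up example key: both sides hold the same pre-provisioned root key;
 * the key itself never travels between host and memory. */
static const uint8_t root_key[16] = { 0x2b, 0x7e, 0x15, 0x16 };

int main(void)
{
    uint8_t challenge[16] = { 0x01, 0x02, 0x03, 0x04 }; /* normally from the host's RNG */
    uint8_t mac_flash[HMAC_LEN], mac_host[HMAC_LEN];

    /* Flash device: respond to the host's random challenge with its key. */
    hmac_sha256(root_key, sizeof root_key, challenge, sizeof challenge, mac_flash);

    /* Host: compute the expected response with its own copy of the key. */
    hmac_sha256(root_key, sizeof root_key, challenge, sizeof challenge, mac_host);

    /* Matching responses show the flash holds the shared key; a fresh
     * challenge each time makes replaying an old response useless. */
    puts(memcmp(mac_flash, mac_host, HMAC_LEN) == 0 ? "authentic" : "reject");
    return 0;
}
```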
Tomi Engdahl says:
For Software Developers, Hardware Emulation Rules!
https://www.mentor.com/products/fv/techpubs/download?id=94829&contactid=1&PC=L&c=2017_10_24_veloce_rizzatti_software_rules_wp
This article will discuss hardware emulation’s ability to:
Confirm hardware and software interact correctly
Integrate newly designed hardware and software early to resolve bugs
Shave months off a project development schedule
Handle sequential process, undeterred by design sizes
At speeds of several hundred kilohertz to a few megahertz, it can boot Linux in less than one hour.
Hardware emulation can trace a software bug into the hardware, or a hardware bug that shows up in the software’s behavior, with the necessary speed, performance, and capacity to handle complicated debugging scenarios, something no other verification tool can do.
Tomi Engdahl says:
NXP to Bridge MCU & AP with ‘Crossovers’
https://www.eetimes.com/document.asp?doc_id=1332492
To use an MCU or not (and to opt for an apps processor instead)? This is an eternal question for embedded system designers, and one that NXP Semiconductors hopes to answer by launching a new high-end MCU.
NXP believes that its ARM Cortex-M7-based processor, the i.MX RT series, will fill the cost and real-time deterministic operations gaps in applications processing. It will also solve issues microcontrollers have not yet been able to address — higher performance and richer user interfaces. NXP calls it a “crossover processor.”
Not everyone in the analyst community is entirely sold on this “crossover” idea, since the lines between MCUs and apps processors (AP) are already blurring.
But NXP is sticking by its concept.
More specifically, NXP’s crossover processor is “the highest performing ARM Cortex-M7 based device with real time operation and an applications processor-level of functionality,” according to the company. At 600MHz, it is 50 percent faster than any other Cortex-M7 product and more than twice as fast as existing Cortex-M4 products, the company said. The i.MX RT 1500 also offers an interrupt latency as low as 20 nanoseconds, which NXP claims as “the lowest among all ARM Cortex-based products in the world.”
Who will benefit?
So, what’s the target market for crossover processors? NXP believes everything from smart connected home appliances, healthcare devices to drones and factory automation equipment will benefit.
Lees observed, “IoT has put things on fire.” On one hand, traditional MCUs are designed with no wireless connectivity, display support, security or massive data processing in mind. However, compared to MCUs, apps processors remain much more costly and harder to program. IoT devices, in Lees’ opinion, have exposed embedded system designers’ needs that have been addressed neither by apps processors nor MCUs.
Tomi Engdahl says:
Evolution Of The MCU
https://semiengineering.com/evolution-of-the-mcu/
As selling prices plunge, microcontroller companies are looking for new ways to achieve economies of scale.
Rising complexity has been inducing MCU makers to move to the next process nodes, where more memory, connectivity and processing can be crammed into the same space. This is Moore’s Law applied to a different market, and for 32-bit MCUs the leading-edge node today is 40nm. Companies are working on 32/28nm versions, as well.
“The problem is that microcontroller companies develop dozens or even hundreds of SKUs (stock-keeping units), in part because the pins assigned to serial I/Os vary,” said Geoffrey Tate, CEO of Flex Logix. “Some are SPI (serial peripheral interface), some are UART (universal asynchronous receiver/transmitter). Or they provide all the hardware and bond out differently. But at 40nm, mask costs are going up, so dozens of variations cost a lot of money. It takes a certain number of look-up tables to program a serial I/O.”
One way around that is to add flexibility into the microcontroller itself with an embedded FPGA, so these devices can be programmed for a variety of markets rather than developing a new MCU for each application.
A third approach is to be more efficient with verification, reducing the amount of time needed on the back end of the MCU design flow.
“This is why Portable Stimulus is so interesting,” said Frank Schirrmeister, senior group director for product management and marketing for emulation, FPGA-based prototyping and hardware/software enablement at Cadence. “It makes it easier to understand. Some of these MCUs are growing up to be more system-like, so what some of the big microcontroller companies are doing is selling those designs with customized software development around it. That can be used to validate it.”
Tomi Engdahl says:
Move Data Or Process In Place?
https://semiengineering.com/move-data-or-process-in-place/
Part 1: Moving data is expensive, but decisions about when and how to move data are very context-dependent. System and chip architectures are heading in opposite directions.
Should data move to available processors or should processors be placed close to memory? That is a question the academic community has been looking at for decades. Moving data is one of the most expensive and power-consuming tasks, and is often the limiter to system performance.
Within a chip, Moore’s Law has enabled designers to physically move memory closer to processing, and that has remained the most economical way to approach the problem. At the system level, different cost functions are leading to alternative conclusions. (Part two will examine decisions at the system level.)
Tomi Engdahl says:
How To Handle Concurrency
https://semiengineering.com/how-to-handle-concurrency/
System complexity is skyrocketing, but tool support to handle concurrency and synchronization of heterogeneous systems remains limited.
The whole industry has been migrating to heterogeneous architectures over the past couple years because they are more efficient. They use less power, and there is less of an emphasis on putting blocks to sleep and waking them up.
“Homogeneous processing is not the answer,” says Kurt Shuler, vice president of marketing at ArterisIP. “That only does one thing well. You can have a chip with six or more different types of software acceleration and slice up the processing. But the key is what you do in hardware versus software.”
There are several levels to the problem. “We need to separate the true system architect from the SoC architect,” says Drew Wingard, CTO at Sonics. “The system architect is responsible for the whole thing including the software and has a wider palette of choices than the chip architect. We see different choices being made from the chip people in system companies versus the chip people within semiconductor companies.”
This is because the semiconductor company has to build something that will service multiple customers and multiple system architects. “There is extra work to make the device more general purpose,” adds Wingard. “It may require a reduction in abstraction, a reduction in the palette of choices that the chip architect has available to them so that they can look conventional to a larger group of people. The system architect in a systems company can target a very specific product or service. They can make more tradeoffs and they do not need to make it as general purpose.”
Each user is looking for different solutions. “In the beginning people needed simulation,” says Simon Davidmann, CEO of Imperas. “Then they needed support for heterogeneous systems, then they needed debug environments, and then they needed verification tools to help them improve the quality and get more confidence in its correctness. Today we are seeing them want tools that help confirm that a system is sort of secure.”
Execute and observe
The starting point for everything is an executable model. “One of the tools put in place to help system architects are virtual prototypes,”
The utility of the virtual prototype is to provide a platform for simulation and debug. “Eclipse or GDB work fine for single processor but when there are multiple, they are each in a separate window and they are not well controlled,” says Davidmann. “With symmetric multi-processing, GDB allows you to see threads, but there was no good way to control and debug when heterogeneous processing was added.”
Debug needs to span both the hardware and software. “The user interface of the system debugger needs to be connected into the HW debugger,”
System-level verification
Once it’s possible to execute the system model, that needs to be migrated into a verification task. There are several ways to do that.
“You can add assertions into the system so you get a feel for how the operating system is running,”
But there are other views. “With anything involving performance or these types of analysis you don’t want the actual software to run,” says Schirrmeister. “You need to be a lot more targeted. At some point you may need to run actual software, but not to begin with. It is about effectively creating stimulus to get you into a stress situation for the software partitioning. Portable Stimulus (PS) is advancing scenario-driven verification so that you can create tests that stimulate software tasks on certain processors causing specific transactions, and you will see if the hardware reacts appropriately.”
The Portable Stimulus Working Group has been discussing a range of alternatives between these two extremes. “The Hardware Software Interface (HSI) layer will enable various abstractions of software to be substituted within a test,”
Conclusions
Most aspects of system design and verification remain ad-hoc. While there may only be a few people who make the decisions about partitioning of software and hardware, scheduling, synchronization, and architecture, the effects of those decisions are felt by a lot of people with few tools available to help them.
Relief is on the way, but the EDA industry has been burned repeatedly in this area and investment is tentative. The predominant approach is to find ways to address the ‘effect’ that design teams are facing and provide small extensions for architects to better understand the ‘cause’. This means it may take a long time for good tools to become available. Until that happens, simulation of virtual prototypes remains the core of the toolbox.
Tomi Engdahl says:
Architectures Battle for Deep Learning
https://www.eetimes.com/author.asp?section_id=36&doc_id=1332538&
Chip vendors implement new applications in CPUs. If the application is suitable for GPUs and DSPs, it may move to them next. Over time, companies develop ASICs and ASSPs. Is deep learning moving through the same sequence?
In the brief history of deep neural networks (DNNs), users have tried several hardware architectures to increase their performance. General-purpose CPUs are the easiest to program but are the least efficient in performance per watt. GPUs are optimized for parallel floating-point computation and provide several times better performance than CPUs. As GPU vendors discovered a sizable new customer base, they began to enhance their designs to further improve DNN throughput. For example, Nvidia’s new Volta architecture adds dedicated matrix-multiply units, accelerating a common DNN operation.
Even these enhanced GPUs remain burdened by their graphics-specific logic. Furthermore, the recent trend is to use integer math for DNN inference, although most training continues to use floating-point computations. Nvidia also enhanced Volta’s integer performance, but it still recommends using floating point for inference. Chip designers, however, are well aware that integer units are considerably smaller and more power efficient than floating-point units, a benefit that increases when using 8-bit (or smaller) integers instead of 16-bit or 32-bit floating-point values.
Unlike GPUs, DSPs are designed for integer math and are particularly well suited to the convolution functions in convolutional networks (CNNs). Vector DSPs use wide SIMD units to further accelerate inference calculations. For example, Cadence’s C5 DSP core includes four SIMD units that are each 2,048 bits wide; as a result, the core can complete 1,024 8-bit integer multiply-accumulate (MAC) operations per cycle. That works out to more than 1 trillion MACs per second in a 16nm design. MediaTek has licensed a Cadence DSP as a DNN accelerator in its newest smartphone processors.
Tomi Engdahl says:
How to Lock Down Authentication Flash to Prevent Theft
https://www.arrow.com/en/research-and-events/articles/2017/10/27/how%20to%20lock%20down%20authentication%20flash%20to%20prevent%20theft
Tomi Engdahl says:
Security For Embedded Electronics
There are specific security precautions for embedded devices and systems.
https://semiengineering.com/security-for-embedded-electronics/
The embedded systems market is expected to enjoy steady growth in the near future—provided those systems can be adequately secured.
One of the biggest challenges for embedded devices and systems, especially those employed in the Internet of Things, is adequately protecting them from increasingly sophisticated hacking. This is a new tool for criminal enterprises, and a very lucrative one because it can be done remotely with little fear of being caught. Even when hackers are caught, they rarely are prosecuted, which has not gone unnoticed by criminal enterprises. A lack of reprisal has allowed them to recruit some of the best and brightest programmers.
The disruption that can be caused by unsecured IoT devices was dramatically demonstrated a year ago, when cyberattacks on Dyn DNS (now Oracle Dyn Global Business Unit) shut down some highly popular websites for much of one day. While no attacks of similar scale have occurred since then, cybersecurity experts expect there are more to come because the motivation will be financial rather than social disruption just for the heck of it.
Transparency Market Research forecasts the worldwide embedded systems market will rise to $233.19 billion in four years, marking a compound annual growth rate of 6.4% from 2015 to 2021. The automotive industry will take a higher profile in embedded systems, TMR predicts, representing 18.3% of the embedded systems market by 2021.
Complexity and numbers
At least part of the issue stems from growing complexity. That makes it harder to build, debug and test devices, but it also makes it more difficult to secure them.
“Complexity is a massive issue for everyone in the IoT,” said Haydn Povey, CTO of Secure Thingz. “Whether you’re building a power station or a car, it’s made up of layers upon layers of components and systems. Ownership needs to be embedded in all of those, from the ground up. There is a need for identity to be injected early. There needs to be management of identities. We need to own each of the components in a system over the lifecycle. And we need to be able to manage these subsystems, integrate them, integrate the security. There are so many pieces, so many aspects of ownership, such complex code in the system, that it’s a real challenge.”
Tomi Engdahl says:
ARM GCC Cross Compilation in Visual Studio
https://blogs.msdn.microsoft.com/vcblog/2017/10/23/arm-gcc-cross-compilation-in-visual-studio/
In Visual Studio 2017 15.5 Preview 2 we are introducing support for cross compilation targeting ARM microcontrollers. To enable this in the installation choose the Linux development with C++ workload and select the option for Embedded and IoT Development. This adds the ARM GCC cross compilation tools and Make to your installation.
Our cross compilation support uses our Open Folder capabilities so there is no project system involved. We are using the same JSON configuration files from other Open Folder scenarios and have added additional options to support the toolchains introduced here.
ARM’s online compiler lets you select your target platform and configures your project accordingly. I’m using an ST Nucleo-F411RE, but any board supported by the compiler should be fine.
What’s next
Download the Visual Studio 2017 Preview, install the Linux C++ Workload, select the option for Embedded and IoT Development and give it a try with your projects.
We have prototyped debugging support and expect to add that in a future release. It uses the same launch.vs.json format as other Open Folder scenarios, so it is possible to use in this release if you know what needs to be specified for your board and gdbserver.
We’re actively working on additional support for embedded scenarios. Your feedback here is very important to us. We look forward to hearing from you and seeing the things you make.
https://www.visualstudio.com/vs/preview/
Tomi Engdahl says:
The Seven Properties of Highly Secure Devices
https://www.microsoft.com/en-us/research/wp-content/uploads/2017/03/SevenPropertiesofHighlySecureDevices.pdf
Tomi Engdahl says:
Making The Case For Digital Exploration
https://semiengineering.com/making-the-case-for-digital-exploration/
Why faster simulation is essential to meet the demands for more customized and complex products.
Simulation has been established as a proven, effective means of streamlining the product development process. It allows companies to analyze product behavior earlier to evaluate more design iterations in the concept/design stage to optimize products, components and systems. However, simulation is often still siloed away in the domain of expert analysts, preventing companies from fully capitalizing on its benefits.
http://www.ansys.com/resource-library/white-paper/digital-exploration?tli=en-us
Tomi Engdahl says:
Video about writing maintainable embedded code
https://www.mentor.com/embedded-software/blog/post/video-about-writing-maintainable-embedded-code-b83c2816-4b16-42fd-9de8-9f61efdce936?cmpid=10168
How do embedded software developers spend their time?
https://www.youtube.com/watch?v=M_0k0oUdUQo
Tomi Engdahl says:
C++ for Embedded Development
https://www.youtube.com/watch?v=wLq-5lBc7x4
C++ for Embedded Development – Thiago Macieira, Intel
Traditional development lore says that software development for constrained devices requires writing code in C, as applications written with C++ will always be bigger, require more resources and will run slower than their C counterparts. Nothing could be farther from the truth. While it is true that many C++ applications are big and demand a lot of resources, that is not a limitation of the language itself.
This session will begin by giving the motivation of why C++ would be interesting in constrained-device development: it will briefly discuss what features of the language may make software safer and how such software can be even more efficient than those written in C. The presenter will then explain what aspects of the language developers should be especially aware of and will provide information for developers to be able to develop their software in C++, without undue cost.
Tomi Engdahl says:
Tiny Tensor Brings Machine Deep Learning to Micros
https://hackaday.com/2017/11/13/tiny-tensor-brings-machine-deep-learning-to-micros/
We’ve talked about TensorFlow before — Google’s deep learning library. Crunching all that data is the province of big computers, not embedded systems, right? Not so fast. [Neil-Tan] and others have been working on uTensor, an implementation that runs on boards that support Mbed-OS 5.6 or higher.
Mbed of course is the embedded framework for ARM, and uTensor requires at least 256K of RAM on the chip and an SD card less than (that’s right; less than) 32 GB. If your board of choice doesn’t already have an SD card slot, you’ll need to add one.
Of course, you can install TensorFlow on a Raspberry Pi, too, but that’s not really a proper microcontroller.
AI inference library based on mbed and TensorFlow
https://github.com/neil-tan/uTensor
Tomi Engdahl says:
11 Myths About Software Tracing
Though it can be rather challenging to gain visibility into real-time systems during development and debugging, it’s nonetheless essential.
http://www.electronicdesign.com/test-measurement/11-myths-about-software-tracing
A traditional debugger allows you to inspect the system state once the system is halted, i.e., after an error has been detected, but doesn’t reveal the events leading to the error. To see this information, you need tracing.
Tracing means that the software’s behavior is recorded during run-time, allowing for later analysis of the trace. A hardware-generated trace (in the processor) gives you all details regarding the control-flow and doesn’t impact the execution of the traced system. However, it requires a special trace debugger unit and trace-enabled hardware in general. Software-generated tracing works on any hardware platform, and tracing can be active continuously during long testing sessions. Software-based tracing can even be deployed in production systems in many cases.
The software trace will use CPU cycles and RAM of the traced system, but this is often a reasonable tradeoff given the value of the resulting traces. Unless you have very strict timing requirements, down to the microsecond level, software tracing is simple, straightforward, and a perfectly viable solution. Let’s explore some of the myths surrounding this approach.
1. Tracing requires an advanced trace debugger.
This isn’t necessarily true; there are different kinds of tracing. Tools like LTTng (the Linux Trace Toolkit: next generation) and Percepio’s Tracealyzer rely on software-based tracing for systems using an RTOS, allowing for the capture of all relevant events in the RTOS kernel and any extra events you add in your application code. This kind of tracing doesn’t require any special hardware and can be used on essentially any system.
2. Tracing is complicated to set up.
Software-based tracing is generally quite easy to set up
3. Tracing is the last resort when debugging.
Some think of tracing as a fire extinguisher: When you can’t find the bug in other ways, you try tracing. However, a good tracing tool is better viewed as insurance.
Event tracing is a valuable complement to traditional debugging tools, helping you to “zoom in” on the cause of complicated issues, so that you know where to focus your further debugging efforts. No more frustrating debug sessions that are done blindly.
4. Traces are overwhelming and hard to understand.
Trace logs can certainly be difficult to interpret, especially without proper visualization. It’s also a fact that most of the data consists of repetitive patterns, many of which are irrelevant to the issue at hand. Finding the needle in this proverbial haystack can be an arduous task.
More advanced trace tools have dozens of graphical views that will let you view the system from different perspectives and at various levels of abstraction, allowing you to “drill down” on anomalies and understand the cause.
5. RTOS tracing is just for kernel hackers.
Totally not true. Application-focused developers can benefit a lot from RTOS-level tracing, since you can log custom events and data (e.g., inputs, outputs, state transitions) to facilitate analysis of algorithms and state machines in your application. This kind of tracing is very fast and therefore can also be used in time-critical code, unlike traditional “printf” calls.
Tracealyzer can visualize run-time interactions between application tasks in the form of dependency graphs (who is talking to whom) and filtered timelines that give a better picture of the application design, a complement to analyzing the source code.
6. Trace tools are expensive.
On one hand, that is true. Good trace tools cost good money, especially compared to those developer tools you can download for free. On the other hand, being able to find bugs earlier in the development process can save you a lot of money.
7. Tracing is only for hard real-time, safety-critical systems.
Not true. Less-critical embedded systems are often larger and have more code and more complex run-time behavior compared to safety-critical real-time systems. Even though the consequences of a failure are less severe, their behavior can be much more difficult to predict.
8. Tracing is mainly used for analyzing the code coverage of test cases.
Tracing is indeed used for analyzing the code coverage, but for that you use hardware-based tracing on the instruction level. Software-based tracing is a quite different thing, focusing on relevant software events only and with any relevant data included.
9. Software-based tracing is intrusive and slows down the system.
Once again (see Myth #3 above), there’s some truth to this. Tracing does add a bit of code to your application, typically a few hundred clock cycles per event.
10. Tracing requires special trace support in the processor and on the board.
This is definitely true for hardware-based tracing, but software-based tracing does not require anything particular from the target system. Most systems can allow for tracing in “snapshot mode”, i.e., when storing the data to an internal RAM buffer.
11. Tracing to a RAM buffer requires lots of RAM.
Not true. Many trace tools encode events in a very compact format. For example, Percepio’s trace recorder uses only 4 to 8 bytes per event, and it supports ring-buffer mode where older data is overwritten when the buffer becomes full.
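To make those numbers concrete, here is a hypothetical sketch of what such a compact, snapshot-mode recorder can look like; all names are made up for illustration, and a real recorder (such as Percepio’s) uses a tighter encoding and handles concurrent writers:

/* Sketch: a minimal RAM ring-buffer trace recorder, 8 bytes per event.
 * Hypothetical code for illustration only - not interrupt-safe as written. */
#include <stdint.h>

typedef struct {
    uint16_t event_id;   /* e.g. task switch, ISR enter, user event */
    uint16_t param;      /* small payload, e.g. a task or queue index */
    uint32_t timestamp;  /* from a free-running hardware timer */
} trace_event_t;         /* 8 bytes per event */

#define TRACE_BUFFER_EVENTS 1024u            /* 8 KB of RAM in total */
static trace_event_t trace_buf[TRACE_BUFFER_EVENTS];
static volatile uint32_t trace_head;

extern uint32_t hw_timestamp(void);          /* hypothetical hardware counter */

void trace_event(uint16_t event_id, uint16_t param)
{
    /* Ring-buffer mode: when the buffer is full, the oldest events are
     * overwritten, so the buffer always holds the most recent history. */
    uint32_t i = trace_head++ % TRACE_BUFFER_EVENTS;
    trace_buf[i].event_id  = event_id;
    trace_buf[i].param     = param;
    trace_buf[i].timestamp = hw_timestamp();
}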
COMMENT:
Provided that you leave a few unused I/O pins with pull-ups on the product under test, this gives you the ability to stream I/O or simply toggle lines out to an oscilloscope, so you can interrogate on-board peripherals.
Tomi Engdahl says:
LTTng is an open source tracing framework for Linux.
http://lttng.org/
http://lttng.org/docs/v2.10/#doc-getting-started
This is a short guide to get started quickly with LTTng kernel and user space tracing.
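For a flavour of how little is needed to get started, here is a sketch of user-space tracing with the tracef() helper from liblttng-ust; the build and session commands in the comments follow the LTTng 2.x getting-started guide linked above and may vary with your distribution:

/* Sketch: quick user-space tracing with LTTng's tracef() helper.
 * Build (roughly):   gcc app.c -o app -llttng-ust
 * Record a session (per the LTTng getting-started guide):
 *   lttng create my-session
 *   lttng enable-event --userspace 'lttng_ust_tracef:*'
 *   lttng start
 *   ./app
 *   lttng stop
 *   lttng view
 *   lttng destroy
 */
#include <lttng/tracef.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 10; i++) {
        tracef("control loop iteration %d", i);  /* emits one trace event */
        usleep(1000);
    }
    return 0;
}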
Tomi Engdahl says:
Software Designers: Build Your Own
When the talent you need is in short supply or not affordable, build your own experts.
https://www.designnews.com/business/software-designers-build-your-own/56758469757855?ADTRK=UBM&elq_mid=2183&elq_cid=876648
As a leading, full-service product development consultancy, our firm is always in competition for talent. Often, much larger (and richer) firms compete with us for the same limited pool of top-tier candidates.
The talent we wanted was in short supply; candidates either would not be interested in us or, if they were, we could not afford them.
His advice? Build your own. More specifically, knowing the unusual diversity of our design team, he recommended we build our team from within. Given that we saw no other choice, indeed, this is what we did. The results have been outstanding.
Other companies can accomplish what our company did – build its own team of UI/UX experts. Here are some factors that can help:
1. Have a design team that understands user experience.
The good news is that design thinking is deeply embedded in current product designer thinking in both the hardware and software worlds.
2. Have experienced hardware-oriented industrial designers.
There is no substitute for experience when building a team with industrial design “know how.” If the team has a bent towards product and technology rather than the softer side of design, these team members can apply their experience and wisdom to a new product development area. Many experienced industrial designers enjoy the ability to broaden their work experience across both hardware and software design.
3. Bring in entry level industrial design talent.
Many university industrial design programs are now training their students across hardware/software boundaries, teaching user experience in all types of product categories.
4. Invest in teaching your hardware designers the “tools of the trade.”
There are processes and tools employed in the software world which are different from the hardware world. At our firm, we invested in training our industrial design team members in the specialized tools used for software design. Tools such as wireframing software applications are extremely helpful and improve efficiency and, in comparison to many other applications routinely used, such tools are not particularly challenging to learn.
5. Co-locate your designers and software developers.
Prior articles have addressed the value of having software developers in tight collaboration with software designers. This close collaboration gives the end product the look/feel/flow of a high-quality application while also helping software developers better understand, and thus realize, the product design vision.
As our firm has found, it was easier to build our own capabilities and develop them to a world class level than to “bang our heads against the wall” trying to compete with the large, high profile corporations for the few experienced and high-salaried UX/UI & GUI designers in the market.
Tomi Engdahl says:
What Sort of Testing Do My Applications Need?
http://www.securityweek.com/what-sort-testing-do-my-applications-need
As you start to get an idea of what your application portfolio looks like, you then need to start determining the specific risks that applications can expose your organization to. This is typically done through application security testing – identifying vulnerabilities in an application so that you can make risk-based decisions about mitigation and resolution.
The challenge lies in the fact that there is no “one size fits all” approach to application security testing. You cannot constantly perform exhaustive testing on all applications – you simply will not have the resources. And you will be limited in the types of testing you can do based on the type, language, and framework of the application, as well as the availability of source code. To most effectively begin the application security testing process, you need to determine the depth of testing you want to accomplish.
A valuable resource in looking at enumerating these concerns is the OWASP Application Security Verification Standard (ASVS). The OWASP ASVS provides three levels of assurance that can be applied to an application – Opportunistic, Standard, and Advanced – and provides specific guidelines of what analysis should be performed for verifications at each level.
Some examples that can provide insight into possible thought processes when evaluating testing strategies for specific applications are included below:
• High-risk web application developed in house: Because you have access to the source code and both SAST and DAST tools that are well-suited to testing the application, use a combination of static and dynamic testing with both manual and automated components. Also consider annual 3rd party manual assessments to get an external opinion and access to testing techniques your in-house team might not have mastery of.
• Lower-risk applications developed in house: Again, because you have access to both the source code and running environments, use either automated static or dynamic testing prior to each new release.
• Mid-risk applications developed by third party: Due to the fact that you don’t have access to source code, but do have an internal pre-production environment, rely on vendor maturity representation and an annual dynamic assessment performed by a trusted 3rd party.
• Developed by large packaged software vendor: Given that you currently have no budget to do independent in-depth research, rely on vendor vulnerability reports and use patch management practices to address risk.
Category:OWASP Application Security Verification Standard Project
https://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project
Tomi Engdahl says:
What Can The Philosophy of Unix Teach Us About Security?
http://www.securityweek.com/what-can-philosophy-unix-teach-us-about-security
For security vendors, this shift in philosophy has a number of consequences:
● Don’t expect to be the center of the universe. I’ve actually seen vendors try to position themselves as the center of the security workflow on many occasions. Give it up. No security team is going to rip up their existing workflow and make you the center of the universe.
● If your solution is not open, keep on walking. As I described above, the concept of pipes is thriving in the security world. If you’re not familiar with the philosophy of Unix, becoming familiar with it would likely help you understand the evolving role of vendors in the eyes of security teams. If your solution can’t be dropped in behind one of the pipes I need a solution for, it just isn’t going to be an easy sell.
● Do your part to end swivel chair. Of course, every solution needs to come with its own console and easy-to-use GUI. But don’t expect it to get much use – at least not by security analysts. Security teams already have too much to do, even if they are working out of a single, unified work queue. If your solution can’t log to and integrate with the unified work queue, it just isn’t going to work.
● Understand where you add value. One of the most important things a security vendor can do is to learn what life is like day to day inside a security program. Only by learning how security practitioners work and where their pain points and needs are can you truly understand where you add value.
Tomi Engdahl says:
Integration of vision in embedded systems
http://www.vision-systems.com/articles/print/volume-22/issue-1/features/integration-of-vision-in-embedded-systems.html?cmpid=enl_vsd_vsd_newsletter_2017-11-27
Embedded vision architectures enable smaller, more efficient vision solutions optimized for price and performance
Embedded computer systems usually come into play when space is limited and power consumption has to be low. Typical examples are mobile devices, from mobile test equipment in factory settings to dental scanners. Embedded vision is also a great solution for robotics, especially when a camera has to be integrated into the robot’s arm.
Furthermore, the embedded approach allows reducing system costs compared to the classic PC-based setup. Let’s say you spend $1,700 on a system with a classic camera, a lens, a cable and a PC. An embedded system with the same throughput would cost $300, because each piece of hardware is cheaper.
So whether it is smart industrial wearables, automated parking systems, or people-counting applications, there are several embedded system architectures available for integrating cameras into your embedded vision system.
Camera integration into the embedded world
In the machine vision world, a typical camera integration works with a GigE or USB interface, which is more or less a plug-and-play solution connected to a PC (or IPC). Together with a manufacturer’s software development kit (SDK) it is easy to get access to the camera, and this principle can be transferred to an embedded system.
Utilizing a single-board computer (SBC), this basic integration principle remains the same. Low-cost and easy-to-obtain SBCs contain all the parts of a computer on a single circuit board: SoC, RAM, storage slots and I/O ports (USB 3.0, GigE, etc.).
Popular single-board computers like Raspberry Pi or Odroid have compatible interfaces (USB/ Ethernet). There are also industry-proven single-board computers available from companies such as Toradex (Horw, Switzerland; http://www.toradex.com) or Advantech (Taipei, Taiwan; http://www.advantech.com) that provide these standard interfaces.
More and more camera manufacturers also provide their software development kit (SDK) in a version that works on an ARM platform, so users can integrate a camera in the same familiar way as on a Windows PC.
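As one possible shape of such an integration, here is a sketch using the generic Video4Linux2 (V4L2) API on a Linux SBC rather than any particular manufacturer’s SDK; the /dev/video0 device path is an assumption:

/* Sketch: opening a camera on a Linux SBC and querying its capabilities
 * through the generic V4L2 API. Vendor SDKs wrap similar calls; the
 * device path /dev/video0 is an assumption. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("camera: %s, driver: %s\n", (char *)cap.card, (char *)cap.driver);
    else
        perror("VIDIOC_QUERYCAP");

    close(fd);
    return 0;
}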
Specialized embedded systems
For certain applications, embedded systems can be specialized to an even higher level, with the processing hardware stripped down even further. That is why many systems are based on a system on module (SoM). These very compact computer modules contain only a processor (to be precise: typically a system on chip, SoC), microcontrollers, memory and other essential components.
Special image data transfer
A direct camera-to-SoC connection for the image data transfer can be achieved with an LVDS-based connection or via the MIPI CSI2 standard. Neither method is fully standardized on the hardware side: no connectors are specified, not even the number of lanes within the cable. As a result, in order to connect a specific camera, a matching connector must usually be designed in on the carrier board and is not available in standardized form on an off-the-shelf single-board computer.
CSI2, a standard coming from the mobile device industry, describes the signal transmission and a software protocol. Some SoCs have CSI interfaces and there are drivers available for selected camera modules and dedicated SoCs. However, they do not work in a unified manner and there are no generic drivers. As a result, the driver may need to be individually modified, and the data connection to the driver can require further adaptation on the application software side in order to enable the image data collection. So CSI2 does not represent a ready-to-use solution that works immediately after installation.
Camera configuration
Another aspect of these board-to-board connections is the camera configuration. Controlling signals can be exchanged between SoC and camera via various bus systems, e.g. CAN, SPI or I²C. As yet, no standard has been set for this functionality.
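As a hypothetical example of one of these bus options, here is a sketch of writing a camera register over I²C from a Linux SBC using the generic i2c-dev interface; the bus number, 7-bit address and register bytes are placeholders, since every sensor defines its own register map:

/* Sketch: configuring a camera module over I2C via Linux i2c-dev.
 * Bus /dev/i2c-1, address 0x36 and the register/value bytes are
 * hypothetical placeholders; real sensors have their own register maps. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("open /dev/i2c-1"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x36) < 0) { perror("I2C_SLAVE"); return 1; }

    /* Write value 0x02 to 16-bit register 0x3008 (placeholder values). */
    unsigned char msg[3] = { 0x30, 0x08, 0x02 };
    if (write(fd, msg, sizeof(msg)) != (ssize_t)sizeof(msg)) perror("write");

    close(fd);
    return 0;
}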
Embedded vision can be an interesting solution for certain applications; several applications based on GigE or, more typically, on USB can be developed using single-board computers. Given that these types of hardware are popular and offer a broad range of price, performance and compliance with quality standards (consumer and business), this is a reasonable option for many cases.
For a more direct interface, LVDS or CSI2-based camera-to-SoC connections are possible for image data transfer.
Tomi Engdahl says:
Different Approaches To Security
https://semiengineering.com/different-approaches-to-security/
Platform approaches, better understanding of security holes and new technologies could help deter attackers.
Everyone acknowledges the necessity for cybersecurity precautions, yet the world continues to be challenged by an invisible, inventive army of hackers.
The massive data breach at Equifax was only the latest in a series of successful cyberattacks on the credit monitoring firm.
And to drive home the importance of cybersecurity, Arm distributed a “Security Manifesto” to all attendees at the keynote session with Segars, Aiken, and others.
In the manifesto and her keynote remarks, Aiken emphasized the role of ordinary people in maintaining proper cybersecurity hygiene. “People are their own first line of defense, but not everybody behaves responsibly,” she wrote. “We are not all IT experts, and security is not always built into devices and systems by default.”
“As technology providers we must embrace our responsibilities under what we are calling the ‘Digital Social Contract’ and endeavor to protect users no matter what,” Segars wrote. “The approaches and thinking we set out in this Manifesto can make a difference, and I can see a world where we will have put hackers out of business.”
Those are lofty ambitions, to be sure, but they are also prerequisites for such markets as the IoT, the industrial IoT, and medical technology to live up to their full potential.
“[Cryptographer] Bruce Schneier said a couple of decades ago that if you think the technology can solve your security problems, then you don’t understand the problem and you don’t understand the solution.”
Platform approach
One way to reduce security issues is to utilize a platform approach—basically updating the platform as necessary, rather than trying to secure every device. Most of the major chip vendors have security built into their architectures, but Arm has extended that with a common framework for the IoT with its Platform Security Architecture.
“That’s something we need to start building in,” said Ian Smythe, senior director of marketing programs for Arm’s CPU Group. “Security is multilayer. We have a range of IP within that security layer. We have a history in security.”
Arm has enlisted Amazon, Cisco and Google in supporting the Platform Security Architecture, which will become available to IoT device developers in the first quarter of next year.
Tomi Engdahl says:
Video about writing maintainable embedded code
https://www.mentor.com/embedded-software/blog/post/video-about-writing-maintainable-embedded-code-b83c2816-4b16-42fd-9de8-9f61efdce936?contactid=1&PC=L&c=2017_11_30_esd_newsletter_update_v11_november
Tomi Engdahl says:
Embedded software article: RTOS Revealed #14
https://www.mentor.com/embedded-software/blog/post/embedded-software-article-rtos-revealed-14-9fefe28b-7c67-4e07-8a96-7708d82fdc86?contactid=1&PC=L&c=2017_11_30_esd_newsletter_update_v11_november
Partition memory: introduction and basic services
Tomi Engdahl says:
Creating Software Separation for Mixed Criticality Systems
https://www.mentor.com/embedded-software/resources/overview/creating-software-separation-for-mixed-criticality-systems-063e8993-cdc5-4414-a960-fc62db40c18c?contactid=1&PC=L&c=2017_11_30_esd_newsletter_update_v11_november
The continued evolution of powerful embedded processors is enabling more functionality to be consolidated into single heterogeneous multicore devices. Mixed criticality designs, those which contain both safety-critical and non-safety-critical processes, can successfully leverage these devices and meet the regulatory requirements for IEC safety standards and the highest levels of ISO. This whitepaper describes many important considerations when using an RTOS for mixed criticality systems.
Tomi Engdahl says:
Embedded and Machine Learning Tools
https://www.mentor.com/embedded-software/machine-learning/?cmpid=12264&contactid=1&PC=L&c=2017_11_30_esd_newsletter_update_v11_november
Tomi Engdahl says:
Majestic microcontroller mixup
https://www.edn.com/electronics-blogs/benchtalk/4459095/Majestic-microcontroller-mixup
Reader Jay Carlson has composed a massive missive comparing 21 sub-$1 microcontrollers, and it’s glorious. Not just a simple look at chip specs, Jay dives deep into performance, and perhaps most importantly, ecosystems, devboards, and devtools. After all, any reasonably experienced engineer can get a handle on the hardware by scanning the datasheet, but the knowledge gleaned from spending a day or three with the devtools? Priceless.
https://jaycarlson.net/microcontrollers/