New approaches for embedded development

The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.

Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in a cloud service – and at the same time you need to worry about communication issues like broken connections, latency, and security.

Those issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.

Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as finite state machines (FSMs). However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, the electronics in modern cars, and Internet of Things systems fall into this category. This implies that a different approach is necessary.

Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards that woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products become moot. Leaders in embedded systems development lifecycle management speak out on new approaches available today for developing advanced products and systems.

Traditional approaches

C/C++

Traditionally, embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still often chosen over C++ today when maximum efficiency is the goal.

C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, and that could be a real concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.

And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; let the Arduino system be an example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language”, and your “.ino” files are three lines away from being standard C++.

Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example, an embedded Linux system that runs a C++ application. When you write a C or C++ program for modern embedded Linux, you typically use the GCC compiler toolchain for compilation and a Makefile to manage the compilation process.
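A minimal Makefile for such a project might look like the sketch below; the source file names and the cross-compiler prefix are hypothetical, and a native build would simply leave CROSS_COMPILE empty:

```make
# Hypothetical project: "app" built from two C sources.
# Set CROSS_COMPILE (e.g. arm-linux-gnueabihf-) to cross-compile.
CROSS_COMPILE ?=
CC      := $(CROSS_COMPILE)gcc
CFLAGS  := -Wall -O2
OBJS    := main.o sensor.o

app: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f app $(OBJS)
```

Running `make CROSS_COMPILE=arm-linux-gnueabihf-` would then build the same sources with the target's toolchain instead of the host compiler.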

Most organizations put considerable focus on software quality, but software security is different. While security is a much-discussed topic in today's embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written in some other language.

Java

Java is a general-purpose computer programming language that is concurrent, class-based, and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VoIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.

Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.

Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

My feeling with Java is that if your embedded systems platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.

 

Increasing trends

Databases

Embedded databases are appearing in more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) that is tightly integrated with application software requiring access to stored data, such that the database system is “hidden” from the application's end-user and requires little or no ongoing maintenance.

There are many possible databases. The first choice is what kind of database you need. The main choices are SQL databases and simpler key/value stores (often called NoSQL databases).

SQLite is the database chosen by virtually all mobile operating systems; for example, Android and iOS ship with SQLite. It is also built into, for example, the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that must support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows).
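As a quick illustration of how little code an embedded SQLite database needs, here is a sketch using Python's built-in sqlite3 module; the table and the sensor values are made up for illustration:

```python
# Sketch: SQLite needs no server process; the database is just a file
# (or, as here, lives in memory). The readings table is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")   # use a file path on a real device
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("temp", 21.5), ("temp", 22.0), ("humidity", 40.0)])

# Ordinary SQL works against the embedded database.
(avg_temp,) = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'temp'").fetchone()
print(avg_temp)  # 21.75
conn.close()
```

The same SQL would work unchanged against an on-disk database file on the device.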

If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
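The flavor of a key/value store can be seen with Python's built-in dbm module, which wraps a BDB-style database; the file name and settings below are made up for illustration:

```python
# Sketch: a key/value store maps byte-string keys to byte-string values,
# with no schema and no SQL. dbm is a stdlib stand-in for BDB-style stores.
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "settings.db")  # hypothetical location
with dbm.open(path, "c") as db:       # "c" = create the database if missing
    db[b"device_name"] = b"gateway-01"
    db[b"log_level"] = b"debug"

with dbm.open(path, "r") as db:       # reopen read-only and look a key up
    name = db[b"device_name"]
print(name)  # b'gateway-01'
```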

RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.

Software quality

Building quality into embedded software doesn't happen by accident. Quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.

Safety

Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built in software – including safety-critical functions. Software-intensive embedded systems require new approaches for safety. Embedded Software Can Kill But Are We Designing Safely?

IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not – and 11 percent did not even know if they were designing to a standard or not.

If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in its design, how worried would you be? I know I would be anxious, and quite frankly, this is quite disturbing.

Security

The advent of low-cost wireless connectivity is altering many things in embedded development – it has added to your list of worries communication issues like broken connections, latency, and security. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.

The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:

MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY

Many organizations are only now becoming aware of the need to incorporate security into their software development lifecycle.

Some techniques for building security into embedded systems:

Using secure communication protocols and VPNs to secure communications
Using Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Separating processes to partition critical code and memory spaces
Leveraging safety-certified code
Enforcing system partitioning in hardware with a trusted execution environment
Planning the system so that it can be easily and safely upgraded when needed
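To make the boot-time authentication idea concrete, here is a deliberately simplified sketch in Python. Real secure boot verifies an asymmetric signature rooted in a PKI, not a bare hash; this only shows the shape of checking a firmware image against a trusted value before running it. The firmware bytes are made up:

```python
# Illustrative sketch only: real secure boot uses PKI signatures, not a
# plain digest comparison. This shows the shape of image verification.
import hashlib

def verify_image(image: bytes, trusted_digest: str) -> bool:
    """Return True if the firmware image matches the trusted SHA-256 digest."""
    return hashlib.sha256(image).hexdigest() == trusted_digest

firmware = b"\x7fELF...hypothetical firmware blob"
good = hashlib.sha256(firmware).hexdigest()   # digest stored in trusted storage

ok_clean = verify_image(firmware, good)            # True: image is authentic
ok_tampered = verify_image(firmware + b"\x00", good)  # False: image was modified
print(ok_clean, ok_tampered)  # True False
```

In a real chain of trust, each boot stage would verify the next stage's signature this way before transferring control to it.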

Flood of new languages

Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.), so there is a huge push to use interpreted and scripting languages also in embedded systems. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example, a web user interface). Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. There are also many other relevant languages.

One workable solution, especially for embedded Linux systems, is to implement part of the functionality in C and part in scripting languages. This allows behavior to be changed simply by editing the script files, without the need to rebuild the whole system software. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
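To give a taste of the kind of task that study measured, here is a Python sketch that counts word frequencies in a string and looks them up in a dictionary; in C this would require manual hashing and memory management, while in Python it is a few lines (the input text is made up):

```python
# Count word frequencies in a string and look them up in a dictionary -
# the string-manipulation-plus-dictionary-search task class from the study.
text = "the quick brown fox jumps over the lazy dog the fox"

counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1

print(counts["the"])  # 3
print(counts["fox"])  # 2
```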

Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example, industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages are suitable even for quite small embedded environments.

I have successfully used, among others, Bash, AWK, PHP, Python, and Lua scripting languages with embedded systems. They work really well, and it is really easy to write custom code quickly. They don't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used.

High-level, dynamically typed languages, such as Python, Ruby, and JavaScript, are easy – and even fun – to use. They lend themselves to code that can easily be reused and maintained.

There are some things that need to be considered when using scripting languages. The lack of static checking, compared with a regular compiler, can cause problems to be discovered only at run time, but you are better off practicing “strong testing” than relying on strong typing. The other downside of these languages is that they tend to execute more slowly than statically compiled languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.

Bash and other shell scripting

Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is: you don't. The real power of the computer is its ability to do the work for you – and the power of the shell script is that it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command-line commands stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regular expressions, etc.). Your scripts can be truly powerful. Creating a script is extremely straightforward: it can be written in a separate editor, or through a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example, starting and stopping different Linux services in the right way).
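A minimal sketch of those programming features in action – variables, a for-loop, and an if-test automating a repetitive task (the log names are made up):

```shell
#!/bin/bash
# Create per-subsystem log files, but only if they don't exist yet.
# Demonstrates variables, for, and if in a shell script.
LOG_DIR="$(mktemp -d)"          # temporary directory for the demo

for name in boot network sensors; do
    file="$LOG_DIR/$name.log"
    if [ ! -f "$file" ]; then
        echo "$(date): created $name log" > "$file"
    fi
done

ls "$LOG_DIR" | wc -l           # should print 3
```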

One of the most useful tools when developing within a Linux environment is shell scripting. Scripting can help in setting up environment variables, performing repetitive and complex tasks, and ensuring that errors are kept to a minimum. Since scripts are run from within the terminal, any command or function that can be performed manually from a terminal can also be automated!

The most common type of shell script is a Bash script. Bash is a commonly used scripting language for shell scripts. In Bash scripts, users can use more than just Bash to write the script: there are commands that allow users to embed other scripting languages into a Bash script.
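For example, a here-document can hand a block of another language to its interpreter from inside a Bash script. The sketch below assumes python3 is installed on the system:

```shell
#!/bin/bash
# Embed a Python snippet in a Bash script using a here-document.
# The Python code runs in python3 and its output flows back to the shell.
result=$(python3 <<'EOF'
total = sum(range(1, 11))
print(total)
EOF
)
echo "Sum computed by embedded Python: $result"   # prints 55
```

The same pattern works for awk, perl, or any other interpreter that reads a program from standard input.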

There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android, and FreeBSD. BusyBox has become the de facto standard core user-space toolset for embedded Linux devices and Linux distribution installers.

Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.

Lua

Lua is a lightweight, cross-platform, multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provides the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications and provides a C API for this purpose.

Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.

There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open-source hardware platform that can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.

PHP

PHP is a server-side HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites but can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. On such an environment it is a good idea to take advantage of those built-in features for the things they are good at – building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.

Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features that support functional programming and aspect-oriented programming. Many other paradigms are supported via extensions, including design by contract and logic programming.

Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN, and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.

Python was designed to be highly extensible. Libraries like NumPy, SciPy, and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.

Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.
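As a small, hypothetical taste of Python as embedded application logic on such a device, here is a sketch that smooths noisy sensor readings with a moving average; the readings are made up for illustration:

```python
# Hypothetical example: smooth noisy sensor readings with a simple
# moving average - typical glue logic that Python handles concisely
# on a Linux-capable embedded device.
from collections import deque

class MovingAverage:
    def __init__(self, window: int):
        self.samples = deque(maxlen=window)   # keeps only the last N readings

    def add(self, value: float) -> float:
        """Add a reading and return the current windowed average."""
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

avg = MovingAverage(window=3)
readings = [20.0, 22.0, 21.0, 35.0]   # 35.0 could be a noise spike
smoothed = [avg.add(r) for r in readings]
print(smoothed)  # [20.0, 21.0, 21.0, 26.0]
```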

For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python, and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-source language: Python.

Python on embedded platforms? It is quick to develop apps and quick to debug – really easy to make custom code quickly. Sometimes the lack of static checking, compared with a regular compiler, can cause problems to appear only at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool that will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.

JavaScript and node.js

JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications and cross-platform desktop software, and thanks to Node.js it's even capable of creating and running servers and databases! There is a huge community of developers.

Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.

Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.

It doesn’t require a complicated IDE; all you really need is a terminal.

JavaScript is a high-level language. While this usually means that it's more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower, which means it may not be suitable for situations where timing and speed are critical.

JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and the BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create is dubbed the “JavaScript-powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!

JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We, for example, see a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open-source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.

JXcore is a fork of Node.js targeting mobile devices and IoT. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!

Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event-driven, and has very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software's function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.

Future trends

According to New approaches to dominate in embedded development article there will be several camps of embedded development in the future:

One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems, or else fixed-function, high-volume systems where cost is everything.

Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.

A third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.

The transition will take time. The environment is different from the computer and mobile worlds. There are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.

But the shift will happen as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot.

 

Sources

Most important information sources:

New approaches to dominate in embedded development

A New Approach for Distributed Computing in Embedded Systems

New Approaches to Systems Engineering and Embedded Software Development

Lua (programming language)

Embracing Java for the Internet of Things

Node.js

Wikipedia Node.js

Writing Shell Scripts

Embedded Linux – Shell Scripting 101

Embedded Linux – Shell Scripting 102

Embedding Other Languages in BASH Scripts

PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog

PHP

Python (programming language)

JavaScript: The Perfect Language for the Internet of Things (IoT)

Node.js for Embedded Systems

Embedded Python

MicroPython – Embedded Python

Anyone using Python for embedded projects?

Telit Programming Python

MICROCONTROLLERS AND NODE.JS, NATURALLY

Why node.js?

Node.JS Appliances on Embedded Linux Devices

The smartest way to program smart things: Node.js

Embedded Software Can Kill But Are We Designing Safely?

DEVELOPING SECURE EMBEDDED SOFTWARE

 

 

 

1,687 Comments

  1. Tomi Engdahl says:

    Addressing IC Security Threats Before And After They Emerge
    https://semiengineering.com/addressing-ic-security-threats-before-and-after-they-emerge/

    Experts at the Table: No chip will ever be completely secure, but that’s not necessarily a problem.

    Reply
  2. Tomi Engdahl says:

    TinyGo on Arduino Uno: An Introduction
    Run Golang on this old but still popular 8-bit AVR microcontroller.
    https://www.hackster.io/alankrantas/tinygo-on-arduino-uno-an-introduction-6130f6

    Reply
  3. Tomi Engdahl says:

    Low power: a chip and system-design primer
    https://www.edn.com/low-power-a-chip-and-system-design-primer/?utm_content=buffer416d1&utm_medium=social&utm_source=edn_facebook&utm_campaign=buffer

    Due to the growing use of electronics worldwide, reducing power consumption must begin at the microchip level. Power-saving techniques that engineers have designed in at the chip level have a far-reaching impact, especially when involving microcontrollers that serve as the engines behind most of these electronic devices.

    From a system-design perspective, identifying which microcontrollers are truly low-power requires designers to navigate through the myriad claims of various semiconductor vendors. Because of the varying and confusing metrics vendors use, this is a complicated task. This article briefly describes the main factors that you need to consider when analyzing competitive microcontroller alternatives. At a basic level, you can define microcontroller power consumption as the sum of active-mode power and standby power; more precisely, the device’s total power consumption is the sum of its active-mode power, standby power, and wake-up power.
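    That sum can be made concrete with a small duty-cycle calculation. The figures below are made-up illustration values, not vendor data from the article:

```python
# Average current of a duty-cycled microcontroller: total consumption is
# the time-weighted sum of active-mode, standby, and wake-up contributions.
# All numbers below are hypothetical illustration values.

def average_current_ua(active_ua, standby_ua, wakeup_ua,
                       active_s, wakeup_s, period_s):
    """Time-weighted average current (microamps) over one wake/sleep cycle."""
    standby_s = period_s - active_s - wakeup_s
    total_charge = (active_ua * active_s +
                    wakeup_ua * wakeup_s +
                    standby_ua * standby_s)
    return total_charge / period_s

# An MCU that wakes once per second, runs for 2 ms, then sleeps:
avg = average_current_ua(active_ua=3000, standby_ua=1.5, wakeup_ua=500,
                         active_s=0.002, wakeup_s=0.0005, period_s=1.0)
print(round(avg, 2))  # average current in microamps
```

    The point of the exercise: with realistic duty cycles, standby and wake-up terms can dominate, which is why a vendor's headline active-mode figure alone says little.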

  4. Tomi Engdahl says:

    5 Keys To Successfully Managing Legacy Code
    Legacy code can be challenging to maintain. The trick to efficiently maintaining that code is to understand that there is more to it than simply managing the code.
    https://www.designnews.com/electronics-test/5-keys-successfully-managing-legacy-code/11813242162342?ADTRK=InformaMarkets&elq_mid=12331&elq_cid=876648

  5. Tomi Engdahl says:

    Embedded Linux Developers Need More Tools
    https://www.eeweb.com/profile/maurizio-di-paolo-emilio/news/embedded-linux-developers-need-more-tools

    Percepio, a company specializing in the development of tools for the visualization of trace software for embedded systems and IoT devices, presented its latest solutions at Embedded World: DevAlert, a cutting-edge cloud-based service for the coordination of IoT devices, and new support for embedded Linux systems in Tracealyzer v4.4. This latest Tracealyzer version includes stunning visualization and analysis capabilities designed for embedded Linux application developers and is packaged in an intuitive, modern user interface.

    “DevAlert is useful for any kind of bugs, as long as they can be detected in the runtime software. Common examples are failed asserts/sanity checks, hard fault exceptions, stack overflow, heap exhaustion and memory leaks,” said Johan Kraft, CEO and founder of Percepio.

    “DevAlert lets developers know about missed bugs within seconds after they cause an error for the very first time,” continued Johan. “DevAlert also provides visual trace diagnostics that make it easy to understand and fix the bugs and rapidly push out an over-the-air update, thereby minimizing the number of affected customers. Otherwise, bugs may remain in devices for months or years, affecting many customers.”
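    The underlying idea — catch a failed sanity check at runtime and capture a small diagnostic payload instead of silently crashing — can be sketched in a few lines. This is only an illustration of the concept, not Percepio's DevAlert API:

```python
# Sketch: when a runtime sanity check fails, record a diagnostic report
# (kind of failure, detail, short stack trace) for later analysis.
# This is NOT the DevAlert API, just the general idea in plain Python.
import traceback

reported = []  # stand-in for "upload to a cloud monitoring service"

def report_alert(kind, detail):
    reported.append({"kind": kind, "detail": detail,
                     "trace": traceback.format_stack(limit=3)})

def checked_divide(a, b):
    if b == 0:  # failed sanity check: a common, detectable bug class
        report_alert("assert_failed", "division by zero in checked_divide")
        return None
    return a / b

print(checked_divide(10, 2))  # 5.0
print(checked_divide(1, 0))   # None, and one alert captured
print(len(reported))          # 1
```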

  6. Tomi Engdahl says:

    One team is Agile and the other is Waterfall. How can they join forces to quickly develop products?

  7. Tomi Engdahl says:

    The embedded systems engineers’ attitude of “don’t touch it if it is working” is OK only in those very rare cases where the systems are isolated, don’t communicate with other systems, and can’t be accidentally connected to the Internet or contaminated with malware-laden USB sticks.

  8. Tomi Engdahl says:

    Google reveals Pigweed, open source modules for embedded development, not an OS
    https://9to5google.com/2020/03/19/google-pigweed-embedded-development/

    Last month, Google was found to have filed a trademark for an “operating system” by the name of “Pigweed.” Today, Google is officially taking the wraps off of Pigweed, a collection of open source libraries or “modules” for developers who work on embedded devices — not an operating system.

    Pigweed: A collection of embedded libraries
    https://opensource.googleblog.com/2020/03/pigweed-collection-of-embedded-libraries.html

    We’re excited to announce Pigweed, an open source collection of embedded-targeted libraries, or as we like to call them, modules. Pigweed modules are built to enable faster and more reliable development on 32-bit microcontrollers.

    Pigweed is in early development and is not suitable for production use at this time.
    Getting Started with Pigweed
    As of today, the source is available for everyone at pigweed.googlesource.com under an Apache 2.0 license.

  9. Tomi Engdahl says:

    Hardware Is SEXY!
    https://www.hackster.io/news/hardware-is-sexy-6d3f7c646be1

    What makes me excited about embedded technology! It’s time to focus on user experience, hardware as a service, and personalization.

  10. Tomi Engdahl says:

    Object-Oriented Programming — The Trillion Dollar Disaster
    Why it’s time to move on from OOP
    https://medium.com/better-programming/object-oriented-programming-the-trillion-dollar-disaster-92a4b666c7c7

  11. Tomi Engdahl says:

    We Ruined Status LEDs; Here’s Why That Needs To Change
    https://hackaday.com/2020/02/20/we-ruined-status-leds-heres-why-that-needs-to-change/

    Ah, the humble status LED. Just about every piece of home electronics, every circuit module, and anything else that draws current seems to have one. In the days of yore, a humble indicator gave a subtle glow from behind a panel, and this was fine. Then the 1990s happened, and everything got much much worse.
    It’s Not The Technology, It’s How You Use It

    A status LED, most would agree, is there to indicate status. It need only deliver enough light to be seen when observed by a querying eye. What it need not do is glow with the intensity of a dying star, or illuminate an entire room for that matter. But, in the desperate attempts of product designers to appear on the cutting edge, the new, brighter LED triumphed over all in these applications.

    The pain this causes to the user is manifold. The number of electronic devices in the home has proliferated in past decades, the vast majority of which each have their own status LED. Worse, many of these are used in the bedroom, be it laptops, phone chargers, televisions, or others. With the increased brightness of these indicators, many of which are on all the time, the average sleeping space is lit like a Christmas tree.

    The fad of using blue LEDs for power indicators only makes this problem worse. The human eye features special receptors sensitive to blue light that are not only used for vision.

  12. Tomi Engdahl says:

    Google’s Pigweed For ARM Development Is A Nice Surprise
    https://hackaday.com/2020/03/21/googles-pigweed-for-arm-development-is-a-nice-surprise/

    Setting up an environment for embedded development was traditionally a pain, so vendors provide integrated development environments to help bridge the gap. Google has now open-sourced its own take on this: a collection of embedded-targeted libraries, which it has trademarked as Pigweed.

    Google trademarked Pigweed with the U.S. Patent and Trademark Office in February and it popped up on the Google Open Source Blog along with some details.

    Pigweed: A collection of embedded libraries
    https://opensource.googleblog.com/2020/03/pigweed-collection-of-embedded-libraries.html

    We’re excited to announce Pigweed, an open source collection of embedded-targeted libraries, or as we like to call them, modules. Pigweed modules are built to enable faster and more reliable development on 32-bit microcontrollers.

  13. Tomi Engdahl says:

    Product Engineering: How to Create Outstanding Software Product
    https://perfectial.com/blog/product-engineering/

    Creating an end-to-end software product is a complicated process that involves a cycle of actions and decisions. Sometimes you have an idea for a product but no clue where to go with it. A Software Development Company, with the help of Product Engineering Services, can help you to evaluate your idea, suggest quickest implementation scenarios, and create a map for product development.

    Software Product Engineering is a service that involves all stages of product creation: design, development, testing, and deployment. But the goal of Product Engineering is more challenging than simply delivering the final product – it’s to ensure that the product is functional and satisfies the needs of its end user. A Product Engineer is concerned with establishing whether the product will survive in the real world after launch, which they determine by analyzing how it complies with market requirements. Within the IT context, the product can be a piece of software, an app, or a business system. Product Engineering deals with the following aspects of the product:

    Quality
    Usability
    Functionality
    Durability

  14. Tomi Engdahl says:

    OWASP API Security Project
    https://owasp.org/www-project-api-security/

    API Security Top 10 2019

    Here is a sneak peek of the 2019 version:

    API1:2019 Broken Object Level Authorization

    APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface Level Access Control issue. Object level authorization checks should be considered in every function that accesses a data source using an input from the user.
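    In code, the fix is a per-object ownership check in every handler that accepts an object ID from the user. A minimal Python sketch with hypothetical data:

```python
# Object-level authorization (API1:2019): a handler must verify that the
# requesting user may access the object, not merely that the ID exists.
# The data and names below are hypothetical illustration values.
ORDERS = {17: {"owner": "alice", "total": 100},
          18: {"owner": "bob", "total": 250}}

def get_order(requesting_user, order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return 404, None
    if order["owner"] != requesting_user:  # the authorization check itself
        return 403, None
    return 200, order

print(get_order("alice", 17))  # 200: alice owns order 17
print(get_order("alice", 18))  # 403: IDs are guessable, so check ownership
```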

    API2:2019 Broken User Authentication

    Authentication mechanisms are often implemented incorrectly, allowing attackers to compromise authentication tokens or to exploit implementation flaws to assume other users’ identities temporarily or permanently. Compromising a system’s ability to identify the client/user compromises API security overall.
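    One classic implementation flaw in this category is comparing secret tokens with ordinary string equality, which can leak timing information. Python's standard library provides a constant-time comparison; the sketch below shows only that one detail (real token handling also involves expiry, revocation and signing):

```python
# Constant-time token comparison: hmac.compare_digest does not bail out
# at the first mismatching character, so the comparison time leaks nothing.
import hmac

EXPECTED_TOKEN = "s3cr3t-token-value"  # hypothetical server-side secret

def token_valid(presented):
    return hmac.compare_digest(EXPECTED_TOKEN, presented)

print(token_valid("s3cr3t-token-value"))  # True
print(token_valid("guess"))               # False
```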

    API3:2019 Excessive Data Exposure

    Looking forward to generic implementations, developers tend to expose all object properties without considering their individual sensitivity, relying on clients to perform the data filtering before displaying it to the user.

    API4:2019 Lack of Resources & Rate Limiting

    Quite often, APIs do not impose any restrictions on the size or number of resources that can be requested by the client/user. Not only can this impact the API server performance, leading to Denial of Service (DoS), but also leaves the door open to authentication flaws such as brute force.
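    The standard mitigation is rate limiting, for example a token bucket per client. A minimal sketch with illustrative parameters:

```python
# Token-bucket rate limiter (mitigation for API4:2019): each client gets a
# bucket that refills at a fixed rate; a request spends one token.
import time

class TokenBucket:
    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s       # tokens refilled per second
        self.capacity = burst        # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_s=10, burst=3)
results = [bucket.allow() for _ in range(5)]  # 5 back-to-back requests
print(results)  # typically [True, True, True, False, False]
```

    In a real API this sits in middleware, keyed by API key or client IP, and rejected requests get an HTTP 429 response.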

    API5:2019 Broken Function Level Authorization

    Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions, tend to lead to authorization flaws. By exploiting these issues, attackers gain access to other users’ resources and/or administrative functions.

    API6:2019 Mass Assignment

    Binding client-provided data (e.g., JSON) to data models without proper properties filtering based on a whitelist usually leads to Mass Assignment. Either guessing object properties, exploring other API endpoints, reading the documentation, or providing additional object properties in request payloads allows attackers to modify object properties they are not supposed to.
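    The whitelist filtering mentioned above can be as simple as copying only explicitly allowed properties from the client payload. A sketch with hypothetical field names:

```python
# Mass-assignment defense (API6:2019): bind only whitelisted properties
# from the client payload to the model; everything else is ignored.
ALLOWED_FIELDS = {"display_name", "email"}  # hypothetical user-editable set

def apply_update(user, payload):
    for key, value in payload.items():
        if key in ALLOWED_FIELDS:  # whitelist, never a blacklist
            user[key] = value
    return user

user = {"id": 7, "display_name": "old", "email": "a@b.c", "is_admin": False}
evil = {"display_name": "new", "is_admin": True}  # attacker adds a property
print(apply_update(user, evil))  # display_name changes, is_admin stays False
```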

    API7:2019 Security Misconfiguration

    Security misconfiguration is commonly a result of insecure default configurations, incomplete or ad-hoc configurations, open cloud storage, misconfigured HTTP headers, unnecessary HTTP methods, permissive cross-origin resource sharing (CORS), and verbose error messages containing sensitive information.

    API8:2019 Injection

    Injection flaws, such as SQL, NoSQL, Command Injection, etc., occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s malicious data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
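    The defense is to pass untrusted data as query parameters instead of concatenating it into the query string. A sketch using Python's built-in sqlite3:

```python
# Injection defense (API8:2019): parameterized queries treat untrusted
# input as a value, never as SQL text. sqlite3 is used for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "alice' OR '1'='1"  # classic injection attempt

# Parameterized: the placeholder binds the input as a plain string value.
rows = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```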

    API9:2019 Improper Assets Management

    APIs tend to expose more endpoints than traditional web applications, making proper and updated documentation highly important. Proper hosts and deployed API versions inventory also play an important role to mitigate issues such as deprecated API versions and exposed debug endpoints.

    API10:2019 Insufficient Logging & Monitoring

    Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, extract, or destroy data. Most breach studies demonstrate the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.

  15. Tomi Engdahl says:

    How to implement a secure software development lifecycle
    https://dev.solita.fi/2020/04/08/secure-software-development-lifecycle.html
    Have you ever found yourself wondering if the system you are
    implementing is secure enough? I have. Quite often actually. It is not
    an easy question to answer unless you are prepared. This blog post is
    about how to prepare yourself for that question. The short answer is
    the Secure Software Development Lifecycle which I will call SSDLC from
    this point onwards.

  16. Tomi Engdahl says:

    Although not originally designed for embedded software development, the C language allows a range of programming styles from high-level application code down to direct low-level manipulation of hardware registers. As a result, C has become the most popular programming language for embedded systems today.

  17. Tomi Engdahl says:

    Development teams are pressured to push new software out quickly. But with speed comes risk. Anyone can write software, but if you want to create software that is safe, secure, and robust, you need the right process.

  18. Tomi Engdahl says:

    Stochastic Gradient Descent on Your Microcontroller
    Stochastic gradient descent saves you critical memory on tiny devices while still achieving top performance!
    https://www.hackster.io/news/stochastic-gradient-descent-on-your-microcontroller-116faecec30e
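    The memory argument is that SGD updates the model from one sample at a time, so the whole dataset's gradients never need to sit in RAM at once. A minimal sketch (not the article's implementation) fitting y = w·x:

```python
# Stochastic gradient descent on a tiny linear model y = w*x: each update
# uses a single randomly chosen sample, which is what keeps the memory
# footprint small on a microcontroller. Illustration values only.
import random

data = [(x, 2.0 * x) for x in range(1, 6)]  # true weight is 2.0
w = 0.0
lr = 0.01
random.seed(0)

for _ in range(500):
    x, y = random.choice(data)    # one sample per update step
    pred = w * x
    grad = 2.0 * (pred - y) * x   # d/dw of the squared error (w*x - y)^2
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```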

  19. Tomi Engdahl says:

    A new release of the tiny Lisp system uLisp is out — it works on all sorts of microcontrollers, down to the Arduino Uno and BBC micro:bit, but also on much more powerful boards

    New features in uLisp 3.2
    http://www.ulisp.com/show?33Z7

  20. Tomi Engdahl says:

    Inevitable Bugs
    https://semiengineering.com/inevitable-bugs/

    What differentiates an avoidable bug from an inevitable bug? Experts try to define the dividing line.

    Are bug escapes inevitable? That was the fundamental question that Oski Technology recently put to a group of industry experts. The participants are primarily simulation experts who, in many cases, help direct the verification directions for some of the largest systems companies.

    At last year’s discussion, a formal capability maturity model was presented. This year, Oski built on top of that by presenting a formal signoff playbook. One of the first questions they asked was, “Are functional bug escapes inevitable?” They were somewhat taken aback by the responses they received. While it’s generally accepted that it is impossible to remove every bug from a design, the inevitability of bugs has to be tied to design complexity and the familiarity of a design. Bugs in general may be inevitable, but that’s not necessarily true for any particular bug.

    Once a bug is found, possibly very late in the verification process or in silicon, is it possible to look back and come to any conclusions about how the bug had escaped to that point? Participants were quick to point out that all verification is finite in terms of the time and resources that can be expended on it. That in itself implies that bugs are inevitable. You accepted that when the timeline or budget was set. In addition, rooting out some bugs may be beyond the ability of formal verification or simulation. At some point you have to accept that bugs will escape into silicon, and the hope is that those problems will be handled by software workarounds. So the scope of the question was narrowed to high-impact bugs that need to be found in project time.

    The reason is that these bugs tend to be found late in the project or in silicon. Tracking down the root cause of the problem may involve multiple blocks, but because of where you are in the project timeline, there is a desire to minimize the disruption to the system. There was acceptance that this type of fix is often not a true fix but a hardware workaround. That translates into making the fix as localized as possible, even if it means giving up a small amount of performance.

    Can we predict where this type of bug will exist? In many cases we can. It is likely the most complex control block, one that is fielding asynchronous events and has interfaces on both sides running on different clocks. While most teams know this block will likely cause trouble, and they would always want their most experienced team members working on it, they never have enough time to explore all corner cases.

    This is where use cases become important. While there might be a bug buried somewhere in a weird combination of states, if that will never show up in the use cases that will be exercised in real life, you may not care about it. Some people may call these corner cases, and if they can never happen they are not important. But things that can happen in the real use cases are important, and it’s also important that the use cases are well defined.

    The discussion identified three fundamental questions:

    Can we predict the blocks where the high impact bugs of a design will be located?
    Can we find all bugs in those blocks?
    How do we know when we are done?

    When put this way, there was immediate dissent. This is basically a fallacy because it implies that bugs are findable from some golden source, and therefore you’re verifying against an architecture definition. The architecture definition would have to be perfect in order for there to be no bugs. Therefore, there are bugs.

    Oski’s Shirley provided a history recap of how the industry had gotten to where it is today, and the important element of this is that when frequency scaling and power-efficiency scaling ended, we pivoted to parallelism. That, in turn, caused an explosion in state space. Parallelism can exist in the processor space and in communications lanes, in all aspects of the system. This received some pushback, because adding more cores does not necessarily increase complexity. It is the out-of-order, superscalar machine that adds complexity. Complexity is in the pipeline of those machines. Getting a 1% increase in single-thread performance grows the state space far more than just adding more cores does.

    Other people agreed with the original statement, saying that we have added different levels of complexity when we are putting in hundreds or thousands of cores and trying to control the parallelism, the synchronization of those cores, as well as doing resource sharing, and having multiple lanes and multiple ports so that packets are distributed across multiple lanes. In the end, everything has to come together.

    But the focus of the discussion remained on the single cores and the complex blocks within those cores. It is felt that these blocks are still a tougher verification problem than SoC-level verification, even though it’s possible to have IP bugs that only show up when they interact with the full SoC.

    The discussion again came back to the notion of “inevitability” of bugs. It is the word that was troubling to many. It was suggested that justifiable, or disappointed but not surprised, might be more appropriate terms. A typical reaction is that given the amount of work, we thought it would not have escaped, but it did. At the same time, we cannot be surprised because we didn’t put in unlimited work. It is not inevitable because if we had kept at it, we would have found it.

    This comes down to having defined a good verification methodology that is the most likely to find the bugs you care about within project time. When bugs do escape, you conduct a post mortem and decide if the methodology needs to be changed in the future. Perhaps it was a coverage hole, or the wrong tool being used to tackle a class of potential problems.

    There are some areas of a design that we’d be shocked if we found bugs because they were so simple, and that is when we are disappointed most. This can sometimes be related to bad staffing decisions. Teams do identify the most likely places for bugs to exist, the thorniest issues they’re going to face on a chip, and they put their best people on that. If they forget about the simple stuff, that’s when avoidable bugs occur.

    Can we say there are certain bugs that simulation would not find? Even deploying emulation, which can provide more thorough checking, may be unable to reach them. If the bug is 20 levels deep and requires many combinations of things to happen, bugs will escape even if you suspect they are there. Emulation may get you to prototype faster, but it is not as good when used as a simulation enhancer. There are times when you have to use a more advanced methodology, like formal, which has a chance of finding it.

    Bugs are inevitable if you do not have the right set of tools or the right set of people. So if a simulation team could not have found this bug, because it is such an extreme corner case, then should it be considered inevitable? Some teams have found issues using formal that were missed during simulation. This brings up ROI, which can be a thorny issue because this analysis is done after the fact. It is fairly easy to show how simulation could have caught it, given more time or resources. But how much more?

    This requires a careful examination of the simulation methodology. Could a constrained random solver have gotten you to that particular scenario?

    You now have similar startup costs for both simulation and formal. The cost equation usually favors simulation only because it’s already there. When you start from scratch, the equation is different.

    But there are problems with this, as well. Few systems are 100% verified with formal. There may be some core blocks in a design that are formally verified, but for most designs, simulation also will be used. A complete verification methodology will require both infrastructures to be built, but the split between the two may be very different.

    Unless pipelines are of sufficient depth and the requirements for a bug to appear involve the injection of asynchronous events, bugs should be considered disappointing, not inevitable. This is because constrained random should be capable of providing good coverage. There’s a set of conditions that a constrained random instruction sequence generator should be able to generate in a reasonable amount of time, with the simulation resources you should expect to have for a project like this.

    Methodologies are constantly in flux. At the lowest level it is adjusting the coverage model as the team becomes more familiar with the design. Nobody ever gets 100% functional coverage, but you have to constantly assess if the goal is what you need it to be. When bugs are found — at any stage in the development process — you are constantly assessing if you need to make adjustments. The bar is getting higher and higher. Your methodology has to adapt to bugs that your constrained random does not find. What may have been a superbug just became a regular bug.

    All methodologies have an Achilles heel. They all assume the verification environment is perfect, which is never the case. Stimulus may be perfect, but there could be bugs in the checkers, or inadequate coverage models.

    But one important element in the ROI case is how quickly a suspected bug can be analyzed. Is it a bug in the design or testbench? How long does it take to understand what is happening and to get to the root cause?

    Using the right tool is an important aspect of a methodology. For example, some people have tried to wrap their heads around the notion of using simulation to detect deadlock, while others look to solve that using formal, and still others strive for better design principles that avoid the issue.

    If you have expected concurrency, you can handle that fairly well. It is when there’s unexpected concurrency, which normally comes from asynchronous events, that you are likely to have problems. When that happens deep inside a design it becomes increasingly difficult to generate all of the combinations, and these can change when minor changes are made to the design. This also can create unexpected issues if the design is reused with small modifications. The impact of those changes has to be carefully considered.

    In conclusion, the group felt that there is a class of blocks that is easier to close with formal than with simulation. However, companies have been having success with simulation, and they should carefully consider how they continue to verify the highest-risk blocks. ROI is an important consideration.

    It takes time, but when formal finds a bug and provides a concise counter-example that has identified the root cause, designers tend to be impressed. But they need to see something that will build that belief. So long as formal does not find false negatives, it will gain credibility. Oski asserts that its false negative rate is just 9.1%. When this is compared to typical simulation false negative rates, which can be up to 60% in the early stages of the verification process, this number is very low.

  21. Tomi Engdahl says:

    Future-proofing your industrial products for Industry 4.0 has increased the need for flexible industrial and process control systems

    new software configurable inputs & outputs (IOs) now available

    Software I/O (SWIO) to make existing/legacy technology useful for today’s needs and challenges

    https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&partnerref=EM3&eventid=2289122&sessionid=1&key=B4654435601764812B9BF1E5B8176346&regTag=&sourcepage=register

  22. Tomi Engdahl says:

    HTTaP
    Test Access Port over HTTP
    https://hackaday.io/project/20170-httap
    HTTaP was first published in the French GNU/Linux Magazine n°173 (July 2014) as “HTTaP : Un protocole de contrôle basé sur HTTP” (“HTTaP: a control protocol based on HTTP”), a simpler alternative to WebSockets.
    The project #micro HTTP server in C is designed to implement this protocol. This is where you’ll find the low-level implementation discussions.
    This project documents the protocol itself, its definitions and evolutions, to help other clients and servers interoperate.
    Think of HTTaP as a WebAPI for hardware and logic circuits.
    For example, it can embed/encapsulate SCPI commands over Ethernet or Wi-Fi instead of RS232 or USB. No need to install stupid Windows drivers or lousy (binary, non-free and obfuscated) applications!
    The client is usually a web browser running JavaScript code to perform high-level work. The code can come from the HTTaP server or any other source such as the local filesystem, Internet… One client can talk simultaneously to different servers, but one server (at a given pair of TCP/IP address and port) can serve only one client at a time, to prevent race conditions.
    HTTaP messages are very simple: just GET or PUT values to certain places, using JSON notation. This is intentionally simple but limited, so actual work is achieved through sequences of small atomic messages.
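    The GET/PUT-values-as-JSON style can be sketched with a tiny in-memory register map. This is a hypothetical illustration of the idea only — see the project pages for the actual HTTaP message format:

```python
# Sketch of a GET/PUT-values protocol over a register map: a read returns
# one value as JSON, a write sets one value atomically. Register names and
# the handler shape are invented for illustration, not the HTTaP spec.
import json

registers = {"led0": 0, "adc3": 512}  # pretend hardware state

def handle(method, path, body=None):
    name = path.lstrip("/")
    if name not in registers:
        return 404, None
    if method == "GET":
        return 200, json.dumps({name: registers[name]})
    if method == "PUT":
        registers[name] = json.loads(body)[name]
        return 200, json.dumps({name: registers[name]})
    return 405, None

print(handle("GET", "/adc3"))                 # read one value
print(handle("PUT", "/led0", '{"led0": 1}'))  # atomic write of one value
```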

    micro HTTP server in C
    Connect your browser to your smart devices, using a minimalist HTTP compliant server written in POSIX/C
    https://hackaday.io/project/20042-micro-http-server-in-c
    Standard addresses provide well-known points that provide enough information to discover/explore the system, its hierarchy and capabilities, through individual client requests.

  23. Tomi Engdahl says:

    Code for popular 32-bit controllers published on GitHub
    https://etn.fi/index.php/13-news/10732-suositun-32-bittisen-koodit-githubiin

    STMicroelectronics has published design software for its popular 32-bit controller chips on GitHub. With this, ST wants to open embedded software development to a wider audience. On GitHub the code can also be updated faster and more efficiently.

    In the cloud-based service, ST is publishing a total of more than a thousand STM32Cube source code packages. This gives developers the ability to store, manage and track their code more easily than before.

  24. Tomi Engdahl says:

    BiST Vs. In-Circuit Sensors
    https://semiengineering.com/bist-vs-in-circuit-sensors/

    Hybrid solutions emerging as reliability concerns increase and coverage becomes more difficult.

    Monitoring the health of a chip post-manufacturing, including how it is aging and performing over time, is becoming much more important as ICs make their way into safety-critical applications such as the central brain in automobiles.

    Faced with longer lifespans and a growing body of functional safety rules, systems vendors need to be able to predict when a part will fail. But as sensing automotive IC failures becomes codified into standards, the newness of everything is hitting all at once — advanced-node designs, AI for predictive maintenance, zero defect tolerance and processing all the data needed to diagnose ICs’ health.

    There are several key approaches being used for this:

    Built-in self-test (BiST), a technology that has been around for a couple of decades. While this technology has a proven track record, it also takes up valuable space on a die.
    In-circuit sensors, which take up less room on die and can monitor the chip non-stop during operation.
    Increased testing, inspection and metrology to identify potential defects before chips are released into the market.

    Each of these approaches has strong points and weaknesses, and generally all three are required to ensure quality over time.

  25. Tomi Engdahl says:

    5 Elements to a Secure Embedded System – Part #4 Secure Bootloaders
    Bootloaders are often developed at the last minute despite the fact that they are often complicated to implement and create critical functionality for the system.
    https://www.designnews.com/electronics-test/5-elements-secure-embedded-system-part-4-secure-bootloaders/158742285862829?ADTRK=InformaMarkets&elq_mid=13147&elq_cid=876648

    As you may recall, the five elements that every developer should be looking to implement are:

    Hardware based isolation
    A Root-of-Trust (RoT)
    A secure boot solution
    A secure bootloader
    Secure storage

    The main focus last time was that the system needs to have secure boot, which boots the system in stages and develops a Chain-of-Trust at each stage. In today’s post, we will continue the discussion with a look at secure bootloaders.
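    The core of each boot stage is: verify the next stage's image against a trusted reference before handing over control. Real secure boot verifies a cryptographic signature anchored in the Root-of-Trust key; the bare hash comparison below is only a sketch of the chain idea:

```python
# Sketch of the Chain-of-Trust step each boot stage performs: refuse to
# run the next stage unless its image matches a trusted digest. Real
# bootloaders verify a signature with a Root-of-Trust key instead of
# comparing a stored hash; this is illustration only.
import hashlib

TRUSTED_DIGEST = hashlib.sha256(b"next-stage-image-v1").hexdigest()

def verify_next_stage(image_bytes):
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest == TRUSTED_DIGEST  # boot only if the image checks out

print(verify_next_stage(b"next-stage-image-v1"))  # True  -> jump to stage
print(verify_next_stage(b"tampered image"))       # False -> refuse to boot
```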

  26. Tomi Engdahl says:

    The Physically Unclonable Function Delivers Advanced Protection
    https://www.electronicdesign.com/technologies/embedded-revolution/article/21131283/the-physically-unclonable-function-delivers-advanced-protection?utm_source=EG+ED+Analog+%26+Power+Source&utm_medium=email&utm_campaign=CPS200508052&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    Part 4 of the Cryptography Handbook series takes a detailed look at the physically unclonable function, or PUF, which generates a unique key to support crypto algorithms and services.

  27. Tomi Engdahl says:

    An Introduction to MicroPython and Microcontrollers
    Microcontrollers don’t have to be programmed in C. MicroPython works just fine.
    https://www.electronicdesign.com/technologies/embedded-revolution/article/21131360/an-introduction-to-micropython-and-microcontrollers

    There are currently around 600 programming languages to choose from, so picking the one that’s right for you can be pretty difficult. But if you’re looking for a language that’s incredibly popular, has a low barrier of entry, and can easily be integrated with a wide variety of other languages, then Python is arguably your best bet right now. Python is the second most in-demand programming language as of 2020, and it might even end up taking the crown from JavaScript one day in the near future.

    But while Python can be used for anything from web hosting and software development to business applications and everything in between, it can’t run on microcontrollers, which somewhat limits its capabilities. Luckily, an intrepid programmer took care of this little issue a few years back when he came up with MicroPython. Just as its name suggests, MicroPython is a compact version of the popular programming language that was designed to work hand-in-hand with microcontrollers.

    In this tutorial, we’re going to teach you everything you need to know about microcontrollers and discuss the benefits of using MicroPython over the alternatives.
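    Part of that low barrier to entry is that MicroPython implements a core subset of Python 3, so hardware-independent code like the sketch below runs unchanged on a board; only the pin/ADC access (e.g. MicroPython’s `machine` module) is board-specific. The filter and sample values here are made up for illustration.

```python
def smooth(readings, alpha=0.3):
    """Exponential moving average for noisy sensor (ADC) readings."""
    out, avg = [], None
    for r in readings:
        avg = r if avg is None else alpha * r + (1 - alpha) * avg
        out.append(avg)
    return out

# On a microcontroller the readings would come from an ADC such as
# machine.ADC(...); fixed values keep the sketch runnable anywhere.
print(smooth([512, 520, 400, 512]))
```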

  28. Tomi Engdahl says:

    CircuitPython Macro Pad Is One Build That Won’t Bite
    https://hackaday.com/2020/05/13/circuitpython-macro-pad-is-one-build-that-wont-bite/

    Have you built a macro keypad yet? This is one of those projects where the need can materialize after the build is complete, because these things are made of wishes and upsides. A totally customized, fun build that streamlines processes for both work and play? Yes please. The only downside is that you actually have to like, know how to build them.

    Suffer no more, because [Andy Warburton] can show you exactly how to put a macro pad together without worrying about wiring up a key switch matrix correctly. [Andy]’s keypad uses the very affordable Seeeduino Xiao, a tiny board that natively runs Arduino code. Since it has a SAMD21 processor, [Andy] chose to run CircuitPython on it instead. And lucky for you, he wrote a separate guide for that.

    Build a simple USB HID Macropad using Seeeduino Xiao & CircuitPython
    https://makeandymake.github.io/2020/05/02/seeeduino-xiao-circuitpython-usb-hid-macro-keypad.html
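    The heart of such a macro pad is just a key-to-macro table; on real hardware a CircuitPython HID keyboard (typically the adafruit_hid library) emits the result as USB keystrokes. Only the lookup logic is shown below so it runs anywhere, and the macros are hypothetical examples.

```python
# Map key indices to the text each key should "type":
MACROS = {
    0: "git status\n",
    1: "git pull\n",
    2: "make -j4\n",
}

def macro_for_key(index):
    """Return the text for a pressed key, or None if the key is unmapped."""
    return MACROS.get(index)

print(repr(macro_for_key(1)))  # 'git pull\n'
```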

  29. Tomi Engdahl says:

    NXP CTO: Understand Distributed Networks – and Be Curious!
    https://www.eetimes.com/nxp-cto-understand-distributed-networks-and-be-curious/

    As we move more and more to a heavily connected, on-demand world, how do engineering skills need to change? This is uppermost on the mind of Lars Reger, CTO, as he explores the needs of his company and the electronics industry.

    He’s very conscious of the fact that the prospect of 75 billion connected devices by 2025 presents challenges related to trust, privacy, security and managing risk. Reger believes these are vital.

    The skills needed to design for trust, security and safety will hence be paramount, and will require understanding de-centralized networked systems and devices.

    Lars Reger: Fifteen years ago, I had a complete manual world around me with a couple of electronic helpers. When I was coming home from vacation, I would call my neighbors, and ask them to switch on the heating in my house. That was my smart home. Today, in an on-demand world, I take out my mobile phone and call for an Uber now. Or I can [remotely] switch on or off my heating.

    We are moving towards a world that anticipates and automates. So, like Queen said 30 years ago, “I want it all. And I want it now.” But of course, this all happens without you having to take action, since in theory the world will automatically adjust around you, because of the 75 billion smart connected devices around us. A smart connected device includes your smart watch, your intelligent speaker, your building robot and your autonomous cars.

    This puts us in an era of decentralized devices. Engineers have traditionally always followed Moore’s Law: with each new technology node, a microcontroller gets more transistors, so it becomes more powerful and less energy-consuming. That is basically the boring trend that every node was following. And what can the new technology node give me?

    If you want to go to the 75 billion smart connected devices and if they are the mighty steering of our planet — and by the way, also the way to an eco-friendly world — then I need engineers who understand networking systems, distributed systems. A lot of us at the moment are becoming somewhat expert in decentralized, distributed systems by learning about Covid-19.

    Engineers need to understand how to build these systems with much lower compute performance and a much smaller energy footprint. They’re networked with high-quality sensing, whether designing a Nest thermostat or a self-driving car. Now, what engineers have done so far is optimize these tiny little devices or the big computer systems. But what is coming, and what engineers are not effectively trained in, is asking: “How do you enable the end user to trust his devices?”

    “Trust your device” is the big enabler or disabler for entire markets.

    Lars Reger: Trust had not been of paramount importance in the past because your devices were not connected. While functional safety might have been understood, the security part was of low importance. Now you need to have engineers who do privacy by design or security by design and functional safety by design, because otherwise you can hardly qualify and build.

    Take a car: the most expensive and most complicated of these smart connected devices. If you try to change one component in your car and you would have to requalify your complete autonomous electric driven connected vehicle: that’s an undoable task, in terms of manpower and money. You need to have an architectural understanding to break up the smart connected device into sub-domains that are very strictly separate from each other. How do I separate the powertrain of my car from the gateway, from the connectivity domain, and from the autonomy domain? And if I changed the autonomy domain, how do I enable this without having to requalify the powertrain? This type of super-disciplined thinking is important.

    This disciplined thinking is what each and every engineer needs to have – and if they did, that would of course be the game changer, with millions and tens of millions of dollars of impact on our big projects.

    Richard Quinnell: Is there a fundamental starting point that we need?

    Lars Reger: Look, I mean, show me the university that has lectures on functional safety and functional security in their curriculum. In the automotive industry, if you have really mastered this, you have ISO 26262, which is basically the functional safety processes, and how we treat systems.

    In layman’s terms, this means carrying out risk assessments starting from the system level. How do you assess a car and how do you make sure that this thing never, ever causes a fatality? Then you break it down into sub-systems, for example the braking system, to make sure this never, ever causes a fatality.

    The skill you need is in building functional safety into the process. But this really must work down the scale to the component level of a semiconductor, driving down the system cost: using system capabilities, but going from a high-level risk assessment in a structured way down to the component.

    In the same manner, industry is looking at the security of systems.

    So, every microcontroller has a certain level of security and you have surveillance systems in the aisles. You are supervising the data running on your wiring harnesses. The combination of all of that will make your castle secure.

    The same thinking needs to be applied to distributed systems in a car with 200 control units, a wiring harness and connectivity to the outside world. It’s this type of thinking that engineers need.

    They can start talking about enabling chip security. The first question is: what security do you need? With the house analogy, do you need front-door-lock security, kitchen/living-room security, or do you want to protect your wine cellar? The point is, what are the security levels, and do I understand the system? And from there it’s then possible to fold this back into the specific requirements, then scale it, using the right products from our portfolio.

    Richard Quinnell: Is there anything you wish your customers knew before they came to you?

    Lars Reger: The similarity between functional safety and security. What do I mean? What I am describing is going from a system risk assessment down to the component level. You need to ask what could go wrong in the system, on that level. Functional security is very much the same. You now bring in connectivity and ask what could go wrong on the level of my fridge, on the level of the connectivity box of my fridge, on the cooling system of my fridge and so on. In principle, you follow the same reasoning, but only a very few people in the industry really get that [similarity].

    While we need to be refocused on safety and security, what I also see coming up is the mix of different capabilities in silicon. Emerging are what we call crossover microcontrollers in which you have a very tiny, very energy efficient microcontroller, and a big fat microprocessor sitting next to that on the same piece of silicon.

    Give people the right toys to play and innovate

    Nitin Dahad: So as a CTO, your job is to look at the future and look at what’s coming up in terms of both technologies and applications. How would you enable your successors to take on that kind of role?

    Lars Reger: The one thing that I can do as a CTO is invite people to play with NXP. If I had only one sentence, that is the job that I have to do: give people the right Lego blocks, the right toys to play with. We have 26,000 customers, and I have no way of overseeing what types of innovation they are doing with our chips.

    So, my successors will need one thing. They will need the ability to instill curiosity in the people that they talk to, the willingness to play with our stuff.

  30. Tomi Engdahl says:

    Design For Narrowband IoT
    https://semiengineering.com/design-for-narrowband-iot/

    The challenge of creating chips for ultra-low-power applications with long lifetimes and always-on circuitry.

  31. Tomi Engdahl says:

    ZRAM BOOSTS RASPBERRY PI PERFORMANCE
    https://hackaday.com/2020/05/20/zram-boosts-raspberry-pi-performance/

    Linux is a two-edged sword. On the one hand, there’s so much you can configure. On the other hand, there’s so much you can configure. It is sometimes hard to know just what you should do to get the best performance, especially on a small platform like the Raspberry Pi. [Hayden James] has a suggestion: enable ZRAM and tweak the kernel to match.

    Although the post focuses on the Raspberry Pi 4, it applies to any Linux system that has limited memory, including older Pi boards. The idea is to use a portion of main memory as a swap device. At first, that might seem like a waste, since you could use that memory to, you know, actually run programs. However, the swap devices are compressed, so you get more swap space, and transfers between these compressed swap devices and main memory are lightning-fast compared to a hard drive or solid-state drive.

    Raspberry Pi Performance: Add ZRAM and these Kernel Parameters
    Last updated May 20, 2020 | Published May 19, 2020 by Hayden James, in Blog Linux
    https://haydenjames.io/raspberry-pi-performance-add-zram-kernel-parameters/
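    The reason compressed in-RAM swap pays off is that typical memory pages compress well, so a ZRAM device effectively holds several times its nominal size. A rough illustration with Python’s zlib (the kernel actually uses lzo/lz4/zstd, and the page contents here are invented):

```python
import zlib

# A fake 4 KiB memory "page" with the kind of redundancy real pages have:
page = b"\x00" * 2048 + bytes(range(256)) * 8
compressed = zlib.compress(page)

print(len(page), "->", len(compressed))  # far fewer bytes after compression
```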

  32. Tomi Engdahl says:

    4 Things Today’s Engineer Must Know
    https://www.eetimes.com/4-things-todays-engineer-must-know/

    Daniel Cooley, senior vice president and chief strategy officer at Silicon Labs, did not disappoint us during a recent chat. He went straight to four big industry-wide topics that he believes are changing the face of the tech world:

    AI
    security
    the roles that tech companies play in the real world (will be scrutinized by governments and consumers)
    technology stack (what happens in the cloud matters to chip designers)

  33. Tomi Engdahl says:

    ROS 2 for Industrial Applications
    https://www.sealevel.com/2020/04/27/ros-2-for-industrial-applications/

    Started in 2007, the Robotic Operating System (ROS) is an open-source, collaborative framework of software libraries and tools that assist in building robot applications. ROS isn’t an operating system in the traditional sense and instead works on top of an existing operating system to provide structured communications.

    The system was originally built to aid single-robot projects in academic research. However, as ROS has grown in popularity, its applications have expanded. The system has become more common in the industrial sector, with uses in manufacturing, agriculture, commercial cleaning and government agencies.

    ROS 2 was created to address this shift. The updated system runs separately from ROS 1: as the older version is still widely used, the significant changes in ROS 2 could otherwise cause disruptions or break functions entirely. However, ROS 2 is not meant to replace ROS 1, and is instead designed to be interoperable with it.

    ROS 2 applies to teams of multiple robots, includes real-time system control, works within degraded network connectivity conditions and provides tools for life cycle management and static configurations.

    Specific changes from ROS 1 to ROS 2 include but are not limited to:

    ROS 1 uses the C++03 programming language. ROS 2 uses C++11 and some C++14; C++17 might be adopted in the future.
    ROS 1 uses a custom serialization format, transport protocol and centralized discovery. ROS 2 uses the Data Distribution Service (DDS) for serialization, transport and discovery.
    ROS 1 only supports the CMake build system. ROS 2 can support other build systems.
    ROS 1 can only use a small subset of Python setup.py files. ROS 2 can use all such files.
    ROS 2 has a more flexible environment setup.

    ROS 2 launched its first alpha build in 2015 and continues to update with new features in conjunction with community input and a future road map. ROS 1 also continues to receive updates.

  34. Tomi Engdahl says:

    A YouTube channel with Arduino, NodeMCU, ESP32, Proteus and MicroPython tutorials available:
    https://www.youtube.com/c/voidloopRobotechAutomation

  35. Tomi Engdahl says:

    The automotive industry is undergoing a lot of change, with conflicting pressures to cut costs and innovate faster, all while breaking new ground with technologies that are quite different from those that have gone before. The AUTOSAR Classic Platform is now widely used and proven in traditional ECUs. Today, increasing computing power, and with it the need to process more data, has made the AUTOSAR Adaptive Platform increasingly important. The complex new functions associated with features in the ADAS domain, and others, bring with them new types of engineers with different skill sets; combined with traditional automotive engineering, there is a need to rapidly build up proof-of-concept tests to confirm development paths.

    Mentor engineering teams have developed a set of Python bindings to support the rapid prototyping of new functions, running on an AUTOSAR Adaptive Platform ECU, enabling a pathway to support full production development of the software after the concept is proven.
    https://www.mentor.com/embedded-software/events/rapid-prototyping-using-python-api-and-autosar-adaptive?contactid=1&PC=L&c=2020_06_03_embedded_python_autosar_webina

  36. Tomi Engdahl says:

    Open Source IP is Best Not Forgotten
    https://www.designnews.com/design-hardware-software/open-source-ip-best-not-forgotten/73838276563075?ADTRK=InformaMarkets&elq_mid=13355&elq_cid=876648

    Software and hardware intellectual property must be managed – even by engineers – to avoid legal entanglements and broken designs.

    Further, the report revealed that 99% of the 1,250 commercial codebases audited contained open-source code, with open source comprising 70% of the code overall. According to a press release, what was “more notable is the continued widespread use of aging or abandoned open source components, with 91% of the codebases containing components that either were more than four years out of date or had seen no development activity in the last two years.”

    The four main findings were:

    Open-source adoption continues to soar. (36%).
    Outdated and “abandoned” open-source components are pervasive.
    The use of vulnerable open-source components is trending upward again.
    Open-source license conflicts continue to put intellectual property at risk.

    Open source software is no different from any other software in that its use is governed by a license that describes the rights conveyed to users and the obligations those users must meet.

    The Open Source Initiative (OSI), a nonprofit corporation that promotes the use of open source software in the commercial world, defines open source with 10 criteria and lists 82 OSI-approved licenses, with nine being “popular, widely used, or having strong communities.”

    Analyses from the OSSRA report indicate that the 20 most popular licenses cover approximately 98% of the open source in use.

  37. Tomi Engdahl says:

    What Makes A Chip Tamper-Proof?
    Identifying attacks and protecting against them is still difficult, but there has been progress.
    https://semiengineering.com/what-makes-a-chip-tamper-proof/

    The cyber world is the next major battlefield, and attackers are busily looking for ways to disrupt critical infrastructure.

    There is widespread proof this is happening. “Twenty-six percent of the U.S. power grid was found to be hosting Trojans,” said Haydn Povey, IAR Systems’ general manager of embedded security solutions. “In a cyber-warfare situation, that’s the first thing that would be attacked.”

    But not all attacks are software-based. Some are very physical. In particular, the Internet of Things (IoT) represents a huge number of new ways to get onto sensitive networks. “The IoT market isn’t talking about tampering. But because there are so many new IoT devices, especially for industrial, there has been an increase in physical attacks,” said Mike Dow, senior product manager of IoT security at Silicon Labs. To address this, anti-tampering features are appearing on a broad range of chips.

    Protecting secrets
    Security for connected devices involves cryptographic functions for encrypting messages and ensuring that all parties in any communication are who they say they are. But such functions require cryptographic keys, certificates, and other artifacts, some of which must remain secret to be effective. Attackers have increasingly turned to physical attacks in an attempt to retrieve these secrets and defeat the security. The purpose of anti-tampering efforts is to protect those secrets.

    In some cases, however, the goal may not be to steal secrets, but rather to disable or sabotage a system.
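    To see why the keys are worth a physical attack, consider message authentication: whoever holds the key can forge valid commands, so the key itself is what anti-tamper hardware must guard. A sketch with Python’s stdlib HMAC (the key and message values are invented for illustration):

```python
import hashlib
import hmac

device_key = b"key-kept-inside-tamper-protected-silicon"  # illustrative
message = b"unlock door 7"

# The sender authenticates its command with the shared secret key:
tag = hmac.new(device_key, message, hashlib.sha256).hexdigest()

def verify(key, msg, received_tag):
    """Accept the message only if the tag matches under our key."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(device_key, message, tag))    # True: genuine command
print(verify(b"wrong-key", message, tag))  # False: forgery without the key
```

    Extracting `device_key` from the silicon is exactly what a physical attacker is after, which is why the chip, not just the protocol, must resist tampering.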

  38. Tomi Engdahl says:

    A Screed on Buttonless Phones
    https://www.electronicdesign.com/technologies/embedded-revolution/article/21133197/a-screed-on-buttonless-phones?utm_source=EG+ED+Analog+%26+Power+Source&utm_medium=email&utm_campaign=CPS200602054&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    As mechanical buttons become more archaic, designers continue to plot out new ways to eliminate them on nearly all devices. But as the shift toward sleek buttonless devices continues, creating a user experience on par with physical buttons is becoming more complicated than anticipated.

  39. Tomi Engdahl says:

    Digitalization
    Technology is changing software testing
    https://www.etteplan.com/stories/technology-changing-software-testing?utm_campaign=unspecified&utm_content=unspecified&utm_medium=email&utm_source=apsis-anp-3&pe_data=D43445A477046455B45724541514B71%7C26757479

    Industrial devices and machinery have become so complicated that even software designers are struggling to keep up with the development. However, software testing tools have also developed. Nowadays, testing can be carried out on a digital twin quickly, reliably and safely.

    As aptly depicted in digital twin developer Mevea’s illustration, the stacking of technologies has led to entities that leave software designers scratching their heads.

    Software testing with a virtual device considerably speeds up product development

    Traditionally, software testing has had to be carried out on the finished, physical machine in its actual environment. Unfortunately, this means testing is slow and the product’s finalization is delayed, which is not in the customer’s interest.

    Technology has already, however, revolutionized software testing. Although the software and electronics that are being tested are real, testing them on the actual physical machine or in the machine’s actual environment is not necessary. The machine and its environment can be virtualized and tested in a modelled environment using a digital twin.

    Testing software using a digital twin offers numerous benefits, one great example of which is time savings: since a physical machine and its environment are not needed, testing in the digital world can accelerate the process by months compared to before. In the best-case scenario, this can significantly speed up the product’s time-to-market.
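    The digital-twin idea in miniature: exercise the control software against a software model of the machine instead of the physical machine. Below, a trivial bang-bang heater controller runs against a crude first-order thermal model; all coefficients and values are invented for illustration, a real twin would model the actual machine dynamics.

```python
def controller(temp, setpoint=21.0):
    """Bang-bang control: heater fully on below the setpoint, else off."""
    return 1 if temp < setpoint else 0

def simulate(steps=200, temp=15.0):
    """A crude 'digital twin' of a heated room: heater gain vs. heat loss."""
    for _ in range(steps):
        power = controller(temp)
        temp += 2.0 * power - 0.1 * (temp - 10.0)
    return temp

print(round(simulate(), 1))  # hovers near the 21 degree setpoint
```

    The controller code is what would ship; only `simulate()` stands in for the physical machine, which is why tests like this can run in seconds instead of waiting for hardware.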

  40. Tomi Engdahl says:

    Want to improve your product’s competitiveness by 5 or 50 percent?
    https://www.etteplan.com/stories/want-improve-your-products-competitiveness-5-or-50-percent?utm_campaign=unspecified&utm_content=unspecified&utm_medium=email&utm_source=apsis-anp-3&pe_data=D43445A477046455B45724541514B71%7C26757479

    Product costs are an essential element of any product’s competitiveness. Redesigning an existing product can reduce its costs by more than 50 percent. The smaller savings potential related to sourcing is also worth seizing.

    A product’s competitiveness is influenced by numerous factors, the most important of which are the level of quality required for the product, production costs, flexibility in production, risks, lead time and efficiency and, naturally, environmental impacts, to name a few.

    Products that are structurally flexible and have the ability to adapt to changing requirements are the ones that succeed the best on the competitive markets. When product design contributes to the product’s flexibility, capacity can be easily scaled up, for example, by increasing the degree of automation. Rapid changes can then be made smoothly if necessary.

    Cost-competitive products are designed such that the number of different parts (with drawings) to be manufactured is minimized, the product is structurally optimized (use of materials and required strength and weight) and the use of different materials for the product is minimized. In addition, the product has been designed for manufacture/assembly, taking into account the various requirements and opportunities related to the manufacturing methods.

    Now that we have identified the factors that influence a product’s competitiveness, we can move on to concrete means for improving its competitiveness. Product costs can be influenced in two ways: through sourcing and through design.

    Roughly 5 percent savings potential through sourcing

    When companies set out to find ways to lower their product costs, their focus is most often on sourcing, an approach we refer to as down pricing. Its objective is simple: to procure the product’s current structures, components or design as cost-effectively as possible.

    Many companies favor the down pricing approach for its ease of use: tendering for existing equipment and parts is straightforward – it mostly only requires comparing costs and delivery terms. But, as you can guess, easy solutions seldom produce very revolutionary results. By some estimates, in Western countries, down pricing measures can usually bring approximately 5 percent savings in product costs. In BCC countries, savings can be as high as 30–50 percent; however, this is on the condition of continuous quality control.

    Savings as high as 10–50 percent through design

    When it comes to reducing a product’s overall costs, it is often more sensible to allocate resources to the actual product design rather than to tendering. The aim of the down costing approach is to redesign a cost-effective product by reducing the number of parts manufactured for the product, which, in turn, lowers the material and installation costs. This approach delves deeper into the product’s core and into the changes that could be made to it through redesign.

    The product can be sensibly designed to have lower manufacturing, installation, quality and product development costs. The improvements translate to faster lead times, faster time to market, and a reduction in various direct and indirect costs throughout the product’s life cycle.

    Design-based down costing naturally entails a slightly larger investment than sourcing-based down pricing, as redesigning a product demands time and expertise – and money. It thus also requires a commitment from the company’s senior management to provide the opportunity and resources necessary for the redesign.

  41. Tomi Engdahl says:

    Open-Source Security: The Good, the Bad, and the Ugly
    Some form of open-source software is in almost every commercial product, which is good and bad from a security standpoint.
    https://www.electronicdesign.com/altembedded/article/21133709/opensource-security-the-good-the-bad-and-the-ugly

    Synopsys’s Open Source Security and Risk Analysis (OSSRA) report reveals some interesting results for developers, including that almost all of the 1,250 audited codebases included open-source software of some sort (see figure). This is good for open-source projects and for sharing code, but it can be bad if the open-source software contains security errors. The ugly aspect is that tracking open-source software use and keeping software up to date can be a challenge, especially if companies don’t know what open-source software is in their products.

    Tracking a project’s software components is important regardless of whether the code is open source or not. Commercial software used within a project is usually easier to track since a contract is usually involved along with service and support. Open-source software is more of a challenge because one open-source project often depends on other open-source projects. Thus, the issue can cascade into a significant amount of code involved in a project.

    Keeping up to date can be a challenge by itself: trying to find problems and get fixes is often problematic since there are often no support services or contracts involved. This alone is a good reason to support your favorite software project. Many open-source projects offer commercial support as well as paid support services, so using such projects is no different from using closed-source software.

    https://www.synopsys.com/software-integrity/resources/analyst-reports/2020-open-source-security-risk-analysis.html?cmp=pr-sig

  42. Tomi Engdahl says:

    Moving to Linux for embedded projects? There are 5 things you must consider…

    Validate that Linux will meet your operating-system needs; there is a time and place for an RTOS.
    Be careful not to turn your proprietary IP into OSS. Know which OSS licenses you are allowed to use and how you are allowed to use them.
    Don’t let your OSS get out of control. Define a plan for managing Linux and your development tools.
    Don’t forget about security.
    If your device has a safety function, define your risks and follow certification guidelines.

  43. Tomi Engdahl says:

    New method ensures complex programs are bug-free without testing
    https://techxplore.com/news/2020-06-method-complex-bug-free.html

    A team of researchers have devised a way to verify that a class of complex programs is bug-free without the need for traditional software testing. Called Armada, the system makes use of a technique called formal verification to prove whether a piece of software will output what it’s supposed to. It targets software that runs using concurrent execution, a widespread method for boosting performance, which has long been a particularly challenging feature to apply this technique to.

    Concurrent programs are known for their complexity, but have been a vital tool for increasing performance after the raw speed of processors began to plateau. Through a variety of different methods, the technique boils down to running multiple instructions in a program simultaneously. A common example of this is making use of multiple cores of a CPU at once.

    Formal verification, on the other hand, is a means to demonstrate that a program will always output correct values without having to test it with a full range of possible inputs. By reasoning about the program as a mathematical proof, programmers can demonstrate that bugs or errors are impossible and that its execution is airtight. This overcomes the shortcoming common to all programs, even without concurrency, that testing something exhaustively can be either impractical or actually impossible.

    “Fundamentally, unless you try all the possible inputs, you may miss something,” says Prof. Manos Kapritsos, co-author on the paper. “And in practice, people do miss things. The systems we’re talking about are very complex, there’s no way that they can exhaustively try all the behaviors of the system.”

    “To verify that multi-threaded programs are correct, we have to reason about the huge number of interleavings that are possible when multiple methods run at the same time.”
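    The scale of that problem is easy to compute: two threads with m and n atomic steps can interleave in C(m+n, n) ways, and the count compounds with each extra thread. A quick illustration with the stdlib (the step counts are arbitrary):

```python
from math import comb

def interleavings(*steps):
    """Number of distinct interleavings of threads with the given step counts."""
    total, ways = 0, 1
    for s in steps:
        total += s
        ways *= comb(total, s)  # choose positions for this thread's steps
    return ways

print(interleavings(10, 10))      # 184756 schedules for two short threads
print(interleavings(10, 10, 10))  # trillions once a third thread joins
```

    Enumerating even these toy schedules exhaustively is hopeless, which is why Armada reasons about them with proofs instead of tests.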

    To date, a variety of proof methods have been designed to deal with different types of concurrency.

    Armada works by passing a system designed with concurrency through a series of transformations until it’s broken down into a much simpler representation. The developer just has to prove that each simplified step really is representative of the more complex program from the previous step. To do this, the developer uses Armada’s high-level syntax to describe the simpler program and indicate one of the proof methods needed to support the transformation.

    “After every transformation, you want to reason that the system maintains its correctness or is equivalent to the previous one,” Kapritsos explains.

    In the end, the developer has a simple, high-level specification for the entire system. They haven’t made any changes to the system itself, just reasoned about its functionality in increasingly abstract steps that are each still representative of the functioning of the whole program.

    “Part of the goal is to support high performance,”

