New approaches for embedded development

The idea for this post started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.

Embedded devices, or embedded systems, are specialized computer systems that form components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues like broken connections, latency, and security.

Those issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already building more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.

Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, the electronics in a modern car, and Internet of Things systems fall into this category. This implies that a different approach is necessary.

Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards, meant to woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation, have already become common. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products are becoming moot. Leaders in embedded systems development lifecycle management solutions speak out on the new approaches available today for developing advanced products and systems.

Traditional approaches

C/C++

Traditionally, embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you are going to need to learn C. Many embedded operating systems, including the Linux kernel, are written in C. C translates very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C, and C is still chosen over C++ today when maximum efficiency is the goal.
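
To give a flavor of the low-level control C offers, here is a minimal sketch of the classic memory-mapped register idiom. The register addresses and pin number are made up for illustration; in a real project they come from the MCU datasheet:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO registers; real addresses come
       from the datasheet of the target microcontroller. */
    #define GPIO_DIR (*(volatile uint32_t *)0x40020000u)
    #define GPIO_OUT (*(volatile uint32_t *)0x40020004u)
    #define LED_PIN  (1u << 5)

    int main(void)
    {
        GPIO_DIR |= LED_PIN;              /* configure pin 5 as an output */
        for (;;) {
            GPIO_OUT ^= LED_PIN;          /* toggle the LED */
            for (volatile uint32_t i = 0; i < 100000u; i++)
                ;                         /* crude busy-wait delay */
        }
    }

The volatile qualifier is what makes this work: it forces the compiler to perform every read and write to the register instead of optimizing them away.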

C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks, except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a legitimate concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.

And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; the Arduino system is an example of that. You're writing in C/C++, using a library of functions with a fairly consistent API. There is no "Arduino language", and your ".ino" files are three lines away from being standard C++.
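
As a concrete example, the classic Arduino blink sketch below is plain C/C++; the IDE merely prepends #include <Arduino.h> and generates function prototypes before handing the file to a standard C++ compiler:

    // blink.ino - the classic Arduino example sketch
    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);      // on-board LED pin as output
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);   // LED on
      delay(500);                        // wait 500 ms
      digitalWrite(LED_BUILTIN, LOW);    // LED off
      delay(500);
    }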

Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example, an embedded Linux system that runs a C++ application. When you write C or C++ programs for modern embedded Linux, you typically use the GCC compiler toolchain to do the compilation and a makefile to manage the build process.
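
A minimal makefile for such a project might look like the sketch below; the cross-compiler prefix is an assumption and depends on your toolchain (recipe lines must be indented with a tab):

    # Minimal makefile for an embedded Linux C application.
    # The CROSS_COMPILE prefix is just an example; adjust it
    # to match the toolchain for your target board.
    CROSS_COMPILE ?= arm-linux-gnueabihf-
    CC     := $(CROSS_COMPILE)gcc
    CFLAGS := -Wall -O2

    app: main.o
    	$(CC) $(CFLAGS) -o $@ $^

    main.o: main.c
    	$(CC) $(CFLAGS) -c $<

    clean:
    	rm -f app *.o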

Most organizations put considerable focus on software quality, but software security is different. While security is a much-discussed topic in today's embedded systems, the security of programs written in C/C++ is sometimes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other technology.

Java

Java is a general-purpose computer programming language that is concurrent, class-based, and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers "write once, run anywhere" (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Common examples include SIM cards, VoIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.

Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.

Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

My feeling about Java is that if your embedded platform supports Java and you know how to code in Java, then it can be a good tool. If your platform does not have ready Java support, adding it can be quite a bit of work.


Increasing trends

Databases

Embedded databases are appearing in more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) that is tightly integrated with an application that requires access to stored data, such that the database system is "hidden" from the application's end user and requires little or no ongoing maintenance.

There are many possible databases. The first choice is what kind of database you need. The main choices are SQL databases and simpler key-value stores (often grouped under the NoSQL label).

SQLite is the database chosen by virtually all mobile operating systems: for example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that has to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows).
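
Since SQLite itself is a C library, using it from embedded C code is straightforward. A minimal sketch follows; the table layout and file name are made up for illustration (build with -lsqlite3):

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open("sensors.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return 1;
        }

        /* Create a table and insert one reading in a single batch. */
        const char *sql =
            "CREATE TABLE IF NOT EXISTS readings (ts INTEGER, temp REAL);"
            "INSERT INTO readings VALUES (strftime('%s','now'), 21.5);";

        if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "SQL error: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }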

If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
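
As a sketch of the key/value style of API, storing one pair through Berkeley DB's C interface looks roughly like this; the file name and keys are made up for illustration (build with -ldb):

    #include <string.h>
    #include <db.h>

    int main(void)
    {
        DB *dbp;
        DBT key, data;

        if (db_create(&dbp, NULL, 0) != 0)
            return 1;
        if (dbp->open(dbp, NULL, "store.db", NULL,
                      DB_BTREE, DB_CREATE, 0664) != 0)
            return 1;

        /* DBTs are just pointer+length pairs: arbitrary byte arrays. */
        memset(&key, 0, sizeof key);
        memset(&data, 0, sizeof data);
        key.data  = "device-id";
        key.size  = sizeof "device-id";
        data.data = "node-42";
        data.size = sizeof "node-42";

        dbp->put(dbp, NULL, &key, &data, 0);   /* store the pair */
        dbp->close(dbp, 0);
        return 0;
    }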

RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.

Software quality

Building quality into embedded software doesn't happen by accident: quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.

Safety

Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built in software – including safety-critical functions. Software-intensive embedded systems require new approaches to safety. Embedded Software Can Kill But Are We Designing Safely?

IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not – and 11 percent did not even know whether they were designing to a standard or not.

If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious and, quite frankly, I find this quite disturbing.

Security

The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues such as broken connections, latency, and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.

The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:

MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY

Many organizations are only now becoming aware of the need to incorporate security into their software development lifecycle.

Some techniques for building security into embedded systems:

Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed

Flood of new languages

Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.), so there is a huge push to use interpreted and scripting languages in embedded systems as well. Increased hardware performance on embedded devices, combined with embedded Linux, has made many scripting languages good tools for implementing different parts of embedded applications (for example, the web user interface). Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. There are also many other relevant languages.

One workable solution, especially for embedded Linux systems, is to implement part of the functionality as a C program and part with scripting languages. This makes it possible to change behavior simply by editing the script files, without having to rebuild the whole system software, as in the sketch below. Scripting languages are also tools with which some things – for example a web user interface – can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
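
A minimal sketch of the hybrid idea: the C program below delegates one easily changed step to an external script (the script path and name are hypothetical), so that step can be edited on the device without recompiling the C code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    int main(void)
    {
        /* Path is made up for illustration; the script can be
           edited in place without rebuilding this program. */
        int status = system("/etc/myapp/on-alarm.sh");

        if (status == -1) {
            perror("system");
            return 1;
        }
        if (WIFEXITED(status))
            printf("script exited with code %d\n", WEXITSTATUS(status));
        return 0;
    }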

Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, and IoT gateways. Some scripting languages work well even in quite small embedded environments.
I have successfully used, among others, the Bash, AWK, PHP, Python, and Lua scripting languages with embedded systems. They work really well, and it is really easy to create custom code quickly. They don't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used.
High-level, dynamically typed languages such as Python, Ruby, and JavaScript are easy – and even fun – to use, and they lend themselves to code that can easily be reused and maintained.

There are some things that need to be considered when using scripting languages. The lack of static checking compared to a regular compiler can cause problems to surface only at run time, but you are better off practicing "strong testing" than relying on strong typing. The other downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.

Bash and other shell scripting

Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is: you don't. The real power of the computer is its ability to do the work for you, and the power of the shell script is the way it lets you easily automate things. Shell scripts are collections of Linux command-line commands stored in a file; the shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regular expressions, etc.), so your scripts can be truly powerful. Creating a script is extremely straightforward: you can use a separate graphical editor, or a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example, starting and stopping Linux services in the right order).

One of the most useful tools when developing within a Linux environment is shell scripting. Scripting can help in setting up environment variables, performing repetitive and complex tasks, and ensuring that errors are kept to a minimum. Since scripts are run from within the terminal, any command or function that can be performed manually from a terminal can also be automated, as in the example below.
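
For example, a tiny watchdog-style script along these lines can automate what you would otherwise type by hand; the service name and log path are made up for illustration:

    #!/bin/bash
    # Restart a service if it is not running; hypothetical names.
    LOG=/var/log/myapp-watchdog.log

    if ! pidof myapp > /dev/null; then
        echo "$(date): myapp not running, restarting" >> "$LOG"
        /etc/init.d/myapp start
    fi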

The most common type of shell script is a Bash script, Bash being the most commonly used shell for scripting. In Bash scripts users can use more than just Bash to write the script: there are commands that allow users to embed other scripting languages into a Bash script.

There are also other shells. For example, many small embedded systems use BusyBox, software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android, and FreeBSD. BusyBox has become the de facto standard core user-space toolset for embedded Linux devices and Linux distribution installers.

Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.

Lua

Lua is a lightweight cross-platform multi-paradigm programming language designed primarily for embedded use. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provides the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and it provides a C API for this purpose.
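
A minimal sketch of that C API: the host program below creates a Lua state, loads the standard libraries, and runs a line of Lua (link against the Lua library for your platform, e.g. -llua):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();   /* create a Lua interpreter */
        luaL_openlibs(L);                 /* load the standard libraries */

        /* Run a chunk of Lua code; non-zero means an error occurred. */
        if (luaL_dostring(L, "print('hello from Lua')")) {
            fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
        }

        lua_close(L);
        return 0;
    }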

Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.

There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.

PHP

PHP is a server-side scripting language that can be embedded in HTML. It provides web developers with a full suite of tools for building dynamic websites, but it can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed. On such an environment it is a good idea to take advantage of those built-in features for the things they are good at – such as building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.
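
As a minimal sketch of PHP embedded in HTML, a status page on such a device could look like this; the sensor reading is a made-up stand-in for real hardware access:

    <?php
    // Hypothetical reading; a real page would query the hardware.
    $temp = 21.5;
    ?>
    <html>
      <body>
        <p>Device temperature: <?php echo $temp; ?> &deg;C</p>
      </body>
    </html>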

Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.

Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN, and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.

Python was designed to be highly extensible. Libraries like NumPy, SciPy, and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications, and it has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
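
A minimal sketch of embedding CPython in a C host application (with Python 3.8 or later, python3-config --embed prints the needed compile and link flags):

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();                  /* start the interpreter */
        PyRun_SimpleString(
            "import sys\n"
            "print('embedded Python', sys.version.split()[0])\n");
        Py_Finalize();                    /* shut the interpreter down */
        return 0;
    }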

Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.

For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python, and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy are also ports of Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-sourced language: Python.
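
On such a port the code is ordinary Python against a small hardware API. A minimal MicroPython sketch follows; the pin number is a board-specific assumption (GPIO2 is commonly the on-board LED on ESP8266 boards):

    # MicroPython: blink an LED using the machine module.
    from machine import Pin
    import time

    led = Pin(2, Pin.OUT)           # pin number depends on the board

    while True:
        led.value(not led.value())  # toggle the LED
        time.sleep(0.5)             # every half second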

Python on embedded platforms? It is quick to develop apps and quick to debug – it is really easy to make custom code quickly. Sometimes the lack of static checking compared to a regular compiler can cause problems to be thrown at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool, which will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). In the empirical study mentioned earlier, which found scripting languages such as Python more productive than C and Java, Python's memory consumption was often "better than Java and not much worse than C or C++".

JavaScript and node.js

JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications and cross-platform desktop software, and thanks to Node.js it's even capable of creating and running servers and databases! There is a huge community of developers.

Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.

Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.

It doesn’t require a complicated IDE; all you really need is a terminal.

JavaScript is a high-level language. While this usually means that it's more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower, which means it may not be suitable for situations where timing and speed are critical.

JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and the BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create is dubbed the "JavaScript powered Internet of Things construction kit." The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!

JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. For example, we see a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of web servers and networking tools using JavaScript and a collection of "modules" that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open source libraries in the world, and modern desktop IDEs provide editing and debugging features specifically for Node.js applications.

JXcore is a fork of Node.js targeting mobile devices and IoT. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!

Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event-driven, and offers very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically, IoT projects require "things" to communicate with other "things" or applications, and the huge number of modules available for Node.js makes it easier to build those interfaces. For example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software's function calls, as in the sketch below. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.
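
A minimal sketch of that pattern, using only the built-in http module; the endpoint and the sensor function are made up for illustration:

    // Map a GET URL to a function call with Node.js built-ins only.
    const http = require('http');

    function readTemperature() {
      return { celsius: 21.5 };   // stand-in for real sensor access
    }

    const server = http.createServer((req, res) => {
      if (req.method === 'GET' && req.url === '/temperature') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(readTemperature()));
      } else {
        res.writeHead(404);
        res.end();
      }
    });

    server.listen(8080, () => console.log('listening on :8080'));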

Future trends

According to the New approaches to dominate in embedded development article, there will be several camps of embedded development in the future:

One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems, or else fixed-function, high-volume systems where cost is everything.

Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.

The third camp – developers building applications on top of those platforms – is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.

The transition will take time. The environment is different from the computer and mobile worlds: there are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.

But the shift will happen as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot.


Sources

Most important information sources:

New approaches to dominate in embedded development

A New Approach for Distributed Computing in Embedded Systems

New Approaches to Systems Engineering and Embedded Software Development

Lua (programming language)

Embracing Java for the Internet of Things

Node.js

Wikipedia Node.js

Writing Shell Scripts

Embedded Linux – Shell Scripting 101

Embedded Linux – Shell Scripting 102

Embedding Other Languages in BASH Scripts

PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog

PHP

Python (programming language)

JavaScript: The Perfect Language for the Internet of Things (IoT)

Node.js for Embedded Systems

Embedded Python

MicroPython – Embedded Python

Anyone using Python for embedded projects?

Telit Programming Python

MICROCONTROLLERS AND NODE.JS, NATURALLY

Why node.js?

Node.JS Appliances on Embedded Linux Devices

The smartest way to program smart things: Node.js

Embedded Software Can Kill But Are We Designing Safely?

DEVELOPING SECURE EMBEDDED SOFTWARE


1,686 Comments

  1. Tomi Engdahl says:

    The Arduino Guide to Low Power Design
    Learn the basics of low-power design using Arduino hardware and software.
    https://docs.arduino.cc/learn/electronics/low-power

  2. Tomi Engdahl says:

    Gerrit Niezen’s Compact 5V Boost Regulator Offers 800mA at 5V, Sub-1µA “True Shutdown”
    Designed around a readily-available part, this handy board drains neither your wallet nor your battery.
    https://www.hackster.io/news/gerrit-niezen-s-compact-5v-boost-regulator-offers-800ma-at-5v-sub-1-a-true-shutdown-b87a06af187d

  3. Tomi Engdahl says:

    Arduino’s Latest Open Source Report Highlights Major Growth, Community Contributions
    https://www.hackster.io/news/arduino-s-latest-open-source-report-highlights-major-growth-community-contributions-08945cdbf400

    New boards, new software, and a 25 percent boost in community-provided libraries among the highlights from Arduino’s 2021.

  4. Tomi Engdahl says:

    Take Control of Your RISC-V Codebase
    Jan. 25, 2022
    Delivering more complex software at an ever-increasing pace raises the risks of software errors, which can affect product quality as well as cause security issues. This becomes even more of a reality with the relatively new RISC-V codebase.
    https://www.electronicdesign.com/technologies/embedded-revolution/article/21215008/iar-systems-take-control-of-your-riscv-codebase

  5. Tomi Engdahl says:

    Why your mission-critical application needs a real-time database management system
    https://www.logic.nl/why-your-mission-critical-application-needs-a-real-time-database-management-system/?utm_medium=email

    You might be familiar with some conventional database management systems and the general meaning of it. In the most general sense, database management enables users to define, create, maintain and control access to the database.

    If we translate this into the sphere of critical systems, like avionics and aircraft navigation systems, driver assistance systems, critical medical equipment or financial systems, the number of (embedded) databases that meet the requirements crucial for mission-critical applications becomes significantly smaller.

    Today we have the Internet of Things (IoT) generating huge volumes of data, and in order to extract value from that data it must be shared. So, while embedded database systems in the past only had to manage data at rest, modern embedded database systems must also manage data in flight. That consequently brings new challenges for those who want to use embedded database system management solutions in their critical applications.

  6. Tomi Engdahl says:

    What is traceability and why is it important?
    https://www.logic.nl/what-is-traceability-and-why-is-it-important/?utm_medium=email

    Traceability is a sub-discipline of requirements management within software development and systems engineering and it mainly serves the purpose of accelerating and improving development activities. As a result, it also prevents software defects by visualizing relationships between components. Let’s describe a few use cases to better explain traceability and why and how you can use it to your benefit.

  7. Tomi Engdahl says:

    Detect and Remediate Log4j2 Vulnerabilities with this Free Developer Tool
    https://www.logic.nl/detect-and-remediate-log4j-vulnerabilities-with-this-free-developer-tool/?utm_medium=email

    This free developer tool, which is hosted on Github and is now available for use, quickly scans projects to find vulnerable Log4j versions and provides the exact path — both to direct or indirect dependencies — along with the fixed version for speedy remediation. As a standalone tool, developers can download the utility that matches their platform, run it within the terminal, and run the scan command on the root folder of the project.

    https://github.com/whitesource/log4j-detect-distribution

  8. Tomi Engdahl says:

    I need a BIOS for my embedded x86 design. Which choices do I have?
    https://www.logic.nl/i-need-a-bios-for-my-embedded-x86-design-which-choices-do-i-have/?utm_medium=email

    Which BIOS you can use for your embedded X86 device depends on a number of interdependent factors:

    Is this a one-off design, or are you planning multiple designs and/or derivatives?
    Are you expecting to switch to newer CPU SKUs as they become available and are pin compatible?
    Do you expect future changes to the design that will affect its functionality or feature set?
    Is device security important and are you implementing security in hardware?
    Is your device required to boot a single operating system, or must it support multiple OSes?
    Does the device need to support third party plug-in cards beyond your control? In other words, does it need to fully comply with the UEFI standards?
    And last but not least: how experienced are you and your team in x86 architecture designs and firmware development?

    Let's say that you are going to develop a one-time, black-box device, with a single CPU SKU, running a mainstream Linux version, and yes, you need some level of security like secure boot and perhaps secure flash update. Your team obviously has knowledge of Linux and programming in C. Most likely they know how to use open source software, find information and documentation, research feature implementations, and resolve issues with the help of open source communities.

    As the device, and with it, the BIOS specifications are well defined and limited, a cost effective solution can be an open source bootloader like Intel’s Slim Bootloader or Coreboot. Both are well documented, have the support of a large community and they are somewhat backed up by Intel and AMD.

    The latter sounds trivial but is very important because part of the required microcode encapsulated in the open source is only made available in object code format from the silicon manufacturers. In other words, a black box within an open source project. These binaries contain the tweaks needed to support certain features within the processor, and also to correct errors in the silicon. Nothing wrong there: updates are released regularly with release notes that tell you what is new, but more importantly, what you need to modify in your code. If your design is very limited in feature set, and you are not an early adopter of new silicon, then the open source you are going to download will be mature, and most of the issues, if any, will be resolved by the community. If you are an early adopter, then that's where you can expect trouble, and you need to weigh whether you want to take this risk.

    On the other side of the spectrum you have the open source EDK from the UEFI organization, an initiative originally founded by Intel to assure compatibility between x86 processors, peripherals and OSes, but today has evolved to promote interoperability between different silicon architectures too. The EDK therefore is maintained by companies that need high levels of compatibility, extended feature sets and advanced security. So, the open source EDK seems a good alternative if your design is more than a black box system, requires multi OS support, remote management (BMC features), and up to date security. But after downloading and installing you will soon notice that this comes at a cost; it’s overwhelming and will require a firmware team to invest a significant amount of time in learning how to customize the UEFI code for their device.

    That’s the reason why independent BIOS vendors (IBV) such as Insyde Software Inc. are members of the UEFI organization. They use the EDKs as a basis to develop and deliver feature-rich and tested firmware bundles that fully support the Reference Validation Platforms (RVP) from Intel and AMD, accompanied by matching documentation, training, support, and, if you so wish, customization services to adapt their UEFI solution for your specific design.

    Of course, using a commercial BIOS implies investing in acquiring a license for the source code, training and paying for support, but on the other hand, it will definitely give you a head start compared to using an open source project, struggling through loads of documentation and files by yourself…

  9. Tomi Engdahl says:

    3 tips for product start-up engineers
    https://www.logic.nl/3-tips-for-product-start-up-engineers/?utm_medium=email

    When an engineer starts designing a new product, there are lots of things to take into consideration before even starting the actual development. It could be months into the development process when suddenly you realize that the hardware that was chosen in the beginning turns out to be suboptimal for the requirements of the product. Okay, now what? Start all over? Not necessarily.

    The choices of software tools and hardware, as well as the development methods that shape a project are obviously a major factor in the project’s success. It may prove difficult to determine all these factors beforehand, but it will be even more difficult to change the process once started (if possible at all, without starting all over again).

    Choose products and tools that embrace change

    During the development process, it may turn out that the requirements of the end product have changed. Features can be added or removed, and while you are optimizing the product it might become clear that you can install less powerful hardware, so you can bring down the bill of materials. In order to prevent the necessity for a complete redesign when the project’s requirements increase or decrease, it’s wise to look into products and tools that can scale up or down accordingly as the requirements change.

    More innovation by using off-the-shelf products

    The ability to construct a proof-of-concept without breaking the bank leads to better innovation. After all, before the (mass-)production of your new product, tests will need to be done in order to make sure all the requirements are met, while being as cost-efficient as possible. This helps to improve development and ultimately drives innovation. No need to re-invent the wheel, as most components that you’ll need are often already available on the market. It might only be a difference of 10% that will distinguish your product from the competition’s product, so resources are better spent on these 10%.

    Using platform-independent tools

    As mentioned, the chances of requirements changing during the development process are significant. What if you need to switch to a more powerful processor when you are already using the most powerful processor from a certain vendor? This could become a real problem, because some tools are developed specifically for a certain architecture, or for a product line of that specific vendor. This is often the case with compilers, IDE’s, GUI tools or Build Optimizers. It takes a lot of time to really master these tools well, and when it turns out these tools are only compatible with, let’s say STM32, it can become a costly endeavour to retrain yourself or your engineers to a new tool.

  10. Tomi Engdahl says:

    Why asset owners should take cyber security into account in their safety assessment?
    https://www.etteplan.com/stories/why-asset-owners-should-take-cyber-security-account-their-safety-assessment?utm_campaign=newsletter-1-2022&utm_content=newsletter&utm_medium=email&utm_source=apsis-anp-3&pe_data=D43445A477046455B45724541514B71%7C30320219

    Regulations are not the end-all

    Cyber security threats to business continuity are growing. Cyber security regulations such as NIS2 and the new Machine Regulation give guidance to organizations on how to control and manage these threats. Nevertheless, taking cybersecurity into account requires more from organizations.

    “Taking cyber security into account when assessing technical safety requires a change in culture and in the way we are doing things currently,” says Jari Laurila, manager of safety operations at Etteplan. “It’s possible and even essential to make the correct changes in advance and already at the equipment buying stage.”

    Technical risk assessments for machines and equipment are a routine activity for asset owners. Unfortunately, cyber security is often forgotten at this stage. A mixed machine base from different eras doesn’t help.

    “Machines are commonly updated through USB sticks and PLC’s have free entry in production facilities. The traditional ways of doing things leave significant cyber security risks on the table,” continues Jari.
    Safety concept including cyber security

    Etteplan’s safety concept is used for creating detailed information about workplace safety conditions. The service is a focused way to produce a detailed, machine- or production-line-based risk assessment, updating the safety of existing assets, machinery, and production lines to comply with currently standing local legislation.

    “Cyber security is important in all phases but is usually forgotten in the phases allocated to the asset owner. As a part of our Safety Concept we implement a detailed cyber security risk assessment and provide proposals for managing these risks,”

  11. Tomi Engdahl says:

    https://etn.fi/index.php/tekniset-artikkelit/13131-pieni-nor-flash-on-haaste-sulautetuissa

    Embedded systems continue to proliferate at a tremendous pace. Most applications need only a fairly small NOR-type flash memory chip. However, the big manufacturers specializing in memories no longer supply small memories; they concentrate on large, dense volume parts. Microchip, by contrast, has committed to supplying customers with small NOR flash parts for as long as they are needed – even for more than 20 years.

  12. Tomi Engdahl says:

    GEN Z KIDS APPARENTLY DON’T UNDERSTAND HOW FILE SYSTEMS WORK
    https://futurism.com/the-byte/gen-z-kids-file-systems

    Over the past few years, many professors have noticed an alarming trend among their students. Overall, members of Gen Z, even those studying technical scientific fields, seem to have a total misunderstanding of computer storage, The Verge reports, and many fail to conceptualize the concept of directories and folders filled with digital files.

    “The most intuitive thing would be the laundry basket where you have everything kind of together, and you’re just kind of pulling out what you need at any given time,” Princeton University senior Joshua Drossman told The Verge.

  13. Tomi Engdahl says:

    An Arduino library for logging to Syslog server in IETF format (RFC 5424) and BSD format (RFC 3164)
    https://github.com/arcao/Syslog

  14. Tomi Engdahl says:

    ESP32 + Containers + High-Level Language + Connectivity = Toit
    https://www.youtube.com/watch?v=H8gV7u3MvkI

    Writing code faster? Managing IOT devices remotely? All things we would like, of course. When a few viewers wrote that they discovered a new kid in the block that promises a new view on these topics, I had to try it. What did I discover? Let’s have a closer look.

    Viewer comments:

    Thank you for this video. There are some interesting trade-offs to consider when choosing between the Arduino IDE and Toit (other than learning something new). Arduino has made a huge effort to have one IDE that can be used for multiple architectures, so it is easier to jump between different types of hardware – there is some future-proofing there. Toit, on the other hand, is just ESP32, and we don’t know if or how fast they will react to new chips in the future. But the architecture in Toit is undeniably superior, with containers and Wi-Fi updating. I am hoping that in the future this type of architecture will be decoupled from the specific language (i.e. choose which container type/language to use without changing the architecture). That would be a game changer if it moved in that direction.

  15. DevOps Service Providers says:

    Understanding Development Work Practices Allows Security Teams to Speak to Developers Using Terms They Understand

    Buckminster Fuller famously said that giving people a tool will shape the way they think. Similarly, when it comes to development teams, understanding how development tools work can provide a valuable window into the developers’ thought process. Security teams can use these insights to better advance their agendas and get vulnerabilities detected and fixed faster.

    Security teams understand the risk associated with fielding vulnerable applications, but they need the support of the DevOps team to build secure applications and address identified security issues.

    How Do Developers Track Their Workload?

    Developers typically track their work load in defect tracking or change management systems such as Atlassian JIRA, Bugzilla, HPE Application Lifecycle Management (ALM) and IBM ClearCase.

    A key difference between security and development teams is that security professionals care about vulnerabilities and developers care about bugs. The critical point for security teams to understand is that developers will likely not care about vulnerabilities until those vulnerabilities are being tracked in their bug tracking system.

  16. Tomi Engdahl says:

    Hello (Many Quantum) World(s)
    https://hackaday.com/2022/02/17/hello-many-quantum-worlds/

    Historically, the first program you write for a new computer language is “Hello World,” or, if you are in Texas, “Howdy World.” But with quantum computing on the horizon, you need something better. Like “Hello Many Worlds.” [IonQ] proposes what that looks like and then writes it in seven different quantum languages in a post you should check out.

    None of the languages look too complex, but sometimes the setup to run them on remote quantum computers is a bit more code. Many of these could also be run on a simulator if you want the practice.

    Hello Many Worlds in Seven Quantum Languages
    https://ionq.com/posts/june-24-2021-hello-many-worlds

  17. Tomi Engdahl says:

    Evaluating Different Development and Prototyping Boards for Wearable Applications
    https://www.digikey.com/en/articles/evaluating-different-development-and-prototyping-boards-for-wearable-applications?dclid=CPqLn8GpovYCFRxtGQod2-cLnQ

    The open source Arduino concept has proved to be tremendously successful among hobbyists and makers. It has also been embraced by professional designers for early development and prototyping, and more recently for full-on designs. With the emergence of applications such as wearables and health monitoring, both types of users require higher performance and more functionality in ever smaller board form factors.

    This article briefly discusses how Arduino boards have evolved to meet the needs of makers and professionals for high performance and functionality in low-power, space-constrained applications. It then introduces and shows how to get started with a recent addition to the Arduino family, the Seeeduino XIAO from Seeed Technology Co.
    How Arduino evolved to meet demands of wearable designs

    Many hobbyists and designers are interested in developing physically small products for deployment in space-constrained environments, including wearables. These are typically smart electronic systems that are often based on a microcontroller in conjunction with sensing and/or display devices. In some cases, they serve as high-tech jewelry. In other cases, they are worn close to and/or on the surface of the skin, where they may detect, analyze, and transmit body data such as temperature, heart rate, and pulse oxygenation, as well as environmental data. In some cases, they provide immediate biofeedback to the wearer.

    For such designs, many hobbyists and makers use Arduino microcomputer development boards. So, too, do an increasing number of professional engineers who may use these development boards as evaluation and prototyping platforms to accelerate and lower the cost of evaluating ICs, sensors, and peripherals.

    Such users typically start with the A000073 Arduino Uno Rev3, which is billed as, “The board everybody gets started with”

    One way to reduce the microprocessor development board’s physical footprint is to move to an ABX00028 Arduino Nano Every, which is based on the ATMEGA4809-MUR microcontroller from Atmel (Figure 2). It has 50% more program memory than the Arduino Uno (48 Kbytes) and 3x the amount of SRAM (6 Kbytes). Like the Arduino Uno, the Arduino Nano Every is based on a 5 volt processor that offers 14 digital I/O along with six analog input pins, which can also be used as digital I/O if required. Also, like the Uno, the Nano Every offers one each of a UART, SPI, and I2C interface. However, unlike the Uno which supports only two external interrupts, all of the Nano Every’s digital pins can be used as external interrupts.

    Another popular alternative that can be programmed using the Arduino’s integrated development environment (IDE) is the DEV-13736 Teensy 3.2 from SparkFun Electronics (Figure 3). When it comes to I/O, this 3.3 volt development board really ups the ante, with 34 digital pins, 12 of which support PWM, along with 21 high-resolution analog inputs.

    Even though the Teensy 3.2 is only 6.3 cm2, this is still too large for many applications. The solution for those seeking yet smaller and more powerful platforms lies within the vast Arduino ecosystem. A relatively new option is the Seeeduino XIAO from Seeed Technology (Figure 4), which measures only 23.5 x 17.5 mm (4.11 cm2), or the size of a standard postage stamp. The designers of the Seeeduino XIAO also focused on ultra-low cost.

    Generally speaking, working with the Seeeduino XIAO is as easy as working with any other Arduino or Arduino-compatible development board, but there are some tips and tricks that are worth noting.

    A good starting point is to make sure to work with the most current version of the Arduino IDE. Next, visit the Seeeduino XIAO Wiki for instructions on how to augment the Arduino IDE with the appropriate board manager.

  18. Tomi Engdahl says:

    Linux is moving to a newer version of C
    https://etn.fi/index.php/13-news/13235-linuxissa-siirrytaeaen-uudempaan-c-kieleen

    The Linux kernel has for ages been coded in the version of the C language standardized in 1989. Now Linus Torvalds has decided that the kernel will move to the newer C11 standard.

    The reason for the C upgrade is a bug Torvalds noticed, which arises from the way C89 handles variables in loops. Torvalds concluded that moving to a newer C version removes the problem.

    As such, C11 does not bring big changes to coding the Linux kernel; it standardizes multithreading support, for example. In addition, all GCC compilers already support the C11 standard, so no problems are expected in that respect either.

    Re: [RFC PATCH 03/13] usb: remove the usage of the list iterator after the loop
    https://lwn.net/ml/linux-kernel/CAHk-=wiyCH7xeHcmiFJ-YgXUy2Jaj7pnkdKpcovt8fYbVFW3TA@mail.gmail.com/

  19. Tomi Engdahl says:

    Getting bugs out of embedded Linux more easily
    https://etn.fi/index.php?option=com_content&view=article&id=13236&via=n&datum=2022-02-28_15:17:39&mottagare=30929

    Debugging embedded Linux is extremely complex and poses many challenges even for the most experienced embedded systems developers. Visual trace diagnostics tools that specifically support embedded Linux can make the work considerably easier. So says Mohammed Billoo, founder of MAB Labs.

    Recently I was tasked with developing a custom Linux driver to consume data sent by an external device. Although the Linux kernel has native mechanisms to ensure that a driver behaves correctly, debugging and evaluating performance is far from straightforward. So I decided to test whether – and if so, how – new tracing tools such as Tracealyzer, which supports embedded Linux, would help, from analyzing the driver and its interrupt handler to examining user-space applications and compiler options.

    I used the tracing tool with a Yocto-based Linux distribution, starting by building a custom layer into the board’s BSP so that the open source LTTng library could be used. This provided numerous valuable perspectives on the execution of the driver as part of the Linux system, kernel included. I also got a more holistic view of the driver, to verify that there were no performance bottlenecks, and to identify where any bottlenecks would be.

    Using the open source LTTng library opens up a wide range of capabilities for examining different aspects of an embedded Linux design, from drivers and interrupt handlers to user-space applications and compiler options. Using such a combination during the development process not only increases visibility but also catches problems at an earlier stage of the process. From the perspective of a highly experienced developer, this helps avoid hidden bugs and saves time and cost later in the project.

    https://percepio.com/tracealyzer-linux-blogs/

  20. Tomi Engdahl says:

    Planet Debug Enables Remote Work on Real Hardware
    Feb. 28, 2022
    With the help of cameras and remote Wi-Fi-based programming, the CODEGRIP debugging tool lets students access the development boards and attached interface devices to develop projects anywhere on Earth, and at any time.
    https://www.electronicdesign.com/tools/learning-resources/design-solutions/video/21234127/electronic-design-planet-debug-enables-remote-work-on-real-hardware?utm_source=EG%20ED%20Analog%20%26%20Power%20Source&utm_medium=email&utm_campaign=CPS220224016&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

  21. Tomi Engdahl says:

    Open Source LXI Tools Free Us From Vendor Bloat
    https://hackaday.com/2022/02/18/open-source-lxi-tools-free-us-from-vendor-bloat/

    LXI, or LAN eXtensions for Instrumentation is a modern control standard for connecting electronics instrumentation which supports ethernet. It replaces the older GPIB standard, giving much better performance and lower cost of implementation. This is a good thing. [Martin Lund] has created the open source lxi-tools project which enables us to detach ourselves from the often bloated vendor tools usually required for talking LXI to your bench equipment. This is a partial rewrite of an earlier version of the tool, and now sports some rather nice features such as mDNS for instrument discovery, support for screen grabbing, and a LUA-based scripting backend. (API Link)

    Reply
  22. Tomi Engdahl says:

    Against The Cloud
    https://hackaday.com/2022/02/19/against-the-cloud/

    One of our writers is working on an article about hosting your own (project) website on your own iron, instead of doing it the modern, cloudy-servicey way. Already, this has caused quite a bit of hubbub in the Hackaday Headquarters. Who would run their own server in 2022, and why?

    The arguments against DIY are all strong. If you just want to spin up a static website, you can do it for free in a bazillion different places. GitHub’s Pages is super convenient, and your content is version controlled as a side benefit. If you want an IoT-type data-logging and presentation service, there are tons of those as well — I don’t have a favorite. If you want e-mail, well, I don’t have to tell you that a large American search monopoly offers free accounts, for the low price of slurping up all of your behavioral data. Whatever your need, chances are very good that there’s a service for you out there somewhere in the cloud.

    Reply
  23. Tomi Engdahl says:

    New Part Day: Smallest ARM MCU Uproots Competition, Needs Research
    https://hackaday.com/2022/03/01/new-part-day-smallest-arm-mcu-uproots-competition-needs-research/

    We’ve been contacted by [Cedric], telling us about the smallest MCU he’s ever seen – the Huada HC32L110. For those of us into miniature products, this Cortex-M0+ package packs a punch (PDF datasheet), with low power consumption, high capability, and rich peripherals packed into a 1.6 mm x 1.4 mm piece of solderable silicon.

    This is matchstick head scale computing, with way more power than we previously could access at such a scale, waiting to be wrangled. Compared to an ATTiny20 also available in WLCSP package, this is a notable increase in specs, with a way more powerful CPU, 16 times as much RAM and 8-16 times the flash! Not to mention that it’s $1 a piece in QTY1, which is about what an ATTiny20 goes for. Being a 0.35mm pitch 16-pin BGA, your typical board house might not be quite happy with you, but once you get a board fabbed and delivered from a fab worth their salt, a bit of stenciling and reflow will get you to a devboard in no time.

    https://www.hdsc.com.cn/Category82-1393

    Reply
  24. Tomi Engdahl says:

    11 Myths About In-Memory Database Systems
    Feb. 25, 2022
    In-memory database systems have become popular and commonplace in the 2000s. Nevertheless, misinformation and misunderstandings still abound about the technology. This article will attempt to set the record straight.
    Steven Graves
    https://www.electronicdesign.com/industrial-automation/article/21216834/mcobject-11-myths-about-inmemory-database-systems?utm_source=EG%20ED%20Connected%20Solutions&utm_medium=email&utm_campaign=CPS220224022&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    What you’ll learn:

    Where in-memory databases are used (spoiler: everywhere!).
    Suitability of in-memory databases for microcontrollers.
    Risks and mitigation of data loss.
    How an in-memory database differs from a database cache.

    Steven Graves, president of McObject, debunks some of the myths and misinformation surrounding in-memory database technology.

    1. In-memory databases will not fit on a microcontroller.

    In-memory databases can and do fit on microcontrollers in a wide range of application domains, such as industrial automation and transportation. In-memory database systems can be quite compact (under 500K code size) and, as mentioned in the answer to Myth #3 below, use the available memory quite frugally. A microcontroller in an embedded system often doesn’t manage a lot of data (and so doesn’t require a lot of memory), but the data is still disparate and related, so it therefore benefits from database technology.

    2. In-memory databases don’t scale.

    There are two dimensions to scalability: horizontal and vertical. Vertical scalability means getting a bigger server to handle a growing database. While RAM is certainly more expensive than other storage media, it’s not uncommon to see servers with terabytes of memory. Ipso facto, in-memory databases can scale vertically into the terabytes.

    Horizontal scalability means the ability to shard/partition a database and spread it over multiple servers while still treating the federation as a single logical database. This applies equally to in-memory databases.

    3. In-memory databases can lose your data (aren’t persistent).

    In the absence of any mitigating factors, an in-memory database does “go away” when the process is terminated, or the system is rebooted. However, this can be solved in several ways:

    Snapshot the database before shutdown and reload it after the next startup (a minimal sketch of this approach follows this list).
    Combine #1 with transaction logging to persistent media to protect against an abnormal termination.
    Use NVDIMM (RAM coupled with flash, a supercapacitor, and firmware to dump the RAM to flash on power loss and copy back to RAM on restart).
    Use persistent memory a la Intel’s Optane.
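
    A loose sketch of the first two mitigations above (illustrative only, not code from any particular database product): a snapshot is a bulk dump of the in-memory image, and a transaction log is an append-only record that can be replayed after an abnormal termination.

    /* Illustrative snapshot + transaction-log persistence for an
     * in-memory store; not code from any particular product. */
    #include <stdio.h>
    #include <string.h>

    #define DB_SIZE 4096
    static unsigned char db[DB_SIZE];       /* the whole in-memory image */

    /* Snapshot: dump the entire image to persistent media at shutdown. */
    int db_snapshot(const char *path)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return -1;
        int ok = fwrite(db, 1, DB_SIZE, f) == DB_SIZE;
        fclose(f);
        return ok ? 0 : -1;
    }

    /* Logged write: append the change before applying it, so a crash
     * between snapshots can be recovered by replaying the log. */
    void db_write(FILE *log, size_t off, const void *data, size_t len)
    {
        fwrite(&off, sizeof(off), 1, log);
        fwrite(&len, sizeof(len), 1, log);
        fwrite(data, 1, len, log);
        fflush(log);                     /* push the record toward stable media */
        memcpy(&db[off], data, len);     /* only then touch the in-memory image */
    }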

    4. In-memory databases are the same as putting the database in a RAM drive.

    False. A true in-memory database is optimized for residing in memory; a persistent database is optimized for persistent media. These optimizations are diametrically opposed. An in-memory database is optimized to minimize CPU cycles (after all, you use an in-memory database because you want maximum speed) and to minimize the amount of space required to store any given amount of data. Conversely, a persistent database will use extra CPU cycles (e.g., by maintaining a cache) and space (by storing indexed data in the index structures) to minimize I/O, because I/O is the slowest operation.

    5. In-memory databases are the same as caching 100% of a conventional database.

    Also false. Caching 100% of a conventional database will improve read (query) performance but will do nothing to improve insert/update/delete performance. And maintaining a cache means carrying the extra logic a persistent database needs: a least-recently-used (LRU) index to know whether a requested page is in cache or not (a DBMS doesn’t “know” that you’re caching the entire database; it must carry out this logic regardless), marking pages as “dirty,” and so on.

    6. In-memory databases are only suitable for very few, and specific, use cases.

    In-memory databases are utilized for almost any workload you can imagine. Common applications include consumer electronics, industrial automation, network/telecom infrastructure equipment, financial technology, autonomous vehicles (ground/sea/air), avionics, and more.

    7. In-memory databases are more susceptible to corruption.

    In-memory databases are no more susceptible to corruption than any other type of database, and perhaps less so. Databases on persistent media can be “scribbled on” by rogue or malevolent processes, corrupting them. Operating systems offer more protection to memory than file systems provide to files.

    8. An in-memory database will crash if it becomes full.

    A well-written in-memory database will handle an out-of-memory condition as gracefully as a well-written persistent database handles a disk becoming full. Ideally, an in-memory database doesn’t allocate memory dynamically; rather, memory is allocated up-front and memory required for the in-memory database is doled out from that initial memory pool as needed.

    Such an approach eliminates the possibility of allocating more memory than that permitted by user/system limits. It also eliminates the possibility of a memory leak affecting other parts of a system that an in-memory database operates in.

    9. An in-memory database can consume all system memory.

    A well-written database shouldn’t allocate memory dynamically for storage space (general-purpose heap could be a different story). Storage space (memory) for an in-memory database should be allocated one time on startup. If it fills up, the in-memory database system should report that fact and allow the application to determine next steps (prune some data, allocate additional memory, graceful shutdown, etc.).
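
    A minimal sketch of the up-front allocation pattern described in Myths #8 and #9 (illustrative only): the pool is claimed once at startup, and allocation fails cleanly instead of growing the process heap.

    /* Illustrative fixed-pool allocator: all storage memory is claimed
     * once at startup; allocation reports failure instead of growing
     * the heap, so the application decides what happens when full. */
    #include <stddef.h>
    #include <stdlib.h>

    static unsigned char *pool;          /* single up-front allocation */
    static size_t pool_size, pool_used;

    int pool_init(size_t size)
    {
        pool = malloc(size);             /* the only dynamic allocation, ever */
        if (!pool) return -1;
        pool_size = size;
        pool_used = 0;
        return 0;
    }

    void *pool_alloc(size_t n)
    {
        n = (n + 7) & ~(size_t)7;        /* keep allocations 8-byte aligned */
        if (pool_used + n > pool_size)
            return NULL;                 /* "database full": report, don't crash */
        void *p = pool + pool_used;
        pool_used += n;
        return p;
    }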

    10. In-memory databases are very fast, therefore suitable for real-time systems.

    Real-time systems break down into two classes: soft real-time and hard real-time. In-memory databases can be suitable for soft real-time systems. Hard real-time systems require more than just speed—they require determinism. That, in turn, requires a time-cognizant database system, i.e., one that’s aware of, and can manage, real-time constraints (deadlines). It also has become fashionable to advertise in-memory databases as “real-time,” which has become synonymous with “real fast” but has no relationship to soft or hard real-time systems.

    11. It doesn’t make sense to have an in-memory client/server database.

    Refer to Myth #1. An in-memory database that’s sharded/partitioned practically requires a client/server architecture. Therefore, client application(s) can be isolated from the physical topology of the distributed system, including any changes to it (e.g., to scale it horizontally).

    Reply
  25. Tomi Engdahl says:

    ESP32 Virtual Machine Lets You Change Programs On The Fly
    https://hackaday.com/2022/02/27/esp32-virtual-machine-lets-you-change-programs-on-the-fly/

    Often, reprogramming a microcontroller involves placing it in reset, flashing the code, and letting it fire back up. It usually involves shutting the chip down entirely. However, [bor0] has built a virtual machine that runs on the ESP32, allowing for dynamic program updates to happen.

    https://github.com/bor0/evm-esp32
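
    Stripped to the bone, the pattern is that the flashed firmware is only an interpreter and the “program” is data it can replace at runtime. A toy dispatch loop (purely illustrative, not the evm-esp32 instruction set) might look like this:

    /* Toy bytecode interpreter illustrating the pattern; this is NOT
     * the evm-esp32 instruction set, just a sketch of the idea. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void vm_run(const uint8_t *code)
    {
        int32_t stack[32];
        int sp = 0, pc = 0;

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = (int8_t)code[pc++];  break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[--sp]);       break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* The "program" lives in plain memory: swapping it at runtime
         * (say, after an OTA download) needs no reflash or reset. */
        uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        vm_run(prog);   /* prints 5 */
        return 0;
    }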

    Reply
  26. Tomi Engdahl says:

    Hot programming language halves power consumption – programs coded in it also run 10 times faster
    Antti Kailio | Feb. 25, 2022
    Programs coded in Rust run fast and require comparatively little server capacity.
    https://www.tivi.fi/uutiset/kuuma-ohjelmointikieli-puolittaa-sahkonkulutuksen-nain-koodatut-ohjelmat-toimivat-myos-10-kertaa-nopeammin/733317be-3c00-45c7-bdf9-14b5945eddfe

    Enough with the Zero Sum Game of Rust vs. Go
    https://thenewstack.io/enough-with-the-zero-sum-game-of-rust-vs-go/

    Reply
  27. Tomi Engdahl says:

    The Co-Processor Architecture: An Embedded System Architecture for Rapid Prototyping
    https://www.digikey.com/en/articles/the-co-processor-architecture-an-embedded-system-architecture-for-rapid-prototyping?dclid=CK3gx8mQtvYCFaNgwgod40sC5A

    The embedded systems designer finds themselves at a juncture of design constraints, performance expectations, and schedule and budgetary concerns. Indeed, even the contradictions in modern project-management buzzwords and phrases further underscore the precarious nature of this role: “fail fast”; “be agile”; “future-proof it”; and “be disruptive!”. The acrobatics involved in even trying to satisfy these expectations can be harrowing, and yet they continue to be voiced and reinforced throughout the market. What is needed is a design approach that allows an evolutionary, iterative process, and, as with most embedded systems, it begins with the hardware architecture.

    The co-processor architecture, a hardware architecture known for combining the strengths of both microcontroller unit (MCU) and field programmable gate array (FPGA) technologies, can offer the embedded designer a process capable of meeting even the most demanding requirements, and yet it allows for the flexibility necessary to address both known and unknown challenges. By providing hardware capable of iteratively adapting, the designer can demonstrate progress, hit critical milestones, and take full advantage of the rapid prototyping process.

    Within this process are key project milestones, each with their own unique value to add to the development effort. Throughout this article, these will be referred to by the following terms: The Digital Signal Processing with the Microcontroller milestone, the System Management with the Microcontroller milestone, and the Product Deployment milestone.

    Reply
  28. Tomi Engdahl says:

    Making ISO 26262 Traceability Practical
    March 4, 2022
    Increase system quality and accelerate functional-safety assessments by identifying and fixing the traceability gaps between disparate systems.
    https://www.electronicdesign.com/technologies/embedded-revolution/article/21235207/arteris-ip-making-iso-26262-traceability-practical

    What you’ll learn:

    How to bridge the gaps between requirements, architecture, design, verification, and validation.
    How to use requirements management to support ISO 26262 standards.

    The ISO 26262 standard states that functional-safety assessors should consider whether requirements management, including bidirectional traceability, is adequately implemented. The standard doesn’t specify how an assessor should go about accomplishing this task. However, it’s reasonable to assume that a limited subset of connections between requirements and implementation probably doesn’t rise to that expectation.

    An apparently simple requirement from a customer, such as “The software interface of the device should be compliant in all respects with specification ABCD-123, except where explicitly noted elsewhere in these requirements,” expands into a very complex set of requirements when fully elaborated.

    How can an architect and design team effectively manage this level of traceability amid a mountain of specifications and requirements lists? How can they ensure that what they have built and tested at each step of the development cycle ties back to the original requirements (Fig. 1)? This is especially challenging since handoffs between requirements, architecture, design, verification, and validation depend on human interpretation to bridge the gaps between these steps.

    The Default Approach to Traceability

    The most obvious way to implement traceability is through a matrix, whether implemented in a dedicated tool like Jama Connect or an Excel spreadsheet. One requirement per line, maybe hierarchically organized, with ownership, source reference, implementation reference, status and so on. This matrix is a bidirectional reference.

    Matrices can work well when they’re relatively small. Perhaps the architect will split these up into sub-teams and assign the responsibility for checking correspondence between sub-matrices and the system matrix to an integrator.

    Matrices become unmanageable, though, when the number of requirements moves into the thousands. A matrix provides a disciplined way to organize data, but it doesn’t provide automation. Ensuring correspondence between requirements and implementation is still a manual task.

    First Steps to Automation

    The core problem is connecting between domains that speak very different languages: requirements management, documentation, chip assembly, verification, and hardware/software interface (HSI) descriptions. One approach common in software and mechanical systems is through application-lifecycle-management (ALM) or product-lifecycle-management (PLM) solutions, where all development tools are offered under a common umbrella by a single provider. With that level of control, an ALM or PLM could manage traceability in data between these domains.

    However, it’s difficult to see how that kind of integration could work with electronic-design-automation (EDA) tool flows, where the overriding priority is to stay current with leading technologies and complexities. System-on-chip (SoC) development teams demand best-in-class flows and are unlikely to settle for solutions offering traceability at the expense of lower capability.

    Reply
  29. Tomi Engdahl says:

    An Interview With Reinhard Keil
    https://hackaday.com/2022/03/06/an-interview-with-reinhard-keil/

    Over on the Embedded FM podcast, [Chris] and [Elecia] just released their interview with [Reinhard Keil] of compiler fame. [Reinhard] recounts the story of Keil’s growth and how it eventually became absorbed into Arm back in 2005. Along with his brother Günter, the two founded the company as Keil Software in the Americas, and Keil Elektronik in Europe. They initially made hardware products, but as the company grew, they became dissatisfied with the quality and even existence of professional firmware development tools of the day. Their focus gradually shifted to making a CP/M- and a PC-based development environment, and in 1988, they introduced the first C-compiler designed for the 8051 from the ground up.

    https://embedded.fm/episodes/404

    Reply
  30. Tomi Engdahl says:

    How to prevent flash wear-out?
    https://etn.fi/index.php/new-products/13295-kuinka-estaeae-flashin-kuluminen

    Design teams must be careful when specifying which flash devices and densities they use in their applications, because over time, and depending on usage, they wear out. Today's flash memories have moved away from polysilicon floating-gate technology to silicon-nitride cells that trap charge. This means the old design models and rules no longer apply.

    Despite flash being used in almost every embedded system, mistakes are still made with it, and they often attract a lot of publicity. These problems stem from an excessive number of write/erase cycles, leading even to product recalls over safety concerns. Early in the product development process, the team must estimate how frequent these write/erase (W/E) operations will be and how much data will be written.

    Understanding wear leveling

    If part of the same flash is used to store static data, such as a bootloader or application code, those memory pages hardly wear at all. More advanced wear-leveling approaches move static data to new locations, so that the pages it previously occupied can extend the overall lifetime of the flash. Applications running Linux benefit from file systems that integrate wear leveling for unmanaged NAND storage, such as JFFS2 and YAFFS.
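
    A crude sketch of the basic dynamic wear-leveling idea (hypothetical, not any vendor's algorithm): track per-block erase counts and steer each new write to the least-worn free block.

    /* Crude dynamic wear-leveling sketch: pick the least-erased free
     * block for each new write. Hypothetical, not a vendor algorithm. */
    #include <stdint.h>

    #define NUM_BLOCKS 256

    static uint32_t erase_count[NUM_BLOCKS];  /* persisted in a real FTL */
    static uint8_t  block_free[NUM_BLOCKS];   /* 1 = erased and writable */

    /* Return the free block with the lowest erase count, or -1 if none. */
    int pick_block_to_write(void)
    {
        int best = -1;
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (block_free[i] &&
                (best < 0 || erase_count[i] < erase_count[best]))
                best = i;
        }
        return best;
    }

    /* Called whenever a block is erased for reuse. */
    void note_erase(int block)
    {
        erase_count[block]++;
        block_free[block] = 1;
    }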

    Other considerations for flash storage include the environment and speed. Flash endurance and data retention are typically specified at 40 °C, but these figures fall off quickly at elevated temperatures.

    In conclusion

    Market forces have pushed NAND flash manufacturers to deliver ever higher densities using the latest, cutting-edge process technology. The transition to 3D NAND flash is already well under way, so traditional unmanaged SLC NAND devices and in-house development of wear-leveling software are gradually becoming things of the past. Because of the technical differences between planar and 3D flash, those who still need to develop wear-leveling software must work closely with the manufacturers to develop the necessary algorithms. Storing multiple bits per cell is already standard for today's NAND flash; for those who want better endurance and retention, however, pSLC is often an acceptable alternative. Managed NAND memories such as e-MMC and UFS simplify implementations considerably, but using such devices still requires careful analysis of the estimated lifetime workloads to ensure that the non-volatile storage does not wear out prematurely.

    Reply
  31. Tomi Engdahl says:

    More security for industrial memory cards
    https://etn.fi/index.php/13-news/13289-lisaeae-turvaa-teollisuuden-muistikorteille

    SD and microSD cards are widely used in industry, but consumer-grade card features are not sufficient there, for example in terms of security or endurance. Germany's Hyperstone has introduced the new S9 series of card controllers, which improve many characteristics of industrial cards.

    The new S9 controller family offers highly reliable NAND controllers that meet the requirements of demanding applications on a turnkey basis. In addition, there is an S9S version of the controller with an application programming interface (API), which adds several security features.

    The controller's FlashXE ECC and hyReliability features guarantee extended endurance, data integrity, and protection against power failure in industrial automation, telecommunications, and networking and medical equipment. With the HyMap translation layer, the S9 achieves minimal write amplification and maximum endurance.

    The S9S secure version provides hardware support for AES-128/256 encryption, public-key elliptic-curve cryptography, TRNG/DRBG and SHA-256, GPIO pins, and ISO7816, I2C, and SPI interfaces.

    Reply
  32. Tomi Engdahl says:

    Make It Compatible
    https://hackaday.com/2022/03/12/make-it-compatible/

    I’m probably as guilty as anyone of reinventing the wheel for a subpart of a project. Heck, sometimes I just feel like working on a wheel design. But if that’s the path you choose, you have to think about whether or not it’s important that others can replicate your project. The nice thing about a bog-standard wheel is that everyone has got one.

    The case study I have in mind is a wall-plotter project that appeared on Hackaday this week. It’s a really sweet design, and in many ways would be an ideal starter project. I actually need a wall plotter (for reasons) and like a number of the choices made. For instance, having nearly everything, including the lightweight geared steppers, on the gondola makes it easy to install and uninstall — you just pin up the timing belt from which it hangs and you’re done. Extra weight on the gondola helps with stability anyway. It’s open source and based on the Arduino libraries, so it should be easy enough to port to whatever microcontroller I have on hand.

    But the image-generation toolchain is awkward, involving cutting and pasting into a spreadsheet, which generates a text file in a custom plotting micro-language. Presumably the designer doesn’t know about Gcode, which is essentially the lingua franca of moving machines, or just didn’t feel like implementing it. Whereas in Gcode movement commands look like “G1 X100 Y50”, this device expects “draw_line(0,0,100,50)”. They’re essentially equivalent, but incompatible.

    I totally understand that the author must have had a good time thinking up the movement commands and writing the spreadsheet that translates SVG files into them. I’ve been there and done that! But if the wall plotter spoke Gcode instead of its own dialect, it would slot instantly into any number of graphics processing workflows, which would make me, the potential user, happier.
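
    Accepting the common G-code form really is a small job. A minimal sketch (hypothetical, not the plotter project's code) of handling a linear-move line:

    /* Minimal sketch of accepting a G-code linear move ("G1 X100 Y50")
     * instead of a custom dialect. Hypothetical, not the plotter's code. */
    #include <stdio.h>

    /* Placeholder for the machine's actual line-drawing routine. */
    static void draw_line_to(float x, float y)
    {
        printf("moving to (%.1f, %.1f)\n", x, y);
    }

    static void handle_gcode_line(const char *line)
    {
        int g;
        float x, y;
        /* Accepts the common "G1 Xnnn Ynnn" form; a real parser would
         * also handle modal commands, feed rates, comments, etc. */
        if (sscanf(line, "G%d X%f Y%f", &g, &x, &y) == 3 && g == 1)
            draw_line_to(x, y);
    }

    int main(void)
    {
        handle_gcode_line("G1 X100 Y50");
        return 0;
    }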

    When you are looking at reinventing the wheel, think about your audience. If you’re the only person likely to see the project, go ahead and scratch whatever itch you’ve got. You’ll learn more that way. But if you want to share the project with as many people as possible, adhering to the most widely used standards is a good choice for your users, even if it is less fun than dreaming up your own movement language.

    Reply
  33. Tomi Engdahl says:

    How an OSPO Can Help Secure Your Software Supply Chain
    https://thenewstack.io/how-an-ospo-can-help-secure-your-software-supply-chain/
    It’s nearly impossible these days to build software without using open source code. But all that free software carries additional security risks.
    Organizations grapple with how best to secure their open source software supply chain. But there’s another problem: Many companies don’t even know how many open source applications they have — or what’s in them.
    An open source program office (OSPO) — a bureau of open source experts within your organization dedicated to overseeing how your company uses, creates and contributes to free software — can help coordinate all these efforts.
    An OSPO can help a company get a handle on the open source code it uses and establish visibility into open source projects and tools, said Liz Miller, vice president and principal analyst at Constellation Research.
    “Fundamentally, the purpose of an open source program office is to centralize the understanding of dependencies, implementation and utilization of open source code across an enterprise,” Miller said. “There is a significant security benefit to an OSPO.”
    Today’s software is made up of components from a variety of sources. “It’s never 100% one thing,” said VMware’s Ambiel.
    “There’s some code that you have written for the first time, so you obviously know what’s in there. But you may have used some containerized software. And you are going to be reusing some code. And everyone uses open source code.”
    Here’s the scary part: In Synopsys’ analysis, 84% of the codebases had at least one vulnerability. And 91% of the open source components used hadn’t seen any maintenance in the past two years.
    “The reality of open source is that for the security professional, hearing that a software supply chain is filled with unchecked, unknown and completely invisible open source code is the stuff nightmares are made of,” she said.
    That’s why software needs to come with a “bill of materials,” said Ambiel, a complete inventory of all the components that go into a software package, and their versions and license terms.
    And there’s a lot happening on that front. An OSPO can help companies stay on top of the latest recommendations, she said.
    The CNCF white paper also recommended that companies scan their software with software-composition analysis tools to detect vulnerable open source components, and use penetration testing to check for basic security errors or loopholes and resistance to standard attacks.
    Companies need to have a clear understanding of what open source code is used in their environment, stay up to date on patching, and even conduct their own vulnerability scans and assessments if necessary. An OSPO can help coordinate those efforts.
    Securing the Software Supply Chain with a Software Bill of Materials
    For example, the open source community has been working on supply chain security and compliance for years. The Linux Foundation’s Tern project, which inspects container images, is part of its Automated Compliance Tooling initiative.
    “What’s current today is technical debt tomorrow. It’s a big job. But when it comes to these big ecosystem challenges, that’s where the open source community really shines and can step up.”
    —Suzanne Ambiel, director of open source marketing and strategy, VMware Tanzu
    An OSPO can also tap outside expertise through the OpenSSF, which is working on system solutions and ways to combat increasing attacks like typosquatting and malicious code.
    https://thenewstack.io/securing-the-software-supply-chain-with-a-software-bill-of-materials/
    What happens when the maintainer of a popular open source framework or component dies, goes to prison or just gets fed up? Developers whose software depends on that repo might have time to prepare; there might be an official repo with a formal succession process, or a fast but informal community fork. Or the code might just disappear — which could affect commercial tools using it too. And even if there is a warning and time to plan, it’s only helpful if developers are aware of the dependencies in their software and are monitoring their status.
    Death, prison terms and abrupt departures might be rare; software vulnerabilities aren’t.
    Even large, experienced technology organizations can make mistakes in securing their repo (Canonical’s GitHub account was compromised in 2019) or miss the update that fixes a newly-discovered vulnerability in a component.
    Viewing the open source that developers and operations teams consume as a supply chain makes it easier to think about where problems occur.
    Software security tools like linters, fuzzers and static code analysis can improve code quality. While Coverity is a proprietary static analysis tool, it’s free to scan open source projects written in C, C++, C#, Java, JavaScript, Python and Ruby for defects and to get explanations of the root cause.
    Embold is also free for open source use. Google’s OSS-Fuzz service, run in conjunction with the Linux Foundation’s Core Infrastructure Initiative, uses multiple fuzzing engines, checks open source projects written in C/C++, Rust and Go for free, and has already found 17,000 bugs in 250 projects.
    Rather than leaving every maintainer to check one project at a time, GitHub is hoping its Security Lab (free for open source projects) and CodeQL will help remove vulnerabilities at scale across thousands of projects.
    Also free for open source projects is Snyk, which will scan your source code repo and tell you if you have dependencies with known vulnerabilities. Now that GitHub owns npm, it’s going to be easier to check those dependencies.
    But useful as automated dependency tools are for understanding what code is in a project so developers can update and patch (and for automating that patching as part of source code and build management), the longer-term approach needs to be more systematic, because dependency chains are so deep in the open source world. Importing one package doesn’t add just one dependency; it also brings the upstream dependencies that package imports. Because many Node packages are snippets, installing one Node package means trusting, on average, 80 packages, and that number is going up over time.
    “One interesting trend we’re seeing with this in these ecosystems is that once something gets popular, it gets even more popular,”
    To make it easier to detect when build servers are compromised, Microsoft is pushing the adoption of reproducible builds; builds of source code should be not just versioned but deterministic, with a record of the tools used and the steps needed to either reproduce or verify the build.
    Most of Windows is now built with reproducible builds, and Linux is moving toward them. It can have some odd side effects, though; the timestamps in signed Windows binaries are no longer actual times, because otherwise they’d be different every time the build was run, so moving to reproducible builds can mean a lot of changes.
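
    In C projects, one classic source of build nondeterminism is the standard __DATE__ and __TIME__ macros, which change on every compile. A small illustration (hypothetical) of the pitfall and one way around it:

    /* Reproducibility pitfall: __DATE__/__TIME__ differ on every build,
     * so two builds of identical source produce different binaries. */
    #include <stdio.h>

    const char *build_id_bad(void)
    {
        return "built " __DATE__ " " __TIME__;   /* changes every run */
    }

    const char *build_id_good(void)
    {
        /* Inject a fixed, recorded value instead, e.g.
         *   cc -DBUILD_STAMP=\"2022-03-14\" ...
         * so identical inputs always yield an identical binary. */
    #ifdef BUILD_STAMP
        return "built " BUILD_STAMP;
    #else
        return "built (unstamped)";
    #endif
    }

    int main(void)
    {
        printf("%s\n%s\n", build_id_bad(), build_id_good());
        return 0;
    }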

    Reply
  34. Tomi Engdahl says:

    Andy Green’s Libwebsockets Can Parse and Render HTML5, CSS on an ESP32 or Other Microcontroller
    https://www.hackster.io/news/andy-green-s-libwebsockets-can-parse-and-render-html5-css-on-an-esp32-or-other-microcontroller-0df5efc65a88

    Designed for user interface creation, the new HTML and CSS parsing feature of libwebsockets is impressively flexible.

    Reply
  35. Tomi Engdahl says:

    Motivating developers to write secure code
    https://www.ncsc.gov.uk/blog-post/motivating-developers-to-write-secure-code
    As we have spoken about previously, we continue to see the exploitation of common software vulnerabilities leading to high-impact outcomes. This is despite the availability of security tools and processes (think Security Development Lifecycles) that are designed to help improve the security of software development, which leads us to question why these tools are not having the desired impact. Back in 2017 we trailed an NCSC-sponsored research project looking into how software developers could be motivated and enabled to adopt and integrate secure coding practices. Run by our Research Institute, RISCS, the project’s full title is ‘Motivating Jenny to write secure software: community and culture of coding’. It has now borne fruit in the form of a toolkit, designed to help organisations of all sizes change the conversation about security within and around development teams. This blog post outlines the major findings of the research, which, through engagement with developers, has produced a toolkit to help developers consider security during their daily jobs. Toolkit:
    https://motivatingjenny.org/index.html

    Reply
  36. Tomi Engdahl says:

    Some developers are fouling up open-source software
    https://www.zdnet.com/article/some-developers-are-fouling-up-open-source-software

    From ethical concerns, a desire for more money, and simple obnoxiousness, a handful of developers are ruining open-source for everyone. One of the most amazing things about open-source isn’t that it produces great software. It’s that so many developers put their egos aside to create great programs with the help of others. Now, however, a handful of programmers are putting their own concerns ahead of the good of the many and potentially wrecking open-source software for everyone.

    Reply
  37. Tomi Engdahl says:

    Open Source Software Faces Threats of Protestware and Sabotage | WIRED
    https://www.wired.com/story/open-source-sabotage-protestware/

    Reply
