New approaches for embedded development

The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.

Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues such as broken connections, latency and security.

Those issues are especially central in the development of popular Internet of Things devices and in adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already building more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.

Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as finite state machines (FSMs). However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example computer networks, modern industrial control systems, the electronics in a modern car, and Internet of Things systems all fall into this category. This implies that a different approach is necessary.
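
To make the traditional, control-dominated style concrete, here is a minimal C sketch (not from the article; the states and events are made up for a simple heater controller) of a switch-based FSM:

```c
#include <stdio.h>

/* Hypothetical states and events for a simple heater controller. */
typedef enum { STATE_IDLE, STATE_HEATING, STATE_FAULT } state_t;
typedef enum { EVENT_TEMP_LOW, EVENT_TEMP_OK, EVENT_SENSOR_ERROR } event_t;

/* One central transition function: the whole control logic lives here. */
static state_t next_state(state_t current, event_t event)
{
    switch (current) {
    case STATE_IDLE:
        if (event == EVENT_TEMP_LOW) return STATE_HEATING;
        if (event == EVENT_SENSOR_ERROR) return STATE_FAULT;
        return STATE_IDLE;
    case STATE_HEATING:
        if (event == EVENT_TEMP_OK) return STATE_IDLE;
        if (event == EVENT_SENSOR_ERROR) return STATE_FAULT;
        return STATE_HEATING;
    case STATE_FAULT:
    default:
        return STATE_FAULT;   /* stay latched until reset */
    }
}

int main(void)
{
    state_t s = STATE_IDLE;
    event_t demo[] = { EVENT_TEMP_LOW, EVENT_TEMP_OK, EVENT_SENSOR_ERROR };
    for (unsigned i = 0; i < sizeof demo / sizeof demo[0]; i++) {
        s = next_state(s, demo[i]);
        printf("state is now %d\n", s);
    }
    return 0;
}
```

This single-threaded, state-oriented style works well for one device in isolation, but it does not scale naturally to the distributed, concurrent architectures described above.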

Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards, meant to woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a particular processor architecture for prototyping and experimentation, have already become common. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products are becoming moot. Leaders with embedded systems development lifecycle management solutions speak out on new approaches available today for developing advanced products and systems.

Traditional approaches

C/C++

Traditionally embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you’re going to need to learn C. Very many embedded operating systems, including the Linux kernel, are written in C. C translates very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. Even today, C is often chosen over C++ when maximum efficiency is the goal.
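
As a small illustration of the low-level work C makes easy, the sketch below toggles a bit in a memory-mapped peripheral register. The register address and bit position are invented for the example; on real hardware they would come from the device datasheet or vendor headers.

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register (address is made up). */
#define GPIO_OUT_REG   (*(volatile uint32_t *)0x40020014u)
#define LED_PIN_MASK   (1u << 5)

void led_on(void)     { GPIO_OUT_REG |=  LED_PIN_MASK; }
void led_off(void)    { GPIO_OUT_REG &= ~LED_PIN_MASK; }
void led_toggle(void) { GPIO_OUT_REG ^=  LED_PIN_MASK; }
```

Each of these functions compiles down to a handful of load/store instructions, which is a big part of why C remains the default choice when every byte and cycle counts.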

C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there has been a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a legitimate concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.

And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; let the Arduino system be an example of that: you’re writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language”, and your “.ino” files are three lines away from being standard C++.

Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example an embedded Linux system that runs a C++ application. When you write C or C++ programs for a modern embedded Linux system, you typically use the GCC compiler toolchain for compilation and makefiles to manage the build process.

Most organizations put considerable focus on software quality, but software security is different. Now that security is a much-talked-about topic in today’s embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that’s inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today’s Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other methods.

Java

Java is a general-purpose computer programming language that is concurrent, class-based and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition to those, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Common examples include SIM cards, VOIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.

Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.

Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

My feeling about Java is that if your embedded platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.


Increasing trends

Databases

Embedded databases are finding their way into more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with the application software that requires access to stored data, such that the database system is “hidden” from the application’s end-user and requires little or no ongoing maintenance.

There are many possible databases. The first choice is what kind of database you need. The main choices are SQL databases and simpler key-value stores (often called NoSQL databases).

SQLite is the database chosen by virtually all mobile operating systems; for example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows of data).
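
To give a rough idea of what using SQLite from embedded C code looks like, here is a minimal sketch based on the public SQLite C API (sqlite3_open, sqlite3_exec); the database file and table names are just examples.

```c
#include <stdio.h>
#include <sqlite3.h>   /* link with -lsqlite3 */

int main(void)
{
    sqlite3 *db = NULL;
    char *err = NULL;

    if (sqlite3_open("sensors.db", &db) != SQLITE_OK) {
        fprintf(stderr, "cannot open database: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Create a table (if needed) and store one reading. */
    sqlite3_exec(db,
                 "CREATE TABLE IF NOT EXISTS readings(ts INTEGER, value REAL);"
                 "INSERT INTO readings VALUES (strftime('%s','now'), 21.5);",
                 NULL, NULL, &err);
    if (err != NULL) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}
```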

If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
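
For comparison, a key/value store such as Berkeley DB skips SQL entirely and just moves byte arrays. The following sketch, assuming the classic BDB C API, stores a single key/value pair with minimal error handling; the file and key names are made up.

```c
#include <string.h>
#include <db.h>        /* Berkeley DB; link with -ldb */

int main(void)
{
    DB *dbp = NULL;
    DBT key, data;

    if (db_create(&dbp, NULL, 0) != 0)
        return 1;
    if (dbp->open(dbp, NULL, "settings.db", NULL, DB_BTREE, DB_CREATE, 0664) != 0)
        return 1;

    /* Keys and values are arbitrary byte arrays. */
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data  = "node_id";  key.size  = (unsigned)strlen("node_id") + 1;
    data.data = "42";       data.size = (unsigned)strlen("42") + 1;

    dbp->put(dbp, NULL, &key, &data, 0);
    dbp->close(dbp, 0);
    return 0;
}
```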

RTA (Run Time Access) gives easy runtime access to your program’s internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.

Software quality

Building quality into embedded software doesn’t happen by accident; quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.

Safety

Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches for safety. Embedded Software Can Kill But Are We Designing Safely?

IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group’s recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not—and 11 percent did not even know whether they were designing to a standard or not.

If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious, and quite frankly, this is quite disturbing.

Security

The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues such as broken connections, latency and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that’s inherently insecure, and quality assurance does little to ensure security.

The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:

MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY

Many organizations are only now becoming aware of the need to incorporate security into their software development lifecycle.

Some techniques for building security into embedded systems:

Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed

Flood of new languages

Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). So there is a huge push to use interpreted and scripting languages also in embedded systems. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example the web user interface). Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. There are also many other relevant languages.

One workable solution, especially for embedded Linux systems, is to implement part of the functionality with scripting languages instead of writing everything as a C program. This makes it possible to change the operation simply by editing the script files, without the need to rebuild the whole system software again. With scripting languages it is also easier to implement, for example, a web user interface than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.

Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of embedded system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, IoT gateways, etc. Some of the scripting languages are well suited even to quite small embedded environments.
I have successfully used, among others, the Bash, AWK, PHP, Python and Lua scripting languages with embedded systems. They work really well, and it is really easy to make custom code quickly. They don’t require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used.
High-level, dynamically typed languages such as Python, Ruby and JavaScript are easy—and even fun—to use, and they lend themselves to code that can easily be reused and maintained.

There are some things that need to be considered when using scripting languages. Sometimes the lack of static checking compared to a regular compiler can cause problems to be thrown at run time, but you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn’t.

Bash and other shell scripting

Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is, you don’t. The real power of the computer is its ability to do the work for you – and the power of the shell script is that it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command-line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition to that, the shell also provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.). Your scripts can be truly powerful. Creating a script is extremely straightforward: it can be created in a separate graphical editor, or you can do it with a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example starting and stopping different Linux services in the right way).

One of the most useful tools when developing from within a Linux environment is the use of shell scripting. Scripting can help in setting up environment variables, performing repetitive and complex tasks and ensuring that errors are kept to a minimum. Since scripts are run from within the terminal, any command or function that can be performed manually from a terminal can also be automated!

The most common type of shell script is a Bash script; Bash is a commonly used scripting language for shell scripts. In Bash scripts (shell scripts written in Bash) users can use more than just Bash to write the script: there are commands that allow users to embed other scripting languages into a Bash script.

There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android and FreeBSD. BusyBox has become the de facto standard core user-space toolset for embedded Linux devices and Linux distribution installers.

Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.

Lua

Lua is a lightweight, cross-platform, multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
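
To show what that embedding looks like in practice, here is a minimal sketch of a C host program using the standard Lua C API (lua.h, lauxlib.h, lualib.h); in a real device the script would typically be loaded from a user-editable file rather than a string.

```c
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>    /* link with -llua (and -lm on some systems) */

int main(void)
{
    lua_State *L = luaL_newstate();   /* create a fresh Lua interpreter */
    luaL_openlibs(L);                 /* load the standard Lua libraries */

    /* Run a small script; a real application would load a file instead. */
    if (luaL_dostring(L, "limit = 42; print('limit is ' .. limit)") != 0) {
        fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
    }

    lua_close(L);
    return 0;
}
```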

Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.

There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.

PHP

PHP is a server-side scripting language that can be embedded into HTML. It provides web developers with a full suite of tools for building dynamic websites but can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for what they are good at – building the web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.

Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.

Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.

Python was designed to be highly extensible. Libraries like NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
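
Embedding works in the other direction too: a C host application can start a Python interpreter and hand parts of its logic over to scripts. A minimal sketch using the standard CPython embedding API looks roughly like this (compile and link flags are usually obtained from python3-config):

```c
#include <Python.h>    /* CPython embedding API */

int main(void)
{
    Py_Initialize();   /* start the embedded interpreter */

    /* Run a small script; a real device might load user-editable logic here. */
    PyRun_SimpleString("import sys\n"
                       "print('embedded Python', sys.version.split()[0])");

    Py_Finalize();     /* shut the interpreter down */
    return 0;
}
```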

Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.

For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. Projects such as MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-source language: Python.

Python on embedded platforms? It is quick to develop apps and quick to debug – it is really easy to make custom code quickly. Sometimes the lack of static checking compared to a regular compiler can cause problems to be thrown at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool which will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.

JavaScript and node.js

JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it’s so incredibly easy to learn. JavaScript’s reputation for providing users with beautiful, interactive websites isn’t where its usefulness ends. Nowadays, it’s also used to create mobile applications, cross-platform desktop software, and thanks to Node.js, it’s even capable of creating and running servers and databases! There is a huge community of developers.

Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.

Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.

It doesn’t require a complicated IDE; all you really need is a terminal.

JavaScript is a high-level language. While this usually means that it’s more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. Being slower definitely means that it may not be suitable for situations where timing and speed are critical.

JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!

JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We for example see a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google’s V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.

JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!

Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it’s standardized, event-driven, and has very high productivity: it’s dynamically typed, which makes it faster to write — perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.

Future trends

According to the New approaches to dominate in embedded development article, there will be several camps of embedded development in the future:

One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require the fine tuning. These are most likely to be high-performance, low-volume systems or else fixed-function, high-volume systems where cost is everything.

Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.

The third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.

The transition will take time. The environment is different from the computer and mobile world: there are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.

But the shift will happen as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot.


Sources

Most important information sources:

New approaches to dominate in embedded development

A New Approach for Distributed Computing in Embedded Systems

New Approaches to Systems Engineering and Embedded Software Development

Lua (programming language)

Embracing Java for the Internet of Things

Node.js

Wikipedia Node.js

Writing Shell Scripts

Embedded Linux – Shell Scripting 101

Embedded Linux – Shell Scripting 102

Embedding Other Languages in BASH Scripts

PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog

PHP

Python (programming language)

JavaScript: The Perfect Language for the Internet of Things (IoT)

Node.js for Embedded Systems

Embedded Python

MicroPython – Embedded Python

Anyone using Python for embedded projects?

Telit Programming Python

MICROCONTROLLERS AND NODE.JS, NATURALLY

Why node.js?

Node.JS Appliances on Embedded Linux Devices

The smartest way to program smart things: Node.js

Embedded Software Can Kill But Are We Designing Safely?

DEVELOPING SECURE EMBEDDED SOFTWARE


1,687 Comments

  1. Tomi Engdahl says:

    Do Not Use Print For Debugging In Python Anymore
    A refined “print” function for debugging in Python
    https://towardsdatascience.com/do-not-use-print-for-debugging-in-python-anymore-6767b6f1866d

    What is the most frequently used function in Python? Well, probably in most of the programming languages, it has to be the print() function. I believe most of the developers like me, would use it to print messages into the console many times during the development.
    Of course, there is no alternative that can completely replace the print() function. However, when we want to output something for debugging purposes, there are definitely better ways of doing so. In this article, I’m going to introduce a very interesting 3rd party library in Python called “Ice Cream”. It could create lots of conveniences for quick and easy debugging.

  2. Tomi Engdahl says:

    Rust is creeping into the Linux kernel, which could mean a very important step forward in terms of security.
    https://www.techrepublic.com/article/let-the-linux-kernel-rust/

  3. Tomi Engdahl says:

    JPEGENC is a fast and convenient JPEG image encoding library from Larry Bank, which can run on any MCU with at least 6K of free RAM.
    https://github.com/bitbank2/JPEGENC

  4. Tomi Engdahl says:

    How ‘shift left’ helps secure today’s connected embedded systems – EDN
    https://www.edn.com/how-shift-left-helps-secure-todays-connected-embedded-systems/
    DevSecOps—which stands for development security operations—expands on DevOps principles with a “shift left” principle, designing and testing for security early and continuously in each software iteration.
    Defense-in-depth and the process model
    Traditionally, the practice for secure embedded code verification has been largely reactive. Code is developed in accordance with relatively loose guidelines and then subjected to performance, penetration, load, and functional testing to identify vulnerabilities.
    A more proactive approach ensures code is secure by design. That implies a systematic development process, where the code is written in accordance with secure coding standards, is traceable to security requirements, and is tested to demonstrate compliance with those requirements as development progresses.
    One interpretation of this proactive approach integrates security-related best practices into the V-model software development lifecycle that is familiar to developers in the functional safety domain. The resulting secure software development life cycle (SSDLC) represents a shift left for security-focused application developers, ensuring that vulnerabilities are designed out of the system (Figure 1).
    Shift left: What it means
    The concepts behind the “shift left” principle should be familiar to anyone developing safety-critical applications because for many years, functional safety standards have demanded a similar approach. Consequently, the following best practices proven in the functional safety domain apply to security-critical applications as well:
    Establish requirements at the outset
    Undocumented requirements lead to miscommunication on all sides and create rework, changes, bug fixes, and security vulnerabilities. To ensure smooth project development, all team members must understand in the same way all parts of the product and the process of its development. Clearly defined functional and security requirements help ensure they do.

  5. Tomi Engdahl says:

    Digging Into An ATtiny Simulator Bug With GDB
    https://hackaday.com/2021/08/26/digging-into-an-attiny-simulator-bug-with-gdb/

    Being able to track down a bug in a mountain of source code is a skill in its own right, and it’s a hard skill to learn from a book or online tutorial. Besides the trial-by-fire of learning while debugging your own project, the next best thing is to observe someone else’s process. [Uri Shaked] has given us a great opportunity to brush up on our debugging skills, as he demonstrates how to track down and squish a bug in the Wokwi Arduino simulator.

    A user was kind enough to report the bug and include the offending Arduino sketch. [Uri]’s first step was to reduce the sketch to the smallest possible program that would still produce the bug.

    GDB Debugging ATtiny Simulator Bug – Building Wokwi
    https://www.youtube.com/watch?v=YUUHw-bU9yk

  6. Tomi Engdahl says:

    Step 1: pick a code to learn.
    Step 2: pick an object you want to control.
    Step 3: search for similar projects to get examples and help when needed.
    Step 4 – ~: meditate to relieve the stress of trouble shooting.

  7. Tomi Engdahl says:

    Raspberry Pi is the most popular board for development
    https://etn.fi/index.php/13-news/12516-raspberry-pi-on-suosituin-kortti-kehitykseen

    Farnell's research shows that the most popular embedded board for product development is the Raspberry Pi. Single-board computers are used more and more in industrial designs and in the Internet of Things, which has led to a shortage of the most popular boards.

    According to Farnell's report, half of professional designers use SBCs (single board computers) for industrial applications and IoT designs, which are the most popular SBC applications. The Raspberry Pi is the most popular board, favored by 44 percent of professional users.

    Arduino came second (28 percent) and Texas Instruments' Beagleboard third with 6 percent. The survey also shows that Raspberry Pi users are the most loyal and are the least likely to use another board.

    Single-board computers are used in all product development and production phases: 23 percent of respondents use them for design proof of concept and 35 percent for prototyping, and 22 percent also use the inexpensive boards in volume production.

  8. Tomi Engdahl says:

    The Dark Side Of Package Repositories: Ownership Drama And Malware
    https://hackaday.com/2021/09/08/the-dark-side-of-package-repositories-ownership-drama-and-malware/

    At their core, package repositories sound like a dream: with a simple command one gains access to countless pieces of software, libraries and more to make using an operating system or developing software a snap. Yet the rather obvious flip side to this is that someone has to maintain all of these packages, and those who make use of the repository have to put their faith in that whatever their package manager fetches from the repository is what they intended to obtain.

    How ownership of a package in such a repository is managed depends on the specific software repository, with the especially well-known JavaScript repository NPM having suffered regular PR disasters on account of it playing things loose and fast with package ownership. Quite recently an auto-transfer of ownership feature of NPM was quietly taken out back and erased after Andrew Sampson had a run-in with it painfully backfiring.

    In short, who can tell when a package is truly ‘abandoned’, guarantee that a package is free from malware, and how does one begin to provide insurance against a package being pulled and half the internet collapsing along with it?

  9. Tomi Engdahl says:

    Our devices could be used a lot longer if manufacturers could be coaxed to provide software support for them indefinitely. Right now, the incentives go the other way around.

    We Need Software Updates Forever Manufacturers should maintain their software and firmware indefinitely
    https://spectrum.ieee.org/we-need-software-updates-forever

  10. Tomi Engdahl says:

    You Can Run Doom on a Chip From a $15 Ikea Smart Lamp
    Software engineer Nicola Wrachien demoed his creation in a video that shows the chip running a memory-optimized version of Doom over his custom hardware.
    https://uk.pcmag.com/games/133930/you-can-run-doom-on-a-chip-from-a-15-ikea-smart-lamp

  11. Tomi Engdahl says:

    Learning From a Rocket with the World’s Smallest Flight Computer
    Immensely complex and high risk (and also a lot of fun!).
    https://www.hackster.io/news/learning-from-a-rocket-with-the-world-s-smallest-flight-computer-2fa6b5fa6e9a

  12. Tomi Engdahl says:

    To Stack or Not to Stack—Allocation is the Question
    Sept. 23, 2021
    Embedded programmers have three places to store data: global, the heap, and the stack.
    https://www.electronicdesign.com/altembedded/article/21176380/electronic-design-to-stack-or-not-to-stackallocation-is-the-question

    What you’ll learn:

    Why stack allocation is a good idea for embedded programming.
    Myths about stack allocation.

    Stack Allocation Myths

    If someone wants to write an “11 Myths” article for me that would be great, but I will only hit on a couple here so that we can move onto implementation details. By the way, I will be looking at this from an Ada/SPARK and then C/C++ perspective, so hang in there. I know most of you will be C/C++ fans.

    Myth: Stacks are small and should be used sparingly.

    Many recommend that structures and arrays be allocated in the heap and that stacks are small, leading to stack overrun errors. Actually, all memory comes from the same place, and in single threaded applications, the stack is often at the top of a block of memory that contains the heap at the bottom—hopefully they never grow into each other.

    Two things change this model. First is multitasking, where each task has its own stack and often a shared heap. Second, virtual memory or memory protection systems enable large memory spaces to have physical memory scattered about with the ability to move memory around on demand.

    In any case, stack size is something that’s normally configurable. Therefore, if the amount of memory needed is known, as is often the case for embedded applications, then heap allocation, with its potential for memory fragmentation, can be avoided.

    Myth: Stack allocation is only for fixed-size elements.

    This is true for most programming languages. However, some languages, such as Ada, allow for variable-sized objects such as arrays to be allocated upon entry to a function or block.

    Myth: Stack size is an unknown.

    This can be true when using recursive functions. But those aside, the maximum amount of stack space required can be computed, even including interrupt support. The amount of memory needed is essentially the maximum sum of the amounts of memory needed by each procedure while walking the call graph of an application.

    Determining stack size is actually a feature built into C/C++ compilers. The gcc compiler’s -fstack-usage flag will generate a table that has the amount of memory used by each procedure. Call-graph analysis tools like stack_usage are available that can use this information to determine the maximum amount of memory needed.

    IAR’s Embedded Workbench has this built into the compiler. It’s part of the static analysis provided by IAR, which is very useful for embedded developers that the suite targets.

    GNATstack is a tool from AdaCore that works with Ada/C/C++. It can handle recursive functions if you provide a limitation on the level of recursion and it can detect cycles of indirect recursion.

    Heap Allocation Within a Procedure

    One reason heap allocation is often used rather than stack allocation is to address changing sizes at runtime (Fig. 1). Normally the structure, such as an array, is allocated upon entry, used, and then released upon exit.

    Dynamic Stacking

    Some programming languages like Ada allow variable structure sizes to be allocated on the stack (Fig. 2). In Ada’s case, it’s possible to do this allocation within a block in addition to a procedure or function entry.

    Static Stacking and Simplifying Memory Management

    Embedded applications often utilize global memory for static storage or allocate a predefined set of buffers from the heap at startup. These are never freed. Stack allocation can be used in the same fashion, providing some advantages such as the potential of eliminating the heap, its overhead, and potential problems like memory fragmentation.

    In this case, procedures or functions would be written to allocate and initialize the desired data and pass references or pointers to procedures or functions that would use the data. The stack would grow and shrink as needed, so a function that sends a block of data and then receives a response would be provided with buffers allocated in the stack.

    Stack allocation has the advantage over heap allocation when it comes to speed and simplicity. Heap allocation requires extracting a block when one is requested and returning it when the application is done with it. Overhead is often minimal, but it’s not usually deterministic when compared to stack allocation.

    Another advantage to this approach for multitasking applications is that the memory overhead will be known for a particular process. It can be easily replicated since it’s a matter of starting up a new task.

    Stacks that Grow and Overflows

    Virtual-memory and memory-protection systems make stack allocation more interesting—overflows can be detected—and addressed—when they occur, versus memory leaks that often result in errors well past the occurrence of the root cause. Likewise, for embedded applications, an overflow usually indicates incorrect program design since, in theory, a task should have a fixed upper limit for its stack usage.

    Individual tasks rarely use all available memory and most virtual-memory systems have an address space that’s significantly larger than physical memory. This means that spaces can be left between blocks of memory; accessing the intervening spaces results in a fault because it’s considered an error. For stack support, it also can be used to initially provide a smaller block; then more physical memory can be added to the stack if an overflow is detected. New Cortex-M microprocessors have built-in stack overflow checking in the hardware.

    Challenges Using Libraries

    Incorporating outside code such as libraries using a stack-first approach can be a challenge depending on the tools and libraries. That’s because, in many instances, the operation of the libraries with respect to stack usage is unknown. This is a general problem because even if large blocks aren’t allocated in the stack, but rather globally or in the heap, the use of library functions still requires stack space.

    On the embedded side, libraries are often available in source form so that they can be analyzed. It’s also possible to do runtime estimates and then provide headroom so that applications don’t run the risk of stack overflow.
    Wrapping Up

    Many of you may already lean toward stack allocation, but the design pattern or frame of mind for others may not be as common. The approach doesn’t have to be used exclusive of heap allocation. However, like many techniques, it can be useful if you can examine the application from this perspective.

    In many instances, it’s a matter of how you approach a problem. Heap-allocation procedures often return a block allocated from the heap. With a stack approach, you need to pass the already allocated stack entity to a function to do some work.

  13. Tomi Engdahl says:

    BFree Brings Intermittent Computing To Python
    https://hackaday.com/2021/09/29/bfree-brings-intermittent-computing-to-python/

    Generally speaking, we like our computing devices to remain on and active the whole time we’re using them. But there are situations, such as off-grid devices that run on small solar cells, where constant power is by no means a guarantee. That’s where the concept of intermittent computing comes into play, and now thanks to the BFree project, you can develop Python software that persists even when the hardware goes black.

    Implemented as a shield that attaches to a Adafruit Metro M0 Express running a modified CircuitPython interpreter, BFree automatically makes “checkpoints” as the user’s code is running so that if the power is unexpectedly cut, it can return the environment to a known-good state instantaneously. The snapshot of the system, including everything from the variables stored in memory to the state of each individual peripheral, is stored on the non-volatile FRAM of the MSP430 microcontroller on the BFree board; meaning even if the power doesn’t come back on for weeks or months, the software will be ready to leap back into action.

    https://github.com/tudssl/bfree

  14. Tomi Engdahl says:

    https://etn.fi/index.php/13-news/12652-uusi-matlab-ehdottaa-korjauksia-koodiin

    MathWorks has introduced the 2021b update to the popular MATLAB and the Simulink tool. The package brings two completely new products, five major feature updates and hundreds of new features in total.

    Among the new features in MATLAB are coding-assistance capabilities. Going forward, the editor can rearrange code and edit whole code blocks at once. The editor also suggests fixes to the code and performs error checking. The goal is to produce more efficient code with fewer errors.

  15. Tomi Engdahl says:

    Awesome Python Video Tutorials Keep You Motivated
    https://hackaday.com/2021/10/01/awesome-python-video-tutorials-keep-you-motivated/

    Programming languages are one of those topics that we geeks have some very strong and often rather polarised opinions about. As new concepts in computing are dreamt up, older languages may grow new features, if viable, or get left behind when new upstarts come along and shake things up a bit. This scribe can remember his early days programming embedded systems, and the arguments that ensued when someone came along with a project that required embedded C++ or worse, Java, when we were mostly diehard C programmers. Fast forward a decade or two, and things are way more complicated. So much choice, so much opinion.

    So it’s really nice to come across some truly unique and beautifully made Python tutorial videos, that are engaging and fun to watch.

    https://www.youtube.com/playlist?list=PLi01XoE8jYohWFPpC17Z-wWhPOSuh8Er-

  16. Tomi Engdahl says:

    Need A New Programming Language? Try Zig
    https://hackaday.com/2021/10/05/need-a-new-programming-language-try-zig/

    Maybe you’ve heard of it, maybe you haven’t. Zig is a new programming language that seems to be growing in popularity. Let’s do a quick dive into what it is, why it’s unique, and what sort of things you would use it for. (Ed Note: Other than “for great justice“, naturally.)
    What Is It?

    You’ve likely heard of Rust as it has made significant inroads in critical low-level infrastructures such as operating systems and embedded microcontrollers. As a gross oversimplification, it offers memory safety and many traditional runtime checks pushed to compile time. It has been the darling of many posts here at Hackaday as it offers some unique advantages. With Rust on the rise, it makes sense that there might be some space for some new players. Languages like Julia, Go, Swift, and even Racket are all relative newcomers vying for the highly coveted mindshare of software engineers everywhere.

    So let’s talk Zig. In a broad sense, Zig is really trying to provide some of the safety of Rust with the simplicity and ease of C. It touts a few core features such as:

    No hidden control flow
    No hidden memory allocations
    No preprocessor, no macros
    First-class support for optional standard library
    Interoperable by design
    Adjustable Runtime Safety
    Compile-time code-execution

    The backtrace here is especially impressive because this is a relatively simple language without a garbage collector, runtime, or virtual machine.

    Let’s talk about some of Zig’s other features: interoperable by design, adjustable runtime safety, and compile-time code execution.

    “Interoperable by design” means that ordinary Zig is easily consumed by C and in turn, consumes C. In many other languages, such as Python, you need to specifically marshall data for C and C++ interoperability. Zig can include C files directly in the main code by virtue of the built-in Clang compiler.

    “Adjustable runtime safety” means that many of the runtime checks that Zig has can be turned on or off depending on the application. Things like integer overflow, bounds checking, unreachable code, and others.

    What Would You Use It For?

    Since Zig is LLVM-based, the targets for Zig include:

    x86_64
    ARM/ARM64
    MIPS
    PowerPC
    WASM32
    RISCV64
    Sparc v9
    Linux
    MacOS
    Windows
    FreeBSD
    DragonFly
    UEFI

    Given that it interoperates with C so smoothly, it’s quite simple to swap out small chunks or libraries for Zig equivalents.

    Additionally, Zig can be used on microcontrollers. As a bit of a cherry-picked example, [Kevin Lynagh] recently went through the journey of converting his keyboard firmware from Rust to Zig. Many of Rust’s well-known language features such as features, macros, and pattern matching are used to initialize and scan ports for key presses. In Zig, these are replaced by inline for, a for loop that is unrolled at compile time, and some clever use of comptime. In particular [Kevin] points out the consistency of the language and how it is a language that he feels like he could master.

    https://ziglang.org/

  17. Tomi Engdahl says:

    Python is the most popular
    https://etn.fi/index.php/13-news/12643-python-on-suosituin

    IEEE Spectrum magazine has once again ranked the programming languages. Python came out on top, still with a clear margin over the others, followed by Java, C, C++ and JavaScript.

    Top Programming Languages 2021
    https://spectrum.ieee.org/top-programming-languages/#toggle-gdpr

  18. Tomi Engdahl says:

    Coding for Regulated Energy Systems
    https://shiftleft.grammatech.com/coding-for-regulated-energy-systems

    North American Electric Reliability Corporation’s supply chain regulations could cost millions for those out of compliance. Dick Brooks explains how this impacts third party software providers of NERC-related systems.

    The stakes are high for energy companies acquiring third-party software. They face fines of up to $1 million per day if they’re found to be out of compliance with NERC’s newest supply chain risk management regulations.

    Q: Can you refresh us on requirements for software supply chain development?

    The North American Electric Reliability Corporation defines the cyber security infrastructure protection standards for the energy industry. Last year, NERC’s software supply chain CIP standards went into effect, which require electric entities to verify the integrity and authenticity of a software package before making any baseline configuration changes. That means that NERC auditors will come on sight and look for evidence that companies are following these regulations and checking the authenticity of software suppliers before making any changes.

    Q: What are the costs associated with noncompliance?

    NERC defines the fines. Depending on how egregious the violation is, it can run into the millions of dollars in the worst cases. For example, if a party doesn’t provide evidence demonstrating their compliance, and if the software is being used on energy systems that are critical, these infractions would add to the fines. There haven’t been a lot of fines issued yet, largely because of COVID-19 keeping auditors from going out to check. That’s going to come to an end as the pandemic lifts.

    Of note: NERC issued a 2019 fine of $10 million to an energy co with 127 CIP violations, including failure to identify and classify critical assets.

    Q: How does this impact developers of third-party energy applications (and for other ICS systems)?

    Edison Electric Institute produced the Model Procurement Contract Language for energy consumers that spells out expectations for their software vendors. One critical element that the EEI identified is the software bill of materials (SBOM), to provide software vendors some insight into what sort of questions and answers are expected of them. The North American Transmission Forum has also created a model questionnaire for electric companies to use when submitting to their software vendors to help them perform a software risk assessment.

    Any software vendor who’s providing a solution used within the energy critical infrastructure is subject to providing responses to those questions and the SBOM materials called out in the Edison document. There are a lot more details there that software vendors need to be aware of, including what standards and format to use and how the SBOM will be delivered. Today, vendors are using their customer portals to distribute software bill of materials, along with the questionnaire, too.

    Q: What best practices can you offer to NERC-related developers and developers of other ICS systems.

    Get used to SBOMs in the DevOps environment. Also, look within repositories like GitHub to help implement these SBOMs. There are lots of tools and free help available, and the learning curve is relatively short.

    Next, grab a copy of the vendor questionnaire available online and produce that document so customers can retrieve it and automate their supply chain risk management tools. If you are using SPDX and CycloneDX standard formats that have been adopted by the NTIA, you’ll be well on your way to having a viable solution.

  19. Tomi Engdahl says:

    The easiest programming languages to learn
    https://www.zdnet.com/article/easiest-programming-languages-to-learn/

    Adding programming languages to your skill set can open new career opportunities or increase your earning potential. But what are the easiest programming languages to learn?

  20. Tomi Engdahl says:

    Single Event Upsets: High Energy Particles From Outer Space Flipping Bits
    https://hackaday.com/2021/10/09/single-event-upsets-high-energy-particles-from-outer-space-flipping-bits/

    Our world is constantly bombarded by high-energy particles from various sources, and if they hit in just the right spot on the sensitive electronics our modern world is built on, they can start flipping bits. Known as Single Event Upsets (SEU), their effect can range from unnoticeable to catastrophic, and [Veritasium] explores this phenomenon in the video after the break.

    The existence of radiation has been known since the late 1800s, but the effect of low-level radiation on electronics was only recognized in the 1970s when trace amounts of radioactive material in the ceramic packaging of Intel DRAM chips started causing errors. The most energetic particles come from outer space and are known as cosmic rays. They originate from supernovas and black holes, and on earth they have been linked to an impossibly fast Super Mario 64 speedrun and a counting error in a Belgian election. It’s also possible to see their path using a cloud chamber you can build yourself. There are even research projects that use the camera sensors of smartphones as distributed cosmic ray detectors.

    The Universe is Hostile to Computers
    https://www.youtube.com/watch?v=AaZ_RSt0KP8

    Reply
  21. Tomi Engdahl says:

    With Luos Rapid Embedded Deployment Is Simplified
    https://hackaday.com/2021/10/10/with-luos-rapid-embedded-deployment-is-simplified/

    Those of us tasked with developing firmware for embedded systems have quite a few hurdles to jump through compared to those writing for desktop or mobile platforms. Problems considered solved elsewhere, such as code reuse and portability, are simply harder. It was with considerable interest that we learned of another approach to hardware abstraction, called Luos, which describes itself as micro-services for embedded systems.

    This open source project enables deployment of distributed architectures composed of collaborating micro-services. By containerizing applications and hardware drivers, interfaces to the various components are hidden behind a consistent API. It doesn’t even matter where a resource is located: multiple services may be running on the same microcontroller or on separate ones, yet they communicate in the same way.

    By following the hardware and software design rules, it’s possible to create an architecture of cooperating computing units that is completely agnostic of the actual hardware. Microcontrollers talk at the hardware level over a pair of bidirectional signals, so the hardware cost is very low. It even integrates with ROS, making robots even easier to build.
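
    Purely to illustrate the underlying idea (drivers and applications wrapped as services that only exchange messages addressed by alias, so the sender never cares which MCU actually hosts a resource), here is a minimal, self-contained C++ sketch of the pattern. The names and the toy message bus are invented for this example and are not Luos identifiers; Luos’s real API is documented at luos.io.

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    // Toy message bus in the spirit of "micro-services for embedded systems":
    // services never call each other directly, they exchange messages addressed
    // by alias, so the sender does not know (or care) where the target runs.
    struct Message {
        std::string target_alias;
        std::string command;
        int value = 0;
    };

    struct Service {
        std::string alias;
        std::function<void(const Message&)> handler;
    };

    class Bus {
    public:
        void register_service(Service s) { services_.push_back(std::move(s)); }

        void send(const Message& msg) {
            for (auto& s : services_)
                if (s.alias == msg.target_alias) s.handler(msg);
        }

    private:
        std::vector<Service> services_;
    };

    int main() {
        Bus bus;

        // A "driver" service wrapping an LED; in a real system this would sit
        // next to the GPIO code on whichever MCU carries the LED.
        bus.register_service({"led", [](const Message& m) {
            std::printf("led driver: %s -> %d\n", m.command.c_str(), m.value);
        }});

        // An "application" service that only knows the driver's alias.
        bus.send({"led", "set_brightness", 128});
        return 0;
    }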

    By integrating a special block referred to as a Gate, it is possible to connect to the architecture in real-time from a host computer via USB, WiFi, or serial port, and stream data out, feed data in, or deploy new software. The host software stack is based around Python, running under Jupyter Notebook, which we absolutely love.

    Current compatibility covers many STM32 and ATSAM21 micros.

    https://www.luos.io/

    Reply
  22. Tomi Engdahl says:

    Python Provides Classic Basic
    https://hackaday.com/2021/10/08/python-provides-classic-basic/

    Back in the late 1970s and early 1980s when you turned on a PC, more often than not, you’d get a Basic prompt. Most people would then load a game from a tape, but if you were inclined to program you could just start writing. [Richpl] wanted that same experience and thus PyBasic was born. With contributions from other GitHub users, the system has grown quite a bit and would be a good starting point for porting classic games or creating a replica vintage computer.

    The interpreter lacks specialized hardware-specific features such as sound and graphics, of course, but then again, you could add them. It does have file I/O and also includes some interesting features like an analog of C’s ternary operator.

    https://github.com/richpl/PyBasic

    Reply
  23. Tomi Engdahl says:

    Will AI Help Design Your Next Product?
    Sept. 29, 2021
    Machine learning is creeping into tools used to design chips, software, and more.
    https://www.electronicdesign.com/altembedded/article/21175371/electronic-design-will-ai-help-design-your-next-product?utm_source=EG%20ED%20Connected%20Solutions&utm_medium=email&utm_campaign=CPS211004016&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    Machine-learning (ML) and artificial-intelligence (AI) models based on deep neural networks (DNNs) are being exploited in a plethora of applications from voice analysis in the cloud for smart speakers to identifying objects for self-driving cars. Many of these applications employ multiple models that perform different types of identification and optimization chores.

    But consumer and business applications aren’t the only places where AI/ML is coming into play. AI/ML software-development kits allow designers to incorporate these technologies into their own products, and tool developers are integrating them into their solutions so that the compiler you’re using might have an AI/ML model or two tuning your next design.

    AI/ML will continue to crop up in more tools. However, in conventional language compilers like gcc or LLVM, it’s made fewer inroads because such compilers already have a good deal of optimization implemented by design.

    Reply
  24. Tomi Engdahl says:

    XOD: a visual programming language for microcontrollers
    https://xod.io/

    Reply
  25. Tomi Engdahl says:

    Basics Of Remote Cellular Access: Watchdogs
    https://hackaday.com/2021/10/14/basics-of-remote-cellular-access-watchdogs/

    When talking about remote machines, sometimes we mean really remote, beyond the realms of wired networks that can deliver the Internet. In these cases, remote cellular access is often the way to go. Thus far, we’ve explored the hardware and software sides required to control a machine remotely over a cellular connection.

    However, things can and do go wrong. When that remote machine goes offline, getting someone on location to reboot it can be prohibitively difficult and expensive. For these situations, what you want is some way to kick things back into gear, ideally automatically. What you’re looking for is a watchdog timer!

    Watchdogs

    The concept of a watchdog timer is simple. When attached to a system and enabled, the watchdog timer starts counting down from a preset time. The embedded system or computer is then responsible for sending a “kick” signal to the watchdog at regular intervals. This resets the watchdog back to its maximum time value, and it begins counting down again. If the “kick” is not received before the watchdog timer reaches zero, the watchdog reboots the system.
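
    To make the kick-or-reboot behavior concrete, here is a tiny, self-contained C++ simulation of a watchdog timer. It is a software model only: on real hardware you would use the MCU’s internal or an external watchdog peripheral, and the reset would be performed by the chip rather than by returning from main.

    #include <cstdio>

    // Minimal software model of a watchdog: counts down on every tick and
    // signals a reset when it reaches zero, unless kick() reloads it first.
    class WatchdogTimer {
    public:
        explicit WatchdogTimer(int timeout_ticks)
            : timeout_(timeout_ticks), remaining_(timeout_ticks) {}

        void kick() { remaining_ = timeout_; }   // the periodic "I'm alive" signal

        // Returns true when the timeout has expired and the system should reset.
        bool tick() { return --remaining_ <= 0; }

    private:
        int timeout_;
        int remaining_;
    };

    int main() {
        WatchdogTimer wdt(5);                    // reset if not kicked within ~5 ticks

        for (int t = 1; t <= 20; ++t) {
            bool application_alive = (t < 10);   // simulate a hang from tick 10 onward

            if (application_alive) {
                wdt.kick();                      // normal operation: keep kicking
            }
            if (wdt.tick()) {
                std::printf("tick %d: watchdog expired, rebooting system\n", t);
                return 0;                        // a real watchdog would reset the MCU here
            }
            std::printf("tick %d: system running\n", t);
        }
        return 0;
    }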

    Summary

    The aim of this article is to explain the basic concept of watchdogs, and why they’re useful for remote systems. Hopefully, the ideas presented here are enough to help you implement watchdog timers to improve the uptime and serviceability of your own projects. After all, there’s nothing cooler than being able to show off your rugged and reliable remote project to everyone at the Hackerspace.

    Reply
  26. Tomi Engdahl says:

    https://www.mwrf.com/technologies/systems/media-gallery/21177690/microwaves-rf-products-of-the-week-october-8-2021?id=21177690&slide=4

    Software Tool Displays Runtime Variables in Real Time

    STMicroelectronics’ STM32CubeMonitor software tool displays runtime variables of STM32 applications in real time, allowing developers to customize the graphical visualization on Windows, Linux, or macOS. The app offers an extensive set of features that simplify access to valuable insights, including a graphical flow editor that allows users to drag and drop items and features to build custom dashboards.

    Users can also quickly add widgets such as gauges, bar graphs, and plots without the need for programming. By utilizing Node-RED, STM32CubeMonitor can be coupled with a wide choice of extensions to address a wide diversity of application types.

    The app provides two function modes to best suit users’ needs: a Design mode that allows users to create and edit monitoring interfaces for specific applications, and an Operator mode that supports the deployment of pre-built user interfaces for demonstrations and field testing.

    https://www.st.com/en/development-tools/stm32cubemonitor.html

    The STM32CubeMonitor family of tools helps to fine-tune and diagnose STM32 applications at run-time by reading and visualizing their variables in real-time. In addition to specialized versions (power, RF, USB-PD), the versatile STM32CubeMonitor provides a flow-based graphical editor to build custom dashboards simply, and quickly add widgets such as gauges, bar graphs and plots. With non-intrusive monitoring, STM32CubeMonitor preserves the real-time behavior of applications, and perfectly complements traditional debugging tools to perform application profiling.
    [STM32CubeMonitor screen capture]
    With remote monitoring and native support for multi-format displays, STM32CubeMonitor enables users to monitor applications across a network, test multiple devices simultaneously, and perform visualization on various host devices such as PCs, tablets, or mobile phones. Moreover, with the direct support of the Node-RED® open community, STM32CubeMonitor allows an unlimited choice of extensions to address a wide diversity of application types.

    All features

    Graphical flow-based editor with no programming needed to build dashboards
    Connects to any STM32 device via ST-LINK (SWD or JTAG protocols)
    Reads and writes variables on-the-fly from and to the RAM in real time while the target application is running
    Parses debugging information from the application executable file
    Direct acquisition mode or snapshot mode
    Trigger to focus on application behaviors of interest

    Logs data to a file and replays it for exhaustive analysis
    Delivers customized visualization with configurable display windows (such as curves and boxes) and a large choice of widgets (such as gauges, bar graphs and plots)
    Multi-probe support to monitor multiple targets simultaneously
    Remote monitoring with native support of multi-format displays (PCs, tablets, mobile phones)
    Direct support of the Node-RED® open community
    Multi-OS support: Windows®, Linux® Ubuntu® and macOS®
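
    Because the tool locates variables through the debug information in the application’s executable and then reads them from RAM over ST-LINK while the code runs, anything you want to watch typically needs a fixed address, which in practice means a global or static variable. The fragment below is only a generic illustration of that point (plain C++, not tied to any particular ST HAL); marking the variables volatile is our own precaution so the compiler does not optimize the updates away.

    #include <cstdint>

    // Globals have fixed RAM addresses recorded in the ELF debug info, so a
    // monitoring tool can find and sample them while the firmware runs.
    volatile uint32_t g_loop_counter  = 0;
    volatile int32_t  g_temperature_c = 0;   // latest sensor reading (placeholder)

    static int32_t read_temperature_sensor() {
        return 25;                           // stand-in for the real sensor driver
    }

    int main() {
        for (;;) {
            g_temperature_c = read_temperature_sensor();
            ++g_loop_counter;                // values a tool could plot in real time
        }
    }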

    Reply
  27. Tomi Engdahl says:

    Using GNU Profiling (gprof) With ARM Cortex-M
    Learn how to profile an embedded application on ARM Cortex-M devices using GNU gprof.
    https://dzone.com/articles/using-gnu-profiling-gprof-with-arm-cortex-m

    Tutorial: Using GNU Profiling (gprof) with ARM Cortex-M
    https://mcuoneclipse.com/2015/08/23/tutorial-using-gnu-profiling-gprof-with-arm-cortex-m/

    Reply
  28. Tomi Engdahl says:

    Vizio In Hot Water Over Smart TV GPL Violations
    https://hackaday.com/2021/10/22/vizio-in-hot-water-over-smart-tv-gpl-violations/

    As most anyone in this community knows, there’s an excellent chance that any consumer product advertised as “smart” these days has some form of Linux running under the hood. We’re also keenly aware that getting companies to hold up their end of the bargain when it comes to using Linux and other GPL licensed software in their products, namely releasing their modified source, isn’t always as cut and dried as it should be.

    Occasionally these non-compliant companies will get somebody so aggravated that they actually try to do something about it, which is where smart TV manufacturer Vizio currently finds itself. The Software Freedom Conservancy (SFC) recently announced they’re taking the Irvine, California based company to court over their repeated failures to meet the requirements of the GPL while developing their Linux-powered SmartCast TV firmware. In addition to the Linux kernel, the SFC also claims Vizio is using modified versions of various other GPL and LGPL protected works, such as U-Boot, bash, gawk, tar, glibc, and ffmpeg.

    According to the SFC press release, the group isn’t looking for any monetary damages. They simply want Vizio to do what’s required of them as per the GPL and release the SmartCast source code, which they hope will allow for the development of an OpenWrt-like replacement firmware for older Vizio smart TVs. This is particularly important as older models will often stop receiving updates, and in many cases, will no longer be able to access all of the services they were advertised as being able to support. Clearly the SFC wants this case to be looked at as part of the larger Right to Repair debate, and given the terrible firmware we’ve seen some of these smart TVs ship with, we’re inclined to agree.

    Software Freedom Conservancy files right-to-repair lawsuit against California TV manufacturer Vizio Inc. for alleged GPL violations
    Litigation is historic in nature due to its focus on consumer rights, filing as third-party beneficiary
    https://sfconservancy.org/copyleft-compliance/vizio.html

    IRVINE, Calif. (Oct. 19, 2021) Software Freedom Conservancy announced today it has filed a lawsuit against Vizio Inc. for what it calls repeated failures to fulfill even the basic requirements of the General Public License (GPL).

    The lawsuit alleges that Vizio’s TV products, built on its SmartCast system, contain software that Vizio unfairly appropriated from a community of developers who intended consumers to have very specific rights to modify, improve, share, and reinstall modified versions of the software.

    The GPL is a copyleft license that ensures end users the freedom to run, study, share, and modify the software. Copyleft is a kind of software licensing that leverages the restrictions of copyright, but with the intent to promote sharing (using copyright licensing to freely use and repair software).

    Software Freedom Conservancy, a nonprofit organization focused on ethical technology, is filing the lawsuit as the purchaser of a product which has copylefted code. This approach makes it the first legal case that focuses on the rights of individual consumers as third-party beneficiaries of the GPL.

    “That’s what makes this litigation unique and historic in terms of defending consumer rights,” says Karen M. Sandler, the organization’s executive director.

    Reply
  29. Tomi Engdahl says:

    Ada for the Embedded C Developer
    May 27, 2021
    Why would you need to consider learning another programming language?
    https://www.electronicdesign.com/industrial-automation/article/21165399/adacore-ada-for-the-embedded-c-developer?utm_source=EG%20ED%20Auto%20Electronics&utm_medium=email&utm_campaign=CPS211019022&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    If you’re a C programmer, then you’ve probably heard about Ada but discounted it because, well, it isn’t C. And why would you need to consider learning another language anyway?

    While C is considered a good choice for desktop programs or applications where a shortened time-to-market is a major objective, it’s poorly suited for working within the domain of high-integrity systems. Ada’s strength is in areas where reliability is paramount. It’s heavily used in embedded real-time systems, many of which are safety-critical. Specific domains include aerospace and defense, civil aviation, rail, automotive, and medical, among others. These applications require a high degree of safety: A software defect isn’t just an annoyance—it may have severe consequences.

    Learning Ada isn’t complicated. Programming paradigms haven’t evolved much since object-oriented programming gained a foothold, and the same paradigms are present one way or another in many widely used languages.

    AdaCore recently launched a new Ada for the Embedded C Developer course that introduces you to the Ada language by comparing it to C. It assumes that you have good knowledge of the C language. It also assumes that the choice of learning Ada is guided by considerations linked to reliability, safety, or security. In that sense, it teaches you Ada paradigms that should be applied in replacement of those usually applied in C.

    Ada for the Embedded C Developer
    https://learn.adacore.com/courses/Ada_For_The_Embedded_C_Developer/index.html

    Reply
  30. Tomi Engdahl says:

    https://www.uusiteknologia.fi/2021/10/28/aivan-tarkka-data-ei-aina-tarpeen/

    In the Apropos project coordinated by Tampere University, early-career researchers are developing approximate computing solutions that aim to find the optimal balance between data accuracy and energy consumption. In the future, the results could be useful, for example, in managing Internet of Things (IoT) devices.

    Reply
  31. Tomi Engdahl says:

    Pain Points for RF Product Design
    Oct. 29, 2021
    Many unique RF challenges emerge in the development of products, from implementing active wireless technologies to passive RFID tagging. To help address them, it’s crucial to engage your RF engineers early in the design and product requirements stage.
    https://www.electronicdesign.com/technologies/analog/article/21179902/intelligent-product-solutions-pain-points-for-rf-product-design?utm_source=EG%20ED%20Analog%20%26%20Power%20Source&utm_medium=email&utm_campaign=CPS211025093&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    Reply
  32. Tomi Engdahl says:

    Run Machine Learning Code in an Embedded IoT Node to Easily Identify Objects
    https://www.digikey.com/en/articles/run-machine-learning-code-in-an-embedded-iot-node?dclid=CJHs2oKhiPQCFdcqGAodDfYFEA

    Internet of Things (IoT) networks operating in dynamic environments are being expanded beyond object detection to include visual object identification in applications such as security, environmental monitoring, safety, and Industrial IoT (IIoT). As object identification is adaptive and involves using machine learning (ML) models, it is a complex field that can be difficult to learn from scratch and implement efficiently.

    The difficulty stems from the fact that an ML model is only as good as its data set, and once the correct data is acquired, the system must be properly trained to act upon it in order to be practical.

    This article will show developers how to implement Google’s TensorFlow Lite for Microcontrollers ML model into a Microchip Technology microcontroller. It will then explain how to use the image classification and object detection learning data sets with TensorFlow Lite to easily identify objects with a minimum of custom coding.

    It will then introduce a TensorFlow Lite ML starter kit from Adafruit Industries that can familiarize developers with the basics of ML.
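
    For orientation, the usual TensorFlow Lite for Microcontrollers inference setup follows the pattern sketched below. Treat it as a rough outline rather than copy-paste code: header paths and constructor arguments have changed between TFLM releases, and the model array, operator list, and arena size here are placeholders that depend entirely on your model.

    #include <cstdint>
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    // Placeholder: the .tflite model converted to a C array (e.g. with xxd).
    extern const unsigned char g_model_data[];

    // Scratch memory for tensors; the required size depends on the model.
    constexpr int kArenaSize = 64 * 1024;
    static uint8_t tensor_arena[kArenaSize];

    int run_inference() {
        const tflite::Model* model = tflite::GetModel(g_model_data);

        // Register only the operators the model actually uses to keep code size down.
        static tflite::MicroMutableOpResolver<4> resolver;
        resolver.AddConv2D();
        resolver.AddMaxPool2D();
        resolver.AddFullyConnected();
        resolver.AddSoftmax();

        static tflite::MicroInterpreter interpreter(model, resolver,
                                                    tensor_arena, kArenaSize);
        if (interpreter.AllocateTensors() != kTfLiteOk) {
            return -1;                       // arena too small or unsupported op
        }

        TfLiteTensor* input = interpreter.input(0);
        // ... copy a camera frame or sensor window into input->data here ...

        if (interpreter.Invoke() != kTfLiteOk) {
            return -1;                       // inference failed
        }

        TfLiteTensor* output = interpreter.output(0);
        // ... pick the class with the highest score from output->data ...
        (void)input;
        (void)output;
        return 0;
    }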

    Reply
  33. Tomi Engdahl says:

    How to choose the right Linux
    https://etn.fi/index.php/13-news/12793-naein-valitset-linuxin-oikein

    As the use of IoT, artificial intelligence, and machine learning grows, today’s device manufacturers need to understand the commercial and open-source operating systems available. This article describes three operating systems and their characteristics, which can help in finding an optimal platform that is easy to use, secure, compliant, and efficient for design and manufacturability throughout the product life cycle.

    The operating system can be a stable enterprise-class solution such as Ubuntu or Windows, which are easy to use and versatile but not easy to customize. Another option is an embedded Linux solution such as the Yocto Project, with frequently released versions of highly customizable distributions. A third option is a real-time operating system.

    Today, developers want the stability and ease of use of an enterprise-class operating system combined with the performance, customizability, and small memory footprint of a traditional embedded operating system.

    WHAT DOES ENTERPRISE-CLASS MEAN?

    Enterprise-class refers to applications designed to be robust and scalable across a large organization, and powerful enough to meet the needs of the businesses that use them. An enterprise-class operating system is versatile and stable, with annual release updates, rich graphics, and the ease of use found in Debian and Ubuntu.

    Embedded application developers can easily add prebuilt packages or applications to the operating system using the target’s package management tools. The entire OS typically comes as a prebuilt binary boot image, so getting started is fast and easy. Development is typically done directly on the target, which also serves as the host machine, and there are plenty of resources for adding tools such as compilers.

    However, an enterprise-class operating system can be a challenge when writing applications for a 32-bit Arm processor that lacks the resources for on-target development tools. Cross-platform development tools are often not available, forcing users to develop applications on the host hardware. A traditional enterprise-class OS is not easy to slim down, since it takes over a gigabyte of memory. Boot time can be 30 seconds or longer, which makes optimization hard when trying to match the speed of a traditional embedded operating system.

    EMBEDDED LINUX SYSTEM

    In a traditional embedded Linux operating system, developers typically work at the source-code level and configure the entire OS, at minimum building it from source once. This allows the OS to scale, and developers can typically shrink it to under a gigabyte. Users can easily port the Linux OS to different hardware and optimize it to boot faster than an enterprise-class OS.

    An embedded Linux OS offers real-time capabilities and good performance. Because development happens at the source level, embedded operating systems are highly customizable, but this can get expensive. The learning curve is steeper than with an enterprise-class solution. For example, if an embedded developer wants to add an application like Chromium to a device while building the OS with the Yocto Project, the developer has to create a layer and write recipes to make sure all of Chromium’s prerequisites are built in and deployed in the target image. This can be quite time-consuming.

    By comparison, on an enterprise-class OS such as Debian, developers simply install Chromium, which pulls in all of its dependencies; downloading it takes only a few minutes. A potential drawback of a source-based OS, however, is that if users are not careful it is very easy to end up with multiple variants: ten developers can easily make ten changes to the same source file, resulting in ten different versions of a binary across ten different versions of the product.

    DEBIAN LINUX

    Debian is an enterprise-class Linux operating system that has been around “forever” and is used in embedded devices today. Ubuntu is based on Debian and has the largest user base of the enterprise-class Linux distributions, so finding developers familiar with it is easy. Debian is very stable, with a new release every two or more years.

    The OS is versatile, with over 59,000 prebuilt packages that work well together. Package feeds allow Debian developers to easily extend the operating system.

    Debian has a very well documented development policy that contributors must follow, high-quality documentation, and a centralized bug-tracking system. With the Yocto Project, by contrast, each component, such as the kernel, OpenSSL, Qt for building graphical user interfaces, and cross-platform applications, has its own development policies, including potentially different bug-tracking mechanisms.

    In addition, Debian provides tools for customizing the file system and creating custom OS images. Learning these tools can be time-consuming, however, and shrinking and customizing Debian with them can be difficult. Another consideration with Debian and other non-commercial operating systems is getting support or security fixes from the community after they are no longer officially provided. A technical problem that the community does not intend to fix in the current release can also pose a challenge.

    MENTOR EMBEDDED LINUX OMNI OS

    The third Linux type is Mentor Embedded Linux (MEL) Omni OS, which is based on Debian and offers the best of both worlds described above. MEL Omni OS provides the key benefits of an enterprise-class operating system together with the size, configurability, and performance of an embedded OS. It also supports cloud connectivity and IoT solution features. MEL Omni OS includes both a real-time embedded kernel and a version that supports preemptive multitasking. It combines the best features of enterprise-class and embedded operating systems in a binary distribution, which enables, for example, UL 2900 certification.

    https://issuu.com/etndigi/docs/etndigi2_2021?fr=sNWU4MzE0NTQ5NDc

    Reply
