New approaches for embedded development

The idea for this post started when I read the article New approaches to dominate in embedded development. Then I found some other related articles, and here is the result: a long article.

Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues such as broken connections, latency, and security.

These issues are especially central in the development of popular Internet of Things devices and in adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.

Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, the electronics in a modern car, and Internet of Things systems fall into this category. This implies that a different approach is necessary.

Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards, meant to woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation, have already become common. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. This shift accelerates as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot. Leaders offering embedded systems development lifecycle management solutions speak out on new approaches available today for developing advanced products and systems.

Traditional approaches

C/C++

Traditionally, embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still chosen today over C++ when maximum efficiency is required.

C++ is very much like C, with more features and lots of good stuff, and without many drawbacks except for its complexity. For years there has been a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a legitimate concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.

And if you code in a suitable C++ subset, you can make applications that work even on quite tiny processors; the Arduino system is a good example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language”, and your “.ino” files are three lines away from being standard C++.
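To illustrate that point, here is a minimal sketch of the classic blink example as you would write it in an .ino file, with comments showing roughly what the Arduino build step adds to turn it into a standard C++ translation unit. The exact generated header and prototypes vary between toolchain versions, and LED_BUILTIN is assumed to be defined for your board.

    // blink.ino – what you write in the Arduino IDE
    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);     // configure the on-board LED pin once at startup
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);  // LED on
      delay(500);
      digitalWrite(LED_BUILTIN, LOW);   // LED off
      delay(500);
    }

    // Roughly what the build step prepends to make this standard C++:
    //   #include <Arduino.h>           // the core C++ API
    //   void setup();                  // auto-generated function prototypes
    //   void loop();
    // The Arduino core then supplies main(), which calls setup() once and loop() forever.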

Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example, an embedded Linux system that runs a C++ application. When you write C or C++ programs for modern embedded Linux, you typically use the GCC compiler toolchain to do the compilation and a makefile to manage the build process.
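As a minimal sketch of that workflow, here is a trivial program together with the kind of commands a makefile rule would typically run. The cross-compiler prefix arm-linux-gnueabihf- is only an assumption; the real toolchain name depends on your target.

    // hello.cpp – trivial program to exercise the toolchain
    #include <cstdio>

    int main() {
        std::printf("Hello from embedded Linux\n");
        return 0;
    }

    // Typical commands a makefile rule would run (cross-toolchain prefix is an example):
    //   arm-linux-gnueabihf-g++ -Os -Wall -c hello.cpp -o hello.o
    //   arm-linux-gnueabihf-g++ hello.o -o hello
    // For native builds on the target or a PC, plain g++ works the same way.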

Most organizations put considerable focus on software quality, but software security is different. While security is a much talked about topic in today's embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other methods.

Java

Java is a general-purpose computer programming language that is concurrent, class-based and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition to those it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VOIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.

Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.

Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers. It is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

My feeling about Java is that if your embedded platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.

 

Increasing trends

Databases

Embedded databases are appearing in more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with an application software that requires access to stored data, such that the database system is “hidden” from the application’s end-user and requires little or no ongoing maintenance.

There are many possible databases. The first choice is what kind of database you need. The main options are SQL databases and simpler key-value stores (often grouped under NoSQL).

SQLite is the database chosen by virtually all mobile operating systems; for example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that must support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows of data).
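As a minimal sketch of how an embedded application typically talks to SQLite through its C API (the file name and table here are made up for illustration; link with -lsqlite3):

    // minimal SQLite usage from C/C++
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3 *db = nullptr;
        if (sqlite3_open("sensor_log.db", &db) != SQLITE_OK) {       // opens or creates the database file
            std::printf("open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        char *err = nullptr;
        const char *sql =
            "CREATE TABLE IF NOT EXISTS readings(ts INTEGER, value REAL);"
            "INSERT INTO readings VALUES (strftime('%s','now'), 21.5);";
        if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {  // run the SQL statements
            std::printf("SQL error: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }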

If you do not need a relational database but you do need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
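For comparison, here is a minimal sketch of the classic Berkeley DB C API storing one key/value pair. The file name and record contents are made up, and the exact flags depend on the BDB version you ship (link with -ldb):

    // minimal Berkeley DB key/value usage
    #include <db.h>
    #include <cstring>

    int main() {
        DB *dbp = nullptr;
        if (db_create(&dbp, nullptr, 0) != 0)                    // allocate a database handle
            return 1;
        if (dbp->open(dbp, nullptr, "settings.db", nullptr,
                      DB_BTREE, DB_CREATE, 0664) != 0)           // open or create the database file
            return 1;

        DBT key, data;                                           // BDB exchanges records as raw byte arrays
        std::memset(&key, 0, sizeof(key));
        std::memset(&data, 0, sizeof(data));
        key.data  = (void *)"baudrate";  key.size  = sizeof("baudrate");
        data.data = (void *)"115200";    data.size = sizeof("115200");

        dbp->put(dbp, nullptr, &key, &data, 0);                  // store the key/value pair
        dbp->close(dbp, 0);
        return 0;
    }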

RTA (Run Time Access) gives easy runtime access to your program’s internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.

Software quality

Building quality into embedded software doesn’t happen by accident; quality must be built in from the beginning. The article Software startup checklist gives quality a head start is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.

Safety

Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches for safety. Embedded Software Can Kill But Are We Designing Safely?

IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group’s recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not – and 11 percent did not even know if they were designing to a standard or not.

If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious, and quite frankly, I find this quite disturbing.

Security

The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues such as broken connections, latency, and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that’s inherently insecure, and quality assurance does little to ensure security.

The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:

MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY

Many organizations are only now becoming aware of the need to incorporate security into their software development lifecycle.

Some techniques for building security into embedded systems (a minimal boot-time verification sketch follows the list):

Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
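To make the PKI and chain-of-trust items above a bit more concrete, here is a minimal, hedged sketch of boot-time image authentication. The functions read_flash() and verify_rsa_signature() are hypothetical placeholders for whatever your flash driver and crypto library actually provide; a real secure-boot flow would also anchor the public key in ROM or fuses and guard against rollback.

    // Hypothetical boot-time image verification sketch (placeholder APIs, not a real library)
    #include <cstdint>
    #include <cstddef>

    // Placeholders: supplied by your flash driver and crypto library in a real system.
    extern bool read_flash(uint32_t addr, uint8_t *buf, size_t len);
    extern bool verify_rsa_signature(const uint8_t *data, size_t data_len,
                                     const uint8_t *sig, size_t sig_len,
                                     const uint8_t *public_key, size_t key_len);

    // Trusted public key baked into the bootloader (ideally backed by ROM or fuses).
    extern const uint8_t g_boot_public_key[];
    extern const size_t  g_boot_public_key_len;

    bool authenticate_application_image(uint32_t image_addr, size_t image_len,
                                        const uint8_t *signature, size_t sig_len,
                                        uint8_t *work_buf)
    {
        // 1. Load the candidate image from flash into RAM (or hash it in place).
        if (!read_flash(image_addr, work_buf, image_len))
            return false;

        // 2. Boot the image only if its signature checks out against the trusted key.
        return verify_rsa_signature(work_buf, image_len, signature, sig_len,
                                    g_boot_public_key, g_boot_public_key_len);
    }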

Flood of new languages

Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript etc.). So there is a huge push to use interpreted and scripting languages also in embedded systems. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example the web user interface). Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. There are also many other relevant languages.

One workable solution, especially for embedded Linux systems, is to implement part of the functionality in C and the rest in scripting languages. This makes it possible to change behavior simply by editing the script files, without the need to rebuild the whole system software again. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.

Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages are well suited even to quite small embedded environments.

I have successfully used scripting languages such as Bash, AWK, PHP, Python and Lua with embedded systems. They work really well, and it is really easy to make custom code quickly. They don’t require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used.

High-level, dynamically typed languages such as Python, Ruby and JavaScript are easy – and even fun – to use. They lend themselves to code that can easily be reused and maintained.

There are some things that need to be considered when using scripting languages. Sometimes the lack of static checking, compared to a regular compiler, means that problems only show up at run time. But you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn’t.

Bash and other shell scripting

Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is, you don’t. The real power of the computer is its ability to do the work for you – and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command-line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell also provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.). Your scripts can be truly powerful. Creating a script is extremely straightforward: you can use a separate graphical editor, or you can do it in a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example starting and stopping different Linux services in the right way).

One of the most useful tools when developing from within a Linux environment is the use of shell scripting. Scripting can help in setting up environment variables, performing repetitive and complex tasks and ensuring that errors are kept to a minimum. Since scripts are run from within the terminal, any command or function that can be performed manually from a terminal can also be automated!

The most common type of shell script is a bash script. Bash is a commonly used scripting language for shell scripts. In BASH scripts (shell scripts written in BASH) users can use more than just BASH to write the script. There are commands that allow users to embed other scripting languages into a BASH script.

There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android and FreeBSD. BusyBox has become the de facto standard core user-space toolset for embedded Linux devices and Linux distribution installers.

Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.

Lua

Lua is a lightweight, cross-platform, multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
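Since the paragraph above mentions the C API for embedding, here is a minimal sketch of hosting a Lua interpreter from a C/C++ application. The script string is just an illustration, and you link against your platform’s Lua library (e.g. -llua):

    // Embedding a Lua interpreter in a C/C++ host application
    #include <cstdio>

    extern "C" {
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    }

    int main() {
        lua_State *L = luaL_newstate();    // create a new interpreter state
        luaL_openlibs(L);                  // load the standard Lua libraries

        // Run a script; on a real device this could come from a configuration file.
        if (luaL_dostring(L, "threshold = 42; print('threshold set to', threshold)") != 0)
            std::printf("Lua error: %s\n", lua_tostring(L, -1));

        // Read a value the script defined back into the C/C++ host.
        lua_getglobal(L, "threshold");
        std::printf("threshold from Lua = %d\n", (int)lua_tointeger(L, -1));

        lua_close(L);
        return 0;
    }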

Lua has found many uses in many fields. For example in video game development, Lua is widely used as a scripting language by game programmers. Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.

There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.

PHP

PHP is a server-side HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites, but it can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on Raspberry Pi for instance, that are accessible via a network, run Linux and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for the applications they are good at – such as building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.

Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.

Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.

Python was designed to be highly extensible. Libraries like NumPy, SciPy and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
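As a minimal sketch of that embedding capability, here is a C/C++ host running a snippet through the standard CPython embedding API (you would link against the Python library for your version, e.g. -lpython3.x):

    // Embedding the CPython interpreter in a C/C++ host application
    #include <Python.h>   // CPython embedding API

    int main() {
        Py_Initialize();                                   // start the interpreter

        // Run a snippet of Python inside the host process;
        // on a real device this could be an operator-supplied script.
        PyRun_SimpleString("import sys\n"
                           "print('Embedded Python', sys.version.split()[0])\n");

        Py_Finalize();                                     // shut the interpreter down
        return 0;
    }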

Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.

For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-sourced language: Python.

Python on embedded platforms? It is quick to develop apps and quick to debug – really easy to make custom code quickly. Sometimes the lack of static checking compared to a regular compiler can cause problems to be thrown at run time. To avoid those, try to have 100% test coverage. pychecker is also a very useful tool that will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.

JavaScript and node.js

JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it’s so incredibly easy to learn. JavaScript’s reputation for providing users with beautiful, interactive websites isn’t where its usefulness ends. Nowadays, it’s also used to create mobile applications, cross-platform desktop software, and thanks to Node.js, it’s even capable of creating and running servers and databases! There is a huge community of developers.

Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.

Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.

It doesn’t require a complicated IDE; all you really need is a terminal.

JavaScript is a high-level language. While this usually means that it’s more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. Being slower means it may not be suitable for situations where timing and speed are critical.

JavaScript is already on embedded boards. You can run JavaScript on Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript. The Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager. The Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!

JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We see, for example, a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side Web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google’s V8 JavaScript engine. Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.

JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!

Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it’s standardized, event-driven, and has very high productivity; it’s dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically, IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software’s function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.

Future trends

According to New approaches to dominate in embedded development article there will be several camps of embedded development in the future:

One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems or else fixed-function, high-volume systems where cost is everything.

Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.

The third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.

The transition will take time. The environment is different from the computer and mobile world. There are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.

But the shift will happen as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot.

 

Sources

Most important information sources:

New approaches to dominate in embedded development

A New Approach for Distributed Computing in Embedded Systems

New Approaches to Systems Engineering and Embedded Software Development

Lua (programming language)

Embracing Java for the Internet of Things

Node.js

Wikipedia Node.js

Writing Shell Scripts

Embedded Linux – Shell Scripting 101

Embedded Linux – Shell Scripting 102

Embedding Other Languages in BASH Scripts

PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog

PHP

Python (programming language)

JavaScript: The Perfect Language for the Internet of Things (IoT)

Node.js for Embedded Systems

Embedded Python

MicroPython – Embedded Python

Anyone using Python for embedded projects?

Telit Programming Python

MICROCONTROLLERS AND NODE.JS, NATURALLY

Why node.js?

Node.JS Appliances on Embedded Linux Devices

The smartest way to program smart things: Node.js

Embedded Software Can Kill But Are We Designing Safely?

DEVELOPING SECURE EMBEDDED SOFTWARE

 

 

 

1,687 Comments

  1. Tomi Engdahl says:

    IoT Infographic: Journey Towards Successful IoT Solutions
    https://iot-analytics.com/iot-infographic-journey-successful-solutions/

    In order to create transparency around what it takes to develop a commercial industrial IoT Solution, we created an IoT infographic that helps people adopting IoT technology understand some of the crucial elements of the journey. On a high level there are usually 5 steps:

    1. Business Case Development
    2. Build vs. Buy Decision
    3. Proof of Concept (PoC)
    4. Initial Pilot Rollout
    5. Commercial Deployment

  2. Tomi Engdahl says:

    Don’t be a Code Tyrant, Be A Mentor
    http://hackaday.com/2017/05/16/dont-be-a-code-tyrant-be-a-mentor/

    Hardware hacking is a way of life here at Hackaday. We celebrate projects every day with hot glue, duct tape, upcycled parts, and everything in between. It’s open season to hack hardware. Out in the world, for some reason software doesn’t receive the same laissez-faire treatment. “Too many lines in that file” “bad habits” “bad variable names” the comments often rain down. Even the unsafest silliest of projects isn’t safe. Building a robot to shine lasers into a person’s eyes? Better make sure you have less than 500 lines of code per file!

    Why is this? What makes readers and commenters hold software to a higher standard than the hardware it happens to be running on? The reasons are many and varied, and it’s a trend I’d like to see stopped.

    Software engineering is a relatively young and fast evolving science. Every few months there is a new hot language on the block, with forums, user groups, and articles galore. Even the way software engineers work is constantly changing. Waterfall to agile, V-Model, Spiral model. Even software design methodologies change — from pseudo code to UML to test driven development, the list goes on and on.

    Terms like “clean code” get thrown around. It’s not good enough to have software that works. Software must be well commented, maintainable, elegant, and of course, follow the best coding practices. Most of these are good ideas… in the work environment. Work is what a lot of this boils down to. Software engineers have to stay up to date with new trends to be employable.

    There is a certain amount of “born again” mentality among professional software developers. Coders generally hate having change forced upon them

  3. Tomi Engdahl says:

    Product development can be faster and cheaper

    The most important result of VTT’s Semtec project is the new calculation methods, which are for the first time in industrial use and available with industry-specific tools.

    VTT’s project has developed faster, more accurate and more agile computing tools and methods for the product development of electrotechnical equipment. This makes it possible to exclude expensive and time-consuming prototype phases in the electromechanical industry.

    Finnish industry gains this competitive edge as the product development of electric motors and generators and transformers speeds up and is more cost-effective for the market. The project also produces quieter and more energy-efficient machines.

    “SEMTEC has launched close and symbiotic cooperation between industrial companies, research institutes and universities. Thanks to open source, new models developed by scientists can be tested in industry-specific design systems. The project has provided the opportunity to commercialize new, top-notch modeling machinery for our own use, with which we have already been able to win major deals,” says Eelis Takala of Trafotek, who took part in the study.

    The project utilized the Elmer tool, software based on CSC’s open-source finite element method (FEM) solver, which enables effective parallel computation and the coupling of multiple physical phenomena.

    Sources:
    http://www.uusiteknologia.fi/2017/05/17/tuotekehitysta-voi-nopeuttaa-ja-halpuuttaa/
    http://www.etn.fi/index.php/13-news/6323-uusilla-tyokaluilla-eroon-kalliista-protokehityksesta

  4. Tomi Engdahl says:

    Parallel programming masterclass with compsci maven online
    Dr Panda’s recent Swiss presentation free to view
    https://www.theregister.co.uk/2017/05/22/parallel_programming_101_201_and_1001/

    Dr DK Panda is a world-recognised expert on parallel programming and networking. He’s a Distinguished Scholar at The Ohio State University and his research group has developed the MVAPICH2 (high performance MPI and MIP+PGAS) libraries for InfiniBand, iWARP, and RoCE with support for GPUs, Xeon Phi, and virtualization.

    High-Performance and Scalable Designs of Programming Models for Exascale Systems
    https://www.youtube.com/watch?v=rep3eZN9znM
    https://www.slideshare.net/insideHPC/highperformance-and-scalable-designs-of-programming-models-for-exascale-systems

  5. Tomi Engdahl says:

    The Internet of Things and Modular Design: Revolutionizing Hardware Design
    http://www.techonline.com/electrical-engineers/education-training/tech-papers/4458109/The-Internet-of-Things-and-Modular-Design-Revolutionizing-Hardware-Design

    The advent of the Internet of Things (IoT) offers the potential to automatically collect detailed performance information from every device in the field at minimal cost. This field performance data can then be crunched to identify design weaknesses and improve product quality. The most powerful way to use this data is to reorganize the design process by basing it on modules that are continuously improved based on performance feedback. These optimized modules can then be shared and managed across the organization and used as the basis of the product development process, creating a closed-loop development process for the first time.

  6. Tomi Engdahl says:

    IoT System Implementation – Understanding the Big Picture
    https://www.mentor.com/embedded-software/events/iot-system-implementation-understanding-the-big-picture?contactid=1&PC=L&c=2017_05_23_esd_iot_system_picture_ws_v2

    IoT is the buzz word of this age and there is a very strong momentum and push to realize IoT enabled systems at a fast pace and to reap their benefits. However, IoT systems are inherently complex as their implementation involves use of diverse set of technology layers e.g. cloud services, communication protocols, connectivity options, embedded device software etc. On each technology layer there are numerous choices that complicate it further. This situation has resulted in an enormous jargon of terminology around IoT and sometimes it gets confusing to sail through this jargon and understand what is involved in implementing an end-to-end IoT system.

  7. Tomi Engdahl says:

    Making the Internet of Things smart, secure, and power-efficient
    http://files.iccmedia.com/magazines/basmay17/basmay17-p10.pdf

    In the IoT, Intelligent devices are
    interconnected and AI algorithms
    are being used to process the vast
    amounts of sensor data that is being
    produced. This exciting marriage of
    IoT and AI requires state-of-the-art
    sensors, security, and power delivery
    to make it all possible

  8. Tomi Engdahl says:

    Toward Continuous HW-SW Integration
    https://semiengineering.com/shifting-left-toward-continuous-integration/

    Increased complexity and heterogeneity are prompting new methods that can avert surprises at the end of the design cycle.

    Hardware is only as good as the software that runs on it, and as system complexity grows that software is lagging behind.

    The way to close that gap is to improve the methodology for developing that software in the first place. That includes making sure updates are verified and tested before being pushed out to devices, adding the same kinds of detailed checks that chipmakers have used to develop hardware in the past.

    Trying to shift software development further left isn’t a new idea, of course. A number of approaches have been developed over the years to solve this problem. Agile software methods, for example, attempt to reduce errors by pooling the efforts of two or more software developers working simultaneously on code. Continuous integration, meanwhile, addresses the problem from a different angle. In essence, code is checked into a shared repository or development branch continuously, and then verified by frequent automated builds to find problems early.

    “More and more development teams are using continuous integration as a means to streamline the overall development process, and to avoid unpleasant surprises during the integration phases of development,”

    With continuous integration, a digital twin is developed simultaneously with the real machine, ideally from the initial concept. The approach also allows development teams to work in more isolated teams, where the concept of continuous integration might apply more for the system, becoming a question of “when” the code is ready to integrate into the system.

    “For example, take the Xilinx UltraScale+ MPSoC, which integrates quad ARM Cortex-A53 cores, Cortex-R5 cores, and an FPGA fabric, with multiple power planes enabling functional separation,” Kurisu said. “One would expect multiple development teams to be writing code for this SoC—one team developing Linux on the application cores, another team developing safety applications on the real-time cores, and another implementing algorithms on the FPGA fabric. Architecturally, each of these application areas might communicate over a defined interface. Although each of these individual teams might use a continuous integration methodology to build the code that runs on their cores, the main part of the integration begins once all the parts of the system are available. Here again, the question of continuous integration is tied to the question of when code is ready for full-system integration.”

    Complexity has been growing steadily, in part because no one is quite sure what kinds of chips or functionality will be required across a wide swath of nascent markets, such as virtual/augmented reality, automotive, medical, industrial IoT and deep learning. A common approach has been to throw multiple processor types and functionality onto a chip, because that is a cheaper alternative than trying to put everything into a single ASIC, and and then glue it all together with software.

    But as the number of heterogeneous elements in a design continues to expand, this is just one more option for simplifying the development process.

    “One could argue that today’s state of the art in both software and hardware would preclude the need for continuous integration,” said Mentor’s Kurisu. “A model-based design actually motivates the approach, and in fact creates leverage for a continuous integration methodology.”

  9. Tomi Engdahl says:

    The Benefits of HW/SW Co-Simulation for Zynq-Based Designs
    https://www.eeweb.com/blog/adam_taylor_2/the-benefits-of-hw-sw-co-simulation-for-zynq-based-designs

    Heterogeneous System-on-Chip (SoC) devices like the Xilinx Zynq 7000 and Zynq UltraScale+ MPSoC combine high-performance processing systems with state-of-the-art programmable logic. This combination allows the system to be architected to provide an optimal solution. User interfaces, communication, control, and system configuration can be addressed by the Processor System (PS). Meanwhile, the Programmable Logic (PL) can be used to implement low latency, deterministic functions and processing pipelines that exploit its parallel, nature such as those used by image processing and industrial applications.

    Verifying interactions between the PS and PL presents challenges to the design team. The 2015 Embedded Markets Survey identified debugging as one of the major design challenges faced by engineering teams and also identified a need for improved debugging tools. While bus functional models can be used initially, these models are often simplified and do not enable verification of the developed SW drivers and application at the same time. Full functional models are available, but these can be prohibitively expensive. When implementing a heterogeneous SoC design, there needs to be a verification strategy that enables both PL and PS elements to be verified together at the earliest possible point.

    Traditionally, verification has initially been performed for each element (functional block) in the design in isolation; verifying all the blocks together occurs when the first hardware arrives. The software engineering team developing the applications to run on the PS needs to ensure the Linux Kernel contains all the necessary modules to support its use and has the correct device tree blob; this is normally verified using QEMU (short for Quick Emulator), which is a free and open-source hosted hypervisor that performs hardware virtualization.

    Meanwhile, in order to correctly verify the PL design, the logic verification team is required to generate and sequence commands like those issued by the application software to verify that the logic functions as required.

    It is possible to use a development board as in interim step to verify the HW and SW interaction before the arrival of the final hardware. However, debug on real hardware can be complicated, requiring additional instrumentation logic to be inserted in the hardware. This insertion takes additional time as the bit file needs to be regenerated to include the instrumentation logic. Of course, this change in the implementation can also impact the underlying behavior of the design, thereby masking issues or introducing new issues that make themselves apparent only in the debugging builds.

    Being able to verify both the SW and the HW designs using co-simulation, therefore, provides several significant benefits. It can be performed earlier in the development cycle and does not require waiting for development hardware to arrive, thereby reducing the cost and impacts of debugging.

    HW & SW Co-simulation

    Co-Simulation between SW and HW requires the logic simulation tool used to verify the HW design to be able to interact with an SW simulation emulation environment.

    The release of Aldec’s Riviera-PRO (2017.10) enables just this HW and SW co-simulation by the provision of a bridge between Riviera-PRO and QEMU, thereby enabling the execution of the developed software for Linux-based Zynq developments.

    This bridge has been created using SystemC Transaction Level Modelling (TLM) to define the communication channels between QEMU and Riviera-PRO. The concurrent verification of the SW and HW is facilitated by the bridge’s ability to transfer information in both directions.

    Within this integrated simulation environment, the engineering team is able to use standard and advanced debug methodologies to address any issues that may arise as the verification proceeds. In the case of Riviera-PRO, this includes such capabilities as setting break points within the HDL, examining data flow, and even analyzing the code coverage and paths that are exercised by the SW application running in QEMU. In the case of QEMU, the SW team can use Gnu DeBugger (GDB) to instrument both the kernel and the driver to step through the code using breakpoints.

    This co-simulation approach has the benefit of not only providing greater visibility and debugging capability within the hardware simulation environment, but it also enables the same Linux kernel developed for the target hardware to be used within QEMU.

  10. Tomi Engdahl says:

    The new tools will quickly find the C language bugs

    In designing circuits, finding faults as early as possible is an area where all EDA vendors work closely. Mentor Graphics has now introduced three new tools for finding design defects in C++ and SystemC code without test benches.

    The new tools are called Catapult DesignChecks, Catapult Coverage, and SLEC HLS (C to RTL Equivalence), which ensures that the synthesized RTL code and the original, higher-level C description match formally.

    According to Mentor, with new tools, logic circuit designers can save time considerably: up to over 50 percent, for example, in the desktop, machine learning, telecommunications and image processing applications. According to the company, the tools bring “RTL-level” verification to the C-description level.

    Source: http://www.etn.fi/index.php/13-news/6431-uudet-tyokalut-loytavat-c-kielen-bugit-nopeasti

    More:
    Mentor Ushers in New Era of C++ Verification Signoff with New Catapult Tools and Solutions
    https://www.mentor.com/company/news/mentor-ushers-new-era-c-verification-signoff-new-cataplult

    New Catapult DesignChecks tool finds bugs early in C++/SystemC HLS code requiring no testbench – saving designers days or weeks of debugging.
    New Catapult Coverage provides synthesis-aware RTL-like coverage metrics of C++/SystemC HLS code – for fast, easy coverage closure from C to RTL.
    New C to RTL Equivalence SLEC HLS tool formally verifies Catapult HLS C++/SystemC source to synthesized RTL – providing ultimate verification confidence from C to RTL.
    Catapult HLS now generates a complete UVM environment for synthesized RTL – saving weeks/months in RTL testbench creation for blocks and SoC.

  11. Tomi Engdahl says:

    Hardware/Software Tipping Point
    https://semiengineering.com/hardwaresoftware-tipping-point/

    Has the tide turned from increasing amounts of general purpose, software defined products, to one where custom hardware will make a comeback?

    It doesn’t matter if you believe Moore’s Law has ended or is just slowing down. It is becoming very clear that design in the future will be significant different than it is today.

    Moore’s law allowed the semiconductor industry to reuse design blocks from previous designs, and these were helped along by a new technology node—even if it was a sub-optimal solution. It lowered risk and the technology node provided performance and power gains that enabled increasing amounts of integration.

    Slow turn or landslide?
    “More and more we are seeing people build things that have a more focused system model in mind,” says Drew Wingard, CTO at Sonics. “We see targeted chips that are doing just neural network inferencing, or at the edge we see IoT devices that may power a smart watch. These have a more constrained set of system requirements at least with respect to the non-CPU resources.”

    But this is not likely to be a landslide. “We will not get as big gains for the new nodes compared to previous ones because we are not reducing the voltage as much,”

    Legacy means that this transition will be like steering the Titanic. “A lot of SoCs are built to give the software an easier task,”

    Most of the time, dark silicon is talked about as being a bad thing because it is an inefficient use of chip area and resources. But is it fair to assume that all parts of a chip should be used all of the time?

    “The role of the operating system’s (OS) power management shifts a bit and then the OS responsibility becomes setting up the policy choices. Then you can move the actual management of the power states into hardware.”

    Fundamentally there are two ways to control of power. “Software creates runtime variation in the thermal profile of chips,” says Oliver King, CTO for Moortec. “This makes it difficult to predict, at design time, the thermal issues unless the software is already well defined. There are a couple of approaches to this problem. The first is to have hardware which can sense and manage it’s own issues. The second is for software to take into account data from thermal and voltage sensors on die.”

    Many in the industry are frustrated with how few of the power-saving features that have been designed into chips actually get used by software. “A lot of chips that have aggressive power management capability require so much firmware to be able to turn them on and off that they never get to use the features – they never have time to write this firmware,” adds Wingard.

    Adding accelerators
    The migration of software into hardware is usually accomplished with the addition of accelerators. These could be dedicated hardware, or more optimized programmable solutions that are tailored to specific tasks. These include DSP, neural networks and FPGA fabrics.

    There are a number of different kinds of accelerators, each with its own set of attributes. But overall, the intent and utility of accelerators is the same. “It makes sense to deploy accelerators when the algorithms are well enough understood that you can make use of that hardware effectively,” points out Wingard. “Then we can do things with less energy.”

    Sometimes, a standard becomes so important within the industry that custom hardware also becomes the right choice. An example of this is the H.264 video compression standard. “It would be foolish to do this with a programmable solution,” adds Desai.

    It is also possible to generalize multiple standards into a class of operations and to create a partially optimized solution for them. “Audio and voice may be using 24-bit processing versus baseband, which may be doing complex math necessary for complex FFT and FIR,”

    Then there is the emerging area of neural networks. “While neural networks are not standardized, you know from a high level that there are standard ways of doing things,” says Desai.

    Wingard goes on to explain that “the inner loop of a neural network looks like a matrix multiply and a non-linear function that determines what I do with the result of that matrix multiply. At the lowest level it is very generic. If you go up one level, there is a network topology that is implemented.”

    The world would be a lot simpler and more power efficient if there were not so many competing standards.

    FPGAs have been used for a long time as co-processors but they require a coarse-grained approach to partitioning due to the high latency between the processor and the FPGA which resides in a separate chip. But that is changing.

    “We have seen specific FPGA architectures that had arrays of ALUs – both academically and commercially—to do software offload,”

    “FPGAs traditionally have been general-purpose, but embedded FPGAs can be used as accelerators for verticals like data center neural networks, automotive and edge networks,” said Steve Mensor, vice president of marketing at Achronix. “There is an ultra-short reach for connectivity and you can have a dedicated I/O.”

    One of the advantages of including eFPGAs is that many of these markets are still nascent, so changes are likely over the life of a product because standards are still being defined.

    What should be accelerated?
    Deciding which software functions should be migrated into hardware or specialized processor is not an easy task. For one thing, the industry lacks the tools to make this easy. “You have to be able to measure it,” exclaims Amos. “Without this you cannot even measure how efficient the hardware/software combination is. What gets measured gets fixed.”

    Amos explains that we need a new way to measure how this combination is performing in the real world. One option is to build the chip and measure it, but it would be much better to do this before the silicon has been fixed. “

    “There are tools that look at hardware and can optimize it just by looking at it statically,”

    Others are using FPGA prototypes to measure how efficient the software is so that decisions can get made. “We need a model with just enough accuracy to fool you into thinking you are running on silicon,”

    Improving software
    Even without hardware changes, there is a lot of gain that could be made just by updating the software. “Legacy can never be ignored,” Amos says. “There is a lot of software out there, and to go back to code that has been running for 10 years is unlikely.”

    Amos also has a warning about getting too aggressive moving functionality into hardware. “We have had a migration from a community of hardware engineers when software was just emerging. Since then the universities are churning out software engineers, and there are not enough hardware engineers to make big changes. If we start moving software into hardware, who is going to do that?”

    As with many things, economics will be the final decisionmaker. If something can be done more efficiently for the same or less dollars, then it will happen.

  12. Tomi Engdahl says:

    Safety Plus Security: A New Challenge
    https://semiengineering.com/safety-plus-security-a-new-challenge/

    First in a series: There is a price to pay for adding safety and security into a product, but how do you assess that and control it? The implications are far reaching, and not all techniques provide the same returns.

    Nobody has ever integrated safety or security features into their design just because they felt like it. Usually, successive high-profile attacks are needed to even get an industry’s attention. And after that, it’s not always clear how to best implement solutions or what the tradeoffs are between cost, performance, and risk versus benefit.

    Putting safety and security in the same basket is a new trend outside of mil/aero, and it adds both complexity and confusion into chip design. For one thing, these two areas are at different levels of maturity. Second, many companies believe they only have to deal with one or the other. But as the automotive industry has learned, security impacts safety and the two are tightly bound together. Interestingly, the German language uses the same word for both – sicherheit.

    Incorporating safety and security into a product is not about a tool or an added piece of IP. It requires a change in workflows, which makes it a more difficult transition than just a new spec item or workflow step caused by the latest technology node. It also requires a mindset change in how to approach the design in the first place, because safety and security both need to be built into designs from the very outset.

    “It is important for people to reorient their priorities and understand the tradeoffs,” states Rob Knoth, product management director for the DSG group at Cadence. “Even design schedule has to be considered. There may be a time-to-market window, and factoring in safety and security on top of an already challenging design schedule is difficult.”

    Ignoring safety and security is no longer an issue for an increasing number of products. “As chips become more capable and are being used in more fully autonomous systems, it is increasingly important, and in many cases critical, that they correctly and rapidly analyze and react to the environment,

    There are three classes of faults that have to be considered when looking at safety and security—random, systematic and malicious. Each of these requires a different approach and different kinds of analysis, and each will result in different impacts on the product schedule and cost.

    Random failures
    Perhaps the easiest category to analyze is random failures, and there is an array of tools and techniques that can be used to guard against these in hardware.

    The question is how to accomplish this. “To create an extremely safe system, one could simply duplicate or triplicate all the chips in the system, and even all the IP systems on each chip (we’ve seen this done in practice!),”

    There are places where redundancy is necessary.

    The challenge is knowing which parts of the hardware to concentrate on. “This could include redundant register files, added ECC protection in memory, redundant CPU core so that you can go lock-step,”

    And there are hidden dangers. “It doesn’t take a genius to figure out that you need to be astute about what and how you use duplication,”

    Shuler summarizes the techniques most commonly used. “Add redundant hardware blocks only where this has the greatest effect on functional safety diagnostic coverage, and only for IP that does not have other sufficient protection. To define safety goals and where to implement functional safety features requires thorough analysis of potential system failure modes, and then quantitatively proving whether safety mechanisms cover these faults.”
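
    To make the duplication/triplication idea concrete, here is a minimal software sketch of a 2-out-of-3 majority voter of the kind used with triple modular redundancy. It illustrates the principle only and is not code from the article.

        #include <stdbool.h>
        #include <stdint.h>

        /* Returns true and writes the majority value if at least two of the
           three independently computed results agree; returns false when all
           three disagree so the caller can raise a fault to the safety layer. */
        static bool tmr_vote_u32(uint32_t a, uint32_t b, uint32_t c, uint32_t *out)
        {
            if (a == b || a == c) { *out = a; return true; }
            if (b == c)           { *out = b; return true; }
            return false;
        }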

    The right balance point also depends on the intended market. “For IoT and small devices, we can’t just build in lots of redundancy,” says OneSpin’s Darbari. “Half of the time you need these devices to be really small and have good power characteristics.”

    In avionics, safety is assured by duplication, and it utilizes diverse design and architecture. This adds considerable design and verification expense and cannot be justified for most markets.

    “Safety and security protection comes at a cost,” Shuler says. “If a system can’t meet near real-time latency requirements and huge processing throughput requirements, then people could be injured or killed. But if the system is too expensive to be developed and fielded economically, then perhaps the whole industry loses an opportunity to save human lives.”

    Not all faults are visible during design at the RT level, and fault simulation is not capable of evaluating all of the faults at the system level.

    Systematic failures
    Finding systematic failures is the cornerstone challenge of modern verification: simply stated, how do you determine that the design does everything in the specification and meets all of the requirements? While the industry has a plethora of tools to address this challenge, it is one of the toughest challenges that the industry faces.

    “How do we know we have done enough?” asks Darbari. “Coverage can direct you towards gaps, but you also need to consider completeness. Have all of the requirements been verified properly? Were there any over-constraints in the testbench, were there hidden bugs in the implementation?” These are the questions that keep verification engineers up at night.

    Formal verification is one area that has seen a lot of advances in recent years.

    Verification is a single simple concept. It is the comparison of two models, each developed independently, such that the probability of a common design error in both models is small. Those two models are most commonly defined as being the design and the testbench.

    The industry is getting closer to that capability. “Sequential equivalence checking allows you to take two copies of the design and prove that they are functionally equivalent,”

    Malicious faults
    For random and systematic safety analysis, the industry has a track record and has built solutions that help make them tractable problems, but security is an evolving area. Security is significantly farther behind in terms of understanding the problem, building solutions, analyzing impact and measuring effectiveness.

    Security is about protecting the system from malicious attacks and this goes beyond the notions of functional verification. Functional verification is about ensuring that intended functionalities work correctly. Security is about ensuring that there are no weaknesses that can be exploited to make the system perform unintended functionality. This is about handling the known unknowns and the unknown unknowns.

    “There are some best practices in the industry but there is no good set of metrics for assessing how secure something is,” says Mike Borza, member of the technical staff for Synopsys’ Security IP. “People tend to make qualitative statements about it, and they also tend to use the best practices to evaluate the security of a device. You find things such as security audits that look to assess the common vulnerabilities that we know about, and what is being done to mitigate or eliminate those. Unfortunately, that is the state of the art.”

    Reply
  13. Tomi Engdahl says:

    Programmers who use spaces ‘paid more’
    http://www.bbc.com/news/technology-40302410?utm_campaign=digest&utm_medium=email&utm_source=app

    Computer programmers who use spaces as part of their coding earn $15,370 (£12,000) more per year than those who use tabs, a survey of developers has revealed.

    Reply
  14. Tomi Engdahl says:

    http://www.phpoc.com

    PHPoC vs PHP

    Similar to PHP, PHPoC can create a variety of web pages to suit your environment, send email alarms about sensor status, or access a database.
    Unlike PHP, however, PHPoC provides a variety of hardware interfaces
    and control functions to monitor and control machines or devices.

    PHPoC helps you realize your ideas quickly and prototype your applications rapidly. PHPoC lets you develop your application on embedded devices as easily as on your computer. With the supplied libraries, you can build whatever you can imagine with a few simple lines of code, without worrying about hardware design. Let PHPoC spark your passion and inspire you to realize your imagination.

    Reply
  15. Tomi Engdahl says:

    5 Challenges Developers Face When Using an RTOS
    https://www.designnews.com/design-hardware-software/5-challenges-developers-face-when-using-rtos/91417179857006?cid=nl.x.dn14.edt.aud.dn.20170629.tst004t

    Real-time Operating Systems are becoming a necessary component that most embedded software developers need to use in their applications.

    Challenge #1 – Deciding When to Use an RTOS
    The fact is, there is a lot that can be done by developers to emulate preemptive scheduling before needing to make the switch. So, what are a few key indicators that an RTOS is the right way to go?
    Below are several questions a developer should consider:
    Does the application include a connectivity stack such as USB, WiFi, TCP/IP, etc.?
    Will the system’s time management be simplified by using an RTOS?
    Will application management and maintenance be improved if an RTOS is used?
    Is deterministic behavior needed?
    Do program tasks need the ability to preempt each other?
    Does the MCU have at least 32 kB of code space and 4 kB of RAM?
    If the answer to most of these questions is yes, then odds are using an RTOS will help simplify application development.

    Challenge #2 – Setting Task Priorities
    Selecting task priorities can be a challenge. Which task should have the highest priority? The next highest? Can the tasks even be scheduled? These are the questions that often come into the minds of developers working with an RTOS.
    Developers should start by using rate monotonic scheduling to get a general feel for whether their periodic tasks can be scheduled successfully. RMS assumes that tasks are periodic and don’t interact with each other, so it only serves as a starting point, but it can get developers 80% of the way to the finish line.
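
    For reference, the classic Liu & Layland schedulability test behind rate monotonic scheduling can be checked in a few lines of C. The task set below is invented for illustration; real numbers would come from measured worst-case execution times.

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* worst-case execution time C and period T per task, same unit (ms) */
            const double C[] = { 1.0, 2.0, 4.0 };
            const double T[] = { 10.0, 20.0, 50.0 };
            const int n = (int)(sizeof(C) / sizeof(C[0]));

            double U = 0.0;
            for (int i = 0; i < n; i++)
                U += C[i] / T[i];                          /* total utilization */

            double bound = n * (pow(2.0, 1.0 / n) - 1.0);  /* n(2^(1/n) - 1) */
            printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
                   U <= bound ? "schedulable under RMS" : "needs deeper analysis");
            return 0;
        }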

    Challenge #3 – Debugging
    Debugging an embedded system is a major challenge. Developers can spend anywhere from 20% – 80% of their development cycle debugging their application code with averages typically being around 40%. That is a lot of time spent debugging.

    Challenge #4 – Managing Memory
    An important challenge for developers is managing memory. There are several layers to memory management when using an RTOS.
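
    One common way to tame the heap layer of that problem is to avoid general-purpose dynamic allocation at run time and use fixed-size block pools instead. The sketch below is generic C and not tied to any particular RTOS API; a real implementation would wrap alloc/free in a critical section or mutex.

        #include <stddef.h>
        #include <stdint.h>

        #define BLOCK_SIZE   32u
        #define BLOCK_COUNT  16u

        static uint8_t pool_storage[BLOCK_COUNT][BLOCK_SIZE];
        static void   *free_list[BLOCK_COUNT];
        static size_t  free_top;

        void pool_init(void)
        {
            for (size_t i = 0; i < BLOCK_COUNT; i++)
                free_list[i] = pool_storage[i];
            free_top = BLOCK_COUNT;
        }

        /* O(1) allocation of one fixed-size block; the pool never fragments. */
        void *pool_alloc(void)
        {
            return (free_top > 0u) ? free_list[--free_top] : NULL;
        }

        /* Simplified: assumes the caller only frees blocks it got from the pool. */
        void pool_free(void *block)
        {
            if (block != NULL && free_top < BLOCK_COUNT)
                free_list[free_top++] = block;
        }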

    Challenge #5 – The Learning Curve
    Developers who are switching from bare-metal coding techniques into an RTOS environment often struggle with learning about RTOSes.

    Conclusion
    Whether you are new to using an RTOS or are a seasoned veteran, as developers we face very similar challenges when designing and implementing our RTOS-based applications. As system complexity increases, the need to be an expert at using an RTOS is going to be a requirement for every embedded software engineer.

    Reply
  16. Tomi Engdahl says:

    Sensor Fusion Software Simplifies Design of Robots, Autonomous Cars
    Applications include smaller systems, such as drones and canes for the blind.
    https://www.designnews.com/automation-motion-control/sensor-fusion-software-simplifies-design-robots-autonomous-cars/82637524657038?cid=nl.x.dn14.edt.aud.dn.20170629.tst004t

    Reply
  17. Tomi Engdahl says:

    De-Risking Design: Reducing Snafu’s When Creating Products
    You can reduce design risk with sound up-front procedures that anticipate and solve potential problems.
    https://www.designnews.com/design-hardware-software/de-risking-design-reducing-snafu-s-when-creating-products/31600514357039?cid=nl.x.dn14.edt.aud.dn.20170629.tst004t

    With growing time-to-market pressures and increasingly complex systems within products, the design process has become risky. These risks show up during the process of bringing a product concept into reality. Whether it’s sole-source components that might cause supply chain issues or untested connectivity added at the end to meet competitive pressure, much can go wrong with design. You face the added risk once the product is out in the field and the market reacts to it.

    Many companies accept a wide range of risks along the way, pushing for shorter timelines and reduced costs. But, Murphy’s Law has a way of catching up. A last-minute feature could delay the launch or expose a bug. A single-source component could experience supply chain woes, threatening a holiday launch. “If you have all the time and money, you can be confident you will get the net results, but it will take a long time and many iterations,” said Hebert. “The question is how do you balance risk and additional cost? Hopefully you can do it in such a way there are no hard trade-offs.”

    Building De-Risk into the Design Process

    Avoiding the hard trade-offs and reducing the likelihood of problems due to untested technology or supply issues is a matter of implementing procedures that identify risk and mitigate as much of it as possible. Hebert calls it de-risking design. He describes it as a combination of up-front analysis and strategic testing. He noted that up-front analysis and test/validation can be done on different aspects of the product simultaneously, avoiding the time-consuming process of doing one consideration at a time. “It’s front loading — a stitch in time saves nine,” said Hebert. “You can do things in parallel. If you have two or three things that have never been tested, you can focus on them in isolation.”

    Hebert noted that you can greatly reduce problems in the design and production process by systematically examining the product for potential problems. Hebert sees this as a new stage of design that has not traditionally been part of the design process. “There are a lot of risks you can de-risk. This is something that has never been done before. We create a board with components that are not risky, and then we isolate the parts that are risky,” said Hebert. “The best design decisions include planning for optimistic realism. You need to consider the full picture with each tradeoff along the way, and you have to invest in test and validation.”

    Gain the Knowledge of the Product’s Technology

    Do you want to add IoT connectivity to your product? Do you want to make sure that connectivity is secure?
    “It’s not just de-risking the system but also understanding it. It’s called knowledge-based product development,”

    Reply
  18. Tomi Engdahl says:

    The State of Software Composition 2017
    What’s in your app?
    https://www.synopsys.com/software-integrity/resources/analyst-reports/state-of-software-composition-2017.html

    An increasingly large amount of all software today consists of third-party code, either purchased or licensed commercial off-the-shelf (COTS) software or free open source software (FOSS). Software Composition Analysis (SCA) is a testing process that breaks down the individual components, the ingredients of any software, producing a Bill of Materials (BoM) that shows what vulnerabilities and software components exist within a given application.

    Reply
  19. Tomi Engdahl says:

    Ignoring Anomalies
    https://semiengineering.com/ignoring-anomalies/

    In an age where time to market is everything, anomalies can be easy to ignore, but they can also be the key to new discoveries and save lives.

    Everyone has been in this situation at some point in their career—you have a data point that is so far out of the ordinary that you dismiss it as erroneous. You blame the test equipment, or the fact that it is Friday afternoon and happy hour started 10 minutes ago. In most cases it may never happen again and nobody will ever notice that you quietly swept it under the rug.

    But in doing so, you may have ignored a very important bug or missed out on the discovery of something that will send your work in a totally new direction.

    The Thursday keynote at DAC this year was titled “Emotion Technology, Wearables and Surprises.”

    However, I continued to look at why the voltage droop would cause a lockup. It was impossible to bring the processor out of that state, once entered, except by a complete system reboot. The processor was the Motorola 6800.

    The result of finding this was that every avionic and military system in the UK that used the 6800 processor had to be recalled and retrofitted to prevent this from happening. Motorola also changed the design to ensure that it took a sequence of instructions to enter a test mode.

    Don’t ignore anomalies
    So the next time you see an outlier in a dataset, don’t ignore it. That outlier may be the indicator of a non-linearity, or a new discovery, or an area of research that may be a lot more important than the original direction you were going in.

    Reply
  20. Tomi Engdahl says:

    Develop Software Earlier: Panelists Debate How to Accelerate Embedded Software Development
    A lively and open discussion explored tools and methodologies to accelerate embedded software development.
    https://www.designnews.com/content/develop-software-earlier-panelists-debate-how-accelerate-embedded-software-development/64146835257052?cid=nl.x.dn14.edt.aud.dn.20170706.tst004t

    Lauro: What are the technologies to develop software and what are the most effective ways to use them to start and finish software development earlier?

    Russ: When people think about starting to develop software earlier, they often don’t think about emulation. Software can be developed on emulation. Emulation represents the earliest cycle-accurate representation of the design capable of running software. An emulator can be available very early in the design cycle, as soon as the hardware developers have their design to a point where it runs, albeit not fully verified, and not ready to tape-out. At that point, they can load memory images into the design and begin running it. Today’s emulators support powerful debug tools that give much better debug visibility into the system, and make it practical to start software development on emulation, sooner than ever before.

    Jason: There are six techniques for pre-silicon software development: FPGA prototyping; Emulation; Cycle-Accurate/RTL simulation; Fast instruction set simulation; Fast models and an emulation hybrid; operating system simulation (instruction set abstracted away). Most projects adopt two or three since it is too difficult to learn, setup and maintain all of them.

    When you start a project, you write code, and the first thing that occupies your mind is how to execute that code. Software developers are creative about finding ways to execute their software. It could be on the host machine, a model, or a prototype.

    The next thing you need to think about is how to debug the software, because the code we write doesn’t always work the first time. There are various ways to perform early software debugging.

    Then, less well known, are performance analysis and software quality.

    Mike: I’m the hardware guy, and I have to deal with software people. While early software development is fine and good, it’s too slow and too abstract. You find that software people tread water until they get further along in the hardware cycle and have something that gives blue screens of death or smoke when there are bugs. That’s when real software development is accomplished.

    FPGAs can be used to prototype complete systems long before real silicon is available.

    Lauro: Jason, you are known as a leading authority on virtual prototyping. What are the advantages?

    Jason: At ARM, we provide various types of models to perform early software development, but there are tradeoffs, and pros and cons. Virtual prototyping is extremely flexible. You can run your code on a model sitting at your desk. It’s abstracted, runs fast but may not have all the detail. Still, it is probably the best way to get functional issues out of your code right away.

    The good thing about fast models is they are available early.

    Mike: Sounds good, except the models never exist and the performance is really not that good. In my experience, it’s hertz rather than megahertz.

    Jason: There are two kinds of models. In one domain, you have fast models. They are called fast because they are fast and use dynamic binary instruction translation. If you are targeting an ARMv8 instruction set, it’s running those instructions on your X86 workstation at a pretty high speed. Normally, 100 megahertz would be a reasonable speed depending on the size of your system.

    The other domain is what we call cycle models. These are the accurate models but they run much slower. Cycle models are extremely limited in terms of what you can do with them because anyone who needs them wants to see the accuracy of a processor, interconnects and the memory controller.

    Lauro: What is the advantage of using FPGA prototyping over emulation or virtual prototyping?

    Mike: There are places in the design cycle for virtual prototyping followed by emulation followed by FPGA prototype. But, first let me discuss design capacity.

    Lauro: How do you view today’s verification landscape? Is it one size fits all or specialized per application?

    Russ: The goal of verification is dependent upon what the final target is and how much you can verify. Security and correctness are going to be different for a medical device versus a toy. Those two products are going to be validated to different levels. As security and correctness become more important, we have more and more systems where the correctness of that system is going to impact whether people live or die. The amount of verification you want to achieve is substantially higher than it has been in the past. Jeremy Clarkson, host of the BBC show Top Gear, quipped a while back that one day your automobile is going to have to decide whether to kill you or not.

    Jason: Unless you work in a large company with an infrastructure to do virtual prototyping, FPGA prototyping and emulation together, by the time you learn all of them, your project is over. You have limited time to decide what to use to get the software done with higher quality and better performance, whatever it is.

    I think people choose different techniques for early software development and make a project decision about which approach makes sense when. One possible scenario is a design group using FPGA systems for running tests in their lab.

    Russ: And you are going to get different characteristics out of these different models. A fast model is going to perform fast. It’s going to tell you how the system will work functionally, but it is limited in terms of detail and performance accuracy. An emulator is going to be much more accurate in terms of clock cycle counts, but it’s going to run a lot slower.

    Lauro: Outline the ideal verification flow for a large and complex networking chip that will go into a data center. Budget is no limit, resources are plentiful but the timeline is not. All the tools will work together seamlessly. Which ones would you implement?

    Mike: You start with a high-level model of what is going to run, some sort of virtual prototype, and start software development. As you get farther down, put it in an emulator and do it faster, closer to the real hardware, and then toward the end, just before tape-out, put it in FPGAs and run it quicker yet.

    Jason: In the old days, we used to create testbenches and perform block-level verification and build up a chip design piece by piece. Now there is so much IP reuse, with people buying from so many sources, and a race to market. People are buying Verilog files from all the places they can get them to make a chip, throwing it all into the emulator and turning loose a couple of hundred software people to make sure the system works. You write software and tests for all the functionality of all the peripherals and everything else, taking the shortest path, with an unlimited budget and unlimited people.

    Lauro: What trends do you see in the verification space, what new challenges are emerging?

    Russ: One trend we clearly see is that, as we get more and more processors and processors get cheaper and faster, more of the functionality of the overall system moves into software, because it is cheaper to create than hardware. To the degree we have the ability to do that, more and more of that functionality goes into software. Hence, the functioning of the overall system now depends not just on hardware, but on hardware and software working together. Fundamentally, you can’t wait until somebody throws the hardware over the wall to start developing software because it’s going to be an integral part of that system. As hard as it is to debug them together, we have got to start sooner.

    Jason: Software complexity is getting much worse. Think about hypervisors and virtualization, like in automotive where you are going to have this type of virtualization. In the old days, you had a CPU and you could start it up, write some assembly code and get your application. Now, it’s complicated to deal with. Even a simple interrupt controller in an ARMv8 core has a thousand system registers. Do you program all one thousand correctly?

    Reply
  21. Tomi Engdahl says:

    How to Reduce Risk While Saving on the Cost of Resolving Security Defects
    http://www.securityweek.com/how-reduce-risk-while-saving-cost-resolving-security-defects

    1. Shift Left.
    2. Test earlier in the development cycle.
    3. Catch flaws in design before they become vulnerabilities.

    These are all maxims you hear frequently in the discussion surrounding software security. If this is not your first visit to one of my columns it is certainly not the first time you have heard it.

    These maxims certainly make sense and seem logically sound, but where is the proof? Show me the “so what?” that proves their real worth to organizations. Unfortunately, there exists a paucity of empirical research on the true value of implementing these maxims. Until now.

    Exploring the economics of software security

    A recent article was posted by Jim Routh, CSO of Aetna, Meg McCarthy, COO and President of Aetna, and Dr. Gary McGraw, VP of Security Technology for Synopsys. The article, titled “The Economics of Software Security: What Car Makers Can Teach Enterprises,” analyzes the total cost of ownership of software and the effect of utilizing security controls early in the development process.

    The Economics of Software Security: What Car Makers Can Teach Enterprises
    http://www.darkreading.com/perimeter/the-economics-of-software-security-what-car-makers-can-teach-enterprises-/a/d-id/1329083

    Embedding security controls early in the application development process will go a long way towards driving down the total cost of software ownership.

    Reply
  22. Tomi Engdahl says:

    Backchannel UART without the UART
    http://hackaday.com/2017/07/18/backchannel-uart-without-the-uart/

    Anyone who has worked with a microcontroller is familiar with using printf as a makeshift debugger. This method is called tracing and it comes with the limitation that it uses up a UART peripheral.

    [Jay Carlson] has a method by which he can piggyback these trace messages over an on-chip debugger. The newer ARM Cortex-M software debuggers already have this facility, but [Jay Carlson]’s hack is designed to work with the SiLabs EFM8 controllers. The idea is to write these debug messages to a predefined location in the RAM which the debugger can access anyway. His application polls a certain area of the memory and when it finds valid information, it reads the data and spits it out into a dedicated window. It’s using the debugger as a makeshift printf!

    The code is available on GitHub for you to check out if you are working with the EFM8 and need a helping hand. The idea is quite simple and can be ported to other controllers, such as the MSP430, in a multitude of ways.
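
    The idea generalizes to almost any target with an on-chip debug connection: reserve a small “mailbox” in RAM at a known address, let the firmware write messages into it, and have a host-side debugger script poll and print it. The sketch below is my own illustration of that scheme, not [Jay Carlson]’s actual implementation.

        #include <stdint.h>
        #include <string.h>

        #define TRACE_BUF_SIZE 64u

        typedef struct {
            volatile uint8_t ready;           /* 1 = message waiting for the host */
            volatile uint8_t len;
            char             data[TRACE_BUF_SIZE];
        } trace_mailbox_t;

        /* Placed at a fixed, linker-known address so the debugger can find it. */
        trace_mailbox_t g_trace_mailbox;

        void trace_puts(const char *msg)
        {
            while (g_trace_mailbox.ready) {
                /* spin until the host-side script has consumed the last message */
            }
            size_t n = strlen(msg);
            if (n > TRACE_BUF_SIZE)
                n = TRACE_BUF_SIZE;
            memcpy(g_trace_mailbox.data, msg, n);
            g_trace_mailbox.len   = (uint8_t)n;
            g_trace_mailbox.ready = 1u;       /* host clears this after reading */
        }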

    Printf-style trace messages using an on-chip debugger connection
    https://jaycarlson.net/2017/07/16/printf-style-trace-messages-using-c2/

    Reply
  23. Tomi Engdahl says:

    Is threat modeling compatible with Agile and DevSecOps?
    Posted by David Harvey on July 7, 2017
    https://www.synopsys.com/blogs/software-security/threat-modeling-agile-devsecops/

    Bryan Sullivan, a Security Program Manager at Microsoft, called threat modeling a “cornerstone of the SDL” during a Black Hat Conference presentation. He calls it a ‘cornerstone’ because a properly executed threat model:

    Finds architectural and design flaws that are difficult or impossible to detect through other methods.
    Identifies the most ‘at-risk’ components.
    Helps stakeholders prioritize security remediation.
    Gets people thinking about the application attack surface.
    Drives fuzz testing.
    Provides the basis for abuse cases as it encourages people to think like an attacker.

    Threat modeling documentation

    Yes, threat modeling requires documentation, but that’s not a bad thing.

    When teaching threat modeling, a surprisingly common question I hear is “threat modeling requires documentation?” That question is often followed by an explanation that since moving to SAFe and CI/CD processes, firms have disposed of documentation.

    This persistent ‘zero documentation in Agile’ myth is based on a misunderstanding of the Agile Manifesto. It is true that the Agile methodology does prioritize working software over written documentation. However, there are a number of design, architecture, and user story artifacts needed to properly communicate commitments and other project parameters to stakeholders.

    If the application is mission critical and/or it handles sensitive data, then the project or application threat model is one of the most important artifacts. Architecture diagrams involving the inputs to the threat model are also highly valuable artifacts. The process of creating and maintaining these artifacts—usually a team exercise—is never automatable. This is known as an out-of-band activity.

    Performing threat modeling as an out-of-band activity

    There are a number of security activities, including tool-driven static application security testing (SAST) and software composition analysis (SCA). These testing approaches are amenable to automation and fit nicely within an always-deployable paradigm. Threat modeling doesn’t fit into this approach.

    Maintaining threat modeling artifacts

    As with any critical documentation, update the threat model as facts that form its basis change or are clarified during a development activity. The artifact you’ll need to maintain largely depends on the threat model method in use.

    Keep threat modeling artifact(s) in a repository available for team editing (e.g., a wiki or SharePoint site). Ensure that changes are tracked.

    Finding issues during threat modeling

    Prioritize issues found during threat modeling within the backlog.

    Remember, in a continuous development model (Agile or CI/CD), you’re going to be threat modeling as an out-of-band process. Thus, issues found may show up at any time during the threat modeling process. This may mean after development sprints are underway. Write up these issues as user stories and prioritize them on the backlog during a bug wash or sprint planning session—just as any other user story or defect.

    It may be necessary to ‘pull the chain and stop the train’ to fix a serious issue found in a threat model.

    The bottom line

    Threat modeling needs to be a part of CI/CD and Agile processes. There are too many benefits to threat modeling not to conduct this activity on mission critical applications—regardless of the methodology in use for development.

    Reply
  24. Tomi Engdahl says:

    Hackaday Prize Entry: Minimalist HTTP
    http://hackaday.com/2017/07/23/hackaday-prize-entry-minimalist-http/

    For his Hackaday Prize entry, [Yann] is building something that isn’t hardware, but it’s still fascinating. He’s come up with a minimalist HTTP compliant server written in C. It’s small, it’s portable, and in some cases, it will be a bunch better solution than throwing a full Linux stack into a single sensor.

    This micro HTTP server has two core modules, each with a specific purpose.

    [Yann] has been experimenting with HTTaP, and the benefits are obvious. You don’t need Apache to make use of it, HTTaP can work directly with an HTML/JavaScript page, and using only GET and POST messages, you can control hardware and logic circuits.
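
    To give a feel for how little is needed, here is a minimal single-connection GET responder in C using POSIX sockets. It is a generic sketch under my own assumptions (port 8080, a fixed HTML body, no error handling), not [Yann]’s HTTaP code.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        int main(void)
        {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            int one = 1;
            setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

            struct sockaddr_in addr = { 0 };
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(8080);
            bind(srv, (struct sockaddr *)&addr, sizeof(addr));
            listen(srv, 1);

            for (;;) {
                int cli = accept(srv, NULL, NULL);
                if (cli < 0)
                    continue;

                char req[512] = { 0 };
                read(cli, req, sizeof(req) - 1);

                /* Answer GET with a fixed page; reject anything else. */
                const char *body = "<html><body>LED state: ON</body></html>";
                char resp[256];
                if (strncmp(req, "GET ", 4) == 0)
                    snprintf(resp, sizeof(resp),
                             "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"
                             "Content-Length: %zu\r\nConnection: close\r\n\r\n%s",
                             strlen(body), body);
                else
                    snprintf(resp, sizeof(resp),
                             "HTTP/1.1 405 Method Not Allowed\r\n"
                             "Content-Length: 0\r\nConnection: close\r\n\r\n");
                write(cli, resp, strlen(resp));
                close(cli);
            }
        }

    On a small embedded target the socket layer would come from the device’s TCP/IP stack rather than POSIX, but the request/response handling stays this simple.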

    HTTaP
    Test Access Port over HTTP
    https://hackaday.io/project/20170-httap

    Reply
  25. Tomi Engdahl says:

    eForth for cheap STM8S gadgets
    Turn cheap stuff from AliExpress into interactive development kits
    https://hackaday.io/project/16097-eforth-for-cheap-stm8s-gadgets

    Turn cheap STM8 µC boards into Forth development kits!

    The code is based on Dr. C.H. Ting’s interactive eForth for the STM8S Discovery, an STC Forth with kernel, interpreter, and compiler in 5.5K of Flash. I squeezed the interactive demo into 3.7K to get the most out of the 8K of an STM8S Value Line µC for $0.20.

    Many features were added: Flash programming, Forth interrupt handlers, background task, vectored I/O, drivers for 7S-LED displays, analog and digital I/O, DO..LOOP, CREATE..DOES>… There is a simple framework for configuration, feature selection, and support of new boards and other STM8 µCs. Mixing C with Forth is possible, too, e.g. as a shell for testing, setting parameters, or for scripting.

    What is it good for?

    The project delivers configurable board support code for selected targets, and docs.

    The code on GitHub can be used in many ways:

    for writing alternative firmware for Chinese commodity boards (e.g. thermostats, DCDC converters, or relay boards)
    for embedded systems with an interactive shell (scriptable and extensible)
    for creating smart SPI, I2C, or RS232 sensors with a scripting shell, e.g. for RaspberryPi, Arduino, or ESP8266
    as an interactive environment for exploring the STM8 architecture
    for learning Forth. It’s easy and fun – find out why in the text below!

    Why a Forth for Cheap Chinese boards?

    Because it’s fun: cheap mass-produced imperfection is a playground for creativity :-)

    Right now, the W1209 is my favorite target: it’s a rather complete embedded control board with a UI at a very good price.

    eForth for STM8S Value Line and Access Line devices
    https://github.com/TG9541/stm8ef

    Reply
  26. Tomi Engdahl says:

    Before C, What Did You Use?
    http://www.electronicdesign.com/embedded-revolution/c-what-did-you-use?code=UM_NN7TT2&utm_rid=CPG05000002750211&utm_campaign=12191&utm_medium=email&elq2=aeff0ada2b4943c0aa35c670fbdc13fd

    Electronic Design’s Embedded Revolution survey reinforced the view of C and C++ dominance. Get some feedback from those involved in the development of popular embedded programming languages.

    Our recent Embedded Revolution survey reinforced the view of C and C++ dominance with embedded programmers (see figure), but the results also highlight the differences between the embedded space and others with respect to other programming languages in use. Assembler would not even be mentioned by other developers, plus MATLAB and LabVIEW would probably not hit the top 10.

    I thought it might be interesting to hear from some of those people who developed some of the other popular platforms highlighted in these results, since languages like C have changed over time. Since programming languages are not designed in isolation, I asked what other languages influence them, when they encountered C, and what they are working on now.

    Reply
  27. Tomi Engdahl says:

    Embedded Multicore Building Blocks Annexes MCA’s Task Management API
    http://www.electronicdesign.com/industrial-automation/embedded-multicore-building-blocks-annexes-mca-s-task-management-api?code=UM_NN7TT2&utm_rid=CPG05000002750211&utm_campaign=12191&utm_medium=email&elq2=aeff0ada2b4943c0aa35c670fbdc13fd

    The Multicore Association’s Multicore Task Management API is now integrated with the open-source Embedded Multicore Building Blocks framework.

    Reply
  28. Tomi Engdahl says:

    What Embedded Software Engineering Can Learn from Enterprise IT Testing Techniques
    http://www.electronics-know-how.com/article/2532/what-embedded-software-engineering-can-learn-from-enterprise-it-testing-techniques

    Embedded software organizations have always taken a ‘shift-left’ approach to software quality, rigorously applying defect prevention techniques early in the lifecycle. The demand for IoT requires a new testing paradigm that more closely resembles the challenges that Enterprise IT have faced for decades. As enterprise IT struggles to ‘shift-left’, embedded systems are struggling to ‘shift-right’ by testing more componentized and distributed architectures.

    Embedded software engineering has become a much bigger and more complex domain than we could have imagined. As devices are expected to communicate with other devices and embedded subsystems, a much larger surface area has emerged for defects that threaten the safety, security, and reliability of the software…

    Reply
  29. Tomi Engdahl says:

    Establish A Software Procurement Process To Manage Supply Chain Risk
    https://semiengineering.com/establish-a-software-procurement-process-to-manage-supply-chain-risk/

    Manage your software supply chain risk with practical cyber security procurement language

    Improving the procurement language in your software contracts is an effective way to convey requirements for built-in security. Too many examples of afterthought bolt-on security have put enterprises and users at risk due to exploitable software.

    Historically, there has been no shared liability associated with software because standard contracts have absolved software suppliers and outsourced development providers. This “caveat emptor” method no longer works as software is now included in life critical functions and devices, from personal medical devices to automobiles. Procurement professionals should instead strive to create demand for secure software by adopting a procurement governance model that includes security up-front in vendor selection and contract negotiation processes.

    https://www.synopsys.com/software-integrity/resources/white-papers/procurement-language-risk.html

    Reply
  30. Tomi Engdahl says:

    Choose your weapons – options for debugging
    https://www.mentor.com/embedded-software/blog/post/choose-your-weapons-options-for-debugging-9ef23ba7-dd95-4c1d-b83e-2dabf570ac96?contactid=1&PC=L&c=2017_07_27_esd_newsletter_update_v7_july

    I was recently approached by a software developer, who was new to embedded programming. As is commonly the case, we had a language problem. It was not that his English was deficient – he just did not speak “embedded”. He asked a question: How do I log on to my target hardware to do debugging?

    On the surface, this is a reasonable question. Having ascertained that he was not using Linux – he was using a conventional RTOS – I felt that I needed to explain his options for debugging on an embedded system …

    Before looking at how debugging of embedded software may be approached, I will share my view on how initial coding/debugging should be approached: Write some code – ideally on paper with a pencil. Then read it carefully and consider how the logic works. “Dry run” it in your mind for various data values. I call this an “inspection debug”. A little care at this stage can save much frustration later.

    Debugging is a continuous process, which starts almost as soon as any code is written.

    As software generally represents the largest amount of effort in the development of an embedded device, work must start as early in the project as possible. It is no longer possible to wait for the availability of working hardware before starting software development. This means that there are two distinct debugging phases: pre-hardware and post-hardware.

    Pre-hardware debug

    Before any [working] hardware is available, there are a few options that might be deployed for initial debugging:

    Host execution – Development tools for desktop/laptop computers are readily available and tend to be low cost or even free. Such tools are quite satisfactory for initial testing of code logic and may even be useful, with some ingenuity, for later stages of debugging.

    Simulation 1 – Running code on a host computer that is simulating the target hardware is a very satisfactory debug environment.

    Simulation 2 – An alternative type of simulator is an instruction set simulator [ISS].

    Evaluation board – Even if your final hardware is not available, it may be possible to obtain a board that uses the same or similar CPU and peripheral devices. It can then be used to debug using the techniques that will be applied once real hardware is ready [see below].

    Even when hardware is available, it may not be 100% reliable, so the pre-hardware techniques may remain useful, if only to remove doubts when a problem cannot clearly be blamed on software or hardware.
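
    As a sketch of the host execution option above: if the application logic only touches hardware through a thin interface, it can be compiled and unit-tested on the development PC long before real hardware exists. All names below are illustrative, not from the post.

        #include <assert.h>
        #include <stdbool.h>

        /* Interface the application logic depends on. */
        typedef struct {
            int  (*read_temperature)(void);   /* tenths of a degree C */
            void (*set_heater)(bool on);
        } hal_t;

        /* Pure logic under test: a simple thermostat with hysteresis. */
        void thermostat_step(const hal_t *hal, int low, int high)
        {
            int t = hal->read_temperature();
            if (t < low)
                hal->set_heater(true);
            else if (t > high)
                hal->set_heater(false);
        }

        /* Host-side mock used by the unit test. */
        static int  mock_temp;
        static bool mock_heater;
        static int  mock_read(void)   { return mock_temp; }
        static void mock_set(bool on) { mock_heater = on; }

        int main(void)
        {
            const hal_t mock = { mock_read, mock_set };

            mock_temp = 180;                  /* 18.0 C, below the 20.0 C low limit */
            thermostat_step(&mock, 200, 220);
            assert(mock_heater == true);

            mock_temp = 230;                  /* 23.0 C, above the 22.0 C high limit */
            thermostat_step(&mock, 200, 220);
            assert(mock_heater == false);
            return 0;
        }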

    Post-hardware debug

    Once [reliable] hardware is readily available, it is naturally desirable to want to run software in its final environment.

    The normal approach is to run a debugger on the host computer and connect it to the target. There are two common ways that this connection is achieved:

    JTAG – It is very common for hardware to be provided with a JTAG port for a variety of testing-related reasons. This may provide a very satisfactory debug connection to the host using a low cost adapter. This approach is attractive, as it permits a debugger to connect even if there is not yet any working software on the target. The downside is that the debugging mode tends to be stop/start – i.e. the debugger can only “talk” to the target when execution is halted.

    Networking – If a device has a network interface [Ethernet, for example], this can provide a good debug connection. Some working software – a debug “agent” – is needed on the target before the debugger can connect. This approach facilitates “run mode” debugging, where code may continue running while the debugger is interacting with the target.

    Future debugging

    It is hard to see where exactly debugging will go in future, but there are two trends that are apparent to me. The first is the increasing prevalence of multicore designs, which present some interesting debug challenges. However, debug technology is likely to be essentially just “more of the same”. Another trend is an increasing willingness by developers to turn their backs on conventional “stop and stare” debugging in favour of very sophisticated trace and analysis tools, which can give excellent visibility of the workings of very complex systems.

    Reply
  31. Tomi Engdahl says:

    The Secret Life Of Accelerators
    https://semiengineering.com/the-secret-life-of-accelerators/

    Unique machine learning algorithms, diminished benefits from scaling, and a need for more granularity are creating a boom for accelerators.

    Accelerator chips increasingly are providing the performance boost that device scaling once provided, changing basic assumptions about how data moves within an electronic system and where it should be processed.

    To the outside world, little appears to have changed. But beneath the glossy exterior, and almost always hidden from view, accelerator chips are becoming an integral part of most designs where performance is considered essential. And as the volume of data continues to rise—more sensors, higher-resolution images and video, and more inputs from connecting systems that in the past were standalone devices—that boost in performance is required. So even if systems don’t run noticeably faster on the outside, they need to process much more data without slowing down.

    This renewed emphasis on performance has created an almost insatiable appetite for accelerators of all types, even in mobile devices such as a smart phone where one ASIC used to be the norm.

    “Performance can move total cost of ownership the most,” said Joe Macri, corporate vice president and product CTO at AMD. “Performance is a function of frequency and instructions per cycle.”

    And this is where accelerators really shine. Included in this class of processors are custom-designed ASICs that offload a particular operation in software, as well as standard GPU chips, heterogeneous CPU cores that can work on jobs in parallel (even within the same chip), and both discrete and embedded FPGAs.

    But accelerators also add challenges for design teams. They require more planning and a deeper understanding of how software and algorithms work within a device, and they are very specific. Reuse of accelerators can be difficult, even with programmable logic.

    “Solving problems with accelerators requires more effort,” said Steve Mensor, vice president of marketing at Achronix. “You do get a return for that effort. You get way better performance. But those accelerators are becoming more and more specific.”

    Accelerators change the entire design philosophy, as well. After years of focusing on lower power, with more cores on a single chip kept mostly dark, the emphasis has shifted to a more granular approach to ratcheting up performance, usually while keeping the power budget flat or trending downward. So rather than having everything tied to a single CPU, there can be multiple heterogeneous types of processors or cores with more specialized functionality.

    “There is now more granularity to balance a load across cores, so you can do power management for individual cores,” said Guilherme Marshall, director of marketing for development solutions at ARM. “These all require fine tuning of schedulers. This is a trend we’ve been seeing for a while, and it’s evolving. The first implementation of this was big.LITTLE. Now, there is a finer degree of control of the power for each core.”

    This may sound evolutionary, but it’s not a trivial change. Marshall noted this required changes to the entire software stack.

    Concurrent with these changes, there is an effort to make software more efficient and faster. For years, software has been developed almost independently from the hardware

    Machine learning mania
    Accelerators are best known for their role in machine learning, which is seeing explosive growth and widespread applications across a number of industries. This is evident in sales of GPU accelerators for speeding up machine learning algorithms. Nvidia’s stock price chart looks like a hockey stick. GPUs are extremely good at accelerating algorithms in the learning phase of machine learning because they can run floating point calculations in parallel across thousands of inexpensive cores.

    As a point of reference, Nvidia’s market cap is slightly higher than that of Qualcomm, one of the key players in the smart phone revolution.

    Many architectures, one purpose
    While accelerators accomplish the same thing, no one size fits all and most are at least semi-customized.

    “On one side, there are accelerators that are truly integrated into the instruction set, which is the best form of acceleration,” said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. “In the last couple years, we’ve also seen accelerators emerge as separate blocks in the same package. So you may have an FPGA and an SoC packaged together. There also can be semi-custom IP and an FPGA. IP accelerators are a relatively new concept. They hang off the interconnect. But to be effective, all of this has to be coherent. So it may range from not-too-complex to simple, but if you want it to be coherent, how do you do that?”

    That’s not a simple problem to solve. Ironically, it’s the automotive industry, which historically has shunned electronics, that is leading the charge, said Mohandass.

    New types of accelerators are entering the market, as well. One of the reasons that embedded FPGAs have been gaining so much attention is that they can be sized according to whatever they are trying to accelerate. The challenge is understanding up front exactly what size core will be required, but the tradeoff is that it adds programmability into the device.

    Accelerators in context
    The semiconductor industry has always been focused on solving bottlenecks

    Other applications
    Driving the chip industry’s focus on accelerator chips are some fundamental market and technology shifts. There are more uncertainties about how new markets will unfold, what protocols need to be supported, and in the case of machine learning, what algorithms need to be supported. But accelerators are expected to be a key part of solutions to all of those shifts.

    “The raw technology will be built into your sunglasses,” said ArterisIP’s Shuler. “Cars are the first place it will show up because consumers are willing to pay for it. But it will show up everywhere—in your phone, and maybe even your dishwasher.”

    Conclusion
    Rising design costs and diminishing returns on scaling, coupled with a slew of new and emerging markets, are forcing chipmakers and systems companies to look at problems differently. Rather than working around existing hardware, companies are beginning to parse problems according to the flow and type of data. In this world, a general-purpose chip still has a place, but it won’t offer enough performance improvement or efficiency to make a significant difference from one generation to the next.

    Accelerators that are custom built for specific use cases are a much more effective solution, and they add a dimension to semiconductor design that is both challenging and intriguing. How much faster can devices run if everything isn’t based on a linear increase in the number of transistors? That question will take years to answer definitively.

    Reply
  32. Tomi Engdahl says:

    How Much Verification Is Necessary?
    https://semiengineering.com/how-much-verification-is-necessary/

    Sorting out issues about which tool to use when is a necessary first step—and something of an art.

    Reply
  33. Tomi Engdahl says:

    TechCrunch Urges Developers: Replace C Code With Rust
    https://developers.slashdot.org/story/17/07/16/1715256/techcrunch-urges-developers-replace-c-code-with-rust

    Copious experience has taught us all, the hard way, that it is very difficult, verging on “basically impossible,” to write extensive amounts of C code that is not riddled with security holes. As I wrote two years ago, in my first Death To C piece… “Buffer overflows and dangling pointers lead to catastrophic security holes, again and again and again, just like yesteryear, just like all the years of yore. We cannot afford its gargantuan, gaping security blind spots any more. It’s long past time to retire and replace it with another language.

    “The trouble is, most modern languages don’t even try to replace C…

    Death to C, ++
    https://techcrunch.com/2017/07/16/death-to-c/

    The C programming language is terrible. I mean, magnificent, too. Much of the world in which we live was built atop C. It is foundational to almost all computer programming, both historically and practically; there’s a reason that the curriculum for Xavier Niel’s revolutionary “42” schools begins with students learning how to rewrite standard C library functions from scratch. But C is no longer suitable for this world which C has built.

    I mean “terrible” in the “awe-inspiring dread” sense more than the “bad” sense. C has become a monster. It gives its users far too much artillery with which to shoot their feet off. Copious experience has taught us all, the hard way, that it is very difficult, verging on “basically impossible,” to write extensive amounts of C code that is not riddled with security holes. As I wrote two years ago, in my first Death To C piece:

    In principle, as software evolves and grows more mature, security exploits should grow ever more baroque … But this is not the case for software written in C/C++. Buffer overflows and dangling pointers lead to catastrophic security holes, again and again and again, just like yesteryear, just like all the years of yore.

    We cannot afford its gargantuan, gaping security blind spots any more. It’s long past time to retire and replace it with another language. The trouble is, most modern languages don’t even try to replace C. […] They’re not good at the thing C does best: getting down to the bare metal and working at mach speed.

    If you’re a developer you already know where I’m going, of course: to tout the virtues of Rust, which is, in fact, a viable C/C++ replacement. Two years ago I suggested that people start writing new low-level coding projects in Rust instead of C. The first rule of holes, after all, is to stop digging.

    Today I am seriously suggesting that when engineers refactor existing C code, especially parsers and other input handlers, they replace it — slowly, bit by bit — with Rust.
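
    For readers who have not bumped into the class of defect being described, the sketch below shows the classic C pitfall and its usual C-level mitigation; in a memory-safe language such as Rust, the equivalent out-of-bounds write is either impossible to express in safe code or caught at run time. The example is mine, not from the article.

        #include <stdio.h>
        #include <string.h>

        void risky(const char *input)
        {
            char buf[16];
            strcpy(buf, input);     /* undefined behaviour if input is >= 16 bytes */
            printf("%s\n", buf);
        }

        void safer(const char *input)
        {
            char buf[16];
            snprintf(buf, sizeof(buf), "%s", input);  /* truncates, never overflows */
            printf("%s\n", buf);
        }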

    Reply
  34. Tomi Engdahl says:

    Learned helplessness and the languages of DAO
    https://techcrunch.com/2016/10/01/learned-helplessness-and-the-languages-of-dao/

    Everything is terrible. Most software, even critical system software, is insecure Swiss cheese held together with duct tape, bubble wrap, and bobby pins. See eg this week’s darkly funny post “How to Crash Systemd in One Tweet.” But it’s not just systemd, not just Linux, not just software; the whole industry is at fault. We have taught ourselves, wrongly, that there is no alternative.

    Everything is terrible because the fundamental tools we use are, still, so flawed that when used they inevitably craft terrible things. This applies to software ranging from low-level components like systemd, to the cameras and other IoT devices recently press-ganged into massive DDoS attacks —

    — to high-level science-fictional abstractions like the $150 million Ethereum DAO catastrophe. Almost all software has been bug-ridden and insecure for so long that we have grown to think that this is the natural state of code. This learned helplessness is not correct. Everything does not have to be terrible.

    In principle, code can be proved correct with formal verification. This is a very difficult, time-consuming, and not-always-realistic thing to do; but when you’re talking about critical software, built for the long term, that conducts the operation of many millions of machines, or the investment of many millions of dollars, you should probably at least consider it.

    Less painful and rigorous, and hence more promising, is the langsec initiative:

    The Language-theoretic approach (LANGSEC) regards the Internet insecurity epidemic as a consequence of ad hoc programming of input handling at all layers of network stacks, and in other kinds of software stacks. LANGSEC posits that the only path to trustworthy software that takes untrusted inputs is treating all valid or expected inputs as a formal language, and the respective input-handling routines as a recognizer for that language.

    …which is moving steadily into the real world, and none too soon, via vectors such as the French security company Prevoty.
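
    A small, hedged example of what a langsec-style recognizer looks like in practice: the expected input is treated as a tiny formal language and accepted by an explicit recognizer, with everything else rejected up front. The message format ("SET <id> <value>\n" with bounded decimal fields) is invented here for illustration.

        #include <ctype.h>
        #include <stdbool.h>
        #include <string.h>

        /* Accepts a bounded decimal number and advances *p past it. */
        static bool parse_bounded_uint(const char **p, unsigned max, unsigned *out)
        {
            unsigned v = 0;
            const char *s = *p;
            if (!isdigit((unsigned char)*s))
                return false;
            while (isdigit((unsigned char)*s)) {
                v = v * 10u + (unsigned)(*s - '0');
                if (v > max)
                    return false;          /* reject out-of-range input early */
                s++;
            }
            *p = s;
            *out = v;
            return true;
        }

        /* Accepts exactly "SET <0-255> <0-65535>\n"; everything else is rejected. */
        bool recognize_set_command(const char *msg, unsigned *id, unsigned *value)
        {
            if (strncmp(msg, "SET ", 4) != 0)
                return false;
            msg += 4;
            if (!parse_bounded_uint(&msg, 255u, id) || *msg++ != ' ')
                return false;
            if (!parse_bounded_uint(&msg, 65535u, value))
                return false;
            return (msg[0] == '\n' && msg[1] == '\0');
        }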

    As mentioned, programming languages themselves are a huge problem. Vast experience has shown us that it is unrealistic to expect programmers to write secure code in memory-unsafe languages. (Hence my “Death to C” post last year.)

    The best is the enemy of the good. We cannot move from our current state of disgrace to one of grace immediately. But, as an industry, let’s at least set a trajectory. Let’s move towards writing system code in better languages, first of all — this should improve security and speed. Let’s move towards formal specifications and verification of mission-critical code.

    And when we’re stuck with legacy code and legacy systems, which of course is still most of the time, let’s do our best to learn how to make it incrementally better, by focusing on the basic precepts and foundations of programming.

    I write this as large swathes of the industry are moving away from traditional programming and towards the various flavors of AI. How do we formally specify a convoluted neural network? How does langsec apply to the real-world data we feed to its inputs? How do these things apply to quantum computing?

    I, uh, don’t actually have answers to any of those last few questions. But let’s at least start asking them!

    Reply
  35. Tomi Engdahl says:

    7 Things EEs Should Know About Artificial Intelligence
    http://www.electronicdesign.com/industrial/7-things-ees-should-know-about-artificial-intelligence?code=UM_NN7TT3&utm_rid=CPG05000002750211&utm_campaign=12213&utm_medium=email&elq2=49c083757ae4409d9d7ebabf70a084d5

    What exactly are the basic concepts that comprise AI? This “Artificial Intelligence 101” treatise gives a quick tour of the sometimes controversial technology.

    Just what the devil is artificial intelligence (AI) anyway? You’ve probably been hearing a lot about it in the context of self-driving vehicles and voice-recognition devices like Apple’s Siri, Amazon’s Alexa, Google’s Assistant, and Microsoft’s Cortana. But there’s more to it than that.

    AI has been around since the 1950s, when the term was first coined. It has had its ups and downs over the years, and today it is considered a key technology going forward. Thanks to new software and ever faster processors, AI is finding more applications than ever. AI is an unusual software technology that all EEs should be familiar with. Here is a brief introductory tutorial for the uninitiated.

    Reply
  36. Tomi Engdahl says:

    7 Tips for Securing an Embedded System
    https://www.designnews.com/content/7-tips-securing-embedded-system/86771223257220?cid=nl.x.dn14.edt.aud.dn.20170802.tst004t

    With more and more systems starting to connect to the Internet, there are more than a dozen best practices developers should follow to start securing their systems.

    Start Using ARM TrustZone
    - ARM TrustZone will be available on new microcontrollers soon.

    Follow Language and Industry Best Practices
    - Using MISRA C/C++ helps ensure best practices are followed by restricting code to a safer subset of the chosen language.
    - Following the CERT C best practices is also highly recommended.

    Digitally Sign and Encrypt Firmware Updates

    Validate the Application at Start-up

    Monitor Stack and Buffer for Overflow

    Lock Flash Space

    Hire a Security Expert
    - Systems have become so complicated that it is impossible for any single person to be an expert in everything; if we want to build robust, secure systems, developers need to leverage each other’s strengths.
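
    The “Monitor Stack and Buffer for Overflow” tip above can be made concrete with a classic technique: paint the unused stack with a known pattern at start-up, then periodically check how much of the pattern is still intact. The sketch below assumes a descending stack and linker-provided symbols _stack_bottom and _stack_top; those names and the safety margin are assumptions to adapt to your toolchain.

    #include <stdint.h>

    #define STACK_FILL_PATTERN  0xDEADBEEFu

    extern uint32_t _stack_bottom;   /* assumed linker symbol: lowest stack address  */
    extern uint32_t _stack_top;      /* assumed linker symbol: highest stack address */

    /* Call once, early in main(), before the stack has grown far. */
    void stack_monitor_init(void)
    {
        uint32_t marker;                        /* lives in the current stack frame */
        volatile uint32_t *p = &_stack_bottom;

        /* Paint the unused part of the (descending) stack, stopping a
         * safe margin below the current frame. */
        while (p < (&marker - 16)) {
            *p++ = STACK_FILL_PATTERN;
        }
    }

    /* Returns how many bytes of stack have never been touched.
     * If this approaches zero, an overflow is imminent. */
    uint32_t stack_monitor_unused_bytes(void)
    {
        const volatile uint32_t *p = &_stack_bottom;
        uint32_t untouched = 0;

        while (p < &_stack_top && *p == STACK_FILL_PATTERN) {
            untouched += sizeof(uint32_t);
            p++;
        }
        return untouched;
    }

    A periodic task, or the idle loop, can log the returned value and raise an alarm when it drops below a threshold.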

    Conclusion

    While most developers, managers, and companies want to ignore security, it is perhaps one of the greatest challenges embedded system developers will face. Let’s be honest: no one wants to pay for security, or believes that they need to worry about it.

    Reply
  37. Tomi Engdahl says:

    5 Tips for Setting Realistic Project Expectations
    https://www.designnews.com/content/5-tips-setting-realistic-project-expectations/97311123157218?cid=nl.x.dn14.edt.aud.dn.20170804.tst004t

    Here are five tips developers can use to help ensure they set expectations that are realistic and not fantasy.

    1. Track Project Metrics

    My all-time favorite recommendation is that developers define critical development metrics such as features, estimated effort, actual effort, and lines of code, just to name a few. Developers and teams can’t set realistic expectations for delivery times and costs if they don’t have any data to help them create their estimates.

    2. Don’t Sugar Coat it

    Sometimes a project is for a new client and the team is tempted to give the customer the information and data they want to hear rather than an honest picture of how it will really go. This may make the customer, or even your project manager, happy in the short term, but it will only end in disaster. Don’t sugar coat estimates. Instead, give the hard facts and provide alternatives.

    3. Consider Parallel Projects

    There are times when development teams will properly determine the time and effort required to develop and deliver a project, only to then be asked to run several such projects in parallel. When 80 hours of work needs to be crammed into a 40-hour work week, the projects are not going to be done on time.

    4. Use a Project Management System

    Project expectations are rarely set in stone. They often shift throughout the project and even sometimes based on the mood of the developers and clients involved. One way to make sure all stakeholders stay on the same page and have the same expectations is to use a project management system that can track the project.

    5. Keep in Close Contact

    The best way to make sure everything goes smoothly is to communicate often.

    Conclusion

    People naturally want to be optimistic: “That driver doesn’t seem like it should be too bad so I can have it done in a day.” There are always unknowns that lurk in embedded system projects that are nearly impossible to anticipate other than the fact that they will occur. Using metrics can help back up estimates and set expectations toward a realistic course, but data alone is not enough.

    Reply
  38. Tomi Engdahl says:

    German-based Segger, which makes embedded hardware solutions and development tools, has introduced an IP-over-USB technology. It allows you to control any device from a Windows, Linux, or macOS machine using a browser.

    The browser connects to a web server integrated into the device. It can visualize status information in real time and, of course, configure the device’s USB connection.

    The solution does not require any driver installation; it is a genuine plug-and-play solution. The USB device only needs to be attached to the machine, and you enter http://usb.local in the address field of the browser.

    In addition to the browser connection, the IP-over-USB technology can also be used with other IP-based services such as FTP or Telnet, or with customer-specific UDP or TCP protocols. The same machine can also manage multiple devices, which are distinguished by serial number.

    As such, configuring devices with a browser is nothing new. For example, Wi-Fi routers have their own default IP address through which they are accessed.

    What is new in Segger’s technology is that it extends this approach to any device connected over USB.

    Source: http://www.etn.fi/index.php/13-news/6622-nyt-voit-ohjata-kaikkia-usb-laitteita-selaimella

    More:
    IP over USB
    https://www.segger.com/products/connectivity/emusb-device/add-ons/ip-over-usb/

    We believe this to be a must-have for any state-of-the-art USB device:

    Using the IP-over-USB technology in combination with a built in web server, the device can easily be accessed from any host (Windows, Linux, Mac) by simply typing the device name into the web browser. The default device name is usb.local. A serial number can be added, even multiple device names (with or without serial number) can be assigned. The end user can access his device more easily than ever before. No setup program, no driver, no special knowledge required. It simply works! This technology is now readily available for any USB device and adds a lot of value by making it user friendly. No need for keys or a display on the unit. Any PC can be used for this purpose.

    Reply
  39. Tomi Engdahl says:

    Know the Load with this Simple Microcontroller CPU Meter
    http://hackaday.com/2017/08/05/know-the-load-with-this-simple-microcontroller-cpu-meter/

    How do you tell how much load is on a CPU? On a desktop or laptop, the OS usually has some kind of gadget to display the basics. On a microcontroller, though, you’ll have to roll your own CPU load meter with a few parts, some code, and a voltmeter.

    We like [Dave Marples]’s simple approach to quantifying something as complex as CPU load. His technique relies on the fact that most embedded controllers are just looping endlessly waiting for something to do. By strategically placing commands that latch an output on while the CPU is busy and then turn it off again when idle, a PWM signal with a duty cycle proportional to the CPU load is created. A voltage divider then scales the maximum output to 1.0 volt, and a capacitor smooths out the signal so the load is represented by a value between 0 and 1 volt.

    Ghetto CPU busy-meter
    Even in the embedded world, you really want to know how busy your CPU is.
    http://shadetail.com/blog/ghetto-cpu-busy-meter/

    In most embedded design patterns, you’ve got some sort of loop where the application sits, waiting for something to happen. That might be a timer expiring, an interrupt arriving or some other event.

    The easiest way to do this is just by setting or clearing the state of a spare pin on your device. That also means you’ve got to set up the pin to be an output first of all… so the code ends up looking something like the sketch below.
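
    (The post’s own code isn’t reproduced in this excerpt; the following is a minimal sketch of the idea, where the GPIO macros and the wait_for_event()/handle_event() helpers are placeholders for your own hardware abstraction.)

    #include <stdint.h>

    #define BUSY_PIN_INIT_OUTPUT()  /* configure the spare pin as an output */
    #define BUSY_PIN_HIGH()         /* drive the pin high: CPU busy         */
    #define BUSY_PIN_LOW()          /* drive the pin low: CPU idle          */

    typedef uint32_t event_t;

    extern event_t wait_for_event(void);   /* blocks (or sleeps) until work arrives */
    extern void    handle_event(event_t ev);

    int main(void)
    {
        BUSY_PIN_INIT_OUTPUT();

        for (;;) {
            event_t ev = wait_for_event();  /* pin is low while we are idle      */

            BUSY_PIN_HIGH();                /* pin is high while we do real work */
            handle_event(ev);
            BUSY_PIN_LOW();
        }
    }

    Low-pass filtering the pin, as the voltage divider and capacitor in the article do, turns the duty cycle directly into a 0 to 1 V load reading.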

    Great, we can now see, approximately, how busy our CPU is just by monitoring the pin.

    Reply
  40. Tomi Engdahl says:

    Take the Blue Pill and Go Forth
    http://hackaday.com/2017/08/04/take-the-blue-pill-and-go-forth/

    Forth has a long history of being a popular hacker language. It is simple to bootstrap. It is expressive. It can be a very powerful system. [jephthal] took the excellent Mecrisp Forth and put it on the very inexpensive STM32 “blue pill” board to create a development system that cost about $2. You can see the video below.

    If you have thirty minutes, you can see just how easy it is to duplicate his feat. The blue pill board has to be programmed once using an STM32 programmer. After that, you can use most standard Forth words and also use some that can manipulate the low-level microcontroller resources.

    Mecrisp Forth on STM32 Microcontroller (blue pill)
    https://www.youtube.com/watch?v=dvTI3KmcZ7I

    Reply
  41. Tomi Engdahl says:

    W1209 Data Logging Thermostat
    The W1209 thermostat is cheap, but it’s about time that it learns some new tricks!
    https://hackaday.io/project/26258-w1209-data-logging-thermostat

    W1209 thermostats are cheap, and so “easy” to use that there are about 30,000 YouTube videos explaining how to “program” one. This project is here to change that: with STM8EF, the behavior of the board can be scripted.

    This project turns a W1209 into a thermostat with a serial programming console, and a temperature logging feature. It provides easy to follow instructions, and a gentle introduction to Forth.

    The repository on GitHub now contains a very simple thermostat for a chicken incubator (apparently that’s a very popular application for the W1209).
    https://github.com/TG9541/W1209

    Reply
  42. Tomi Engdahl says:

    Designers leverage off-the-shelf components for embedded vision systems
    http://www.vision-systems.com/articles/print/volume-22/issue-7/features/designers-leverage-off-the-shelf-components-for-embedded-vision-systems.html?cmpid=enl_vsd_vsd_newsletter_2017-08-07

    Developments in FPGA software, custom-built processors, single-board computers and compact vision systems offer designers a variety of choices when building embedded vision systems.

    Cross-pollination

    In developing products for any type of embedded vision system, developers must be aware of the cross-pollination that exists between various hardware and software products that are available. Developers of mobile devices, for example, can take advantage of embedded vision processors that incorporate multiple processing elements to perform imaging tasks. Similarly, camera and frame grabber designers can leverage the power of an FPGA vendor’s intellectual property (IP) to perform functions such as camera standards interface conversion, Bayer interpolation and lens distortion correction.

    FPGA libraries

    Just as developers of cameras and frame grabbers can leverage the power of soft-core processors, they can also take advantage of FPGA IP libraries to perform dedicated image processing tasks. These include camera interfacing, image pre-processing functions such as Bayer interpolation, image compression, stereo vision, face detection and motion detection.

    Reply
  43. Tomi Engdahl says:

    C Programming Tips and Tricks
    Posted Aug 09, 2017 at 3:12 am
    https://www.eeweb.com/blog/max_maxfield/c-programming-tips-and-tricks

    Using a coding standard can save a huge amount of time and effort and greatly cut down on the frustration caused by tracking down niggly little problems.

    Of course, there’s the old saying that “Standards are great — everybody should have one!” The problem is that everybody does indeed tend to have one. If you work for a company, you are pretty much obliged to follow its in-house standard. Even if you work as a freelance contractor, if you are creating mission-critical or safety-critical code, you will be obliged to follow certain coding practices.

    In fact, I read the Barr Group’s standard some time ago, and I gleaned a lot of useful hints, tips, and tricks from it that I’ve incorporated into my own “Max Standard.”

    I always use uppercase alphanumeric characters and underscores for my constant names

    I typically use camel case for my variable names. This means that compound words or phrases are written such that each word in the middle of the phrase begins with a capital letter with no intervening spaces or punctuation

    I always prefix global variables with a ‘g’, one or more additional letters indicating the type, and an underscore; for example “gi_” (global integer) and “gb_” (global Boolean)

    If I have a function that’s going to do something like activate a self-destruct sequence, then I typically prefix it with the word “do” and I use camel case for the rest of the name

    When declaring a function, some people place the opening ‘{‘ immediately after the parentheses

    Like a lot of C programmers, I used to use two spaces for indentation
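
    Pulled together, those conventions might look something like the following hypothetical snippet (the identifiers are invented for illustration, and brace placement is whatever your chosen standard dictates):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_RETRY_COUNT  3            /* constants: uppercase with underscores */

    int32_t gi_errorCount = 0;            /* global integer: "gi_" prefix          */
    bool    gb_selfDestructArmed = false; /* global Boolean: "gb_" prefix          */

    /* Action functions: "do" prefix, camel case for the rest of the name. */
    void doActivateSelfDestruct(void)
    {
        int32_t countdownSeconds = 10;    /* locals and compound names in camel case */

        if (gb_selfDestructArmed && (gi_errorCount < MAX_RETRY_COUNT))
        {
            while (countdownSeconds > 0)
            {
                countdownSeconds--;
            }
        }
    }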

    Embedded C Coding Standard
    https://barrgroup.com/Embedded-Systems/Books/Embedded-C-Coding-Standard

    Barr Group’s Embedded C Coding Standard was developed to minimize bugs in firmware by focusing on practical rules that keep bugs out–while also improving the maintainability and portability of embedded software. The coding standard details a set of guiding principles as well as specific naming conventions and other rules for the use of data types, functions, preprocessor macros, variables and much more. Individual rules that have been demonstrated to reduce or eliminate certain types of bugs are highlighted.

    The Embedded C Coding Standard is available in several formats:

    Online (Free): Reference the hyperlinked version available at the bottom of this page.
    PDF (Free): Download a free PDF through our online store.
    Book: Purchase a paperback book through our online store.
    DOC License: Create a customized coding standard for your organization’s internal use

    Reply
  44. Tomi Engdahl says:

    Using CNNs To Speed Up Systems
    https://semiengineering.com/using-cnns-to-speed-up-systems/

    Just relying on faster processor clock speeds isn’t sufficient for vision processing, and the idea is spreading to other markets.

    Convolutional neural networks (CNNs) are becoming one of the key differentiators in system performance, reversing a decades-old trend that equated speed with processor clock frequencies, the number of transistors, and the instruction set architecture.

    Even with today’s smartphones and PCs, it’s difficult for users to differentiate between processors with 6, 8 or 16 cores. But as the amount of data being generated continues to increase—particularly with images, video and other sensory data—performance will be determined less by a central processor than by how quickly that data can be moved between various elements within a system, or even across multiple systems.

    “This notion of vision capabilities that deep learning has brought to the forefront of the industry fulfills a need that people wanted a long time ago, and it has opened the door for new capabilities that existing architectures are not designed for,” said Samer Hijazi, senior design engineering architect in the IP group at Cadence. “For the first time there is an application that existing CPUs and processors cannot handle within a reasonable power budget. In some cases they cannot handle it at all. This has opened the door for CNNs. It’s an exciting opportunity for chip designers. There is finally a new demand on the consumer side for enhanced processing capability that existing architectures cannot offer, and this has triggered a flurry of new startups.”

    The most important power impact from the CNN is for the ADAS application

    System-level performance
    While universities have been studying deep learning algorithms for some time, only recently has it been viewed as a commercial necessity. That has resulted in huge advances over the past couple of years, both on the hardware and on the software side.

    “Universities are not in the business of deploying products, so power does not matter as much in this context,” said Hijazi.

    But power matters a great deal in commercial applications such as autonomous vehicles.

    “Power is a critical factor for CNNs because the machines that drive them are every place where power is a big consideration,”

    General-purpose CPUs are not the processor of choice for multiply-accumulate operations.

    “CPUs have been focused on faster clock speeds and supporting wider bit resolution, going from 32 bits to 64 bits, and being able to support more instructions, branching and dedicated operations,” Hijazi said. “It’s about doing logic faster. This particular area starts with algorithms in need of a wide array of multipliers and accumulate.”

    The fundamentals
    At their most basic, CNNs are a combination of multiply-accumulate operations and memory handling.

    “You have to be able to get the data in and out quickly so that you don’t starve your multiply accumulators, and of course your multiply accumulators have to be efficient and small,”
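
    As a rough illustration of why multiply-accumulates dominate, here is a generic 2-D convolution inner loop in plain C (a sketch written for this post, not code from the article): for a K x K kernel, every output pixel costs K*K multiply-accumulates plus a window of memory reads, which is why MAC-array efficiency and data movement decide CNN performance.

    #define K 3   /* kernel size; illustrative only */

    void conv2d(const float *in, const float *kernel, float *out,
                int width, int height)
    {
        for (int y = 0; y <= height - K; y++) {
            for (int x = 0; x <= width - K; x++) {
                float acc = 0.0f;                      /* accumulator */
                for (int ky = 0; ky < K; ky++) {
                    for (int kx = 0; kx < K; kx++) {
                        /* one multiply-accumulate per kernel tap */
                        acc += in[(y + ky) * width + (x + kx)] * kernel[ky * K + kx];
                    }
                }
                out[y * (width - K + 1) + x] = acc;    /* output is (width-K+1) wide */
            }
        }
    }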

    “If power is no issue, you get a GPU and have it liquid cooled,” said Cooper. “But if you have power issues (and power and area are tied hand-in-hand in a lot of cases, even though there are some differences), you want small area and small power, so you’ll want to optimize a CNN as much as possible. This is where the tradeoffs start to happen. If you have a CNN, you can actually just hardwire it to make it hardware. It won’t be programmable, and it will be as small as possible, like an ASIC design. The problem with that is it’s a moving target. There’s always new research and new CNNs coming out, so they’re going to have to be programmable, but as efficient as possible so that the result is as small as possible.”

    Conclusion
    CNNs are just one of many possible approaches being suggested for moving large quantities of data. As the amount of data continues to balloon, entirely new architectures are being suggested.

    “Moore’s Law does not work anymore for modern scaling,” said Steven Woo, distinguished inventor and vice president of solutions marketing at Rambus. “The growth of digital data is happening far faster than device scaling can handle. So if you need to analyze or search through that data, that’s a different need than what existing architectures are built to do.”

    Reply
  45. Tomi Engdahl says:

    The future of Python: Concurrency devoured, Node.js next on menu
    Programming language keeps getting fatter amid awkward version 3 split
    https://www.theregister.co.uk/2017/08/16/python_future/

    The PyBay 2017 conference, held in San Francisco over the weekend, began with a keynote about concurrency.

    Though hardly a draw for a general interest audience, the topic – an examination of multithreaded and multiprocess programming techniques – turns out to be central to the future of Python.

    Since 2008, the Python community has tried to reconcile incompatibility between Python 2 and newly introduced Python 3.

    For years, adoption of Python 3 was slow and some even dared to suggest Python didn’t have a future.

    Python remains one of the most popular programming languages.

    It might even be argued that Python is resurgent, thanks to its utility for data science and machine learning projects. Now that almost all the popular Python packages have been ported to Python 3, doubts about the language have receded into the background.

    But there’s a counterpoint. JavaScript is also exceedingly popular, more so than Python by Redmonk’s measure. And it has some advantages when dealing with browsers and interfaces. In April, Stanford began testing a version of its introductory programming course taught in JavaScript instead of Java.

    Python and JavaScript are widely used in part because they’re easier to pick up than Java, C, C++, and C#. Both have active communities that write and maintain a large number of libraries. And neither has a strong corporate affiliation, the way Java has with Oracle, C# has with Microsoft, and Swift has with Apple.

    Concurrency is important, Hettinger said, because it’s how the real world operates. Things don’t always happen in a predictable sequence.

    Code that implements concurrency can handle multiple tasks that complete at different times. It’s essential for writing applications that scale

    Last December, Python version 3.6 arrived, bringing with it non-provisional support for the asyncio module introduced in Python 3.4. The module provides a mechanism for writing single-threaded concurrent code.

    “It’s insanely difficult to get large multi-threaded programs correct,” Hettinger explained. “For complex systems, async is much easier to get right than threads with locks.”

    The thing is, asynchronous code is Node.js’s reason for being. Node, a JavaScript runtime environment, was created to allow non-blocking, event-driven programming. Python is moving rather quickly into the same territory, and to do so, the size of the language – in terms of the standard library – has expanded considerably.

    “There’s twice as much Python as you know,” as Hettinger put it.

    Event-driven programming was available in Python long before Node existed, through the Twisted framework

    Event-driven programming relies on an event loop that runs continuously. Presented with asynchronous requests, it can process them without blocking other requests.

    “I think async is the future,” said Hettinger. “Threading is so hard to get right.”

    “Based on my own experience at PyCon, asyncio is really bringing the community together around event-driven concurrency as the main, blessed way to do concurrency at a language level in Python.”

    Over the next decade, Lefkowitz believes the Python community will need to improve packaging and deployment. “JavaScript has a better back-end story than Python has a front-end story right now,” he said.

    Reply
  46. Tomi Engdahl says:

    De-Risking Design: Reducing Snafu’s When Creating Products
    You can reduce design risk with sound up-front procedures that anticipate and solve potential problems.
    https://www.designnews.com/design-hardware-software/de-risking-design-reducing-snafu-s-when-creating-products/31600514357039

    With growing time-to-market pressures and increasingly complex systems within products, the design process has become risky. These risks show up during the process of bringing a product concept into reality. Whether it’s sole-source components that might cause supply chain issues or untested connectivity added at the end to meet competitive pressure, much can go wrong with design. You face the added risk once the product is out in the field and the market reacts to it.

    Throughout the product development and design journey, day-to-day risk decisions get made: Should we add a last-minute feature at launch? Should we use multiple sources for each component? “You have to look at design the way an investor looks at a portfolio, deciding where you want to be on the risk compendium,” Jeff Hebert, VP of engineering at product development firm, Synapse, told Design News .
    Many companies accept a wide range of risks along the way, pushing for shorter timelines and reduced costs. But, Murphy’s Law has a way of catching up. A last-minute feature could delay the launch or expose a bug. A single-source component could experience supply chain woes, threatening a holiday launch. “If you have all the time and money, you can be confident you will get the net results, but it will take a long time and many iterations,” said Hebert. “The question is how do you balance risk and additional cost? Hopefully you can do it in such a way there are no hard trade-offs.”

    Building De-Risk into the Design Process

    Avoiding the hard trade-offs and reducing the likelihood of problems due to untested technology or supply issues is a matter of implementing procedures that identify risk and mitigate as much of it as possible. Hebert calls it de-risking design. He describes it as a combination of up-front analysis and strategic testing. He noted that up-front analysis and test/validation can be done on different aspects of the product simultaneously, avoiding the time-consuming process of doing one consideration at a time. “It’s front loading — a stitch in time saves nine,” said Hebert. “You can do things in parallel. If you have two or three things that have never been tested, you can focus on them in isolation.”

    Gain the Knowledge of the Product’s Technology

    Do you want to add IoT connectivity to your product? Do you want to make sure that connectivity is secure? Then you need to become deeply familiar with the technologies you are adding.

    “It’s not just de-risking the system but also understanding it. It’s called knowledge-based product development,” said Hebert. “This involves learning as much as you can about the technology in the product. When the technology changes, you’ll understand the space you’re playing in, so you’ll know how the design needs to be changed.”

    Reply
  47. Tomi Engdahl says:

    Forget Troy. Try HelenOS
    http://hackaday.com/2017/08/20/forget-troy-try-helenos/

    Even though it seems like there are a lot of operating system choices, the number narrows if you start counting kernels instead of distributions. Sure, Windows is clearly an operating system family, and on the Unix-like side there is Linux and BSD. But many other operating systems–Ubuntu, Fedora, Raspbian–all derive from some stock operating system. There are some outliers, though, and one of those is HelenOS. The open source OS runs on many platforms, including PCs, Raspberry Pis, BeagleBones, and many others.

    http://www.helenos.org/

    HelenOS is a portable microkernel-based multiserver operating system designed and implemented from scratch. It decomposes key operating system functionality such as file systems, networking, device drivers and graphical user interface into a collection of fine-grained user space components that interact with each other via message passing. A failure or crash of one component does not directly harm others. HelenOS is therefore flexible, modular, extensible, fault tolerant and easy to understand.

    Reply
  48. Tomi Engdahl says:

    API Paves Road for Multicore SoCs
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1332165&

    A new API from the Multicore Association eases the job of programming increasingly heterogeneous embedded processors.

    Until roughly a decade ago, processors consisted of a single core. Performance increases were largely driven by frequency scaling. Since then, processor architectures have undergone significant changes to lower power consumption and optimize performance.

    To satisfy the demand for high performance even in small devices, hardware manufacturers increasingly provide specialized accelerators for compute-intensive tasks. Many chips for embedded systems not only have an integrated graphics processing unit beside the main processor, but also contain additional hardware such as digital signal processors or programmable logic devices.

    The trend towards heterogeneity is expected to continue. One recent study said heterogeneous systems provide an effective way of responding to the ever-increasing demand for computing power. A separate report published by the IEEE said heterogeneous architectures will remain one of the top challenges in computer science until 2022.

    Efficiently leveraging the performance of such processors is an intricate task due to diverse programming models, putting an additional burden on software developers. The Multicore Task Management API (MTAPI) specifies interfaces that abstract the underlying hardware and let developers focus on their applications, enabling flexibility and portability.
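
    MTAPI’s own calls aren’t shown in the article, so as a generic illustration only (this is not the MTAPI API), here is what “abstracting the underlying hardware” behind a small task interface can look like; a real runtime would dispatch work to a core or accelerator instead of a plain POSIX thread:

    #include <pthread.h>
    #include <stdio.h>

    typedef void (*task_fn)(void *arg);

    typedef struct {
        pthread_t thread;
        task_fn   fn;
        void     *arg;
    } task_t;

    static void *task_trampoline(void *p)
    {
        task_t *t = (task_t *)p;
        t->fn(t->arg);
        return NULL;
    }

    /* Start a task; a heterogeneous runtime would choose the execution
     * unit here, but this sketch always uses a host thread. */
    int task_start(task_t *t, task_fn fn, void *arg)
    {
        t->fn  = fn;
        t->arg = arg;
        return pthread_create(&t->thread, NULL, task_trampoline, t);
    }

    /* Wait for the task to finish. */
    int task_wait(task_t *t)
    {
        return pthread_join(t->thread, NULL);
    }

    static void say_hello(void *arg)
    {
        printf("hello from task %s\n", (const char *)arg);
    }

    int main(void)
    {
        task_t t;
        task_start(&t, say_hello, "A");
        return task_wait(&t);
    }

    Application code written against such an interface does not change when the backend does, which is the portability argument behind MTAPI and EMB².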

    MTAPI was created by companies in the embedded domain working under the umbrella of the Multicore Association, a non-profit standards group. Recently, the Multicore Association announced the availability of a significantly enhanced implementation of MTAPI integrated into an open source framework called Embedded Multicore Building Blocks (EMB²).

    EMB² can be used for image and signal processing for the Internet of Things, for example, as well as applications that have to analyze large amounts of data in real-time and apps that perform complex calculations for simulations or augmented reality. All these applications share a need for heterogeneous computing to provide optimal performance.

    The latest version of EMB² provides compliance with the MTAPI reference implementation plus C++ wrappers for convenient task management. It supports heterogeneous systems at all levels.

    The framework is available for download at GitHub under a BSD license
    https://github.com/siemens/embb

    Reply
  49. Tomi Engdahl says:

    Application Security – the Achilles heel in cyber defense?
    http://info.prqa.com/application-security-evaluation-lp?utm_campaign=Lead%20nurturing&utm_source=hs_automation&utm_medium=email&utm_content=53726316&_hsenc=p2ANqtz–rhL-NmO8ywBfKfEh8H980PftIcTgf2WBFR3Hh4MIQxv4LZBCWz4MX4YVnLvxPtXwI5K3nV50tP9vgcevuDemcxzqQZAjfq-5-atVtMgXC3hjs7_A&_hsmi=53726316

    It has become clear that secure software is not a choice any more – it is a mandatory part of the development process!

    Cyber attacks seem to be a daily occurrence, and because modern society has become dependent on software-based technology, security isn’t optional. Most security vulnerabilities are the result of coding errors that go undetected in the development stage, making secure software development imperative.

    Road to perfection: when is an application “secure enough”?

    The National Institute of Standards and Technology (NIST) reports that 64% of software vulnerabilities stem from programming errors rather than a lack of security features. Security is a real threat, which makes secure software development imperative.

    Reply
