Electronics trends for 2016

Here is my list of electronics industry trends and predictions for 2016:

There was a huge set of mega-mergers announced in the electronics industry in 2015. In 2016 we will see fewer mergers, and we will see how well the existing ones work out. Not all of the major acquisitions will succeed. Probably the biggest challenge in these mega-mergers is “creating merging cultures or–better yet–creating new ones”.

Makers and open hardware will boost innovation in 2016. Open source has worked well in the software community, and it is now coming to the hardware side. Maker culture encourages people to be creators of technology rather than just consumers of it. A combination of the maker movement and robotics is preparing children for a future in which innovation and creativity will be more important than ever: robotics is an effective way for children as young as four years old to get experience in the STEM fields of science, technology, engineering and mathematics, as well as programming and computer science. The maker movement is inspiring children to tinker-to-learn. Popular DIY electronics platforms include Arduino, Lego Mindstorms, Raspberry Pi, Phiro and LittleBits. Some of those DIY electronics platforms, like Arduino and Raspberry Pi, are finding their way into commercial products, for example in 3D printing, industrial automation and Internet of Things applications.

Open source processor cores gain more traction in 2016. RISC-V is on the march as an open source alternative to ARM and MIPS. Fifteen sponsors, including a handful of high-tech giants, are queuing up to be the first members of its new trade group. Currently RISC-V runs Linux and NetBSD, but not Android, Windows or any major embedded RTOSes; support for other operating systems is expected in 2016. For other open source processor designs, take a look at OpenCores.org, the world’s largest site/community for development of hardware IP cores as open source.

GaN will be more widely used and talked about in 2016. Gallium nitride (GaN) is a binary III/V direct-bandgap semiconductor commonly used in bright light-emitting diodes since the 1990s. It has special properties for applications in optoelectronic, high-power and high-frequency devices. You will see more GaN power electronics components because GaN – in comparison to the best silicon alternatives – enables higher power density through the ability to switch at high frequencies. You can get GaN devices for example from GaN Systems, Infineon, Macom, and Texas Instruments. The emergence of GaN as the next leap forward in power transistors gives new life to Moore’s Law in power.
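
Why does faster switching translate into higher power density? In hard switching, the loss per transition is roughly the voltage–current overlap energy, so shorter rise and fall times let you raise the switching frequency – and shrink the magnetics – without melting the part. Here is a minimal back-of-the-envelope sketch; the device numbers are illustrative assumptions, not any vendor’s datasheet values:

    # Rough hard-switching loss estimate: P_sw ~ 0.5 * V * I * (tr + tf) * fsw
    # All device numbers below are illustrative assumptions.
    V, I = 400.0, 10.0                   # bus voltage (V) and load current (A)
    fsw = 1.0e6                          # 1 MHz switching frequency
    edges = {"Si": 50e-9, "GaN": 10e-9}  # assumed rise+fall times (s)

    for name, edge in edges.items():
        p_sw = 0.5 * V * I * edge * fsw
        print(f"{name}: ~{p_sw:.0f} W switching loss at 1 MHz")
    # Si: ~100 W, GaN: ~20 W -- the GaN part can switch several times faster
    # for the same loss budget, which shrinks the inductors and capacitors.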

Power electronics is becoming more digital and connected in 2016. Software-defined power addresses a critical need in modern power systems. Digital power, using a microcontroller or a DSP, was the beginning of software-defined power; software-defined power takes this to another level. Connectivity is the key to its success, and the PMBus will enable efficient communication between all power devices in computer systems. It seems that power architectures will become software defined, which will take advantage of digital power adaptability and introduce software control to manage the power continuously as operating conditions change. For example, adaptive voltage scaling (AVS) is supported by the AVSBus contained in the newest PMBus standard V 1.3. The use of power-optimization software algorithms and the concept of the Software Defined Power Architecture (SDPA) are all being seen as part of a brave new future for advanced board-power management.
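
To make the PMBus idea concrete, here is a minimal sketch of polling a converter’s output voltage from a Linux host. READ_VOUT (0x8B) and VOUT_MODE (0x20) are standard PMBus commands; the bus number and device address are hypothetical, so treat this as an illustration rather than a drop-in driver:

    # Minimal PMBus READ_VOUT sketch (Python, smbus2); bus/address are assumed.
    from smbus2 import SMBus

    BUS = 1           # assumed SMBus/I2C bus number
    ADDR = 0x5A       # assumed PMBus device address
    VOUT_MODE = 0x20  # standard PMBus command: Linear16 exponent
    READ_VOUT = 0x8B  # standard PMBus command: output voltage

    with SMBus(BUS) as bus:
        exp = bus.read_byte_data(ADDR, VOUT_MODE) & 0x1F
        if exp > 15:               # 5-bit two's-complement exponent
            exp -= 32
        raw = bus.read_word_data(ADDR, READ_VOUT)  # SMBus words are LSB first
        print(f"VOUT = {raw * 2.0 ** exp:.3f} V")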

Nanowires and new forms of memory like RRAM (resistive random access memory) and spintronics are also being researched, and could help scale down chips. Many “exotic” memory technologies are in the lab, and some are even in shipping product: Ferroelectric RAM (FRAM), Resistive RAM (ReRAM), Magnetoresistive RAM (MRAM), Nano-RAM (NRAM).

Nanotube research has been ongoing since 1991, but it has been a long road to a practical nanotube transistor. It seems that we almost have the necessary pieces of the puzzle in 2016. In 2015 IBM reported a successful auto-alignment method for placing nanotubes across the source and drain. Texas Instruments is now capable of growing wafer-scale graphene, and the Chinese have taken the lead in developing both graphene and nanotubes, according to Lux Research.

While nanotubes provide the fastest channel material available today, III-V materials like gallium arsenide (GaAs) and indium gallium arsenide (InGaAs) are all being explored by IBM, Intel, Imec and Samsung as transistor channels on silicon substrates. Dozens of researchers worldwide are experimenting with black phosphorus as an alternative to nanotubes and graphene for the next generation of semiconductors. Black phosphorus has the advantage of having a bandgap and works well alongside silicon photonics devices. Molybdenum disulphide (MoS2) is also a contender for the next generation of semiconductors, due to its novel stacking properties.

Graphene has many fantastic properties, and there have been new findings around it. I think it would be a good idea to follow developments around magnetized graphene: researchers have made graphene magnetic, clearing the way for faster everything. I don’t expect practical products in 2016, but maybe something in the next few years.

Optical communications is finally integrating deep into chips. There are many new contenders on the horizon for the true “next generation” of optical communications, with promising technologies in development in labs and research departments around the world. Silicon photonics is the study and application of photonic systems which use silicon as an optical medium; silicon photonic devices can be made using existing semiconductor fabrication. We are starting to have the technology to build optoelectronic microprocessors with existing chip manufacturing: engineers have demonstrated the first processor that uses light for ultrafast communications. Optical communication could also reduce chips’ power consumption on inter-chip links and enable longer very fast links between ICs where needed. Two-dimensional (2D) transition metal dichalcogenides (TMDCs) may enable engineers to exceed the properties of silicon in terms of energy efficiency and speed, moving researchers toward 2D on-chip optoelectronics for high-performance applications in optical communications and computing. To build practical systems with those ICs, we need to figure out how to make fiber-to-chip coupling easy, or how to manufacture practical optical printed circuit boards (O-PCB).

Look at developments in self-directed assembly. Researchers from the National Institute of Standards and Technology (NIST) and IBM have discovered a trenching capability that could be harnessed for building devices through self-directed assembly. The capability could potentially be used to integrate lasers, sensors, waveguides and other optical components into so-called “lab-on-a-chip” devices.

Smaller chip geometries come to the mainstream in 2016. Chip advancements and cost savings slowed down with the current 14-nanometer process, which Intel uses to make its latest PC, server and mobile chips. Other manufacturers are catching up to 14 nm and beyond. GlobalFoundries starts producing a central processing chip as well as a graphics processing chip using 14nm technology. After a lapse, Intel looks to catch up with Moore’s Law again with its upcoming 10-nanometer and 7-nm processes. Samsung revealed that it will soon begin production of a 10nm FinFET node, and that the chip will be in full production by the end of 2016. This is expected to be at around the same time as rival TSMC, whose 10nm process will require triple patterning. For mass-market products it seems that the 10nm node is still at least a year away: Intel delayed plans for 10nm processors while TSMC is stepping on the gas, hoping to attract business from the likes of Apple. The first Intel 10-nm chips, code-named Cannonlake, will ship in 2017.

Looks like Moore’s Law has some life in it yet, though for IBM creating a 7nm chip required exotic techniques and materials. IBM Research showed in 2015 a 7nm chip that will hold 20 billion transistors, manufactured by perfecting EUV lithography and using silicon-germanium channels for its finned field-effect transistors (FinFETs). Intel also revealed that the end of the road for silicon is nearing, as alternative materials will be required for the 7nm node and beyond. Scaling silicon transistors down has become increasingly difficult and expensive, and at around 7nm it will prove to be downright impossible. IBM development partner Samsung is in a race to catch up with Intel by 2018, when the first 7nm products are expected. Expect silicon alternatives coming by 2020. One very promising short-term silicon alternative is III-V semiconductor based on two compounds: indium gallium arsenide (InGaAs) and indium phosphide (InP). Intel’s future mobile chips may have some components based on gallium nitride (GaN), which is also an exotic III-V material.
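
Node names are partly marketing these days, but the idealized geometric scaling they imply is easy to check; real density gains per node are smaller than this ideal:

    # Idealized area scaling between process nodes: density ~ 1 / (feature size)^2
    nodes = [14, 10, 7]  # nm, the nodes discussed above
    for a, b in zip(nodes, nodes[1:]):
        print(f"{a}nm -> {b}nm: ideal density gain ~{(a / b) ** 2:.1f}x")
    # 14nm -> 10nm: ~2.0x, 10nm -> 7nm: ~2.0x -- the classic doubling per node,
    # which is exactly why each step is so hard and expensive to reach.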

Silicon and traditional technologies continue to be pushed forward successfully in 2016. It seems that the extension of 193nm immersion lithography to 7nm and beyond is possible, yet it would require octuple patterning and other steps that would increase production costs. IBM Research earlier this year beat Intel to the 7nm node by perfecting EUV lithography and using silicon-germanium channels for its finned field-effect transistors (FinFETs). Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s largest foundry, said it has started work on a 5nm process to push ahead its most advanced technology. TSMC’s initial development work at 5nm may be yet another indication that EUV has been set back as an eventual replacement for immersion lithography.

It seems that 2016 could be the year of mass adoption for 3D ICs and 3D memory. For over a decade, the terms 3D IC and 3D memory have been used to refer to various technologies. 2016 could see some real advances and traction in the field, as some truly 3D products are already shipping and more are promised to come soon. The most popular 3D category is 3D NAND flash memory: Samsung, Toshiba, Sandisk, Intel and Micron have all announced or started shipping flash that uses a 3D silicon structure (we are currently seeing 128Gb-384Gb parts). Micron’s Hybrid Memory Cube (HMC) uses stacked DRAM die and through-silicon vias (TSVs) to create a high-bandwidth RAM subsystem with an abstracted interface (think DRAM with PCIe). Intel and Micron have announced production of a 3D crosspoint architecture high-endurance (1,000× NAND flash) nonvolatile memory.

The success of Apple’s portable computers, smartphones and tablets means the company is expected to buy as much as 25 per cent of the world production of mobile DRAM in 2016. In 2015 Apple bought 16.5 per cent of mobile DRAM.

After the COP21 climate change summit reached a deal in Paris, environmental compliance will become a stronger business driver in 2016. Increasingly, electronics OEMs are realizing that environmental compliance goes beyond being a good corporate citizen. On the agenda for these businesses: climate change, water safety, waste management, and environmental compliance. Keep in mind environmental compliance requirements that include the Waste Electrical and Electronic Equipment (WEEE) directive, the Restriction of Hazardous Substances Directive 2002/95/EC (RoHS 1), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH). It’s a legal situation: if you do not comply with the regulatory aspects of business, you are out of business. Some companies are leading the parade toward environmental compliance; others are learning as they go.

Connectivity is proliferating in everything from cars to homes, realigning diverse markets. It needs to work easily for the user, reliably, efficiently and securely. It is reported that communications technologies are responsible for about 2-4% of the total carbon footprint generated by human activity. The need for communications and faster speeds keeps increasing in this ever more connected world: with the penetration of smart devices, there was a tremendous increase in the amount of mobile data traffic from 2010 to 2014. Wi-Fi has become so ubiquitous in homes in so many parts of the world that you can now really start tapping into it with additional devices. With IoT forecast to reach 50 billion connections by 2020, current technologies would increase power consumption considerably. The coming explosion of the Internet of Things (IoT) will also need more efficient data centers, which will be taxed to their limits.

The Internet of Things (IoT) is enabling increased automation on the factory floor and throughout the supply chain, 3D printing is changing how we think about making components, and the cloud and big data are enabling new applications that provide an end-to-end view from the factory floor to the retail store. With all of these technological options converging, it will be hard for CIOs, IT executives, and manufacturing leaders to keep up. IoT will also be hard for R&D. Internet of Things designs mesh together several design domains in order to successfully develop a product. Individually, these design domains are challenging; bringing them all together to create an IoT product can place extreme pressure on design teams. It’s still pretty darn tedious to get all these things connected, and there are all these standards battles coming. The rise of the Internet of Things and Web services is driving new design principles, as Web services from companies such as Amazon, Facebook and Uber set new standards for user experiences. Designers should think about building their products so they can learn more about their users and be flexible in creating new ways to satisfy them – but in such a way that users don’t feel they are being spied on.

Subthreshold transistors and MCUs will be hot in 2016, because the Internet of Things will be hot in 2016 and it needs very low power chips. The technology is not new – cheap digital watches use FETs operating in the subthreshold region – but for decades digital designers have ignored this operating region, because FETs are hard to characterize there. Now subthreshold has invaded the embedded space thanks to Ambiq’s new Apollo MCU. PsiKick Inc. has designed a proof-of-concept wireless sensor node system-chip using conventional EDA tools and a 130nm mixed-signal CMOS process that operates at sub-threshold voltages, opening up the prospect of self-powered Internet of Things (IoT) systems. I expect other sub-threshold designs to emerge as well. ARM Holdings plc (Cambridge, England) is also working on sub- and near-threshold operation of ICs. TSMC has developed a series of processes characterized down to near-threshold voltages (the ULP family of ultra-low-power processes). Intel will focus on its IoT strategy and next-generation low-voltage mobile processors.
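
Why is the subthreshold region hard to characterize? Below threshold the drain current depends exponentially on the gate voltage, so small process or temperature variations swing the current by orders of magnitude. Here is a minimal sketch of the textbook model; I0, n and Vth are illustrative assumptions, not any real process:

    import math

    # Textbook subthreshold model: Id ~ I0 * exp((Vgs - Vth) / (n * VT))
    I0 = 1e-6    # extrapolated current at Vgs = Vth (A), assumed
    n = 1.5      # subthreshold slope factor, assumed
    Vth = 0.4    # threshold voltage (V), assumed
    VT = 0.026   # thermal voltage kT/q at room temperature (V)

    for vgs in (0.4, 0.3, 0.2, 0.1):
        i_d = I0 * math.exp((vgs - Vth) / (n * VT))
        print(f"Vgs = {vgs:.1f} V -> Id ~ {i_d:.2e} A")
    # Every ~90 mV drop in Vgs cuts the current about 10x, which is why a few
    # tens of millivolts of threshold variation matter so much down here.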

FPGAs in various forms are coming into wider use in 2016 in many applications. They are no longer limited to high-end aerospace, defense, and high-end industrial applications. There are different ways people use FPGAs. The barrier to entry for FPGA development has lowered so that even home makers can easily use FPGAs, with cheap FPGA development boards, free tools and open IP cores. There was already lots of interest in 2015 in using FPGAs for accelerating computations as the next step after GPUs. Intel bought Altera in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly available in early 2016. Examples of applications well suited to ARM-based FPGAs include industrial robots, pumps for medical devices, electric motor controllers, imaging systems, and machine vision systems. Examples of ARM-based FPGAs are Xilinx’s Zynq-7000 and Altera’s Cyclone V, which intertwine an ARM core with FPGA fabric. Some Internet of Things (IoT) applications could start to test ARM-based field programmable gate array (FPGA) technology, enabling the hardware to be adaptable to market and consumer demands – software updates on such systems become hardware updates. Other potential benefits would be design re-use, code portability, and security.

The trend towards module consolidation is applicable in many industries as the complexity of communication, data rates, data exchanges and networks increases. Consolidating ECUs in vehicles has already been a big trend for several years, but the concept is applicable to many markets including medical, industrial and aerospace.

It seems that AXIe nears the tipping point in 2016. AXIe is a modular instrument standard similar to PXI in many respects, but utilizing a larger board format that allows higher-power instruments and greater rack density. It relies chiefly on the same PCI Express fabric for data communication as PXI. AXIe-1 is the über-high-end modular standard, and there is also a compatible AXIe-0 that aims at being a low-cost alternative. The popular measurement standards AXIe, IVI, LXI, PXI, and VXI have two things in common: they each serve the test and measurement industry, and each of them is ruled by a private consortium. Why is this? Right or wrong, it comes down to speed of execution.

These days, a hardware emulator is a stylish, sleek box with fewer cables to manage. The “Big Three” EDA vendors offer hardware emulators in their product portfolios, each with a distinct architecture to give development teams more options. In some offerings, emulation has become a datacenter resource through a transaction-based emulation or acceleration mode.

LED lighting is expected to become more intelligent, more beautiful and more affordable in 2016. Everyone agrees that the market for LED lighting will continue to enjoy dramatic year-on-year growth for at least the next few years. The LED lighting market is forecast to reach US$30.5 billion in 2016, and professional lighting markets are expected to see explosive growth. Some companies will win on this growth, but there will also be losers. Due to currency fluctuations and a price slide in 2015, end-market demand in different countries has been much lower than expected, so smaller LED companies are facing financial loss pressures. Look at the history of the solar industry to get a good sense of some of the challenges the LED industry will face; a next bankruptcy wave in the LED industry is possible. The LED incandescent replacement bulb market represents only a portion of a much larger market but, in many ways, it is the cutting edge of the industry, currently dealing with many of the challenges other market segments will have to face a few years from now. IoT features are coming to LED lighting, but it seems that one can only hope for interoperability.

Other electronics trends articles to look at:

Hot technologies: Looking ahead to 2016 (EDN)

CES Unveiled NY: What consumer electronics will 2016 bring?

Analysts Predict CES 2016 Trends

LEDinside: Top 10 LED Market Trends in 2016

961 Comments

  1. Tomi Engdahl says:

    Super Thin ICs are Coming
    http://hackaday.com/2016/02/02/super-thin-ics-are-coming/

    An ordinary integrated circuit is made of layers of material. Typically a layer is made from some material (like silicon dioxide, polysilicon, copper, or aluminum). Sometimes a process will modify parts of a layer (for example, using ion implantation to dope regions of silicon). Other times, some part of the layer will be cut away using a photolithography process.

    Researchers at MIT have a new technique that allows super thin layers (1-3 atoms thick) and–even more importantly–enables you to use two materials in the same layer. They report that they have built all the basic components required to create a computer using the technique.

    The prototype chips use two materials: molybdenum disulfide and graphene. The method apparently works with materials that combine elements from group six of the periodic table, such as chromium, molybdenum, and tungsten, and elements from group 16, such as sulfur, selenium, and tellurium.

    New chip fabrication approach
    Depositing different materials within a single chip layer could lead to more efficient computers.
    http://news.mit.edu/2016/new-chip-fabrication-approach-0127

    Reply
  2. Tomi Engdahl says:

    Power from Paper
    http://hackaday.com/2016/02/02/power-from-paper/

    Comedian Steven Wright used to say (in his monotone way):

    “We lived in a house that ran on static electricity. If we wanted to cook something, we had to take a sweater off real quick. If we wanted to run a blender, we had to rub balloons on our head.”

    Turns out, all you need to generate a little electricity is some paper, Teflon tape and a pencil. A team from EPFL, working with researchers at the University of Tokyo, presented just such a device at a MEMS conference. (And check out their video, below the break.)

    What Steven Wright was doing uses the triboelectric effect — a fancy name for rubbing two insulators together to electrically charge them. In the EPFL device, the paper and the Teflon are insulators. The pencil graphite acts as a conductor to carry the charge away. The interesting part is this: by using sandpaper imprinting, the researchers produced a rough surface on both the tape and paper, increasing the charge-producing area. Empirically, the output went up over six-fold.

    Pushing the paper and Teflon sandwich as seldom as 1.5 times a second produced enough power to drive tiny sensors. The device is about three inches by one inch and can generate up to three volts.

    Producing electrical power with cardboard, tape, and a pencil
    http://actu.epfl.ch/news/producing-electrical-power-with-cardboard-tape-and/

    Reply
  3. Tomi Engdahl says:

    A guideline to success in the challenging MIL/Aero Power arena
    http://www.edn.com/design/power-management/4441300/A-guideline-to-success-in-the-challenging-MIL-Aero-Power-arena?_mc=NL_EDN_EDT_EDN_weekly_20160204&cid=NL_EDN_EDT_EDN_weekly_20160204&elq=876791d5fc54490cadcb7787f8ed19ca&elqCampaignId=26845&elqaid=30694&elqat=1&elqTrackId=d488bef460574c11997138a8c1e01041

    In this article we will discuss the military/aerospace grade power supply and what the key differences are as compared to commercial off-the-shelf or COTS designs. We will also discuss some design techniques used to help ensure high reliability which is so critical in these types of designs.

    Reliability

    Military supplies are typically used in aircraft, sea-going vessels and transportation equipment. These obviously need to have a more robust construction for the extreme environments in which they will operate than a typical commercial power supply. Power supplies need a far better mean time between failure (MTBF) rating in military usage because reliable operation without failure can save lives.

    The defense market, in many cases, may build systems using COTS power supplies, but there may be risks involved. Things like obsolescence, process changes, environmental vulnerability, faulty electrical performance or EMI issues need to be addressed to ensure reliable performance in combat or critical situations. There is a balance that must be struck in the design between the high cost of a full military class power supply and the manpower spent on designing in modifications in a COTS supply

    Obsolescence

    Obsolescence is out of the question on programs that are mission critical and need to be in field operation for 20-30 years. Ultimately, form, fit, and function replacements for field supplies need to ensure no, or acceptably minimal, differences in documentation and specifications from the original power supply.

    Redundancy

    Redundancy in design ensures that no single failure will disable the supply or hinder its proper operation. Over and above redundancy the system must not be compromised via any effect on other equipment with fire, smoke, noise or any other hindrance to other equipment.

    Design Margins

    The use of commercial components may be considered as long as a comfortable and generous design margin is chosen between the component worst case operating point and its rated data sheet specifications. It is a fact that as temperature goes up, reliability will begin to degrade.

    FET ORing2

    Redundancy can be achieved in fault-tolerant power supplies by using the diode ORing technique on the outputs of multiple power supplies. Since diodes can be inefficient due to their large forward voltage drop, especially in low voltage designs, more efficient FETs are typically used. Since a FET conducts in both directions, unlike a uni-directional diode, an added controller must be used to turn off the reverse path.
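
    A quick worked comparison, with illustrative numbers rather than figures from the article, shows why the FET wins at low output voltages:

        # Conduction loss in an ORing element: diode vs. FET (assumed numbers)
        I = 20.0        # load current (A)
        Vf = 0.6        # assumed diode forward drop (V)
        Rds_on = 0.005  # assumed ORing FET on-resistance (ohm)

        p_diode = Vf * I       # diode loss grows linearly with current
        p_fet = I**2 * Rds_on  # FET loss grows as I^2, but R is tiny
        print(f"diode ORing: {p_diode:.1f} W, FET ORing: {p_fet:.1f} W")
        # diode: 12.0 W, FET: 2.0 W -- and on a 3.3 V rail the diode would
        # also throw away roughly 18% of the output voltage headroom.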

    More Electric Aircraft Power Systems (MEAPS)3

    In the last ten years the aircraft industry, military and commercial, has been making a concerted effort to move away from traditional power presently used to a more electric approach in non-propulsive power systems as a first order of business. This secondary power has been in the form of hydraulic, pneumatic, electrical and mechanical power extracted from the engines of the aircraft.

    Energy efficiency is a big driver of this effort

    The industry is looking into 270VDC systems as a high voltage DC power distribution main. This will mean lower currents at higher voltages, but lower currents give way to less copper and thus lower weight on the aircraft. This translates into better fuel economy.

    A 3 Phase, 4-Leg Inversion Power Supply for MEA

    Powering many legacy 115VAC/400Hz loads on an aircraft is facilitated by an inversion power supply. The most recent effort in this area has been with Space Vector Pulse Width Modulation (SVPWM) controlled 3-Phase, 3-Leg inverter architectures. Unfortunately, this architecture cannot operate in an unbalanced load condition, especially in the event of a short-circuit. This will not be acceptable in military or commercial power in an aircraft.

    Designers have tried sinusoidal pulse width modulation (SPWM) controlled 3-Phase, 4-Leg inverter architectures, but their DC voltage utilization is too low to meet the higher voltage bus levels of 270VDC

    Next-gen MIL vehicle power6

    Next generation MIL vehicles will use higher voltage energy storage plants such as Nickel-Metal Hydride or Lithium-Ion batteries, or supercapacitors. These will typically operate at around 300VDC to optimize hybrid motor operation. These higher voltage power units can provide significant amounts of power with drastically reduced audible and thermal signatures, and the availability of this higher voltage presents some significant advantages to the export power conversion system. Distribution currents are reduced by 90% (for example from 500A to 50A) from the traditional 28V case
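
    The arithmetic behind those numbers is simple: for a fixed load power, current scales inversely with bus voltage, and resistive cable loss with the square of the current. A rough sketch with an assumed cable resistance:

        # Distribution current and cable loss at 28 V vs. 270 V, same load power.
        P = 14000.0      # example load (W); 14 kW at 28 V gives the ~500 A case
        R_cable = 0.002  # assumed end-to-end cable resistance (ohm)

        for v in (28.0, 270.0):
            i = P / v
            print(f"{v:>5.0f} V bus: {i:.0f} A, {i**2 * R_cable:.0f} W lost in cable")
        # 28 V: 500 A and ~500 W lost; 270 V: ~52 A and ~5 W -- about 90% less
        # current and ~100x less I^2*R loss, hence far lighter copper.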

    Most of the time export power converters will be located outside the crew compartment and will be exposed to a very harsh and wet environment. Operating temperatures typically range from –46 to +54 °C. Mechanical vibration and shock requirements are characterized in MIL-STD-810F, Methods 514.5 and 516.5.

    Power conversion densities in a vehicle can range from 3.4W/in3 to as high as 10.3W/in3 . Most cooling requirements will necessitate either forced air cooling or liquid cooling

    An indirect system which has no direct air blowing onto internal components is known as a “wind tunnel” design. This system uses fans to cool heat sinks and allows the electronic components to be protected from the environment by locating them on the other side of the aluminum heat exchanger. A drawback with this technique is a larger package

    In the case where a fully environmentally sealed package is needed, a liquid cooling architecture is often used. In this type of a system the heat exchanger may be located in the front of the vehicle. The normal cooling liquid employed is either water or a mixture of water and ethylene glycol (a.k.a., antifreeze).

    The cold plate approach has its drawbacks, though. This architecture necessitates a flat shape for heat transfer, which spreads the layout over a larger area and can make electromagnetic noise difficult to deal with.

    Summary

    The Military/Aerospace sector is not all that different from any other design space using COTS products. There are more stringent temperature, shock, humidity and vibration expectations due to the harsh environments these designs might encounter, as well as the increased reliability that is expected and needed.

    Reply
  4. Tomi Engdahl says:

    Home> Community > Blogs > Anablog
    Isolated potentiometers
    http://www.edn.com/electronics-blogs/anablog/4441316/Isolated-potentiometers?_mc=NL_EDN_EDT_EDN_analog_20160204&cid=NL_EDN_EDT_EDN_analog_20160204&elq=44f90067c62e4daf8b5ad0077f30539f&elqCampaignId=26833&elqaid=30682&elqat=1&elqTrackId=15320d1101fb4839a9ce59ae4fe8ac5b

    Digitally-actuated potentiometers, or digipots, are commercially available from multiple suppliers but not analog-actuated potentiometers, or anapots. Dennis Feucht discusses the isolated anapot, or isopot, a design that is intended as a design “template” for general application wherever a manually-actuated pot is to be replaced and actuated by an analog voltage.

    Reply
  5. Tomi Engdahl says:

    MIT Neural Network IC Aims At Mobiles
    http://www.eetimes.com/document.asp?doc_id=1328877&

    MIT researchers have designed a chip to implement neural networks and claim it is 10 times as efficient as a mobile GPU. This could be used to allow mobile devices to run artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.
    MIT researchers presented their findings at the International Solid State Circuits Conference in San Francisco.

    “Deep learning is useful for many applications, such as object recognition, speech, face detection,”

    The chip is called Eyeriss and the idea is that training the network for specific functions could be done “in the cloud” and then the training configuration – the weighting of each node in the network – could be exported to the mobile device.

    The chip has 168 cores each with its own memory

    Reply
  6. Tomi Engdahl says:

    Making the Most of GPUs Without Draining Batteries
    http://www.eetimes.com/document.asp?doc_id=1328869&

    Launched in January and awarded a grant of 2.97 million Euros from the European Union’s Horizon 2020 research and innovation program, the “Low-power GPU2 (LPGPU2)” research project is a European initiative uniting researchers and graphics specialists from Samsung Electronics UK, Codeplay, Think Silicon and TU Berlin to research and develop a novel tool chain for analyzing, visualizing, and improving the power efficiency of applications on mobile GPUs.

    Running for the next two and a half years, the research project aims to define new industry standards for resource and performance monitoring to be widely adopted by embedded hardware GPU vendors. The consortium will define a methodology for accurate power estimations for embedded GPU and will try to enhance existing Dynamic Voltage and Frequency Scaling (DVFS) mechanisms for optimum power management with sustained performance.

    Ideally, the result will be a unique power and performance visualization tool which informs application and GPU device driver developers of potential power and performance improvements.

    Reply
  7. Tomi Engdahl says:

    TSMC’s 300mm Chinese Wafer Fab Wins Approval
    http://www.eetimes.com/document.asp?doc_id=1328860&

    The Taiwanese government has given foundry chipmaker Taiwan Semiconductor Manufacturing Co. Ltd. (Hsinchu, Taiwan) approval to build a 300mm wafer fab in China, according to reports that reference the Investment Commission of the Ministry of Economic Affairs.

    TSMC already operates one fab in China, but until the latest ruling the Taiwan government forbade Taiwanese companies from owning and operating fabs that process the more cost efficient larger wafer size

    Reply
  8. Tomi Engdahl says:

    ARM Announces New 28nm POP IP For UMC Foundry
    by Andrei Frumusanu on February 4, 2016 1:05 PM EST
    http://www.anandtech.com/show/10012/arm-announces-new-28nm-pop-ip-for-umc-foundry

    Today ARM announces a new POP IP offering directed at UMC’s new 28HPCU manufacturing process. To date we haven’t had the opportunity to properly explain what ARM’s POP IP actually is and how it enables vendors to achieve better implementation of ARM’s IP offerings. While for today’s pipeline announcement we’ll be just explaining the basics, we’re looking forward to a more in-depth article in the following months as to how vendors take various IPs through the different stages of development.

    When we talk about a vendor licensing an ARM IP (a CPU for example), this generally means that they are taking the RTL (Register Transfer Level) design of an IP. The RTL is just a logical representation of the functioning of a block, and getting from this form to one that can be implemented in actual silicon requires several development phases, generally referred to as the physical implementation part of semiconductor development.

    It’s here where ARM’s POP IP (Which by the way is not an acronym) comes into play: Roughly speaking, POP IP is a set of tools and resources that are created by ARM to accelerate and facilitate the implementation part of SoC development. This includes standard cell libraries, memory compilers, timing benchmarks, process optimized design changes and in general implementation knowledge that ARM is able to amass during the IP block development phase.

    The main goal is to relieve the vendor from re-doing work that ARM has already done and thus enable a much better time-to-market compared to vendors which have their in-house implementation methodology (Samsung and Qualcomm, among others, for example). ARM explains this can give an up to 5-8 month time to market advantage which is critical in the fast-moving mobile SoC space.

    One aspect that seemed to be misunderstood, and even myself had some unclear notions about, is that POP IP is not a hard-macro offering but rather all the resources that enable a vendor to achieve that hard-macro (GDSII implementation).

    Reply
  9. Tomi Engdahl says:

    Boffins smear circuitry onto contact lenses
    ‘Eyeballing a computer’ gets closer thanks to Anglo-Australian collaboration
    http://www.theregister.co.uk/2016/02/05/aussie_prof_makes_step_towards_google_glass_for_contact_lenses/

    University of South Australia associate professor Drew Evans has created proof-of-concept work that could in the future lead to computerised contact lenses.

    The conducting polymer lens is an early step into what could lead to circuitry being etched into contact lenses.

    The work is a combination of the University’s Future Industries Institute work on the PEDOT polymer and lenses developed by a UK contact lens firm understood to be Contamac.

    “We have been researching in this area for the last decade in the military and automotive industries, but this is the first time we have been able to bring our polymer into contact lens technology,” Evans told Vulture South.

    “It is a milestone simply because the polymer is biocompatible, meaning the body finds it friendly.

    “There have been companies over the last decade building circuitry into lenses but that is using materials like copper – you’d be taking a big risk to stick that in your eye.”

    The PEDOT polymer was first discovered in the late 1970s by three scientists who later won the Nobel Prize in 2000. Evans’ work improves the polymer’s properties for use in wearable devices.

    Reply
  10. Tomi Engdahl says:

    Graphene Batteries Appear, Results Questionable
    http://hackaday.com/2016/02/07/graphene-batteries-appear-results-questionable/

    If you listen to the zeitgeist, graphene is the next big thing. It’s the end of the oil industry, the solution to global warming, will feed and clothe millions, cure disease, is the foundation of a space elevator that will allow humanity to venture forth into the galaxy. Graphene makes you more attractive, feel younger, and allows you to win friends and influence people. Needless to say, there’s a little bit of hype surrounding graphene.

    With hype comes marketing, and with marketing comes products making dubious claims. The latest of which is graphene batteries from HobbyKing. According to the literature, these lithium polymer battery packs for RC planes and quadcopters, ‘utilize carbon in the battery structure to form a single layer of graphene…

    For the last several years, one of the most interesting potential applications for graphene is energy storage. Graphene ultracapacitors are on the horizon, promising incredible charge densities and fast recharge times. Hopefully, in a decade or two, we might see electric cars powered not by traditional lithium batteries, but graphene supercapacitors.

    No one expected graphene batteries to show up now, though, and especially not from a company whose biggest market is selling parts to people who build their own quadcopters. How do these batteries hold up? According to the first independent review, it’s a good battery, but the graphene is mostly on the label.

    http://www.rcgroups.com/forums/showthread.php?t=2592234

    Reply
  11. Tomi Engdahl says:

    Local Hacker Discovers Card Edge Connectors
    http://hackaday.com/2016/02/08/local-hacker-discovers-card-edge-connectors/

    When [turingbirds] was looking around for the absolute minimum connector for a JTAG adapter, he wanted something small, that didn’t require expensive adapters, and that could easily and reliably connect a few JTAG pins to a programmer. This, unsurprisingly, is a problem that’s been solved many times over, but that doesn’t mean there isn’t room for improvement. [turingbirds] found his better solution by looking at some old card edge connectors.

    Instead of 0.1″ pitch pin headers, weirder and more expensive connectors, the Tag Connect, or even pogo pins, [turingbirds] came up with a JTAG adapter that required no additional parts, had a small footprint, and could be constructed out of trash usually found behind any busy hackerspace or garage. The connector is based on the venerable PCI connector, chopped up with a Dremel and soldered to a JTAG or ISP programmer.

    https://github.com/turingbirds/con-pcb-slot

    Reply
  12. Tomi Engdahl says:

    The Coming Age of 3D Integrated Circuits
    http://hackaday.com/2016/02/08/the-coming-age-of-3d-integrated-circuits/

    The pedagogical model of the integrated circuit goes something like this: take a silicon wafer, etch out a few wells, dope some of the silicon with phosphorus, mask some of the chip off, dope some more silicon with boron, and lay down some metal in between everything. That’s an extraordinarily basic model of how the modern semiconductor plant works, but it’s not terribly inaccurate. The conclusion anyone would make after learning this is that chips are inherently three-dimensional devices. But the layers are exceedingly small, and the overall thickness of the active layers of a chip is thinner than a human hair. A bit of study and thought and you’ll realize the structure of an integrated circuit really isn’t in three dimensions.

    Recently, rumors and educated guesses coming from silicon insiders have pointed towards true three-dimensional chips as the future of the industry. These chips aren’t a few layers thick like the example above. Instead of just a few dozen layers, 100 or more layers of transistors will be crammed into a single piece of silicon. The reasons for this transition range from shortening the distance signals must travel, reducing resistance (and therefore heat), and optimizing performance and power in a single design.

    The main chip on the Raspberry Pi Zero is actually two ICs. The bottom is the ARM processor, while the top is the DRAM. This is known as a Package on Package (POP) assembly.

    These Package on Package devices can be seen – albeit at an oblique angle – on dozens of devices. The large chip on the Raspberry Pi Zero, Model A, and Model B are POP devices, with the RAM on the top chip connected directly to the Broadcom CPU. The latest, highest capacity RAM modules also use this technique.

    While the idea of 3D chips constructed out of multiple layers of silicon is an old idea, only recently have we seen this sort of technology make it into consumer devices. In 2013, Samsung moved into the 3D Flash market with V-NAND, regarded as the first true production-grade 3D transistor technology.

    While putting multiple dies on a single piece of silicon will be a boon for Intel – especially with the Altera IP in their portfolio – it’s not exactly a true three-dimensional chip. That will have to wait a while; we’ve only had 3D Flash for a few years now, and 3D RAM won’t be public for another two years. Making a 3D CPU is a much more complex engineering challenge, and for that we may be waiting the better part of a decade.

    Reply
  13. Tomi Engdahl says:

    The new memory greatly increases the storage density

    The latest planar flash memory chips can already store about one gigabit per square millimeter. At the ISSCC conference Micron presented a technology that fits 4.29 gigabits of data into a square millimeter. It is a new kind of 3D memory technology.

    Micron’s new memory relies on the floating gate familiar from planar flash circuits. Each memory cell stores three bits. The prototype circuit Micron presented at ISSCC has a capacity of as much as 768 gigabits. It is a gigantic NAND chip: its physical size is 179.2 square millimeters.

    At the same time, Korea’s Samsung demonstrated its own V-NAND circuit, whose recording density reaches 2.6 gigabits per square millimeter.

    A floating-gate structure is very difficult to manufacture in 3D.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3957:uusi-muisti-moninkertaistaa-tallennustiheyden&catid=13&Itemid=101

    Reply
  14. Tomi Engdahl says:

    EUV Gets $500M Center
    Globalfoundries, New York partner on center
    http://www.eetimes.com/document.asp?doc_id=1328894&

    Globalfoundries and SUNY Polytechnic Institute will spend a total of $500 million over five years to create a new R&D center to accelerate the introduction of Extreme Ultraviolet (EUV) lithography into the 7nm process node and beyond. The move is the latest sign EUV will finally make its way into production fabs, albeit probably not until 2018 or later.

    The Advanced Patterning and Productivity Center (APPC) will be located at the Colleges of Nanoscale Science and Engineering (CNSE) in Albany, N.Y. It will have an ASML NXE:3300 EUV scanner and a staff of about 100 researchers.

    “I consider this a very positive sign,” said Gary Patton, chief technology officer and senior vice president of worldwide R&D at Globalfoundries. EUV “has gone through a lot of ups and down from unrealistic exuberance four or five years ago to pessimism two or three years ago to our current thinking it’s going to be real pretty soon,” Patton said.

    EUV could be ready for use in production fabs “as early as 2018-2019,” said Patton.

    The center will work on the full range of outstanding issues to make EUV viable for production fabs including ecosystem elements such as masks, resists, and EDA software. Partners including IBM and Tokyo Electron are expected to take part in the work.

    Reply
  15. Tomi Engdahl says:

    Permanent R&D Tax Credit Long-Fought Win for U.S. Economy, Innovation
    U.S. R&D tax credit here to stay
    http://www.eetimes.com/author.asp?section_id=189&doc_id=1328906&

    The permanent R&D credit will help promote U.S. innovation, job creation, and economic growth by providing reliable tax relief to U.S. businesses that invest in research.

    In Washington, persistence can pay off.

    That’s the lesson learned from the decades-long pursuit of a permanent R&D tax credit, an effort that reached a successful conclusion in December. A permanent R&D credit will promote U.S. innovation, job creation, and economic growth by providing much-needed, reliable tax relief to U.S. businesses that invest in research.

    Enactment of a permanent R&D credit was particularly gratifying for the semiconductor industry, which had a lead role in securing passage of the original credit in the early 1980s and has been a vigorous advocate for making it permanent ever since.

    To understand the impact a permanent R&D credit will have on our industry and the U.S. economy, it helps to understand how the original credit came to be. In the late 1960s, private investments in R&D in the United States began to decline

    In 1981, semiconductor pioneer Robert Noyce of Intel—one of the founding fathers of our industry, of Silicon Valley, and of modern technology—testified before Congress about the need for a credit to incentivize research. Later that year, Congress enacted the original R&D credit, thanks in part to the input of Noyce and other key voices.

    Since then, the credit has lapsed and been extended 17 times. Throughout these decades of erratic implementation, the semiconductor industry has been on the front lines of the effort to make the credit permanent. That work paid off on Dec. 18 when Congress at long last made the credit permanent.

    Since its inception, the R&D credit has been a key driver of technological discoveries and economic growth. It has been particularly impactful for the U.S. semiconductor industry, which invests one-fifth of revenue in R&D annually—a greater share than any other U.S. industry. These investments have given rise to new discoveries that fuel our industry and the overall economy. Making the R&D credit permanent gives U.S. semiconductor companies certainty and stability, allowing them to plan research investments for years to come.

    The R&D credit is also a proven job-creator. The semiconductor industry directly employs nearly 250,000 people in high-skilled, high-wage jobs across America, many of which are in the areas of research and innovation.

    Reply
  16. Tomi Engdahl says:

    Temperature sensors are improving, but select carefully
    http://www.edn.com/design/sensors/4441379/Temperature-sensors-are-improving–but-select-carefully?_mc=NL_EDN_EDT_EDN_today_20160211&cid=NL_EDN_EDT_EDN_today_20160211&elqTrackId=0bf62a185b9647f596db8c7e2674ad92&elq=1e55c4deda4945c69e198aa2952e4df8&elqaid=30788&elqat=1&elqCampaignId=26929

    From thermocouples and thermistors to resistance-temperature-detectors (RTDs), temperature sensors are varied and ubiquitous, but it would be wise not to take them for granted. Each type comes with its own set of inherent pros and cons in terms of cost, reliability, linearity, and ease of use. In this feature we will take you through some classics, while also updating you on the state of the art, how to make best of them, and a good example or two of temperature sensor and silicon integration.

    Temperature sensors really are everywhere: used in automotive, infrastructure, industrial, military/aerospace, consumer electronics, medical, transportation, power, process control, petro-chemical, and geo-physical, agriculture, and communications applications. When used in combination with other sensors like strain and pressure sensors, they’re making the applications they serve more intelligent, safer, and more reliable. In many systems, temperature monitoring and control is fundamental.

    By far, the biggest challenge designers face when using temperature sensors is how and where the sensor is placed with respect to the object or environment being measured (the measurand). Even the type of package and how it is mounted can make a difference between satisfactory and unsatisfactory measurements. With signal levels so low, losses in the connecting wires are but one of the issues to be overcome, but more on that in upcoming features.

    RTD users beware! An RTD’s response time is very slow (in the few-seconds range). It is also highly susceptible to lead resistance effects, thus requiring lead-wire compensation, which limits it to 3-wire and 4-wire configurations. (2-wire configurations cannot compensate for lead-wire resistances; see the quick calculation below.)
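
    To put the lead-wire problem in numbers: a Pt100 element changes by only about 0.385 Ω per °C, so in a 2-wire hookup even modest wiring resistance reads as a sizable temperature offset. A rough sketch with an assumed lead resistance:

        # 2-wire RTD error from lead resistance (Pt100)
        R_lead = 0.5   # assumed resistance of ONE lead wire (ohm)
        alpha = 0.385  # Pt100 sensitivity near 0 degC (ohm/degC)

        extra_r = 2 * R_lead            # both leads sit in series with the element
        print(f"2-wire offset: +{extra_r / alpha:.1f} degC")  # ~ +2.6 degC
        # 3-wire and 4-wire connections measure or cancel the lead drops,
        # which is why the article restricts compensation to those hookups.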

    Silicon integration levels for temperature sensors are on the rise. A designer can purchase not just the sensor element, but also the analog-to-digital converter (ADC), signal-conditioning, filtering, and other functions all on a single chip.

    One of the more recent devices is the MCP9600-I/MX from Microchip Technology.
    It is the industry’s first integrated thermocouple IC that enables the display of thermocouple temperatures in °C, simplifying the design of thermocouple sensing systems and reducing development costs. It includes precision instrumentation, precision temperature measurement, a high-resolution analog-to-digital converter (ADC), a temperature-data digital filter, and a math engine, and it is pre-programmed with firmware for a broad range of thermocouple types such as K, J, T, N, S, E, B, and R.
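
    As an illustration of that integration, reading a compensated temperature reduces to a couple of register reads. This is an unverified sketch: the I2C address is a wiring assumption, and the hot-junction register (0x00) and 0.0625 °C/LSB scaling are as I recall them from the datasheet, so verify both before use:

        # Hedged sketch: read the MCP9600 hot-junction temperature over I2C.
        from smbus2 import SMBus

        ADDR = 0x60   # assumed I2C address (set by the ADDR pin)
        T_HOT = 0x00  # hot-junction temperature register (verify in datasheet)

        with SMBus(1) as bus:
            raw = bus.read_word_data(ADDR, T_HOT)
            raw = ((raw & 0xFF) << 8) | (raw >> 8)  # device sends MSB first
            if raw & 0x8000:                        # sign-extend two's complement
                raw -= 1 << 16
            print(f"hot junction: {raw * 0.0625:.2f} degC")  # 0.0625 degC/LSB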

    Reply
  17. Tomi Engdahl says:

    Improving current control for better stepper motor motion quality
    http://www.edn.com/design/power-management/4441361/Improving-Current-Control-for-Better-Stepper-Motor-Motion-Quality?_mc=NL_EDN_EDT_EDN_weekly_20160211&cid=NL_EDN_EDT_EDN_weekly_20160211&elqTrackId=23b68660f7a043c89be75d26d52668c0&elq=47c0069f95944aad8205fdc33fe9724d&elqaid=30798&elqat=1&elqCampaignId=26939

    Bipolar stepper motors are used in many applications, from driving paper through a printer to moving an XY stage in industrial equipment. Typically, the motors are driven and controlled by inexpensive and dedicated stepper motor driver ICs. Unfortunately, most of these ICs use a simple current control method that causes imperfections in the motor current waveforms and results in less-than-optimal motion quality. Implementing internal, bi-directional current sensing inside a stepper motor driver IC results in improved motion quality with lower system cost than legacy solutions.

    Reply
  18. Tomi Engdahl says:

    Transistors Minus Semiconductors
    Iron quantum-dot studded nanotubes
    http://www.eetimes.com/document.asp?doc_id=1328898&

    Everybody already knows that semiconductors are quickly approaching the atomic-level at under 5 nanometers, but most proposed solutions are based on variations-on-a-theme, such as going to a different “semiconductor” like graphene. Why not scrap semiconductors, instead, and use tunneling field effect transistors (TFETs)? The answer is that most materials require cryogenic cooling to make TFETs, according to Professor Yoke Khin Yap at Michigan Tech. Yap, however, has found a room-temperature solution using quantum-dot studded nanotubes.

    Michigan Tech (Michigan Technological University, Houghton) is not all the way there yet but does have a room-temperature tunneling FET proof-of-concept using iron quantum dots aligned on boron-nitride nanotubes. Yap claims this solution can not only replace semiconductors but will be flexible enough to create super-small wearable technologies that will perform at levels beyond our wildest imaginations for semiconductors today.

    “We already know that the turn-on voltage for a typical QD-BNNT channel can be below 0.1 volts,” Yap told EE Times. “But for the current proof of concept work, it is higher at (about 15 volts) due to the long channel length on our STM-TEM [scanning tunneling microscope-tunneling electron microscope] holder.”

    Reply
  19. Tomi Engdahl says:

    Don’t over-constrain in formal property verification (FPV) flows
    http://www.edn.com/design/integrated-circuit-design/4441345/Don-t-over-constrain-in-formal-property-verification–FPV–flows?_mc=NL_EDN_EDT_EDN_today_20160209&cid=NL_EDN_EDT_EDN_today_20160209&elqTrackId=a8f8a1aa68744ea3a71d080d521b4ec1&elq=4e936b037e494c3e8057e3afc6f09c4b&elqaid=30758&elqat=1&elqCampaignId=26900

    Formal property verification (FPV) is increasingly being used to complement simulation for system-on-chip (SoC) verification. Adding FPV to your verification flow can greatly accelerate verification closure and find tough corner-case bugs, but it is important to understand the differences between the technologies. The main difference is that FPV uses properties, i.e., assertions and constraints, instead of a testbench. Assertions are used in simulation as well, but the role of constraints is different. An understanding of constraints is necessary for successful use of FPV.

    Reply
  20. Tomi Engdahl says:

    Home> Power-management Design Center > How To Article
    Power management can cause latchup in CMOS chips
    http://www.edn.com/design/power-management/4441325/Power-management-can-cause-latchup-in-CMOS-chips?_mc=NL_EDN_EDT_EDN_today_20160209&cid=NL_EDN_EDT_EDN_today_20160209&elqTrackId=c113bd7a91e148f791dd0242261598a7&elq=4e936b037e494c3e8057e3afc6f09c4b&elqaid=30758&elqat=1&elqCampaignId=26900

    If you leave signals applied to the inputs of a CMOS chip with the power turned off, the chip might explode when you re-apply power. This is called latchup. Similarly, if you drag the outputs of a CMOS chip above or below the power supply rails you can latchup the part. Latchup is not always destructive. Sometimes the part heats up, but when you remove all power and signals, the chip has survived the latchup condition. It does not matter if the CMOS IC is a microcontroller, an operational amplifier, an analog-to-digital converter (ADC), logic, or analog multiplexor.

    Latchup becomes a real problem when you try to power up and down different sections of your design to save power. It is also a problem when you have cables or inputs from other devices going directly to your chip. Another common problem is when a CMOS output is connected to a large capacitive load: the part will go into latchup the moment you turn off its power. As long as you don’t turn the power back on for a few moments you are fine; the energy in the capacitor dissipates and the CMOS part comes out of latchup. But if someone cycles power quickly, or if there is a momentary dropout or glitch, boom, the part blows its lid.

    Reply
  21. Tomi Engdahl says:

    EUV Gets $500M Center
    Globalfoundries, New York partner on center
    http://www.eetimes.com/document.asp?doc_id=1328894&

    Globalfoundries and SUNY Polytechnic Institute will spend a total of $500 million over five years to create a new R&D center to accelerate the introduction of Extreme Ultraviolet (EUV) lithography into the 7nm process node and beyond. The move is the latest sign EUV will finally make its way into production fabs, albeit probably not until 2018 or later.

    Reply
  22. Tomi Engdahl says:

    Taiwan Quake: TSMC, UMC Assess Impact; Expect Recovery in Days
    http://www.eetimes.com/document.asp?doc_id=1328886

    Taiwan Semiconductor Manufacturing Co. (TSMC) and United Microelectronics Corp. (UMC), among the world’s three largest foundries, said they are assessing the impact from a 6.4 magnitude earthquake that rocked southern Taiwan during the wee hours of February 6.

    The chipmakers said they expect to recover most of their operations within two to three days. TSMC added that no more than 1 percent of its production for the first quarter of this year will be affected.

    Taiwan makes nearly a third of the world’s semiconductors.

    Reply
  23. Tomi Engdahl says:

    Micron, Samsung in Flash Battle
    768 Gbit Micron chip beats Samsung’s V-NAND
    http://www.eetimes.com/document.asp?doc_id=1328874&

    Micron described a novel flash design that on paper beats the vertical NAND technology Samsung has been using to drive its leadership in non-volatile memory. The two gave competing presentations at this week’s International Solid-State Circuits Conference (ISSCC) that entertained and stumped even veteran flash-memory gurus.

    Micron showed a 768 Gbit 3D NAND device using three-bit/cell floating gate technology. It tucked control circuits under the flash array to deliver a density of 4.29 Gbits/mm2 compared to 2.6 Gb/mm2 for the most dense 256Gbit 3D NAND chip Samsung ships today. By contrast, the planar NAND chips most companies currently sell pack just 1 Gbit/mm2.
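
    The quoted density is easy to sanity-check against the 179.2 mm2 die size reported in the ISSCC description of the same chip:

        # Sanity check: reported capacity divided by reported die area
        print(f"{768.0 / 179.2:.2f} Gbit/mm^2")  # ~4.29, matching the claim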

    Executives at Micron have not yet decided whether they will commit to manufacturing the design as a product.

    Reply
  24. Tomi Engdahl says:

    Battery Research Claims 10x Gain
    Lithium ion takes new anode role
    http://www.eetimes.com/document.asp?doc_id=1328888&

    Scientists from Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have developed a method that makes silicon lithium-ion battery anodes a possibility. Such anodes could store 10 times more energy per charge than existing commercial anodes and make high-performance batteries smaller and lighter.

    The three-step method was described in Nature Energy and on the SLAC website.

    https://www6.slac.stanford.edu/news/2016-01-28-graphene-cage-boosts-battery-performance.aspx

    Reply
  25. Tomi Engdahl says:

    MEMS Microphone Market to Hit 13% CAGR
    http://www.eetimes.com/document.asp?doc_id=1328892&

    The microelectromechanical-systems (MEMS) microphones market will grow from about 3.6 billion units in 2015 to over 6 billion units in 2019, according to market research company IHS Technology.

    Over the same period the market value will grow from about $800 million in 2015 to about $1.3 billion in 2019, a compound annual growth rate of 13 percent.

    Apple is a key purchaser of MEMS microphones although its significance is set to diminish as the devices become more mainstream. Apple shifted from three MEMS microphones in the iPhone 6 line to four in the iPhone 6S line and is set to purchase about 1.5 billion MEMS microphones in 2016, about one third of a market valued at about $900 million according to IHS.

    Microsoft and Motorola introduced smartphones with four MEMS microphones before Apple, but at lower volumes.

    Four or five MEMS microphones
    Four microphones help with hands-free calling and voice commands for Siri, Google Now, Cortana and other applications, and MEMS microphones are being added for richer audio fidelity in video recording, noise cancellation, and better call and recording performance.

    “It will be harder for manufacturers to justify a move to five microphones in the coming years, unless clear and potentially popular use cases are identified,” Boustany said. “So far, Motorola’s Droid Turbo is the only handset with five MEMS microphones to become widely available.”
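
    The signal-processing idea behind these multi-microphone handsets is beamforming. The snippet below is a minimal delay-and-sum beamformer (my illustration, not from the IHS report): each channel is time-shifted so that sound from a chosen look direction adds coherently, while off-axis noise averages down.

```python
import numpy as np

def delay_and_sum(signals, fs, mic_positions, look_dir, c=343.0):
    """Minimal delay-and-sum beamformer (illustration only).

    signals:       (n_mics, n_samples) recordings sampled at fs Hz
    mic_positions: (n_mics, 3) coordinates in metres
    look_dir:      unit vector pointing from the array toward the source
    """
    # A plane wave from look_dir reaches mic m earlier by (p_m . d) / c,
    # so delaying each channel by that amount re-aligns all of them.
    delays = mic_positions @ look_dir / c                     # seconds per mic
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    spectra *= np.exp(-2j * np.pi * freqs * delays[:, None])  # fractional delays
    return np.fft.irfft(spectra.sum(axis=0), n) / signals.shape[0]
```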

    Reply
  26. Tomi Engdahl says:

    Comparing CPLD-Based Circuit Board Power Management Architectures
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328902&

    As the complexity of board-level systems has grown, hardware management systems have begun to consume a disproportionate share of design effort and BOM costs.

    The growing complexity of board-level designs has begun to strain the capabilities of today’s hardware/power management architectures. While any one of the four most commonly used management architectures — as described later in this paper — can be used to support these complex designs, they each require different sets of compromises and design trade-offs in terms of scalability, design effort, and/or cost.

    Recently, a fifth board management architecture has emerged that provides the best possible performance, safety, and flexibility while requiring far less design effort and implementation cost. This article will explore this new architecture, primarily with a focus on the power management functions it provides.

    Reply
  27. Tomi Engdahl says:

    New Efficiency Standards for Wall Warts in the US
    http://hackaday.com/2016/02/12/new-efficiency-standards-for-wall-warts-in-the-us/

    The common household wall wart is now under stricter regulation from the US Government. We can all testify to the waste heat produced by many cheap wall warts: simply pick one at random in your house and hold it; it will almost certainly be warm. This regulation hopes to save $300 million a year in wasted electricity and to reap the ecological benefits of burning that much less fuel.

    However, it does look like most warts will go from a mandated 50-ish percent efficiency to 85% and up. This is a pretty big change, and some hold-out manufacturers are going to have to switch gears to newer circuit designs if they want to keep up.

    Energy Efficiency Standard Effective This Week Affects Virtually All of Us
    http://switchboard.nrdc.org/blogs/pdelforge/energy_efficiency_standard_eff.html

    With 5 to 10 external power supplies in the average U.S. household, the new efficiency standards are projected to save consumers $300 million a year in electricity costs and reduce the carbon pollution that fuels dangerous climate change.

    The standards, which will make new external power supplies up to 33 percent more efficient, are an important step to achieving President Obama’s goal of reducing carbon pollution by at least 3 billion metric tons by 2030 through efficiency rules for appliances and federal buildings.

    NRDC has long regarded power supplies as a hidden opportunity for significant energy savings. California, ever the environmental trendsetter, established the first efficiency standards for external power supplies in 2004 with NRDC’s help. ENERGY STAR® began covering them in 2005 so that manufacturers could attach the label signifying the most energy-efficient models, and mandatory national efficiency standards for external power supplies were established in 2008.

    Eight years later, updated federal standards took effect yesterday (Feb. 10) after a lengthy public input process. The revised standards strengthen efficiency requirements, and extend them to new types of power adapters not previously covered. This makes sure that the vast majority of these devices now use technology best practices to minimize energy wasted as heat (efficient power supplies are much cooler to touch, and much smaller in size, than their predecessors were 12 years ago before the first standards went into effect).

    According to an analysis by the Appliance Standards Awareness Project, which has worked with NRDC to promote energy efficiency, the typical household will realize a savings of up to about 30 kilowatt-hours a year once all adapters in the home comply with the new standards. This may not be huge savings per household but nationally it adds up to 93 billion kilowatt hours over the next 30 years and 47 million metric tons of carbon dioxide pollution, equivalent to the annual emissions from nearly 10 million cars.
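
    Those headline figures are easy to sanity-check; the household count below is my assumption, not from the article:

```python
# Rough cross-check of the quoted national savings (household count assumed)
households   = 118e6   # approximate number of U.S. households
kwh_per_year = 30      # per-household saving once all adapters comply
years        = 30

total = households * kwh_per_year * years
print(f"upper bound: {total / 1e9:.0f} billion kWh over {years} years")
# ~106 billion kWh; the article's 93 billion kWh is consistent with this,
# since adapters only reach compliance as old stock is gradually replaced.
```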

    The power adapters convert power from a wall outlet to the lower voltages needed to charge laptop computers, smartphones and other devices. External power supplies were a good candidate for efficiency standards because they draw power (so-called no-load power consumption) when plugged in, even if disconnected from a device such as a phone, or connected to a fully charged device. This idle load contributes to the $19 billion a year Americans spend on “always-on” energy use by inactive appliances, electronics and miscellaneous electrical devices.

    Battery chargers next

    Separately, NRDC is pressing DOE to finalize the first federal efficiency standards for the roughly 500 million battery chargers sold annually in the United States. Battery chargers include not just the external power supply, but also the battery itself and the charge control circuitry component of devices that use rechargeable batteries, such as smart phones and laptop computers.

    Energy Conservation Program: Energy Conservation Standards for External Power Supplies
    https://www.federalregister.gov/articles/2014/02/10/2014-02560/energy-conservation-program-energy-conservation-standards-for-external-power-supplies

    Reply
  28. Tomi Engdahl says:

    Flexible Phototransistor Will Make Everything Subtly Better In The Future
    http://hackaday.com/2016/02/13/flexible-phototransistor-will-make-everything-subtly-better-in-the-future/

    University of Wisconsin-Madison is doing some really cool stuff with phototransistors. This is one of those developments that will subtly improve all our devices. Phototransistors are ubiquitous in our lives. It’s near impossible to walk anywhere without one collecting some of your photons.

    The first obvious advantage of a flexible grid of phototransistors is the ability to fit the sensor array to any desired shape. For example, in a digital camera the optics are designed to focus a “round” picture on a flat sensor. If we had a curved surface, we could capture more light without having to choose between discarding light, compensating with software, or suffering the various optical distortions.

    Record-setting flexible phototransistor revealed
    https://www.sciencedaily.com/releases/2015/10/151030153123.htm

    Inspired by mammals’ eyes, University of Wisconsin-Madison electrical engineers have created the fastest, most responsive flexible silicon phototransistor ever made.

    The innovative phototransistor could improve the performance of myriad products — ranging from digital cameras, night-vision goggles and smoke detectors to surveillance systems and satellites — that rely on electronic light sensors. Integrated into a digital camera lens, for example, it could reduce bulkiness and boost both the acquisition speed and quality of video or still photos.

    Reply
  29. Tomi Engdahl says:

    M. Mitchell Waldrop / Nature:
    Semiconductor industry roadmap to abandon pursuit of Moore’s law for the first time as computing becomes increasingly mobile

    The chips are down for Moore’s law
    http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

    The semiconductor industry will soon abandon its pursuit of Moore’s law. Now things could get a lot more interesting.

    Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.

    A rule of thumb that has come to dominate computing, Moore’s law states that the number of transistors on a microprocessor chip will double every two years or so — which has generally meant that the chip’s performance will, too. The exponential improvement that the law describes transformed the first crude home computers of the 1970s into the sophisticated machines of the 1980s and 1990s, and from there gave rise to high-speed Internet, smartphones and the wired-up cars, refrigerators and thermostats that are becoming prevalent today.

    None of this was inevitable: chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.

    Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.

    The industry road map released next month will for the first time lay out a research and development plan that is not centred on Moore’s law. Instead, it will follow what might be called the More than Moore strategy: rather than making the chips better and letting the applications follow, it will start with applications — from smartphones and supercomputers to data centres in the cloud — and work downwards to see what chips are needed to support them. Among those chips will be new generations of sensors, power-management circuits and other silicon devices required by a world in which computing is increasingly mobile.

    Reply
  30. Tomi Engdahl says:

    Birth of the programmable optical chip
    http://www.nature.com/nphoton/journal/v10/n1/full/nphoton.2015.265.html

    Advances in silicon photonics, compound III–V semiconductor technology and hybrid integration now mean that powerful, programmable optical integrated circuits could be within sight.

    The recent acquisition of Altera, the pioneer of programmable logic chips, for US$16.7 billion by the well-known chip maker Intel provides clear recognition of the perceived importance of field-programmable gate array (FPGA) technology. In essence, an FPGA chip is a universal signal processing chip that can be programmed or configured after fabrication to perform a specific task — be it speech recognition, computer vision, cryptography, or something else.

    Originally commercialized in the mid-1980s by two US Silicon Valley firms, Altera and Xilinx (who today between them hold an ~80% share of the market), the FPGA chip has grown from humble origins and niche applications to ubiquity. The technology is found inside everything from digital cameras and mobile phones through to sophisticated medical imaging devices, telecommunications equipment and robotics. At the heart of an FPGA is a large array of logic blocks that are wired up by reconfigurable interconnects, allowing the chip to be reconfigured or programmed via specialized software. The use of a standard common hardware platform makes FPGAs far more flexible and cost effective compared with application-specific integrated circuits (ASICs) — complex chips that are custom designed for a specific task.

    What’s potentially exciting is that there are now signs that the optical equivalent of an FPGA is on the horizon. Improvements in both silicon photonics and III–V compound semiconductor technology, such as InP and GaAs, mean that optical researchers are starting to build designs of programmable optical signal processors on a chip by cascading arrays of coupled waveguide structures that feature phase shifters to control the flow of light through the array and thus support reconfigurability. The theory of how such arrays behave has been analysed in depth by David Miller from Stanford University in the US who has published several papers on the topic.
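
    To make the "coupled waveguides plus phase shifters" idea concrete, here is a small numpy sketch (my illustration, not from the paper) of the basic programmable element: a Mach-Zehnder interferometer made of two 50/50 couplers and two phase shifters. The phases (theta, phi) steer light between the two outputs, and meshes of these 2x2 blocks compose into arbitrary unitary transforms, which is what makes such a chip programmable in Miller's sense.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50/50 coupler

def mzi(theta, phi):
    """2x2 transfer matrix of a Mach-Zehnder interferometer: input phase
    shifter (phi), coupler, internal phase shifter (theta), coupler."""
    ps = lambda a: np.diag([np.exp(1j * a), 1.0])
    return BS @ ps(theta) @ BS @ ps(phi)

# theta tunes the power split: theta=0 is full "cross", theta=pi is full "bar"
for theta, label in [(0.0, "cross: light swaps ports"),
                     (np.pi, "bar: light stays put")]:
    out_power = np.abs(mzi(theta, 0.0) @ np.array([1.0, 0.0])) ** 2
    print(f"theta={theta:4.2f}: output powers {out_power.round(3)} ({label})")

# Lossless device: the transfer matrix is unitary for any phase settings
T = mzi(0.7, 1.3)
assert np.allclose(T @ T.conj().T, np.eye(2))
```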

    The prospect of an optical equivalent to the FPGA excites many in the photonics community. “Similar to the invention of electronic FPGAs in 1985, the availability of large-scale programmable optical chips would be an important step forwards towards ultrafast and wide-band signal processing,” says Yao.

    Yao says that the ultrafast processing capabilities of optical chips could be useful for ultrahigh-speed ADC, all-optical signal processing in communications networks, or fast image processing.

    He says that at present, the design of optical circuits to perform a specific task is leading to a situation where there are almost as many technologies as there are applications. This fragmentation hinders cost-effective, mass-volume manufacture of a photonic solution, a situation that an optical programmable chip would help remedy.

    Reply
  31. Tomi Engdahl says:

    Standard High Voltage Opto-diode Product Information
    http://www.voltagemultipliers.com/Aliases/LPs/OZ150_and_OZ100.html

    OZ150SG Datasheet – 15kV high gain, stable long-term gain, high isolation opto-diode.

    The OZ150SG is useful in high voltage switching applications, or integrated into an opto-coupler. It is also useful in instrumentation or environments where high precision is required.

    Reply
  32. Tomi Engdahl says:

    Phase Change Memory: The Discontinuity & Melting Question
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328935&

    Melting and quenching during RESET are essential to the operation of a phase change memory (PCM) device. This follow-up article explores the role of melting during threshold switching and the post-threshold switching conducting state prior to SET state crystallization.

    Reply
  33. Tomi Engdahl says:

    Imagination’s CEO Steps Down
    http://www.eetimes.com/document.asp?doc_id=1328932&

    Sir Hossein Yassaie, the CEO of graphics core licensor Imagination Technologies Group plc, has stepped down as chief executive amid a growing financial crisis at the company he has guided for many years.

    The company reported a widening half-year loss for the first six months of its financial year back in December and has said it expects a loss for the full year. The company has also been talking about putting its Pure equipment subsidiary up for sale.

    Reply
  34. Tomi Engdahl says:

    Consumer Fuel Cells to Gain Traction from Drones?
    http://www.eetimes.com/document.asp?doc_id=1328924&

    Cartridge-based hydrogen and methanol fuel cells have been around for a long time, with many demonstrators showcased by consumer electronics companies including Nokia, NEC, Toshiba, Fujitsu, and Hitachi.

    Yet, while promising quick recharges and very long device runtimes, the technology never really took off in the consumer world. This is about to change, according to Intelligent Energy.

    The company licenses fuel cell platforms and technology IP for partners to produce the goods. Currently, it derives about 95% of its revenues from licensing its IP for large fuel cells (in the multiple-kW range), such as those used to power remote telecom sites (often replacing diesel power generators), or to be designed into cars.

    Reply
  35. Tomi Engdahl says:

    Red Hat Drives FPGAs, ARM Servers
    FPGA summit set for March
    http://www.eetimes.com/document.asp?doc_id=1328930&

    FPGA vendors and users will meet next month in an effort to define a standard software interface for accelerators. The meeting is being convened by Red Hat’s chief ARM architect, who gave an update on Wednesday on efforts to establish ARM servers.

    “There’s a trend towards high-level synthesis so an FPGA programmer can write in OpenCL up front but the little piece that’s been ignored is how OpenCL talks to Linux,” said Jon Masters, speaking at the Linley Data Center event here.

    OS companies don’t ship drivers for OpenCL, so software developers need to understand the intimate details of the FPGA as well as the Linux kernel to make the link. Often it also involves developing a custom direct-memory access engine and fine tuning Java libraries.

    Masters did just that as part of a test board called Trilby that ran a simple search algorithm on an FPGA mounted on a PCI Express card. “Ninety percent of the effort is interface to the FPGA,” he said.

    To fix the problem, Masters has called a meeting of interested parties in March. It will be hosted by a neutral organization. He hopes to have “all the right players” involved, including major FPGA vendors.

    If the meeting is successful, the group will hammer out “in the open” one or more interfaces for standard OS drivers so users can load and configure an FPGA bit stream. It’s a significant hole, and not the only one on the road to taking FPGA accelerators into mainstream markets, according to Masters.

    FPGAs also need to become full citizens in the software world of virtualized functions, where telcos in particular are rallying around new standards for network functions virtualization. Separately, programmers are using high-level synthesis, especially with OpenCL, to write code for FPGAs; however, experts are still needed to map and optimize the results of synthesis to the underlying hardware, he said.
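
    The host-side flow Masters describes is short and vendor-neutral in principle; the catch is that it only runs once the device vendor ships an OpenCL driver/ICD for Linux, which is exactly the missing piece. A minimal sketch using the pyopencl bindings (a generic vector-add, not Red Hat's Trilby code):

```python
import numpy as np
import pyopencl as cl

# Standard OpenCL host flow -- works on any device whose vendor provides a
# Linux driver/ICD; for FPGAs that driver is the piece still being defined.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a, __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()   # an FPGA flow would load a pre-synthesized bitstream here instead

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```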

    Reply
  36. Tomi Engdahl says:

    Soft Machines: Promising, Not Proven
    Latest simulations look impressive
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328939&

    Veteran microprocessor analyst Kevin Krewell plumbs startup Soft Machines’ VISC technology following a recent release of updated simulation data of its promising multicore architecture.

    Soft Machines is working on a new architecture that, if successful, will represent a major breakthrough in single- and multicore CPU performance. The company claims it can build a multicore processor where hardware orchestration logic allows multiple CPU cores to act as one, significantly improving instruction per cycle (IPC) performance over a single CPU core and allowing multicore processors to perform significantly better on single-threaded code.

    The company has built a multi-national team, raised $175 million and is now close to demonstrating the first real products using its VISC technology. Soft Machines’ business model is flexible, offering a mix of both chips and licensable CPU IP.

    The company’s first test silicon was built in late 2014 in a 28nm process. The details of the original test chip were reported back in 2014. The original demo didn’t silence the skeptics, but was enough to convince a number of investors to put more money in Soft Machines.

    This year the company plans to tape out an SoC code-named Mojave in a 16nm FinFET process, based on a core named Shasta. The company also revealed an ambitious roadmap at The Linley Group Processor Conference in 2015 to deliver a new CPU and SoC every year for the next three years.

    Reply
  37. Tomi Engdahl says:

    Microchip Unveils Online MPLAB IDE and $10 Board
    http://hackaday.com/2016/02/15/microchip-unveils-online-mplab-ide-and-10-board/

    Today, Microchip released a few interesting tools for embedded development. The first is a free online IDE called MPLAB Xpress, the second is a $10 dev board with a built-in programmer. This pair is aimed at getting people up and running quickly with PIC development. They gave us an account before release, and sent over a sample board. Let’s take a look!

    The new software is called MPLAB Xpress. It’s an in-browser IDE that stores your code online and compiles server-side. It spits out a .hex file that is downloaded by your browser and flashed to the target device, and it is capable of interfacing with traditional debugging hardware.

    To me this feels very much like Microchip is making a bid for the hobby market. It is unlikely that PIC veterans will drop MPLAB X (the offline IDE) for this in-browser version, but it is ideal for teaching first-time embedded development and well suited to a quick hack.

    At $10 it tweaks that “ah, why not?” string in your brain. The target device is a PIC16F18855, which carries 14 KB of program memory and 1 KB of RAM, and has a host of peripherals and an internal oscillator configurable from 1 to 32 MHz.

    On the board you’ll find four user LEDs, a trimpot, and one push button. The designers have done a fine job of breaking out pins, which are labelled on the top of the board, along with 3.3V, 5V, and four GND connections.

    Interestingly there is a mikroBUS footprint which includes female pin headers, so those who are invested in that ecosystem should be quite happy.

    This is great. The board’s value, the features of the browser-based suite, and the online community are a boon to development on PIC hardware.

    For now the XC8 compiler is the only one supported but I’m told XC16 and XC32 will be implemented and live before the end of this year. The free version of the compiler is used, but those who have purchased the upgrades can unlock them online as well.

    https://www.microchip.com/mplab/mplab-xpress

    Reply
  38. Tomi Engdahl says:

    Forget Batteries, Hydrogen Fuel Cells Can Power Your Devices
    http://www.designnews.com/author.asp?section_id=1395&doc_id=279639&cid=nl.x.dn16.edt.aud.dn.20160208&dfpPParams=ind_184,industry_consumer,kw_36,bid_22,aid_279639&dfpLayout=blog&dfpPParams=ind_184,industry_consumer,kw_36,bid_22,aid_279639&dfpLayout=blog&dfpPParams=ind_184,industry_consumer,kw_36,bid_22,aid_279639&dfpLayout=blog

    A UK-based energy company is laying plans for a consumer electronics future that would employ hydrogen fuel cells instead of batteries in mobile phones, laptops, tablets, and even drones.

    Intelligent Energy, which has more than a thousand patents and another thousand pending, demonstrated proof of its concepts at the recent Consumer Electronics Show (CES) 2016 in Las Vegas, where it showed off two iPhones, a laptop, a Surface Pro 3 tablet computer, and a drone – all powered by hydrogen.

    “This device gives you the power to charge your phone for a week,”

    Reply
  39. Tomi Engdahl says:

    Finland clearly trails Sweden in component trade

    According to the DMASS organization (Distributors’ and Manufacturers’ Association of Semiconductor Specialists), component trade in the Nordic countries grew by almost a fifth last year. Unfortunately for us, the growth came almost exclusively from the Swedish market.

    DMASS notes that Finland belongs, together with Eastern Europe and the Baltic region, to the areas that continued to suffer from the migration of production to Eastern Europe or Asia.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4003:suomi-selvasti-ruotsia-jaljessa-komponenttikaupassa&catid=13&Itemid=101

    Reply
  40. Tomi Engdahl says:

    Fiber optic sensing: The past, present, and exciting future
    http://www.edn.com/design/sensors/4441412/Fiber-optic-sensing–The-past–present–and-exciting-future?_mc=NL_EDN_EDT_EDN_today_20160217&cid=NL_EDN_EDT_EDN_today_20160217&elqTrackId=81a4f9bf644a4d4e8c5b3dae43cde805&elq=174ddc5eae854b0b9d7851ca667b9f9c&elqaid=30871&elqat=1&elqCampaignId=27013

    Over the past 60 years, fiber optic sensing (FOS) has been used to enhance and test the integrity, efficiency, safety, and durability of structures, vehicles, medical devices, and more across a multitude of industries. Advancements over the past five years have enabled FOS to expand its abilities to include unprecedented levels of data and sensing density across applications in aerospace, energy, and even the medical field. This is helping engineers solve problems they are faced with today, and innovate to advance their designs. Today there are a vast number of real-world applications for fiber optic technology, as well as a realm of possibilities for the future.

    This article will discuss the recent advancements in intrinsic FOS technology, including 3D shape sensing and optical frequency domain reflectometry. It will also address how engineers can utilize the technology today, and provide a preview of what to expect in the future.

    Reply
  41. Tomi Engdahl says:

    LT6375 – ±270V Common Mode Voltage Difference Amplifier
    http://www.linear.com/product/LT6375

    ±270V Common Mode Voltage Range
    97dB Minimum CMRR (LT6375A)
    0.0035% (35ppm) Maximum Gain Error (LT6375A)
    1ppm/°C Maximum Gain Error Drift
    2ppm Maximum Gain Nonlinearity
    Wide Supply Voltage Range: 3.3V to 50V
    Rail-to-Rail Output

    575kHz –3dB Bandwidth (Resistor Divider = 7)
    375kHz –3dB Bandwidth (Resistor Divider = 20)

    The LT®6375 is a unity-gain difference amplifier which combines excellent DC precision, a very high input common mode range and a wide supply voltage range. It includes a precision op amp and a highly-matched thin film resistor network. It features excellent CMRR, extremely low gain error and extremely low gain drift.

    Compared with existing difference amplifiers that handle high common mode voltages, the LT6375’s selectable resistor divider ratios offer superior system performance by allowing the user to achieve maximum SNR, precision and speed for a specific input common mode voltage range.
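
    The arithmetic behind those divider ratios is worth a quick check (illustrative numbers of mine, not from the datasheet): the divider must bring the common mode voltage within what the internal op amp can handle, and a larger ratio buys range at the cost of more attenuation, which shows up as the lower bandwidth quoted above.

```python
# Why the selectable divider ratio matters (illustrative, not from the datasheet)
VCM_MAX = 270.0                       # worst-case input common mode, volts
for divider in (7, 20):
    v_internal = VCM_MAX / divider    # what the internal op amp actually sees
    print(f"divider {divider:2d}: +/-{VCM_MAX:.0f} V input appears as "
          f"+/-{v_internal:.1f} V internally")
# divider  7: +/-38.6 V -- needs most of the 50 V maximum supply range
# divider 20: +/-13.5 V -- fits easily, at the price of more attenuation
```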

    Applications

    High Side or Low Side Current Sensing
    Bidirectional Wide Common Mode Range Current Sensing
    High Voltage to Low Voltage Level Translation
    Precision Difference Amplifier
    Industrial Data-Acquisition Front-Ends
    Replacement for Isolation Circuits

    Reply
  42. Tomi Engdahl says:

    TSMC Says Recovery from Taiwan Quake to Take Longer Than Expected
    http://www.eetimes.com/document.asp?doc_id=1328964&

    Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s largest foundry, today revised its expectations on the impact from a February 6 earthquake, saying the recovery will take longer than the company originally forecast.

    “We still expect to see wafer delivery delays in the first quarter,” the company said in a press statement. For Fab 14A, wafer delivery will be delayed by 10 to 50 days, and delivery of about 100,000 12-inch wafers will be delayed from the first quarter to the second quarter, according to TSMC. For Fab 6, the wafer delivery delay will be 5 to 20 days, with 20,000 8-inch wafers delayed to the second quarter. For Fab 14B, the delivery delay will be negligible, the company said.

    The impact to the company’s operations was limited to the Southern Taiwan Science Park, where TSMC has Fab 14 and Fab 6. TSMC initially said it expected to recover most of its operations within two to three days and that no more than 1 percent of its production for the first quarter of this year would be affected.

    Reply
  43. Tomi Engdahl says:

    5 V Rail Clean-Up
    http://www.eeweb.com/company-blog/recom/5-v-rail-clean-up/

    When analog and digital circuits share a common 5 V supply rail, there can be problems with high frequency interference from the digital to the analog ICs. This is particularly noticeable in audio or video applications, where the superimposition of digital noise on the analog signals can cause bars to appear on the image or unwanted hiss to be heard on the audio.

    Another variation on this theme is the very clean 5 V supply circuit shown in Figure 2, where a linear regulator is used to provide a very low noise output rail. The linear regulator cannot be simply placed in series with the 5 V input as even a low drop-out (LDO) regulator still needs a few hundred millivolts of headroom. In this application example, a dual 3.3 V output SMD DC/DC converter is used to provide 6.6 V, which then can be regulated down to 5 V by any suitable low noise linear regulator.
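
    The headroom arithmetic is the whole trick here; a quick check with illustrative values (dropout and load current are my assumptions, not Recom's):

```python
# Headroom check for the "stacked DC/DC + LDO" clean 5 V scheme (illustrative)
v_in      = 2 * 3.3   # two 3.3 V DC/DC outputs wired in series -> 6.6 V
v_out     = 5.0       # desired low-noise output rail
v_dropout = 0.3       # typical LDO dropout, volts (assumed)
i_load    = 0.5       # load current, amps (assumed)

headroom = v_in - v_out
print(f"headroom = {headroom:.1f} V -> "
      f"{'OK' if headroom > v_dropout else 'insufficient'} for the LDO")
print(f"LDO dissipation at {i_load} A: {headroom * i_load:.2f} W")
# 1.6 V of headroom easily covers the dropout; the ~0.8 W burned in the
# LDO at 0.5 A is the price paid for scrubbing the switching noise.
```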

    Reply
  44. Tomi Engdahl says:

    Tools and Libraries for Motor Control
    http://www.eeweb.com/company-blog/microchip/tools-and-libraries-for-motor-control/

    This article presents Microchip tools and libraries that expand support for motor control. These tools and libraries include the dsPICDEM™ MCLV Development Board, dsPIC33FJ32MC204 Plug-in Module, MPLAB IDE Version 8.15, and application notes AN1208 and AN1206.

    Reply
  45. Tomi Engdahl says:

    Analog filters minimize distortion
    http://www.edn.com/electronics-products/other/4441424/Analog-filters-minimize-distortion?_mc=NL_EDN_EDT_EDN_analog_20160218&cid=NL_EDN_EDT_EDN_analog_20160218&elqTrackId=61fabde7574a4a0e8a77a827d2c4c57d&elq=585f63ce10014054ba3001eee3f28b1a&elqaid=30884&elqat=1&elqCampaignId=27023

    The D68 series of 8-pole analog filters from Frequency Devices exhibits THD (total harmonic distortion) levels as low as -100 dB with near theoretical frequency response. Offered in low-pass and high-pass versions with Butterworth, Bessel, elliptic, and constant-delay transfer functions, the D68 series provides linear active filtering in a small 32-pin DIP package.

    Each model comes factory-tuned to a user-specified corner frequency between 1 Hz and 100 kHz. Units can be combined to create custom band-pass or band-reject filters. The self-contained devices require no external components or adjustments.

    http://www.freqdev.com/products/filters/d68.html#main
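
    For a feel of what an 8-pole Butterworth low-pass like the D68's does, scipy can produce the ideal transfer function; the 1 kHz corner below is an arbitrary choice of mine, not a D68 specification:

```python
import numpy as np
from scipy import signal

fc = 1_000.0   # corner frequency in Hz (arbitrary, within the 1 Hz-100 kHz range)
b, a = signal.butter(8, 2 * np.pi * fc, btype="low", analog=True)
w, h = signal.freqs(b, a, worN=2 * np.pi * np.logspace(2, 5, 1000))

f = w / (2 * np.pi)
db = 20 * np.log10(np.abs(h))
for probe in (fc, 2 * fc, 10 * fc):
    print(f"{probe:7.0f} Hz: {db[np.argmin(np.abs(f - probe))]:7.1f} dB")
# Expect about -3 dB at the corner and roughly -48 dB per octave beyond it
# (8 poles x -6 dB/octave), i.e. on the order of -160 dB a decade out.
```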

    Reply
  46. Tomi Engdahl says:

    One-wire interface over power pin for sensor signal conditioner calibration
    http://www.edn.com/design/analog/4441438/One-wire-interface-over-power-pin-used-in-the-calibration-of-sensor-signal-conditioners?_mc=NL_EDN_EDT_EDN_analog_20160218&cid=NL_EDN_EDT_EDN_analog_20160218&elqTrackId=13cf266202cd409f8009391b0797507d&elq=585f63ce10014054ba3001eee3f28b1a&elqaid=30884&elqat=1&elqCampaignId=27023

    Calibration is a crucial step in the manufacturing process of many sensors (or transmitters) such as pressure, temperature and position sensors. One key component needed for calibration is the communication interface between the sensor and the calibration system. This communication interface involves hardware and software, both in the sensor and calibration system. The communication interface must have the ability to calibrate multiple sensors simultaneously.

    In the context of sensor calibration, reducing the number of sensor pins needed specifically to support communication during calibration – and correspondingly the number of harness wires or cables – is a benefit because it reduces costs and the sensor solution hardware size (including pins and wires). The one-wire communication interface (OWI) provides these benefits by allowing communication with the device to take place over one wire. Moreover, in the case of two-wire transmitters, OWI over the power line provides even further benefits.

    Two-wire transmitters eliminate the need for an additional pin because data and power are sent over the same wire. In this article we specifically discuss the communication interfaces used during sensor calibration. Additionally, we address OWI for two-wire transmitters focusing on the challenges related to OWI over the power line, and present a solution to overcome these challenges.

    Reply
  47. Tomi Engdahl says:

    Less than one in a Quadrillion: A test method for measuring ADC conversion error rate
    http://www.edn.com/design/analog/4441443/Less-than-one-in-a-Quadrillion–A-test-method-for-measuring-ADC-conversion-error-rate?_mc=NL_EDN_EDT_EDN_analog_20160218&cid=NL_EDN_EDT_EDN_analog_20160218&elqTrackId=e99a120a71844992af0c14dcfb62e4f7&elq=585f63ce10014054ba3001eee3f28b1a&elqaid=30884&elqat=1&elqCampaignId=27023

    To err is to be human. But what claims can be made about your system’s analog to digital converter (ADC)? We will review the extent of our conversion error rate (CER) testing and analysis of high speed ADCs. The ADC CER measurement process may take weeks or months to complete depending upon the sample rate and the target limit required. Often, testing beyond the first error rate occurrence is needed for a high confidence level (CL) (Redd, 2000). For those systems that require a low conversion error rate, it takes this kind of detailed attention and effort to quantify. When the testing is done, the error rate can be established, with high confidence, to be better than 1e-15.

    Many real world high-speed sampling systems, such as electrical test and measurement equipment, vital systems health monitoring, radar and electronic warfare countermeasures cannot tolerate a high rate of ADC conversion errors. These systems are looking for an extremely rare or small signal across a wide spectrum of noise. False alert triggers in these systems can cause system failure. Therefore, it is important to be able to quantify the frequency and magnitude of a high speed ADC conversion error rate.
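
    The statistics behind a "<1e-15 with high confidence" claim deserve a worked example. With zero observed errors in N samples, the usual bound (the same math used for BER testing, per the Redd reference) says the true error rate is below -ln(1-CL)/N at confidence CL. The sample rate below is my assumption, purely to show why the test takes weeks:

```python
import math

CL = 0.95           # desired confidence level
target_cer = 1e-15  # conversion error rate bound we want to demonstrate
fs = 2.5e9          # ADC sample rate in samples/s (assumed for illustration)

# Zero errors observed in N samples => CER < -ln(1 - CL) / N at confidence CL
n_samples = -math.log(1.0 - CL) / target_cer
days = n_samples / fs / 86400
print(f"need {n_samples:.2e} error-free samples = {days:.1f} days of capture")
# ~3.0e15 samples, roughly two weeks of continuous capture at 2.5 GSPS --
# and longer still if any error occurs or a higher confidence is required.
```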

    Reply
  48. Tomi Engdahl says:

    Power Integrity: It’s not just decoupling caps
    http://www.edn.com/electronics-blogs/all-aboard-/4441394/Power-Integrity–It-s-not-just-decoupling-caps?_mc=NL_EDN_EDT_EDN_weekly_20160218&cid=NL_EDN_EDT_EDN_weekly_20160218&elqTrackId=85ae9c99aeac4f689d32f186c624647c&elq=67b4966efe9f430b8e0133b911161c89&elqaid=30892&elqat=1&elqCampaignId=27031

    If you’re looking to distinguish yourself in an up-and-coming area of engineering, power integrity (PI) could be for you.

    Gone are the days when simple decoupling-cap rules-of-thumb were effective. Even lower speed circuitry needs more care than ever before as IC processes reach gigahertz levels. According to Steve Sandler, one of PI’s leading researchers and practitioners, the field is about 15 years behind SI (signal integrity). Wow – talk about opportunity.

    Power integrity encompasses the entire power system, from VRM to PCB planes to capacitors to the chips themselves. I learned so many things at DesignCon (though mostly, I learned just how much I don’t know), such as:

    Impedance-match the PDN (power distribution network) just as you would a transmission line (except we’re talking milliohms, not 50Ω); see the target-impedance sketch after this list.
    The PDN impedance should be as flat as possible. When multiple anti-resonant peaks are excited, rogue waves can result, pushing supply voltage way out of spec.
    Ceramic caps can be TOO good; consider using ESR-controlled parts.
    IC manufacturers sometimes sabotage you with poor package design, such that it’s impossible to decouple adequately.
    Evaluate regulators based on output inductance. It’s one of the key specs leading to a good PDN.
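
    Those milliohms usually start from the target-impedance formula Z_target = V_rail × ripple_allowance / I_transient. A quick illustration with numbers of my choosing, typical of a digital core rail:

```python
# Target PDN impedance from allowed ripple and worst-case load step
# (illustrative numbers, not from the DesignCon material)
v_rail = 1.0    # supply voltage, volts
ripple = 0.03   # allowed deviation: 3% of the rail
di     = 10.0   # worst-case load current step, amps

z_target = v_rail * ripple / di
print(f"Z_target = {z_target * 1e3:.0f} milliohms")
# 3 milliohms, and it should stay roughly flat across every frequency the
# load can slew over -- hence "flat PDN", not just a pile of decoupling caps.
```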

    Reply
  49. Tomi Engdahl says:

    Non-Invasive Glucose Monitor Uses Far Infrared Light
    http://www.medicaldesignbriefs.com/component/content/article/1104-mdb/news/23984

    Diabetes patients traditionally monitor their daily blood glucose levels by sampling blood from the fingertips. Tohoku University researchers have developed a non-invasive method of measuring blood glucose using far infrared light.

    Conventional optical techniques work on the premise that near infrared light of specific wavelengths is selectively absorbed by glucose in the blood, but that absorption is weak, which limits their sensitivity. In contrast, far infrared light with wavelengths of around 10 microns is strongly absorbed by glucose, making it possible, in theory, for patients to get more sensitive and accurate measurements.

    Results from experiments show that blood glucose levels can be detected sensitively and measured accurately, with a margin of error of less than 20%.

    Reply
