The electronics industry will hopefully start to glow again after a less-than-stellar 2012. It’s safe to say that 2012 was a wild ride for all of us. The global semiconductor industry demonstrated impressive resilience in 2012, despite operating in a challenging global macroeconomic environment. Many have already ratcheted back their expectations for 2013. Beyond 2012, the industry is expected to grow steadily and moderately across all regions, according to the WSTS forecast. So we should see moderate growth in 2013 and 2014. I hope this happens.
The non-volatile memory market is growing rapidly. The Underlying technologies for non-volatile memories article explains that non-volatile memory applications can be divided into standalone and embedded system solutions. Standalone applications, driven primarily by cost, are dominated by NAND flash technology. The embedded market relies mainly on NOR flash for critical applications and NAND for less critical data storage. Planar CT NAND and 3D NAND could become commercially viable this year or in a few years. MRAM, PCRAM, and RRAM will need more time and new material innovations to become major technologies.
Multicore CPU architectures are a little like hybrid vehicles: Once seen as anomalies, both are now encountered on a regular basis and are widely accepted as possible solutions to challenging problems. Multi-core architectures will find their application but likely won’t force the extinction of single-core MCUs anytime soon. Within the embedded community, a few applications now seem to be almost exclusively multicore, but in many others multicore remains rare. There are concerns over the complexity and uncertainty about the benefits.
The FPGAs as the vanishing foundation article argues that we are entering a new environment in which the FPGA has faded into the wallpaper – not because it is obsolete, but because it is both necessary and ubiquitous. After displacing most functions of ASICs, DSPs, and a few varieties of microcontrollers, it’s fair to ask if there is any realm of electronic products where use of the FPGA is not automatically assumed. Chances are that in the next few years the very term “FPGA” might be replaced by “that ARM-based system on a chip” from Xilinx, Altera, Lattice, or another vendor.
Are products owned when bought? The trend in recent decades has been an increase in the dependence of the buyer on the seller.
More than 5 billion wireless connectivity chips will ship in 2013, according to market research firm ABI Research. This category includes standalone chips for Bluetooth, Wi-Fi, satellite positioning, near-field communications, and ZigBee, as well as so-called “combo” chips that combine multiple standards. Broadcom is seen retaining its lead in connectivity chips. Bluetooth Smart, WiGig, and NFC are all seeing increased adoption in fitness, automotive, and retail applications. Combo chips are also a growing opportunity, based on the popularity of smartphones, tablet computers, and smart televisions.
Signal integrity issues are on the rise as both design complexity and speed increase all the time. The analog world is moving faster than ever. Learning curves are sharper, design cycles are shorter, and systems more complex. Add to all this the multidisciplinary, analog/digital nature of today’s designs, and your job just gets more complicated.
The High-speed I/O: On the road to disintegration? article explains that increases in data rates, driven by the need for higher bandwidth (10Gbps, 40Gbps, 100Gbps networking), mean that the demands on system-level and chip-to-chip interconnects are increasingly challenging design and manufacturing capabilities. For current and future high-performance designs, high-speed serial interfaces featuring equalization could well become the norm, and high levels of SoC integration may no longer be the best solution.
For a long time, the Consumer Electronics Show, which began in 1967, was the Super Bowl of new technology, but now the consumer electronics show as a concept is changing and may be fading out in some ways. The social web has replaced the trade show as a platform for showcasing and distributing products, concepts, and ideas.
NFC, or near-field communications, has been around for 10 years, battling its own version of the chicken-and-egg question: Which comes first, the enabled devices or the applications? The Near-field communications to go far in 2013 article expects that this is the year for NFC. NFC is going to go down many different paths, not just the mobile wallet.
3-D printing was hot last year and is still hot. We will be seeing much more on this technology in 2013.
Inexpensive tablets and e-readers will find their users. Sub-$100 tablets and e-readers will offer more alternatives to pricey iPads and Kindles. The sub-$200 higher-performance tablet segment is also selling well.
User interfaces will evolve: capacitive sensing is integrating multiple interfaces, human-machine interfaces are entering the third dimension, and ubiquitous sensors are meeting the most natural interface – speech.
Electronics content in the automotive industry is growing at a furious pace. The automotive industry in the United States is steadily recovering, and nowadays electronics run pretty much everything in a vehicle. Automotive electronics system trends also impact test and measurement companies. Of course, with new technologies come new challenges: faster transport buses, more wireless applications, higher switching power, and the sheer amount and density of electronics in modern vehicles.
The Next Round: GaN versus Si article reports that wide-band-gap (WBG) power devices have arrived in the form of gallium nitride (GaN) and silicon carbide (SiC). These devices provide low RDS(on) together with higher breakdown voltage.
Energy harvesting was talked about quite a lot in 2012, and I expect that it will find more and more applications this year. Four main ambient energy sources are present in our environment: mechanical energy (vibrations, deformations), thermal energy (temperature gradients or variations), radiant energy (sun, infrared, RF), and chemical energy (chemistry, biochemistry). Peel-and-stick solar cells are coming.
Wireless charging of mobile devices is gaining popularity. Qi wireless charging technology is becoming the industry standard, as Nokia, HTC, and some other companies use it. There is a competing A4WP wireless charging standard pushed by Samsung and Qualcomm.
In recent years, ‘Low-carbon Green Growth’ has emerged as a very important issue in selling new products. The LED lighting industry analysis and market forecast article says that ‘Low-carbon Green Growth’ is a global trend, and LED lighting is becoming the most important axis of the ‘Low-carbon Green Growth’ industry. Expectations for industry productivity and job creation are very high.
A record number of dangerous electrical products have been pulled from the market through the Finnish Safety and Chemicals Agency’s oversight. Poor equipment design has been found in many products, especially LED light bulbs: almost 260 items were taken off the market, and very many of them were LED lights. Manufacturers rushed into the new technology with high enthusiasm and then forgot basic electrical engineering. CE marking is not in itself a guarantee that a product is safe.
The “higher density,” “higher dynamic” trend is also challenging traditional power distribution technologies within systems, and some new concepts are being explored today. The AC vs. DC power in data centers discussion is going strong. Redundant power supplies are called for in many demanding applications.
According to IHS, global advanced meter shipments are expected to remain stable from 2012 through 2014. Smart electricity meter penetration is seen doubling by 2016 (to about 35 percent). In the long term, IHS said it anticipates that the global smart meter market will depend on developing economies such as China, Brazil, and India. What’s next after the smart power meter? How about some power backup for the home?
The Energy is going digital article claims that graphical system design changes how we manipulate, move, and store energy. What defines the transition from analog to digital, and how can we tell when energy has made the jump? First, the digital control of energy, in the form of electricity, requires smart sensors. Second, digital energy systems must be networked and field-reconfigurable, sending data that makes continuous improvements and bug fixes possible. Third, the system must be modeled and simulated with high accuracy and speed. When an analog technology goes digital, it becomes an information technology — a software problem. The digital energy revolution is enabled by powerful software tools.
The cloud is talked about a lot, both as a design tool and as a service that connected devices connect to. The cloud means many things to many people, but irrespective of how you define it, there are opportunities for engineers to innovate. EDA companies are putting their hopes on accelerating embedded design with cloud-enabled development platforms; they say that The Future of Design is Cloudy. M2M companies are competing to develop solutions for easily connecting embedded devices to the cloud.
Trend articles worth checking out:
13 Things That Went Obsolete In 2012
Five Technologies to Watch in 2013
Hot technologies: Looking ahead to 2013
Technology predictions for 2013
Prediction for 2013 – Technology
Slideshow: Top Technologies of 2013
10 hot consumer trends for 2013
Popular designer articles from last year that could give hints about what to expect:
Top 10 Communications Design Articles of 2012
Top 10 smart energy articles of 2012
Slideshow: The Top 10 Industrial Control Articles of 2012
Looking at Developer’s Activities – a 2012 Retrospective
626 Comments
Tomi Engdahl says:
The truth about Cloud security
http://www.edn.com/electronics-blogs/practical-chip-design/4414580/The-truth-about-Cloud-security
There are times when it seems as if a rumor gets started and then it grows on itself until we just accept it as a truth. Nobody seems to really question it even though many, including those that quote it, know of its shaky derivation. One that comes to mind is that 70% of development is spent in verification. Nobody even knows if this means number of people, elapsed time, cost or some other measure, but we do know that 70% of something is taken up by this task.
Over the past few months another such myth has started to emerge and that is what I want to discuss today. The myth is that nobody will trust their design to the Cloud and this is why EDA in the Cloud has not worked.
One response was from a small, independent IP provider and basically said: if someone steals my design, then I take it as a sign that it has value and that I did a good job. I would rather get paid for it, and most companies that are reputable will do so, because it is not worth them stealing something and getting caught.
Next I spoke to Mohamed Kassem, an entrepreneur who is constructing a yet to be announced cloud-based semiconductor company. He told me that there is a big difference based on the size of company that you talk to. He said that large semiconductor companies talk a lot about the security of their data and yet they don’t really walk the talk.
There are also issues regarding small versus large cloud providers. People are concerned with knowing where their data is located and how it is protected. Some people may have an issue with using Amazon cloud services and may prefer a small dedicated service provider that makes it very clear how the system is organized and maintained. Others would rather trust a company that is putting its reputation on the line.
But this comes back to the point Mohamed was making. He said that most companies’ IT infrastructure is so badly put together and maintained that it provides less protection than they think.
If the data is stored in the cloud and used in the cloud, it actually provides a much easier upgrade, maintenance, and control environment compared to the situation today, where vendors have to ship stuff to many customer sites and then have no visibility into how it is used. In the cloud they can see who accesses and uses it, and they can track everything.
So, in the past it has been the large EDA companies that have tried, and failed, to offer cloud-based services. It may be that their top few customers are concerned about the Cloud and not yet ready to take the leap.
Tomi Engdahl says:
The (next) final frontier
http://www.edn.com/electronics-blogs/fpga-gurus/4414926/The–next–final-frontier
FPGAs have been taking small steps into base station infrastructure by displacing multicore DSPs, but the newest collaboration between Xilinx and Sumitomo suggests that RF and IF blocks may be linked directly to FPGAs for wireless systems of the future.
Ever since viable multicore processors began to be implemented on FPGAs in the mid-2000s, the barriers to displacing multicore DSPs and integer processors have been falling one by one. FPGAs have displaced standard processors in a wide range of data center and wireline network infrastructure systems, and in recent years it appeared that the full gamut of wireless infrastructure systems could take advantage of FPGAs soon. Still, the final links between IF devices and the RF discrete products closest to the antenna seemed to favor ASSPs.
This is why Xilinx may be able to take full advantage of the deal with Sumitomo, not only to have a good GaN III-V power amp in its portfolio, but also to promote its IF cores to a broader audience.
In the real world, design-ins for IF and RF discrete standard products can have a long lifetime, and it is unlikely that products from Texas Instruments, Analog Devices, and high-power specialists will vanish quickly. But the transition from 3G to 4G/LTE networks is a time when new base stations and dedicated antenna subsystems will be deployed by major carriers. Since many small-cell base stations will be placed on traffic lights or on the sides of buildings, real-estate efficiency is an absolute must, as is power reduction.
Tomi Engdahl says:
UL-certified software streamlines development for consumer functional safety
http://www.edn.com/electronics-products/other/4414785/UL-certified-software-streamlines-development-for-consumer-functional-safety
New UL-certified software packages from Texas Instruments are helping make designing functional safety consumer applications using C2000 real-time control microcontrollers (MCUs) easier and faster.
The software in the SafeTI software packages is UL-certified, as recognized components, to the UL 1998:2008 Class 1 standard, and is compliant with IEC 60730-1:2010 Class B; both standards cover home appliances, arc detectors, power converters, power tools, e-bikes, and many other products.
The SafeTI software packages are available for select TI C2000 MCUs and can be embedded in applications using these MCUs to help customers simplify certification for functional safety-compliant consumer devices.
The software packages include ready-to-run, simple application examples with UL-certified software libraries (as recognized components) for select TI C2000 MCUs.
The software package includes UL-certified software libraries and example projects ready to run on controlCARD-based hardware, a user manual and a safety manual to help speed development for many consumer products requiring IEC 60730 and UL 1998 standard compliance or certification.
Tomi Engdahl says:
Component obsolescence: Is your process up to the challenge?
http://www.edn.com/design/components-and-packaging/4414186/Component-obsolescence–Is-your-process-up-to-the-challenge-
Electronic component obsolescence presents unique product design challenges such as continuity, quality, cost management, and support for the total life of the product. There are constraints on components and substitutions based on the original board design; customer requirements (e.g., cost, time, functional, and environmental requirements); the ability to match components for identical drop-in capability versus re-design or interposer modules to resize and allow for drop-in; the impact of semiconductor die shrinks on functionality (e.g., noise susceptibility, drive capability, signal integrity, different edge rates, etc.); and supply issues both immediately and over the life of the product.
As a result, obsolescence or end of life (EOL) component challenges can prove to be critical events. Importantly, as an engineer, being able to design solutions that enable your products to perform to specifications and environmental requirements without an impact on the product quality, functionality, or otherwise for the customer is a real challenge.
EOL events are not just about component supply though; obviously, the design challenges for maintaining end-products, particularly for board designs, are compounded during EOL events.
Tomi Engdahl says:
Web JTAG tool supports innovative fast board failure test
http://www.edn.com/electronics-products/other/4414807/Web-JTAG-tool-supports-innovative-fast-board-failure-test
A suite of JTAG tools that is downloadable from the Web is supporting an innovative technique to quickly check the failure of components on a board.
JTAGLive Studio is a comprehensive package of Python-based JTAG boundary-scan tools that enable electronic designers and manufacturing test engineers to develop complete PCB test and programming applications. Studio establishes a new class of PCB test and device programming tool-set that dramatically lowers the cost of entry for test and hardware engineers, while still offering the many traditional benefits of JTAG/boundary-scan alongside newer technologies, like processor-controlled test.
A key feature is that JTAGLive Studio works with or without design netlist data and can be used to test interconnects (from individual nets to an entire board), logic clusters, memories, and more.
Tomi Engdahl says:
First online development environment for the Internet of Things
http://www.edn.com/electronics-products/other/4414112/First-online-development-environment-for-the-Internet-of-Things
Thingsquare has announced Thingsquare Code, to help connect products such as light bulbs, thermostats, and smart city systems to smartphone apps.
Thingsquare Code is claimed to be the world’s first online interactive development environment (IDE) for the Internet of Things, and it works with a number of recent chips that target the emerging Internet of Things market from leading chip vendors Texas Instruments and STMicroelectronics.
Thingsquare Code lets developers of Internet of Things products program their wireless chips from a web browser.
“The latest IP/6LoWPAN solutions for IoT applications from Texas Instruments (TI) will be ready for Thingsquare Code,” said Oyvind Birkenes, general manager, Wireless Connectivity Solutions, TI. “Thingsquare opens the door to developers from various disciplines to connect their products faster to the Internet.”
http://thingsquare.com/
http://thingsquare.com/code/
Tomi Engdahl says:
Broadcom: Time to prepare for the end of Moore’s Law
http://www.eetimes.com/electronics-news/4415006/Broadcom–Time-to-prepare-for-the-end-of-Moore-s-Law
MOUNTAIN VIEW, Calif. – The party’s not over yet, but it’s getting time we start thinking about calling a cab. That’s Henry Samueli’s view of the semiconductor industry in a nutshell.
The chief technology officer of Broadcom Corp. was shockingly frank in an on-stage interview at an event celebrating the 40th anniversary of Ethernet.
“Moore’s Law is coming to an end—in the next decade it will pretty much come to an end so we have 15 years or so,”
“Standard CMOS silicon transistors will stop scaling around 5 nm and everything will plateau,” he said.
“I am comfortable we will get to terabit networking speeds, but I’m not sure I see a path to petabit speeds,”
“We still have another 15 years or so to enjoy, but we need to prepare at some point for a network that doesn’t double in bandwidth every two years,” he added.
The end of CMOS scaling “has been one of my biggest concerns for some time,” Samueli told EE Times after participating in a panel discussion. “We’ve been talking to customers about this for awhile,” he said.
Samueli said he has briefed customers that prices for leading edge chips will increase, starting with the 20 nm generation due to rising fabrication costs.
Stacking chips into so-called 3-D ICs promises a one-time boost in their capabilities, “but it’s expensive,” said Samueli. Broadcom expects to use 3-D stacks to add a layer of silicon photonics interconnects to its high end switch chips, probably starting in 2015 or later, he said.
Another industry veteran and EE on a panel with Samueli took issue with the Broadcom exec’s predictions. “The real situation is we have 10-15 years visibility and beyond that we just don’t know how we will solve [the problems of CMOS scaling] yet,” said Dave House, chairman of switch maker Brocade and a veteran of 23 years at Intel.
“In the 1970s I started preaching Moore’s Law will solve all our problems, and Gordon stopped me and said, ‘Ten years out, I don’t think it can continue,’” House said. “Ten years later, Gordon said again, ‘I only see about ten years here.’”
Tomi Engdahl says:
How to lower PLC software costs
http://www.controleng.com/single-article/how-to-lower-plc-software-costs/18bc910cda28627743db7e33ada0e473.html
PLC vendors make much more profit from software licenses than they do selling hardware. Smart shopping can avoid some of those fees. Advice for lowering PLC licensing fees follows.
Programmable logic controllers (PLCs) and now programmable automation controllers (PACs) are the mainstay of discrete factory automation, and they are increasingly being used in process control. While market research companies predicted the death of PLCs some years ago, claiming that industrial computers would run them out of business, PLCs have continued to dominate automation.
PLC vendors have improved the processors, memory, I/O (input/output), communications, and capabilities of PLCs and PACs over the years to the point where they can compete with distributed control systems (DCSs), motion controllers, CNCs, and specialty controllers, such as robot controls and vision systems. At the same time, the cost of PLCs and PACs has dropped considerably, to the point where a fully functional PLC is available for several hundred dollars.
Competition among the major PLC vendors also helps to keep the prices down. PLCs get bigger, faster, smarter, and more capable, but hardware prices remain fairly constant.
All this capability has come at a very high price: namely, in the software needed to connect the factory floor to business operations. Understanding how software licenses work can help you understand how you may spend 10 times as much on software as on PLC hardware.
The reasoning is simple: Users of PLC-based equipment are captive customers and have few options but to use the PLC vendor’s software. Software licenses are lucrative for vendors.
How much do these licenses cost? That’s hard to say and almost impossible to find out. And there’s no “typical” cost because needs vary greatly between user and industry. The cost is often driven by the number of data points, software features, and user-level discounts. For this reason—including competitive advantage—open price lists are not readily available, especially for PLC software.
On the other hand, it is possible to find almost anything on the Internet. A little hunting found these (vendor omitted):
Redundancy module for $18,400: This links two systems to create a fault-tolerant redundant pair.
HMI/SCADA module for $6,975 creates HMI/SCADA clients. (HMI is human machine interface software. SCADA stands for supervisory control and data acquisition.)
SQL data logging module for $950 provides basic SQL (structured query language) data logging.
Reporting module for $2,200 creates dynamic, database-driven Adobe PDF reports.
It is difficult to know if these prices are high, low, or typical, because there is little to compare them to.
What software do you need?
A typical PLC application usually requires purchase of:
PLC programming software (Figure 2)
HMI development software
SQL / database license
Data I/O server
Add-on tools and components
HMI run-time license
Each, of course, has a separate software license.
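The module prices quoted above make it easy to see how the software bill can dwarf the PLC hardware cost. A quick back-of-the-envelope calculation (the $500 hardware figure is an illustrative ballpark based on the article’s “several hundred dollars”, not a quoted price):

```python
# License prices quoted in the article (vendor omitted there as well).
licenses = {
    "Redundancy module": 18_400,
    "HMI/SCADA module": 6_975,
    "SQL data logging module": 950,
    "Reporting module": 2_200,
}
plc_hardware = 500  # illustrative ballpark for a fully functional PLC

software_total = sum(licenses.values())
print(f"Software licenses: ${software_total:,}")   # Software licenses: $28,525
print(f"Software / hardware ratio: {software_total / plc_hardware:.0f}x")
```

Even this partial list of modules lands well past the “10 times as much on software as on PLC hardware” figure mentioned above.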
Tomi Engdahl says:
Step-motor-based systems stay competitive
http://www.controleng.com/single-article/step-motor-based-systems-stay-competitive/99570165fcec1b2637a4636a226ff94e.html
Motion control: Traditional stepper-motor systems represent the only motion-control technology able to operate in open loop—although the addition of position feedback to enhance performance is on the rise. Simpler controls, lower cost components, and other innovations keep stepper systems competitive with servo motion systems in numerous applications.
Stepper motion systems shine in many applications that require less than critical speed and position accuracy. Among the technology’s drawing points are no need for system tuning (versus servo-based motion), motors less costly to produce, and simpler controls and cabling. System price advantage remains even when adding a low-cost feedback device. Other benefits of stepper technology include minimal system setup time, less need of user expertise, and better motor inertia matching for driven loads.
Cost savings is a common theme voiced by stepper product suppliers. “Step motors have always had a cost advantage over servos, but traditionally this has come at the price of performance,”
Hummel listed three attributes that allow today’s stepper systems to take on many applications requiring servos in the past—while still maintaining a considerable cost advantage:
Elimination of stalling (desynchronization)
Ability to automatically adjust running current to load requirements
Ability to provide torque against an overriding load.
Standard step motors run at relatively high currents, resulting in excess heat generation that limits their duty cycle. Recently, Oriental Motor USA (and others) have developed a class of “constant-duty” step motors with higher energy efficiency than standard motors, noted Todd Walker, OM’s national marketing manager. It’s the result of several factors: control of running current to reduce heating losses, using higher grade steel laminations, magnetic flux pattern enhancement in the motor teeth, and a more efficient electronic drive. This combination enables torque production with lower running currents.
Another significant stepper system development has been the integration of the motor, drive, and ancillary components into one package. Benefits include simpler (less) wiring, matched drive and motor, plus options for closed-loop mode using motors with a pre-assembled encoder or with self-correction options similar to servo systems. A number of manufacturers offer such motor-drive packages.
Properly sized for the application, a stepper system running in open loop offers repeatable positioning—typically within a half-step or better accuracy (0.9° or better for a 200 step/rev motor), according to B&R Automation. “Overall positioning accuracy depends on a number of factors associated with the driver and motor construction,” Morton noted. “Microstepping (for example, 1/10th of a full step or 0.18°) is a commonly used method to reduce vibrations associated with stepper motors as well as to improve positioning accuracy.”
In open-loop control, a generous torque margin (up to 50%) is used to select the step motor. This serves as a safety factor against unexpected load changes. Feedback control minimizes or eliminates the need for this provision.
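The step-angle and sizing arithmetic above is easy to capture in a few lines of Python. The 200 steps/rev motor, 1/10 microstepping, and 50% torque margin come from the article; the 1.0 N·m load in the last example is an invented illustration:

```python
FULL_STEPS_PER_REV = 200          # standard 1.8-degree step motor

def step_angle(microsteps_per_full_step=1):
    """Angle moved per (micro)step, in degrees."""
    return 360.0 / (FULL_STEPS_PER_REV * microsteps_per_full_step)

def required_motor_torque(load_torque, margin=0.5):
    """Open-loop sizing: add a torque safety margin (up to ~50 percent
    per the article) against unexpected load changes."""
    return load_torque * (1.0 + margin)

print(step_angle())      # 1.8  (full step)
print(step_angle(2))     # 0.9  (half step)
print(step_angle(10))    # 0.18 (1/10 microstepping)
print(required_motor_torque(1.0))  # 1.5 -> size a 1.5 N*m motor for a 1.0 N*m load
```

With closed-loop feedback, as the article notes, much of that 50% margin can be reclaimed, which is one of the cost levers steppers hold over servos.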
For applications requiring better positioning accuracy or overall performance improvement, stepper motors can be fitted with different position sensors such as incremental or absolute encoders and resolvers. “A simple implementation may use the sensor just to verify reaching a commanded position and initiate correction moves. A more advanced implementation may use full closed-loop positioning, similar to servo system operation,” Morton added.
Oriental Motor sees stepper applications in CNC machines and short move x, y, z positioning for assembly or automation machines. “Applications that require short, quick moves (1 or 2 shaft revs)—or fine positioning and repeatability—are ideal for steppers,”
Tomi Engdahl says:
Hot new battery technologies need a cooling off period
IBM boffins want to run your car on air – and lithium
http://www.theregister.co.uk/2013/05/28/hot_lithium_ion_batts/
Scientists began buzzing about electrochemical energy cells in the 18th century; consumers bought their first low-density Lithium-ion batteries in the late 1980s, and industry became hooked on the things in the 1990s. Ever since, the comedy electronic-device conflagration has been as much a staple of tech news kibble as the hilarious satnav blunder.
But despite a long gestation period and some significant advances in the state of the art, Lithium-ion cells today still suffer the kind of problems that generate embarrassing headlines.
The most explosive incidents likely came as a result of iron filings, not lithium, entering batteries during manufacturing, possibly when crimping the batteries shut. Over time, these pieces of metal managed to create shorts between anodes and cathodes, causing rapid heating and thermal runaway – essentially metal particles were short-circuiting the batteries.
But lithium-ion remains the most common and popular rechargeable battery technology that we have.
Rechargeable lithium-ion batteries have evolved to deliver ever higher energy densities, longer life and greater reliability at lower cost. As a result they have been put to work in a wide range of applications from computers to medical devices, consumer electronics to electric cars.
It’s the electric car application that’s now driving research, development, innovation – and some hype – in the lithium world. The battery industry is by nature quite conservative and operates on fairly linear lines. But labs and research hubs all over the world are now working on improved lithium-ion technologies that could prove to be game-changers, transforming multiple industries if they could only surmount overheating and combustion issues while delivering a step-change in efficiency, durability and recharge times.
Lithium-sulphur technology is one area that could be getting closer to becoming commercial, through the efforts of companies like Sion Power and Polyplus in the US.
And there are other possibilities on the horizon. IBM believes that lithium-air batteries could deliver the significant improvements required to transform the weight, cost and reliability of the next generation of rechargeable batteries
“The safety issues of next-generation lithium-ion batteries must be resolved first before ever thinking about commercial applications. A really big game-change would require a technology like lithium-sulphur or possibly lithium-air. But any system using Lithium Metal is still regarded as inherently unsafe, and this hasn’t really entered into the game yet in current Lithium-air projects.”
“Firstly, any membrane must entirely remove all water and carbon dioxide so that only completely dry air enters the battery system. Then there are problems of reversible cycling and reducing over-potential, or the difference in voltage between charge and discharge. Next a stable electrolyte needs to be identified that allows cycles over many years in an application such as a car battery. And that’s on top of any problems with the anode. So for me the technology is still a long way away.”
Air technology is already proving useful in other applications. Zinc-air batteries, for example, have had success in hearing aid technology, despite not being rechargeable. But can lithium-air really power a car over anything like a 500-mile range? A Nissan Leaf can currently travel an absolute maximum of 100 miles on an expensive battery. If it could go 100 miles on a cheap battery, that would also be game changing, in a different way.
Tomi Engdahl says:
Multiple Studies Show Used Electronics Exports To Third World Mostly Good
http://hardware.slashdot.org/story/13/05/27/2245231/multiple-studies-show-used-electronics-exports-to-third-world-mostly-good
“Bloomberg News reporter Adam Minter writes in today’s Opinion section that several studies show that there’s nothing really remarkable or scandalous about exports of used equipment to developing nations. ‘Some is recycled; some is repaired and refurbished for reuse; and some is thrown into landfills or incinerators. Almost none of it, however, is “dumped” overseas.’ ”
“‘A 2011 study by the United Nations Environment Program determined that only 9 percent of the used electronics imported by Nigeria — a country that is regularly depicted as a dumping ground for foreign e-waste — didn’t work or were unrepairable, and thus bound for a recycler or a dump.’”
Tomi Engdahl says:
New and interesting electric motors
http://www.electronicproducts.com/Electromechanical_Components/Motors_and_Controllers/New_and_interesting_electric_motors.aspx
Motor efficiency has become a major specification in today’s energy-conscious world. Small universal electric motors typically have an efficiency of about 30%, while 2-kW three-phase machines can reach efficiencies of 95% or higher — when they are operated at near full load.
Electric motors lose efficiency in several areas, the most common being copper losses, iron losses, stray losses, and mechanical losses. The efficiency level of an electric motor depends on actual electric motor load versus rated load. The highest efficiency occurs near the rated load and falls off rapidly for under- and over-load conditions. This is why proper motor sizing is needed for greatest efficiency.
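The load-dependence described above can be sketched with a toy loss model (all numbers here are illustrative assumptions, not from the article): fixed iron and mechanical losses are roughly constant, copper losses scale with the square of load current, and efficiency peaks where the two are equal, near rated load:

```python
def motor_efficiency(load_fraction, rated_out_w=2000.0,
                     fixed_loss_w=60.0, copper_loss_rated_w=60.0):
    """Estimate efficiency with a simplified loss model: fixed losses
    (iron, mechanical) are constant, copper (I^2*R) losses scale with
    the square of the load current."""
    p_out = load_fraction * rated_out_w
    if p_out <= 0:
        return 0.0
    p_loss = fixed_loss_w + copper_loss_rated_w * load_fraction ** 2
    return p_out / (p_out + p_loss)

# Efficiency peaks near rated load and drops off under- and over-loaded:
for frac in (0.25, 0.5, 0.75, 1.0, 1.5):
    print(f"{frac:>5.0%} load: {motor_efficiency(frac):.1%} efficient")
```

With these assumed loss values the model reproduces the qualitative behavior the article describes: roughly 94 percent efficiency at rated load, falling noticeably at quarter load and above rated load.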
The same is true for the many electric automobiles available now: efficiency translates directly into range, as well as into the all-important effective MPG figure. The article also covers a few key automotive motors.
Tomi Engdahl says:
Engineer develops flat, spray-on optical lens
http://www.electronicproducts.com/Optoelectronics/Image_Sensors_and_Optical_Detectors/Engineer_develops_flat_spray-on_optical_lens.aspx
Breakthrough could change the way imaging devices like cameras and scanners are designed
Kenneth Chau, an assistant professor in the School of Engineering at the University of British Columbia’s Okanagan campus, with help from a team of U.S.-based researchers, has developed a substance that can be affixed to surfaces and turn them into flat lenses for ultraviolet light imaging of biological specimens.
“The idea of a flat lens goes way back to the 1960s when a Russian physicist came up with the theory,” Chau says. “The challenge is that there are no naturally occurring materials to make that type of flat lens. Through trial and error, and years of research, we have come up with a fairly simple recipe for a spray-on material that can act as that flat lens.”
“This is the closest validation we have of the original flat lens theory,” he explains. “The recipe, now that we’ve got it working, is simple and cost-effective.”
Tomi Engdahl says:
Six road blocks in high-voltage connector design (and how to overcome them)
http://www.edn.com/design/components-and-packaging/4415044/Six-road-blocks-in-high-voltage-connector-design–and-how-to-overcome-them-
Designing a high-voltage cable assembly and connectors can be very challenging. There are many aspects of the cable assembly that must be considered.
Before beginning the design you must answer some general, but very important questions like: What is the operating voltage and current? What type of environment will the cable be operating in? Will the cable be required to meet any specifications? Are there any specific requirements from the customer?
After you have answered these first round questions you can begin drafting a design. We have come up with the six most common design road blocks and tips on how to avoid and overcome them.
Size Constraints
The industry is constantly moving towards smaller, lighter assemblies with increased operating thresholds. Using spacing to offset high voltages has become a thing of the past.
Designing around an existing connector
Designing around an existing connector can prove to be extremely difficult.
Cost
With the economy in its current state, a major hurdle in HV design can be keeping the cost down. This can prove to be difficult when trying to design a reliable, effective, robust connector that meets or exceeds the customer’s requirements. In many cases performance ends up taking a back seat to reduced cost.
Manufacturability
The manufacturability of the design is a key component.
Materials
High-voltage cable assemblies and connectors require materials that perform well both electrically as well as mechanically.
Using dissimilar materials can prove to be problematic. Good adhesion between materials is critical when trying to offset high voltages.
Corona
A corona-free design can sometimes prove to be extremely difficult. A well designed corona-free cable assembly will ensure reliable, resilient cable assemblies that stand the test of time.
Tomi Engdahl says:
Leading a horse to higher-level water
http://www.edn.com/electronics-blogs/fpga-gurus/4415033/Leading-a-horse-to-higher-level-water
Altera Corp has announced the availability of a full SDK for the OpenCL language, underscoring the efforts the company has made over two years to have OpenCL accepted as a high-level method for designing FPGAs. OpenCL certainly has a broad coalition for support, but can any such effort make inroads into the standard design tools and HDLs used within the FPGA design community?
Applied Micro (formerly AMCC) threw a spanner in the works in the summer of 2010 by acquiring TPACK in order to link the designs with Applied’s own work in OTN and Ethernet.
Don Faria, Senior Director of Corporate Strategy for Altera’s communication and broadcast division, said that over the long term, Altera could use the combined IP from Avalon and TPACK to create unique single-chip broadband FPGAs that utilized Altera’s physical serdes transceiver blocks rated at 10 Gbits/sec and above, as well as the company’s experimental on-chip transmit and receive optical subassemblies (TOSAs and ROSAs). Such designs may not become commonplace in high-speed networking for a few years, however.
While Applied Micro has been heavily involved in designs with 64-bit and multicore 32-bit ARM processors in recent months, Jones wanted to dispel the notion that Applied Micro was moving the company more toward control-plane designs operating at higher layers in the OSI protocol stack.
Tomi Engdahl says:
Sub-microsecond interconnects for processor connectivity—The opportunity
http://www.edn.com/design/wireless-networking/4414961/-Sub-microsecond-interconnects-for-processor-connectivity-The-opportunity
As Moore’s Law has continued to drive the performance and integration of processors ever higher, the need for higher-speed interconnects has continued to grow as well. Today’s interconnects commonly sport speeds ranging from 1 to 40 Gigabits per second and have roadmaps leading to hundreds of gigabits per second.
In the race to faster and faster interconnect speeds, what is often not discussed are the types of transactions supported, the latency of communications, the overhead of communications and what sorts of topologies can be easily supported. We tend to think of all interconnects as being created equal, with a figure of merit based solely on peak bandwidth. Reality is quite different. Much as there are different forms of processors optimized for general-purpose, signal processing, graphics and communications applications, interconnects are also designed and optimized to solve different connectivity problems.
The following table presents the typical bandwidth and lane configurations for PCI Express, RapidIO and 10 Gig Ethernet as used in processor connectivity applications.
This article will focus, not on the raw bandwidths of the interconnect technologies, but rather on the inherent protocol capabilities, supported topologies and latency design targets for each of these interconnects. By doing this we gain a better understanding of where it makes sense to use each technology.
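The point about latency versus peak bandwidth can be made concrete with a back-of-the-envelope model (a hypothetical sketch with illustrative numbers, not from the article): for small transactions, the fixed per-message latency dominates total transfer time, so peak bandwidth alone is a poor figure of merit:

```python
def transfer_time_us(payload_bytes, latency_us, bandwidth_gbps, overhead_frac=0.0):
    """Total transfer time = fixed latency + serialization time; protocol
    overhead reduces the usable fraction of the raw bandwidth."""
    usable_bits_per_us = bandwidth_gbps * (1.0 - overhead_frac) * 1000.0
    return latency_us + payload_bytes * 8 / usable_bits_per_us

# A 64-byte message on a 10 Gbit/s link serializes in ~0.05 us, so a
# 1 us interconnect latency dominates the total by roughly 20x:
print(transfer_time_us(64, latency_us=1.0, bandwidth_gbps=10.0))
print(transfer_time_us(64, latency_us=0.1, bandwidth_gbps=10.0))
```

This is why sub-microsecond latency, not raw lane speed, is the interesting metric for processor-to-processor transactions.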
Tomi Engdahl says:
How to use ECC to protect embedded memories
http://www.edn.com/design/test-and-measurement/4415063/Use-ECC-to-protect-embedded-memories-
The scaling of semiconductor technologies has led to a lower operating voltage in semiconductor devices, which, in turn, reduces the charge available on the capacitors for volatile memories. The overall effect of this is that devices are generally more sensitive to soft or transient errors, because even low-energy alpha particles can easily flip the bits stored in storage cells or change the values stored in sequential logic elements, producing erroneous results.
Increasing memory density, system-on-chip (SoC) memory content, performance, and technology-scaling combined with reduced voltages increases the probability of multi-bit transient errors. Notably, transient errors are no longer restricted to aerospace applications. Now applications such as biomedical, automotive, networking, and high-end computing are susceptible to transient errors and have a need for high reliability and safety.
In this article we discuss transient error detection and correction methods using advanced error correction code (ECC) based solutions for embedded memories in order to meet the requirements of today’s high-reliability applications.
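The article covers advanced multi-bit ECC schemes; as a minimal, self-contained illustration of the underlying idea, here is the classic Hamming(7,4) single-error-correcting code (a much simpler code than the production ECC the article targets):

```python
def hamming74_encode(d):
    """Encode a 4-bit value into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(bits):
    """Recompute parity; a nonzero syndrome is the 1-based position of
    the flipped bit, so any single-bit error can be corrected."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        b[syndrome - 1] ^= 1  # flip the erroneous bit back
    d1, d2, d3, d4 = b[2], b[4], b[5], b[6]
    return (d1 << 3) | (d2 << 2) | (d3 << 1) | d4

# Simulate a transient bit flip in a stored word and recover it:
word = hamming74_encode(0b1011)
word[4] ^= 1  # an alpha-particle strike flips one stored cell
print(bin(hamming74_correct(word)))
```

Adding one more overall parity bit turns this into the SECDED (single-error-correct, double-error-detect) arrangement typically used for embedded memories.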
Tomi Engdahl says:
Many digital power functions moving to state machine-based control
http://www.edn.com/electronics-blogs/powersource/4414978/Many-digital-power-functions-moving-to-state-machine-based-control
One of the trends I saw at APEC 2013 this year was a move away from full-blown DSP/high-level processor control to a “keep it simple” state-machine approach: a less powerful, but lower-power, control solution for many digital power applications.
Designers are choosing the state machine approach for simpler digital function solutions, while the more complex digital power functions still use the bigger, more powerful processors.
Tomi Engdahl says:
Key factors determine an independent test laboratory’s credentials
http://www.edn.com/design/test-and-measurement/4415232/Key-factors-determine-an-independent-test-laboratory-s-credentials-
Sustained, significant, global demand for mobile computing devices, particularly tablets and smartphones, is driving an increase in the complexity of chip architectures and the growth of system-in-package (SiP) and system-on-chip (SoC) architectures. With these changes, the need to ensure quality through accredited, best-in-class technical laboratories to conduct functional and anti-counterfeit testing has increased.
Tomi Engdahl says:
Top 10 challenges for Analog
http://www.edn.com/electronics-blogs/anablog/4415122/Top-10-challenges-for-Analog
Tomi Engdahl says:
Motorola shows off insane electronic tattoo and ‘vitamin authentication’ prototype wearables
http://www.theverge.com/2013/5/29/4377892/motorola-shows-electronic-tattoo-and-vitamin-authentication-prototypes
The first was a prototype electronic tattoo on her arm from MC10, about which she quipped “teenagers might not want to wear a watch, but you can be sure they’ll wear a tattoo just to piss off their parents.”
The second technology was even wilder: a pill from Proteus Digital Health that you can swallow and which is then powered by the acid in your stomach. Once ingested, it creates an 18-bit signal in your body — and thereby makes your entire person an “authentication token.” Dugan called it “vitamin authentication.” Motorola CEO Dennis Woodside, who earlier confirmed the Moto X phone, added that the Proteus pill was approved by the FDA.
“There are a lot of problems in wearables,”
Tomi Engdahl says:
Ultracapacitors in light rail in Arizona?
http://www.edn.com/electronics-blogs/powersource/4415072/Ultracapacitors-in-light-rail-in-Arizona-
After we published the blog about ultracapacitors in light rail and regenerative braking, Bonnie Baker from TI asked:
The capacitive array is on the roof of this vehicle. I am wondering what the effects of temperature changes are on this stored energy.
Maxwell works closely with designers to develop a robust design in any type of environment and they will provide the proper charts and graphs to the customer for this effort. In this case the ultracaps should be moved from the top of the vehicle to somewhere lower and out of direct sunlight.
The general rule of thumb is that for every 10°C reduction in temperature the life estimate can be doubled.
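That rule of thumb is easy to put into numbers (the 65 °C rated temperature below is an assumed example value, not from the article):

```python
def life_multiplier(temp_c, rated_temp_c=65.0):
    """Rule of thumb: the life estimate doubles for every 10 degC the
    operating temperature sits below the rated temperature."""
    return 2.0 ** ((rated_temp_c - temp_c) / 10.0)

# Moving the bank off a hot roof: 65 degC -> 45 degC gives 4x the life estimate.
print(life_multiplier(45.0))
```

This is why relocating the ultracaps out of direct sunlight, as suggested above, pays off so directly in service life.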
Tomi Engdahl says:
Global GDP woes dragging on chip growth
http://www.eetimes.com/electronics-news/4415347/Global-GDP-woes-dragging-on-chip-growth
Market research firm IC Insights Inc. Wednesday (May 29) cut its forecast for 2013 semiconductor market growth to 5 percent from 6 percent, citing tepid first quarter gross domestic product (GDP) estimates in many of the world’s largest economies.
According to IC Insights, the correlation between worldwide GDP growth and IC market growth has been excellent over the past few years. The firm expects this trend to continue in 2013.
Tomi Engdahl says:
Taiwan’s electronic-medicine outperforms drugs
http://www.eetimes.com/design/medical-design/4415346/Taiwan-s-electronic-medicine-outperforms-drugs
Taiwan is perhaps best known for its expertise in consumer electronics, but by pioneering solutions to pressing problems in medicine, the island country hopes to take the lead in medical electronics too. At its National Taiwan Hospital located on the campus of National Taiwan University (NTU), Taiwan’s national aspirations to become a medical electronics leader are taking shape.
At the beginning of the innovation cycle is Lu’s latest prototype: an experimental wireless medical implant that cures chronic pain with a system-on-chip (SoC) that injects electrical signals into the affected nerves on demand.
“Today this type of therapy requires a surgical procedure that exposes the nerve so that doctors can inject the pain relieving signal into it,”
The pain-relieving SoC, which can be left in the patient indefinitely (since it has no battery), is currently being passivated to keep the body from rejecting it.
Another non-invasive therapy, using ultrasound instead of electricity, is being pioneered by IEEE Fellow Pai-Chu Li, who has worked with medical equipment giant Genesis Logic to refine ultrasound to 30-micron resolution, enabling cheap, accurate, noninvasive, real-time diagnostics.
“The advantage of ultrasound over radio frequencies is that ultrasound can be focused very precisely in order to transfer more energy in a shorter amount of time,” said Li.
Hi-resolution MRIs
Another non-invasive technology being pioneered at NTU puts an endoscope (a tiny video camera for inspecting the inside of the stomach, intestines and veins) inside a pill that can be swallowed.
Tomi Engdahl says:
Interfacing to sensors using Data Converter IP
http://www.chipestimate.com/tech-talks/2013/02/05/Synopsys-Interfacing-to-sensors-using-Data-Converter-IP?elq_mid=4619&elq_cid=303473
Interactions between people and electronic systems have evolved to the point that we can reasonably expect these systems to be “aware” of their environment: analyzing information about position, acceleration, temperature, touch, light, pressure, flow, and so on, and then acting upon it. To complete this process, the electronic system must acquire the information, convert it to an electrical quantity, and process it, typically through digital signal processing.
A system’s signal processing chain includes the sensor itself and its control or biasing circuitry, the analog signal conditioning circuits, an analog-to-digital converter (ADC) and, finally, the digital signal processor.
MCU-based solutions are effective in systems using discrete sensors. Alternatively, a dedicated system-on-chip (SoC) with an embedded sensor can be a more effective total system solution. For example, micro-electro-mechanical systems (MEMS)-based sensors can be embedded with the corresponding digital signal processing block, thus integrating the complete chain in a single die.
Sensor signal processing can be broadly classified in terms of the bandwidth of the signal (the speed at which the physical quantity being measured changes) and the resolution required to obtain meaningful information. Figure 2 captures the different applications and their characteristics. It also shows that the ADC required to cover the complete range can be built out of two very effective architectures: Successive Approximation Register (SAR) and oversampling Sigma-Delta (SD) ADCs.
The ADC typically used in the sensor signal processing chain is implemented using one of two architectures: SAR or SD ADC. These architectures are the most efficient when applied to match the characteristics of the signal being processed.
Modern implementations of the SAR architecture are extremely compact and low-power. They are most effective for resolutions up to 12-bit or 14-bit.
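The SAR principle is essentially a binary search, which a small behavioral model makes concrete (the reference voltage and resolution below are illustrative values, not from the article):

```python
def sar_convert(vin, vref=3.3, bits=12):
    """Successive approximation: test one bit per clock from MSB to LSB,
    keeping each bit whose internal-DAC level does not overshoot the input."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)
        if vin >= trial * vref / (1 << bits):  # compare input vs trial DAC level
            code = trial  # keep the bit; otherwise leave it cleared
    return code

# A mid-scale input resolves to a mid-scale code in 12 comparisons:
print(sar_convert(1.65, vref=3.3, bits=12))
```

One comparison per bit is why SAR converters are so compact and low-power at moderate resolutions.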
The SD ADC oversamples the signal at a high rate and generates a bit stream whose density is proportional to the amplitude of the signal.
Oversampling SD-based ADCs can effectively move quantization noise to an out-of-the-signal band (“noise shaping”), resulting in performance in excess of 20-bit to 24-bit over a relatively narrow signal band, making them more suitable for sensor applications such as energy metering in smart grids.
Similar to the SAR ADC, the SD ADC architecture is also highly suitable for integration in digital circuits. In fact, the ADC is, to a large extent, a digital circuit, with a single critical analog element: the amplifier in the integration loop.
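The bit-stream behavior described above can be sketched with a first-order behavioral model (a software illustration, not a circuit design): the integrator accumulates the error between the input and the fed-back 1-bit output, and the ones-density of the stream tracks the input amplitude:

```python
def sigma_delta_modulate(samples):
    """Behavioral first-order sigma-delta modulator: integrate the error
    between the input and the fed-back 1-bit output; the density of ones
    in the output bit stream tracks the input amplitude."""
    integrator, bits = 0.0, []
    for x in samples:  # inputs normalized to [-1, +1]
        feedback = 1.0 if bits and bits[-1] else -1.0
        integrator += x - feedback
        bits.append(1 if integrator >= 0.0 else 0)
    return bits

# A DC input of 0.5 should give a ones-density near (0.5 + 1) / 2 = 0.75:
stream = sigma_delta_modulate([0.5] * 1000)
print(sum(stream) / len(stream))
```

The digital decimation filter that follows the modulator then averages this dense bit stream down to high-resolution samples, which is why so much of the converter is digital logic.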
Tomi Engdahl says:
Intel CEO Makes First Purchase: ST-Ericsson GPS Unit
Read More At Investor’s Business Daily: http://news.investors.com/technology/052813-657933-new-intc-ceo-makes-first-acquisition.htm#ixzz2UmgNRPWJ
Tomi Engdahl says:
FPGAs Drive Digital I/O Solutions
http://108.58.179.125/CATFiles/RTC0912.pdf
FPGAs are particularly well suited to handle not only the digital I/O requirements for a range of popular interfaces, but also to support complex protocols and extended signal processing functions for embedded systems.
FPGAs fill the bill for digital I/O functions better than any other component.
For example, the Xilinx Virtex-6 FPGA supports many different types of
single-ended CMOS outputs with push-pull drivers equipped with high-impedance tri-state enables. In addition, both the output current and slew rate can be specified to meet special I/O requirements.
In most cases the direct electrical lines or optical links to the connected peripheral will be buffered, translated or converted with special devices matched to the peripheral. This provides full compliance with safety, shock, over-voltage and static discharge requirements, while protecting the FPGA from damage. The flexibility of the FPGA I/O pins allows compatibility with virtually every type of interface chip.
Tomi Engdahl says:
Wireless plus medical implants—The answer to antibiotic resistance?
http://www.edn.com/electronics-blogs/from-the-edge-/4415272/Wireless-plus-medical-implants-The-answer-to-antibiotic-resistance-
Amazing work is being done by John Rogers and his team at the University of Illinois at Urbana-Champaign. They’ve created remote-controlled electronic implants, bio-absorbable electronic circuits, that are designed to attack microbes, provide pain relief, stimulate bone growth, and then disappear.
Tomi Engdahl says:
First Standalone Devices with Multiple Energy Sources
http://www.eeweb.com/company-news/fujitsu_semiconductor/first-standalone-devices-with-multiple-energy-sources/
Fujitsu Semiconductor America introduced two new power management ICs (PMICs) with converters designed to harvest energy from solar, thermal, vibrational and other sources for use in a variety of applications.
The new energy-harvesting products are the MB39C811 and MB89C31. Each is designed for use in systems that derive energy from solar, thermal, or kinetic sources, and store the energy for use in autonomous devices or products such as those used in wireless sensor networks, wearable electronics products, and similar applications.
The MB39C811 is an ultra-low-power buck converter with a dual, full-wave bridge rectifier compatible with multiple energy harvesters. It is the industry’s first standalone device to derive energy from solar, thermal and vibration sources at the same time.
Tomi Engdahl says:
Hon Hai sets up display R&D company in Japan
http://www.reuters.com/article/2013/05/31/us-honhai-japanplant-idUSBRE94U0EL20130531
Taiwan’s Hon Hai Precision Industry Co Ltd said it has set up a research and development company in Japan for displays and touch panels, its latest effort to strengthen its display business.
The world’s largest contract electronics manufacturer has been investing to integrate its display business as it looks to diversify from the low-margin contract manufacturing business. Last year Hon Hai’s chairman, Terry Gou, purchased a 38 percent stake in Sharp Corp’s TV panel plant in Sakai, western Japan, the world’s most advanced and only 10th-generation LCD factory.
The company is also seeking a stake in Sharp, but talks have stalled since the Japanese company’s share price plunged last year in the wake of losses.
Tomi Engdahl says:
Samsung sells MCU business
http://www.eetimes.com/design/microcontroller-mcu/4415262/Mobile-focused-Samsung-sells-MCU-business-to-Ixys
Power IC vendor Ixys Corp. (Milpitas, Calif.), also parent of Zilog Inc., is buying the 4- and 8-bit microcontroller business of Samsung Electronics Co. Ltd. for about $50 million.
Under the terms of the agreement, Ixys will receive nearly 80 products, inventories, intellectual property and other assets related to the 4- and 8-bit business.
Samsung will continue to make the products under an expanded foundry deal with Ixys.
“We have executed a management decision based on the focus on mobile solutions that we are taking in our semiconductor business,” said Ben Suh, senior vice president of Samsung Electronics System LSI strategic planning team, in a statement issued by Ixys.
Tomi Engdahl says:
Analyst: Chip sales were weak in April
http://www.eetimes.com/electronics-news/4415433/Analyst-Chip-sales-were-weak-in-April
The three-month average of global chip sales is likely to be reported at $23.5 billion for April, unchanged compared with March, according to Bruce Diesen, an analyst at Carnegie Group (Oslo, Norway).
The PC market has been particularly slow, and Apple will have a slow second quarter in handsets.
“We recently cut our semiconductor sales [growth] estimate to minus 1 percent in 2013 from our old estimate of plus 5 percent,”
Tomi Engdahl says:
Motorola’s Moto X Phone Will Be Made in America
http://abcnews.go.com/Technology/motorolas-moto-phone-made-america/story?id=19284860#.Uae5x79Y5mQ
Motorola’s next flagship phone won’t only have sensors that will know when you are going to take a photo or when it is in your pocket, but it will be the first smartphone assembled in the U.S.
The phone will be made at Flextronics’ 500,000-square-foot facility in Fort Worth, Texas, which was once used to make Nokia phones. While the phone will be designed, engineered and assembled in the U.S., not all the components in the phone will be made in the U.S. The processor and screen, for example, will be made overseas.
“There are several business advantages to having our Illinois and California-based designers and engineers much closer to our factory,” Motorola said in a statement. “For instance, we’ll be able to iterate on design much faster, create a leaner supply chain, respond much more quickly to purchasing trends and demands, and deliver devices to people here much more quickly.”
Tomi Engdahl says:
Wanted for the Internet of Things: Ant-Sized Computers
http://www.technologyreview.com/news/514101/wanted-for-the-internet-of-things-ant-sized-computers/
A computer two millimeters square is the start of an effort to make chips that can put computer power just about anywhere for the vaunted “Internet of Things.”
If the Internet is to reach everywhere—from the pills you swallow to the shoes on your feet—then computers will need to get a whole lot smaller. A new microchip that is two millimeters square and contains almost all the components of a tiny functioning computer is a promising start.
The KL02 chip, made by Freescale, is shorter on each side than most ants are long and crams in memory, RAM, a processor, and more.
“The Internet of things is ultimately about services, like your thermostat connecting to the Internet and knowing when you’re coming home,”
Freescale will start offering the KL02, and some slightly larger microcontrollers, with Zigbee or low-power Bluetooth wireless integrated later this year. Wireless connectivity comes from adding the guts of a radio chip to the current designs. The company is also working to refine technology for packaging chips and other components together to enable many more millimeter-scale computers.
“All these heterogeneous things need to come together and be integrated,” says Karimi, “but we have to figure out how these components can coexist without degrading their performance.”
“Such wafer-scale packaging is getting close to ‘smart dust’ design points,”
Karimi agrees that batteries are a problem, saying that Freescale is working with partners developing energy harvesting components—of heat, radio waves, or light—that could power very small devices.
KL02: Kinetis KL02 Chip-Scale Package (CSP)
http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=KL02
Tomi Engdahl says:
RF: FinFET battles at unprecedented feature sizes
http://www.edn.com/electronics-blogs/fpga-gurus/4415372/FinFET-battles-at-unprecedented-feature-sizes-
We should have known that the FPGA market leaders would not let Intel Corp’s FPGA foundry partners (Achronix, Tabula, and recent addition Microsemi Corp) gain any ground based on Intel’s 3-D transistor capabilities. On May 29, Xilinx announced a “FinFast” program based on collaborating with TSMC on that company’s 16-nm FinFET process. No immediate FPGAs are expected, though Xilinx is expected to receive its first test chips from TSMC in 2013 and to have first products ready in calendar 2014.
Keep in mind, it was only two months ago that Achronix announced its Speedster22i FPGAs, based on Intel’s 22-nm FinFET process. Of course, 22 nanometers can seem archaic in a realm where “sub-20-nm” will be considered the new table stakes.
Tomi Engdahl says:
Freescale rolls IoT toolkit
http://www.eetimes.com/electronics-news/4415715/Freescale-rolls-IoT-toolkit-at-Computex-
With engineers in short supply, and with the most experienced Internet of Things (IoT) designers commanding top salaries when they can be found, Freescale has attempted to lower the expertise bar for developing context-aware applications by offering an open-source toolkit called its intelligent sensing framework (ISF) at Computex. The Xtrinsic ISF makes it easier for designers to differentiate IoT devices by simplifying the backend work of collecting and fusing sensor data for context-aware use cases.
“Sensor fusion remains important, but alone it is no longer enough for context-aware connected devices,” said Babak Taheri, vice president and general manager of Freescale’s sensor and actuator solutions division. “Developers need an open platform that enables data fusion and facilitates the creation of differentiated, context-aware devices that drive the Internet of Things.”
Pre-built adapters handle the communications details of collecting sensor data, while built-in power-management, host-proxy and command-interpreter functions simplify the coding effort, especially for novel sensor arrays mixing parts from various manufacturers.
Tomi Engdahl says:
Intel’s Haswell puts more analog inside
http://www.eetimes.com/electronics-news/4415471/Intel-s-Haswell-puts-more-analog-inside
Intel designed an on-chip voltage regulator and external inductors to support it for Haswell, its next-generation x86. What’s not clear is to what extent the work is driving the digital giant into more design of analog components and processes.
Haswell has two power rails coming into its package compared to six for Ivy Bridge, its prior generation CPU. Inside the Haswell package, Intel splits up the two rails into as many as 12.
The Haswell package includes 20 or more Intel-designed inductors, one for each phase of each power rail. The inductors, like the on-die voltage regulator, are Intel-made parts, replacing what used to be third-party analog parts on the motherboard outside the package.
The move shaves $2-$10 off the motherboard bill of materials.
“There is a small net adder in power consumption [by putting the voltage regulator on die] but we’re also getting more power rails on the die controlled at finer granularity,”
Tomi Engdahl says:
High-performance oscillators target cloud computing
http://www.edn.com/electronics-products/other/4415436/High-performance-oscillators-target-cloud-computing-
Silicon Labs has introduced a new family of crystal oscillators for ultra-low jitter reference timing for 10G, 40G and 100G cloud computing and networking equipment.
The devices offer less than 200 femtoseconds (fs) RMS jitter (integrated from 10 kHz to 1 MHz) for common Ethernet and Fibre Channel reference frequencies. They support LVDS and LVPECL output formats at 2.5 V and 3.3 V and offer both ±20 ppm and ±31.5 ppm total stability options, simplifying interfacing to a wide variety of processors, switches, PHYs and FPGAs.
To support the increasing demand for cloud computing-based services, data center equipment is migrating to higher speed serial data transmission, often 10G or faster. In parallel, there is a significant trend to maximize energy efficiency by consolidating switching, storage and computing resources into fewer components. These trends have given rise to processors, Ethernet switch ICs and FPGAs with integrated high-speed serializer-deserializer (SerDes) technology that requires low-jitter timing references. Silicon Labs’ Si535/536 oscillators provide the ultra-low jitter and ±20 ppm stability required by state-of-the-art cloud computing and networking infrastructure equipment.
“Cloud computing switches, routers and storage equipment are moving to higher speed serial data links, increasing the necessity for high-performance timing,”
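To put 200 fs in perspective, it helps to express the jitter as a fraction of one unit interval (UI) of the serial link; the 10.3125 Gbit/s 10GBASE-R line rate below is an assumed example, not from the article:

```python
def jitter_ui_fraction(rms_jitter_s, line_rate_bps):
    """Express RMS jitter as a fraction of one unit interval (UI = 1 / line rate)."""
    return rms_jitter_s * line_rate_bps

# 200 fs RMS on a 10.3125 Gbit/s lane is roughly 0.2% of a UI:
print(f"{jitter_ui_fraction(200e-15, 10.3125e9):.4%} of a UI")
```

At higher lane rates the same absolute jitter eats a proportionally larger slice of the eye, which is why faster SerDes links demand tighter reference clocks.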
Tomi Engdahl says:
RF gain blocks operate from 30 MHz to 6 GHz
http://www.edn.com/electronics-products/other/4415468/RF-gain-blocks-operate-from-30-MHz-to-6-GHz
The ADL5544, ADL5545, ADL5610, and ADL5611 single-ended amps operate over a frequency range of 30 MHz to 6 GHz and are internally matched to 50 Ω at the input and output.
Tomi Engdahl says:
Component obsolescence: Is your process up to the challenge? (Part 2)
http://www.edn.com/design/components-and-packaging/4415454/Component-obsolescence–Is-your-process-up-to-the-challenge—Part-2-
Tomi Engdahl says:
Improving high speed ADC harmonic performance for unbuffered ADCs
http://www.edn.com/design/analog/4415469/Improving-high-speed-ADC-harmonic-performance-for-unbuffered-ADCs
High-performance unbuffered-input ADCs present a considerable challenge in characterization and application: delivering a clean signal to the inputs while exposing the best possible dynamic range through the ADC. Using a recent 14-bit, 500-MSPS ADC, the article compares SFDR in the FFT for a simple two-transformer input versus a low-power amplifier interface. A consistent, and in some cases considerable, SFDR improvement is shown with the newer amplifier-based interface.
Tomi Engdahl says:
Wi-Fi signals enable gesture recognition throughout entire home
http://www.washington.edu/news/2013/06/04/wi-fi-signals-enable-gesture-recognition-throughout-entire-home/
“This is repurposing wireless signals that already exist in new ways,” said lead researcher Shyam Gollakota, a UW assistant professor of computer science and engineering. “You can actually use wireless for gesture recognition without needing to deploy more sensors.”
The UW research team, which includes Shwetak Patel, an assistant professor of computer science and engineering and of electrical engineering, and his lab, published their findings online this week. This technology, which they call “WiSee,” has been submitted to The 19th Annual International Conference on Mobile Computing and Networking.
The concept is similar to Xbox Kinect – a commercial product that uses cameras to recognize gestures – but the UW technology is simpler, cheaper and doesn’t require users to be in the same room as the device they want to control. That’s because Wi-Fi signals can travel through walls and aren’t bound by line-of-sight or sound restrictions.
When a person moves, there is a slight change in the frequency of the wireless signal. Moving a hand or foot causes the receiver to detect a pattern of changes known as the Doppler frequency shift.
These frequency changes are very small – only several hertz – when compared with Wi-Fi signals that have a 20 megahertz bandwidth and operate at 5 gigahertz. Researchers developed an algorithm to detect these slight shifts. The technology also accounts for gaps in wireless signals when devices aren’t transmitting.
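The “only several hertz” figure follows from the two-way Doppler formula for a reflection off a moving target. A minimal sketch, assuming a hand speed of about 0.5 m/s (an illustrative value, not from the article):

```python
def doppler_shift_hz(carrier_hz: float, speed_m_s: float) -> float:
    """Two-way Doppler shift of a signal reflected off a moving target:
    f_d = 2 * v * f / c (valid for v << c)."""
    c = 3.0e8  # speed of light, m/s
    return 2 * speed_m_s * carrier_hz / c

# Assumed hand speed of 0.5 m/s against a 5 GHz Wi-Fi carrier:
print(doppler_shift_hz(5e9, 0.5))  # ~16.7 Hz -- tens of hertz at most
```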
Tomi Engdahl says:
Chip sales: WSTS reduces 2013 market growth estimate
http://www.eetimes.com/electronics-news/4415806/Chip-sales-WSTS-reduces-2013-market-growth-estimate-
Alongside reporting weak three-month-averaged global chip sales for April, the Semiconductor Industry Association has said that the World Semiconductor Trade Statistics (WSTS) organization has reduced its estimates for chip market growth for 2013 and 2014.
As a result of the downgrade WSTS now predicts the global chip market will be worth $297.8 billion in 2013, up 2.1 percent compared with 2012 and worth $312.9 billion in 2014, up 5.1 percent. For 2015 WSTS is predicting another low growth year of 3.8 percent.
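The quoted figures can be sanity-checked with a little arithmetic; the implied 2012 baseline and the 2015 level below are derived from the quoted numbers, not stated by WSTS:

```python
m2013, m2014 = 297.8, 312.9          # WSTS forecasts, $ billions
g2013, g2014, g2015 = 0.021, 0.051, 0.038

implied_2012 = m2013 / (1 + g2013)   # baseline implied by the 2013 numbers
check_2014 = m2013 * (1 + g2014)     # should land near the quoted $312.9B
implied_2015 = m2014 * (1 + g2015)
print(round(implied_2012, 1))        # 291.7
print(round(check_2014, 1))          # 313.0 -- consistent within rounding
print(round(implied_2015, 1))        # 324.8
```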
Tomi Engdahl says:
DO-254 Requirements Traceability
http://www.edn.com/design/systems-design/4415666/DO-254-Requirements-Traceability
DO-254 enforces a strict requirements-driven process for the development of commercial airborne electronic hardware. For DO-254, requirements must drive the design and verification activities, and requirements traceability helps to ensure this.
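The core of requirements traceability is mechanical enough to sketch: every requirement must link downstream to at least one design element and one verification case, and gaps must be flagged. A toy check with invented identifiers (not from any real DO-254 project):

```python
# Illustrative traceability data -- identifiers are invented, not from a
# real DO-254 project.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
design_trace = {"REQ-1": ["FPGA-BLK-A"], "REQ-2": ["FPGA-BLK-B"]}
verif_trace = {"REQ-1": ["TC-10"], "REQ-2": ["TC-11"], "REQ-3": ["TC-12"]}

def untraced(reqs, trace):
    """Requirements lacking any downstream trace link."""
    return sorted(r for r in reqs if not trace.get(r))

print("no design trace:", untraced(requirements, design_trace))  # ['REQ-3']
print("no verification:", untraced(requirements, verif_trace))   # []
```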
Tomi Engdahl says:
Requirements automation
http://www.edn.com/electronics-blogs/practical-chip-design/4415239/Requirements-automation
While everyone inherently knows what requirements and specifications are, we don’t often stop to think about them and how we could perhaps better use them in a flow. We tend to think about the specification as being what we want to create and the requirements as being the things that get in our way – the constraints. Alternatively, when requirements tracking is mentioned, we tend to think about large mil/aero projects where all kinds of arduous documentation is involved and everything takes ten times as long. As an industry, we have focused on the specification, and the requirements are barely utilized in any kind of automation; that, to me, is an opportunity lost.
Tomi Engdahl says:
First Observation of Spin Hall Effect in a Quantum Gas Is Step Toward ‘Atomtronics’
http://www.scienceworldreport.com/articles/7389/20130607/first-observation-spin-hall-effect-quantum-gas-step-toward-atomtronics.htm
A new phenomenon discovered in ultracold atoms of a Bose-Einstein condensate (BEC) could offer new insight into the quantum mechanical world and be a step toward applications in “atomtronics”—the use of ultracold atoms as circuit components. Researchers at the National Institute of Standards and Technology (NIST) have reported the first observation of the “spin Hall effect” in a cloud of ultracold atoms.
Tomi Engdahl says:
Image sensors go organic
http://www.eetimes.com/design/industrial-control/4416210/Image-sensors-go-organic
Isorg SA, a 2010 startup company that can print organic optoelectronic sensors, has teamed up with electronics display developer Plastic Logic Ltd. to develop what they claim is the world’s first conformable organic image sensor on a plastic substrate.
With conformable AMOLED displays becoming a key area of interest for the production of curvilinear electronic equipment, such as novel designs of mobile phones, the ability to produce image sensors on the same substrate is clearly of interest.
A typical Isorg organic photodetector material is PEDOT-PSS (poly(3,4-ethylenedioxythiophene) mixed with polystyrene sulfonate). It has the advantage that it can be processed in solution onto low-cost plastic or glass substrates under ambient air and ambient temperature conditions rather than using expensive vacuum and high-temperature processes.
Tomi Engdahl says:
Computer memory can be read with a flash of light
Prototype device combines speed and durability.
http://www.nature.com/news/computer-memory-can-be-read-with-a-flash-of-light-1.13169
Modern computer-memory technologies come with a trade-off. There is speedy but short-term storage for on-the-fly processing — random-access memory, or RAM — and slow but enduring memory for data and programs that need to be stored long term, typically on a hard disk or flash drive. But a prototype memory device described today in Nature Communications combines speed, endurance and low power consumption by uniting electronic storage with a read-out based on the physics that powers solar panels.
In conventional computer memory, information is stored in cells that hold different amounts of electric charge, each representing a binary '1' or '0'. Bismuth ferrite, by contrast, can represent those binary digits, or bits, as one of two polarization states, and can switch between these states when a voltage is applied — a property called ferroelectricity. Ferroelectric RAM based on other materials is already on the market. It is speedy, but the technology has not found widespread use. One problem is that the electrical signal used to read out a bit erases it, so the data must be rewritten every time. This leads to reliability problems over time.
Ramesh and Wang realized that they could take advantage of another property of bismuth ferrite to read these memory arrays in a nondestructive way. In 2009, researchers at Rutgers University in Piscataway, New Jersey, demonstrated that the material has a photovoltaic response to visible light — meaning that when it is hit by light, a voltage is created. The size of the voltage depends on which polarization state the material is in, and can be read out using electrodes or transistors. Crucially, shining light on the material doesn’t change its polarization, and so does not erase the data stored in it.
“This is an important step towards real technological applications of the ferroelectric photovoltaic effect,” says Sang-Wook Cheong, a condensed-matter physicist at Rutgers, who led the 2009 study.
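The read-out scheme described above can be caricatured in a few lines: the bit lives in the polarization state, a light pulse yields a state-dependent photovoltage, and unlike a conventional electrical read the optical read leaves the state intact. The voltage levels and threshold below are invented for illustration:

```python
# Toy model (assumed values, for illustration only) of the contrast
# between destructive electrical read-out and nondestructive optical
# read-out of a ferroelectric cell.
class FerroelectricCell:
    PHOTOVOLTAGE = {0: 0.1, 1: 0.6}   # hypothetical volts per state

    def __init__(self, bit: int):
        self.state = bit

    def write(self, bit: int):        # a voltage pulse sets the polarization
        self.state = bit

    def read_optical(self) -> int:    # nondestructive: state is unchanged
        return int(self.PHOTOVOLTAGE[self.state] > 0.3)

    def read_electrical(self) -> int: # destructive: the read erases the bit
        bit, self.state = self.state, 0
        return bit                    # caller must rewrite afterwards

cell = FerroelectricCell(1)
print(cell.read_optical(), cell.state)     # 1 1  (bit survives)
print(cell.read_electrical(), cell.state)  # 1 0  (bit erased)
```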
Tomi Engdahl says:
Signal distortion from high-K ceramic capacitors
http://www.edn.com/design/analog/4416466/Signal-distortion-from-high-K-ceramic-capacitors
Multilayer ceramic capacitors (MLCCs) are used extensively in modern electronics because they offer high volumetric efficiencies and low equivalent series resistances at attractive prices. These advantages make MLCCs nearly ideal for a wide range of applications, including output capacitors for power supplies and local decoupling capacitors for integrated circuits.
The various types of MLCCs are delineated primarily by their temperature coefficient, which is the amount of variation in their capacitance over a specified temperature range.
The temperature coefficient of an MLCC is a direct effect of the materials used in the ceramic that forms the capacitor dielectric. Furthermore, the dielectric material also determines the electrical characteristics of the capacitor.
The benefit of the increased relative permittivity of the dielectric material is that high-K MLCCs are available in much larger capacitance values and smaller packages than C0G types.
Unfortunately, these advantages come with a downside: high-K MLCCs exhibit a substantial voltage coefficient, meaning their capacitance varies depending on the applied voltage. In AC applications this phenomenon manifests itself as waveform distortion and can compromise the overall system performance. When printed circuit board (PCB) area and cost are major design constraints, board and system level designers may be tempted to use high-K MLCCs in circuits where they can introduce significant distortion into the signal path.
Active filter circuits, anti-aliasing filters for data converters, and feedback capacitors in amplifiers are examples of circuits where the use of a high-K MLCC may introduce distortion.
Because passive-component distortion increases with higher signal levels, it follows that filter-circuit distortion should be greatest when the capacitors experience the maximum applied voltage.
The performance of an analog circuit can be dramatically affected by the type of capacitors used in its construction. An active filter was used to demonstrate this principle. When the circuit was constructed with C0G capacitors, it delivered a high level of performance. However, once the capacitors were changed to those of the X7R dielectric type, the circuit’s performance was degraded considerably. X7R capacitors introduced a large number of harmonics into the signal path, with odd harmonics being the dominant contributors to the THD+N measurement. Specifically, X7R capacitors in 0603 packages exhibited the worst performance, and X7R capacitors in 1206 packages provided only marginally improved performance.
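The odd-harmonic signature has a simple origin: the voltage coefficient of a high-K dielectric is symmetric in voltage, so a droop of the form C(v) = C0·(1 − a·v²) makes the stored charge Q(v) = C0·(v − a·v³/3) a cubic in v, and a cubic generates third (and higher odd) harmonics while producing no even ones. A toy numerical sketch with assumed, normalized coefficients (not measured X7R data):

```python
import numpy as np

n = 4096
v = np.sin(2 * np.pi * 8 * np.arange(n) / n)   # pure sine drive, 8 cycles
C0, a = 1.0, 0.3                               # assumed, normalized values
q = C0 * (v - a * v**3 / 3)                    # charge: Q(v) = C0*(v - a*v^3/3)

spec = np.abs(np.fft.rfft(q)) / (n / 2)        # amplitude spectrum
h1, h2, h3 = spec[8], spec[16], spec[24]       # fundamental, 2nd, 3rd harmonic
print(round(20 * np.log10(h3 / h1), 1))        # third harmonic, dBc: -31.4
print(h2 < 1e-12)                              # True: symmetric C(V), no evens
```

An asymmetric term in C(v) (a DC bias, for instance) would break the symmetry and admit even harmonics as well, which is why biasing conditions matter in real measurements.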