Computer technologies for 2012

The ARM processor is becoming more and more popular during 2012. The Power and Integration—ARM Making More Inroads into More Designs article sums it up: it's about power; low power, almost no power. A huge and burgeoning market is opening for devices that are handheld and mobile, have rich graphics, deliver 32-bit multicore compute power, include Wi-Fi, web and often 4G connectivity, and can last up to ten hours on a battery charge. The most obvious among these are smartphones and tablets, but an increasing number of industrial and military devices also fall into this category.

The rivalry between ARM and Intel in this arena is predictably intense because, try as it will, Intel has not been able to bring the power consumption of its Atom CPUs down to the level of ARM-based designs (Atom is typically in the 1-4 watt range, while a single ARM Cortex-A9 core is in the 250 mW range). The ARM’s East unimpressed with Medfield, design wins article tells that Warren East, CEO of processor technology licensor ARM Holdings plc (Cambridge, England), is unimpressed by the announcements chip giant Intel has made about its low-power Medfield system-chip and its design wins. On the other hand, Android will run better on our chips, says Intel. Watch what happens in this competition.

The Windows-on-ARM Spells End of Wintel article tells that brokerage house Nomura Equity Research forecasts that the emerging partnership between Microsoft and ARM will likely end the Windows-Intel duopoly. The long-term consequence for the world’s largest chip maker will likely be an exit from the tablet market as ARM makes inroads into notebook computers. As ARM will surely keep pointing out to everyone, it does not have to beat Intel’s raw performance to make a big splash in this market: for these kinds of devices speed isn’t everything, and ARM’s promised power consumption advantage will surely be a major selling point.


The Windows 8 Release Expected in 2012 article says that Windows 8 will be with us in 2012, according to Microsoft roadmaps, and Microsoft is still hinting at an October Windows 8 release date. It remains to be seen what the ramifications of Windows 8, which is supposed to run on both the x86 and ARM architectures, will be. Windows on ARM will not be terribly successful, says one analyst, but it remains to be seen whether he is right. The ARM-based chip vendors that Microsoft is working with (TI, Nvidia, Qualcomm) are currently focused on mobile devices (smartphones, tablets, etc.), because this is where the biggest perceived advantages of ARM-based chips lie, and they do not seem to be actively working on PC designs.

Engineering Windows 8 for mobile networks is under way. The Windows 8 Mobile Broadband Enhancements Detailed article tells that using mobile broadband in Windows 8 will no longer require device-specific drivers and third-party software. This is thanks to the new Mobile Broadband Interface Model (MBIM) standard, which hardware makers are reportedly already beginning to adopt, and a generic driver in Windows 8 that can interface with any chip supporting that standard. Windows will automatically detect which carrier the device is associated with and download any available mobile broadband app from the Windows Store. MBIM 1.0 is a USB-based protocol for host and device connectivity on desktops, laptops, tablets and mobile devices. The specification supports multiple generations of GSM- and CDMA-based 3G and 4G packet data services, including the recent LTE technology.


The consumerization of IT is a hot trend that continues in 2012. Uh-oh, PC: half of computing device sales are mobile. The Mobile App Usage Further Dominates Web, Spurred by Facebook article tells that the era of mobile computing, catalyzed by Apple and Google, is driving one of the largest shifts in consumer behavior of the last forty years. Impressively, its rate of adoption is outpacing both the PC revolution of the 1980s and the Internet boom of the 1990s. By the end of 2012, Flurry estimates that the cumulative number of iOS and Android devices activated will surge past 1 billion, making the rate of iOS and Android smart device adoption more than four times faster than that of personal computers (over 800 million PCs were sold between 1981 and 2000). Smartphones and tablets come with broadband connectivity out of the box, and bring-your-own-device is becoming accepted business practice.

The Mobile UIs: It’s developers vs. users article tells that the increased emphasis on distinctive smartphone UIs means even more headaches for cross-platform mobile developers. Whose UI will be the winner? Native apps trump the mobile Web. The increased emphasis on specialized mobile user interface guidelines casts new light on the debate over Web apps versus native development, too.


The Cloud is Not Just for Techies Anymore article tells that cloud computing has achieved mainstream status, so we demand more from it. That’s because our needs and expectations for a mainstream technology and an experimental technology differ. Once we depend on a technology to run our businesses, we demand minute-by-minute reliability and performance.

The Cloud security is no oxymoron article estimates that in 2013 over $148 billion will be spent on cloud computing. Companies large and small are using the cloud to conduct business and store critical information. The cloud is now mainstream. The cloud computing paradigm requires cloud consumers to extend their trust boundaries beyond their current network and infrastructure to encompass a cloud provider. There are three primary areas of cloud security that apply to almost any cloud implementation: authentication, encryption, and network access control. If you are dealing with those issues in software design, read the Rugged Software Manifesto and the Rugged Software Development presentation.
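The three primary security areas mentioned above can be made concrete with a small sketch. Below is a minimal illustration of the authentication piece only: HMAC-signed requests between a cloud consumer and provider, using just the Python standard library (all names and values are hypothetical, not any particular provider's API):

```python
import hashlib
import hmac

# Shared secret provisioned out of band between consumer and provider
# (hypothetical example value).
SECRET_KEY = b"example-shared-secret"

def sign_request(method: str, path: str, body: bytes) -> str:
    """Consumer side: sign the canonical request form with HMAC-SHA256."""
    canonical = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes, signature: str) -> bool:
    """Provider side: recompute the signature and compare in constant time."""
    expected = sign_request(method, path, body)
    return hmac.compare_digest(expected, signature)

sig = sign_request("PUT", "/bucket/report.csv", b"quarterly figures")
print(verify_request("PUT", "/bucket/report.csv", b"quarterly figures", sig))  # True
print(verify_request("PUT", "/bucket/report.csv", b"tampered body", sig))      # False
```

A real deployment would layer encryption (TLS in transit, client-side encryption of stored data) and network access control on top; this fragment only shows why a shared trust anchor between consumer and provider is unavoidable once the trust boundary extends outside your own network.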

Enterprise IT’s power shift threatens server-huggers article tells that as more developers take on the task of building, deploying, and running applications on infrastructure outsourced to Amazon and others, traditional roles of system administration and IT operations will morph considerably or evaporate.

The Explosion in “Big Data” Causing Data Center Crunch article tells that global business has been caught off guard by the recent explosion in data volumes and is trying to cope with short-term fixes such as buying in data centre capacity. Oracle also found that the number of businesses looking to build new data centres within the next two years has risen. Data centre capacity and data volumes should be expected to keep going up, which drives data centre capacity building. Most players active in the “Big Data” field seem to plan to use the Apache Hadoop framework for the distributed processing of large data sets across clusters of computers. At least EMC, Microsoft, IBM, Oracle, Informatica, HP, Dell and Cloudera are using Hadoop.
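The Hadoop model mentioned above boils down to map and reduce functions over key/value pairs, with the framework handling the shuffle between them. A toy local simulation of the classic word-count job (plain Python, no Hadoop cluster involved; purely illustrative) shows the shape of the programming model:

```python
from collections import defaultdict

def mapper(line: str):
    """Map step: emit a (word, 1) pair for every word in an input line."""
    for word in line.lower().split():
        yield word, 1

def reducer(word: str, counts):
    """Reduce step: sum the partial counts collected for one word."""
    return word, sum(counts)

def run_job(lines):
    """Simulate the shuffle: group mapper output by key, then reduce each group."""
    grouped = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            grouped[word].append(count)
    return dict(reducer(word, counts) for word, counts in grouped.items())

print(run_job(["big data big clusters", "data everywhere"]))
# {'big': 2, 'data': 2, 'clusters': 1, 'everywhere': 1}
```

On a real cluster the same mapper and reducer would run in parallel across many machines, with Hadoop distributing the input splits and the shuffle; that parallelism is what makes the model attractive at the data volumes discussed above.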

Cloud storage has been a very popular topic lately for handling large amounts of data. The benefits have been discussed at length, but now we can also see its risks being realized. The Did the Feds Just Kill the Cloud Storage Model? article claims that Megaupload-type shutdowns and the Patriot Act are killing interest in cloud storage. Many innocent Megaupload users have had their data taken away from them. The Megaupload seizure shows how personal files hosted on remote servers operated by a third party can easily be caught up in a government raid targeted at digital pirates. In the wake of the Megaupload crackdown, fear is forcing similar sites to shutter their sharing services. If you use any of these cloud storage sites to store or distribute your own non-infringing files, you are wise to keep backups elsewhere, because they may be next on the DOJ’s copyright hit list.

The Did the Feds Just Kill the Cloud Storage Model? article also tells that worries have been steadily growing among European IT leaders that the USA Patriot Act would give the U.S. government unfettered access to their data if it is stored on the cloud servers of American providers. Escaping the grasp of the Patriot Act may be more difficult than the marketing suggests: “You have to fence yourself off and make sure that neither you nor your cloud service provider has any operations in the United States, otherwise you’re vulnerable to U.S. jurisdiction.” Yet the cloud computing model is built on the argument that data can and should reside anywhere in the world, freely passing between borders.


The Data centers to cut LAN cord? article mentions that 60 GHz wireless links are being tested in data centers to ease east-west traffic jams. According to a recent article in The New York Times, data center and networking techies are playing around with 60 GHz wireless networking for short-haul links to give rack-to-rack communications some extra bandwidth for when the east-west traffic goes a bit wild. The University of Washington and Microsoft Research published a paper at the Association for Computing Machinery’s SIGCOMM 2011 conference late last year about their tests of 60 GHz wireless links in the data center. Their research used prototype links that bear some resemblance to the point-to-point, high-bandwidth technology known as WiGig (Wireless Gigabit), which among other things is being proposed as a means to support wireless links between Blu-ray players and TVs, replacing HDMI cables (Wilocity Demonstrates 60 GHz WiGig (Draft 802.11ad) Chipset at CES). The 60 GHz band is suitable for indoor, high-bandwidth use in information technology. There are still many places for physical wires: the wired connections used in a data center are highly reliable, so “why introduce variability in a mission-critical situation?”

820 Comments

  1. Tomi Engdahl says:

    Why Tablets Will Become Our Primary Computing Device
    http://blogs.forrester.com/frank_gillett/12-04-23-why_tablets_will_become_our_primary_computing_device

    Tablets aren’t the most powerful computing gadgets. But they are the most convenient.

    They’re bigger than the tiny screen of a smartphone, even the big ones sporting nearly 5-inch screens.

    They have longer battery life and better always-on capabilities than any PC.

    And tablets are very good for information consumption, an activity that many of us do a lot of. Content creation apps are appearing on tablets. They’ll get a lot better as developers get used to building for touch-first interfaces, taking advantage of voice input, and adding motion gestures.

    So let’s define what we mean by a tablet:

    “Touch first” slab computers that weigh less than 800 grams (1.75 pounds), have a 7- to 14-inch diagonal screen area, feature always-on operation, and 8-hour battery life.
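    That definition is crisp enough to express as a simple predicate (a toy sketch; the field names are my own, not Forrester's):

```python
def is_tablet(weight_g: float, screen_in: float, always_on: bool,
              battery_h: float) -> bool:
    """Check a device against the 'touch first' slab criteria quoted above."""
    return (weight_g < 800            # under 800 grams (1.75 pounds)
            and 7 <= screen_in <= 14  # 7- to 14-inch diagonal screen
            and always_on             # always-on operation
            and battery_h >= 8)       # 8-hour battery life

print(is_tablet(weight_g=650, screen_in=9.7, always_on=True, battery_h=10))    # True
print(is_tablet(weight_g=1300, screen_in=13.3, always_on=False, battery_h=6))  # False
```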

    As a result of the compelling user experience of Apple’s iPad, the content-focused experience of the Amazon Kindle Fire, and other tablets, global tablet sales will continue to grow sharply over the next five years. We forecast sales rising from 56 million in 2011 to 375 million in 2016. Given that a majority of tablets will be retired within three years of purchase, we forecast that there will be 760 million tablets in use globally by 2016. One-third of these tablets will be purchased by businesses, and emerging markets will drive about 40% of sales.
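    Those figures imply a compound annual growth rate of roughly 46%, which is easy to check with back-of-envelope Python using only the numbers quoted above:

```python
# Forrester figures quoted above: 56 million units in 2011, 375 million in 2016.
units_2011 = 56e6
units_2016 = 375e6
years = 5

cagr = (units_2016 / units_2011) ** (1 / years) - 1
print(f"Implied tablet sales CAGR 2011-2016: {cagr:.1%}")  # roughly 46% per year
```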

    Reply
  2. Tomi Engdahl says:

    Intel to buy key assets from supercomputer maker Cray
    http://news.cnet.com/8301-13924_3-57420387-64/intel-to-buy-key-assets-from-supercomputer-maker-cray/?part=rss&subj=news&tag=title

    Supercomputer maker Cray will sell its interconnect hardware development program and related intellectual property to Intel for $140 million in cash, the two companies announced today.

    Up to 74 Cray employees will join Intel, Cray said.

    Intel said it gains access to “Cray’s world-class interconnect personnel and intellectual property.” Interconnect technology is a high-speed link between high-performance computers.

    Reply
  3. Tomi Engdahl says:

    Lava Xolo X900 Review – The First Intel Medfield Phone
    http://www.anandtech.com/show/5770/lava-xolo-x900-review-the-first-intel-medfield-phone

    For Intel, the road to their first real competitive smartphone SoC has been a long one.

    While Moorestown was never the success that Intel was hoping for, it paved the way for something that finally brings x86 both down to a place on the power-performance curve that until now has been dominated by ARM-powered SoCs, and includes all the things hanging off the edges that you need (ISP, encode, decode, integrated memory controller, etc), and it’s called Medfield. With Medfield, Intel finally has a real, bona fide SoC that is already in a number of devices shipping before the end of 2012.

    In both an attempt to prove that its Medfield platform is competitive enough to ship in actual smartphones, and speed up the process of getting the platform to market, Intel created its own smartphone Form Factor Reference Design (FFRD). While the act of making a reference device is wholly unsurprising since it’s analogous to Qualcomm’s MSM MDPs or even TI’s OMAP Blaze MDP, what is surprising is its polish and aim.

    The purpose and scope of this review is ambitious and really covers two things – both an overview of Intel’s Medfield platform built around the Atom Z2460 Penwell SoC, and a review of the Xolo X900 smartphone FFRD derivative itself.

    For Intel, answering the looming ARM threat is obviously hugely important for the future, and it recognizes that

    The fire was lit with the impending arrival of Windows On ARM (WOA), at which point the line between traditional ARM-dominated smartphone/tablet SoCs and a real desktop class compute platform will start getting blurry, fast.

    The Atom Z2460 in the X900 is a competent dual-core Cortex A9 competitor with competitive battery life and power draw, and no doubt Z2580 (its dual core, SGX544MP2 high end counterpart clearly targeted at Windows 8 platforms) will be equally as competitive against quad core A9s. If Intel’s goal with both Medfield and the X900 was to establish a foothold in the smartphone SoC space and demonstrate that it can indeed deliver x86 in a smaller form factor and lower power profile than ever before then it truly is mission accomplished.

    The x86 power myth is finally busted. While the X900 doesn’t lead in battery life, it’s competitive with the Galaxy S 2 and Galaxy Nexus. In terms of power efficiency, the phone is distinctly middle of the road – competitive with many of the OMAP 4 based devices on the market today.

    There is however a big difference between middle of the road and industry leading, which is really the next step that we need to see from Intel.

    The performance side is obviously even more competitive. Atom isn’t always industry leading in our tests, but the X900 is rarely more than a couple places away from the top (with the exception of GPU performance of course, but that’s a matter of licensing a different IP block in future versions). For a reference design that an Intel partner can just buy, barely customize, and ship – that’s not bad at all. Smartphone vendors spend a considerable amount of time building phones that perform well, Intel’s offer to internalize much of that can be either scary or amazing depending on who you’re talking to.

    The software compatibility story, like the concern over power consumption, is also a non-issue. The vast majority of apps we tried just worked, without any indication that we were running something intended for a different instruction set. There are still a few rough edges (e.g. Netflix), but if Intel is able to get things working this well at launch, the situation will only improve going forward.

    Ultimately Intel’s first smartphone is a foot in the door. It’s what many said couldn’t be done, and it’s here now. What it isn’t however is a flagship. To lead, Intel needs an updated Atom architecture

    On the one hand it’s a good thing that you can’t tell an Intel smartphone apart from one running an ARM based SoC, on the other hand it does nothing to actually sell the Intel experience.

    That’s what Intel needs to really build credibility in the smartphone space. A little was earned by getting this far, but its reputation will be made based on what happens next.

    Reply
  4. Good job says:

    It is difficult to understand such things. [Translated from French]

    Reply
  5. Tomi Engdahl says:

    HP says mission critical features of HP-UX will move to Linux
    http://www.theinquirer.net/inquirer/news/2170380/hp-mission-critical-developers-hp-ux-linux

    HP recently told The INQUIRER that it will commit to Linux in the mission critical market, however it said its HP-UX Unix implementation will be the proving ground for features that the firm will push in Linux and Windows. According to Kate O’Neill, product marketing manager for HP’s Business Critical Systems unit, the firm wants to bring a “UNIX-like experience to Linux and Windows”.

    “We continue to drive and innovate in HP-UX because it is what we consider to be the design centre for mission critical, we have to stay at the bleeding edge of mission critical so we can cascade those technologies into [the] Windows and Linux environments,” said O’Neill. “It will drive us to be better, not just in that environment itself, but in this emerging mission critical Windows and Linux also.”

    O’Neill said, “Customers are hesitant to make the transition to Windows and Linux when uptime and planned and unplanned downtime is critical to them, but they do recognise in the future that could be a possibility, so they want to make sure there are options to them as they look down the road.”

    HP’s decision to use HP-UX as a proving ground for mission critical features that it will eventually push into Linux serves three purposes. HP’s operating system stays ahead of Linux, while the Linux community gets to see whether new features of HP-UX are worth incorporating and potentially the ability to convince the conservative suits that Linux has resilient, high availability features similar to those found in expensive, proprietary operating systems.

    Reply
  6. Tomi Engdahl says:

    IHS boosts 2012 chip market forecast
    http://www.edn.com/article/521608-IHS_boosts_2012_chip_market_forecast.php?cid=EDNToday_20120426

    Market research firm IHS iSuppli Wednesday (April 25) lifted its forecast for the 2012 semiconductor market, citing strong ongoing consumer demand for wireless products like cell phones and media tablets.

    IHS (El Segundo, Calif) is now forecasting that total chip sales will reach an estimated $324.6 billion in 2012, up 4.3% from last year.

    Gartner now expects chip sales to increase 4% to $316 billion this year.

    “In particular, semiconductor suppliers can anticipate an exceptionally robust third quarter this year in preparation for strong holiday sell-through,” Jelinek said.

    IHS predicted that the Intel-backed Ultrabook low-power notebook PC platform would have only a minimal impact on 2012 semiconductor revenue. But the forthcoming introduction of Microsoft Corp’s Windows 8 — the first version of the PC operating system that will support touchscreen capability — means that Ultrabooks have the potential to become a key market revenue driver in 2013, IHS said.

    Reply
  7. Tomi Engdahl says:

    ‘Geek’ image scares women away from tech industry
    http://www.theregister.co.uk/2012/04/26/girls_in_ict_day/

    Women don’t consider IT careers because “the popular media’s ‘geek’ image of the technology field” along with other factors including a lack of female role models and support at home and work “tend to dissuade talented girls from pursuing a tech career.”

    “Misguided school-age career counselling” is another problem, as it often suggests to young women that ICT careers are too hard or somehow unfeminine.

    That’s the conclusion of a “high-level dialogue” hosted by the International Telecommunication Union (ITU) in New York yesterday.

    In his welcoming remarks, ITU Secretary-General Dr Hamadoun Touré said that the ICT industries need women.

    “Over the coming decade, there are expected to be two million more ICT jobs than there are professionals to fill them,” he said. “This is an extraordinary opportunity for girls and young women – in a world where there are over 70 million unemployed young people.”

    “Encouraging girls into the technology industry will create a positive feedback loop – in turn creating inspiring new role models for the next generation,” he added.

    Reply
  8. Tomi Engdahl says:

    Ivy Bridge narrows AMD’s graphics lead
    http://www.edn.com/article/521576-Ivy_Bridge_narrows_AMD_s_graphics_lead.php?cid=EDNToday_20120427

    Intel is expected to formally roll out today Ivy Bridge, its first processors using its 22nm tri-gate technology and aimed at ultra thin and light notebooks. An analyst said the chips will narrow archrival AMD’s lead in graphics performance and inject new life into the notebook market under attack from tablets such as the Apple iPad.

    AMD is said to be on the cusp of rolling out its next-generation CPUs, called Trinity. The chips are built in a 32nm process. AMD is not expected to field chips using the still scarce 28-nm process until 2013.

    By that time Intel will be moving on to Haswell, its first new microarchitecture to use its 22-nm process. Intel typically gains a 10 to 20% performance advantage with the first chips, such as Ivy Bridge, to use a new process and a bigger performance boost for a new design, such as Haswell, optimized for that process, said Nathan Brookwood, principal of market watcher Insight64 (Saratoga, Calif.).

    Reply
  9. Tomi Engdahl says:

    Big Data’s Big Problem: Little Talent
    http://online.wsj.com/article/SB10001424052702304723304577365700368073674.html

    It seems that the markets are as much in love with “Big Data”—the ability to acquire, process and sort vast quantities of data in real time—as the technology industry.

    Big Data refers to the idea that an enterprise can mine all the data it collects right across its operations to unlock golden nuggets of business intelligence. And whereas companies in the past have had to rely on sampling, Big Data, or so the promise goes, means you can use your entire corpus of digitized corporate knowledge. It is, by all accounts, the next big thing.

    However, according to a report published last year by McKinsey, there is a problem. “A significant constraint on realizing value from Big Data will be a shortage of talent, particularly of people with deep expertise in statistics and machine learning, and the managers and analysts who know how to operate companies by using insights from Big Data,” the report said. “We project a need for 1.5 million additional managers and analysts in the United States who can ask the right questions and consume the results of the analysis of Big Data effectively.” What the industry needs is a new type of person: the data scientist.

    It is this ability to turn data into information into action that presents the most challenges. It requires a deep understanding of the business to know the questions to ask. The problem that a lot of companies face is that they don’t know what they don’t know, as former U.S. Defense Secretary Donald Rumsfeld would say. The job of the data scientist isn’t simply to uncover lost nuggets, but discover new ones and more importantly, turn them into actions. Providing ever-larger screeds of information doesn’t help anyone.

    Reply
  10. Tomi Engdahl says:

    Firms shouldn’t reject ‘bring your own device’, says McAfee CTO [Video]
    http://www.theinquirer.net/inquirer/news/2171123/firms-shouldnt-reject-bring-device-mcafee-cto-video

    SECURITY OUTFIT McAfee’s CTO Raj Samani told The INQUIRER at the 2012 Infosecurity Conference in London that businesses shouldn’t reject ‘bring your own device’ (BYOD), given the productivity it offers. Instead, organisations should consider the security risks and implement appropriate protection to take advantage of the trend.

    Reply
  11. Tomi Engdahl says:

    CIOs see Cisco gaining, Juniper in trials
    UBS survey maps CIO plans, observations for 2012
    http://www.networkworld.com/news/2012/042912-cios-see-cisco-gaining-juniper-258779.html?page=1

    The top three priorities for CIOs in 2012 are security, wireless LAN and Ethernet switching, according to a survey conducted by investment firm UBS.

    In querying 100 CIOs (60 in the United States and 40 in Europe), UBS found that not only were these topics high priorities, but by an even wider margin this quarter than in its previous surveys.

    Reply
  12. Tomi Engdahl says:

    Deliberate excellence: Why Intel leads the world in semiconductor manufacturing
    http://www.extremetech.com/computing/127987-deliberate-excellence-why-intel-leads-the-world-in-semiconductor-manufacturing

    When Intel launched Ivy Bridge last week, it didn’t just release a new CPU — it set a new record. By launching 22nm parts at a time when its competitors (TSMC and GlobalFoundries) are still ramping their own 32/28nm designs, Intel gave notice that it’s now running a full process node ahead of the rest of the semiconductor industry.

    Bohr attributes Intel’s success to several factors.

    First, Intel is virtually the only IDM (Integrated Device Manufacturer) left in the microprocessor business. Because it manufactures all its own hardware, design and implementation are treated as a joint effort at every level — including when things go wrong.

    “Copy Exactly!” is Intel’s method for duplicating successful chip designs across its various factories worldwide.
    As Copy Exactly! was developed and deployed, the company’s yields synchronized across the various fabs.

    Tick-tock’s cadence organizes and deploys new technologies in a consistent fashion.

    Intel typically spends more on R&D than any other semiconductor manufacturer.

    Intel’s advantage is the result of close collaboration between CPU designers and process engineers, superb manufacturing controls, and robust, continuing investment into R&D. It’s by no means guaranteed that these practices will carry the company smoothly through 14nm, but their success thus far speaks for itself. Whether TSMC and GlobalFoundries can achieve similar results within the constraints of the foundry business model remains to be seen.

    (Even companies like Samsung and IBM, which still handle a significant amount of their own product manufacturing, have teamed up with GlobalFoundries to jointly focus on R&D. The rest, like Qualcomm, Nvidia, Toshiba, and Texas Instruments, outsource their manufacturing to companies like TSMC, UMC, and GlobalFoundries.)

    Reply
  13. Tomi Engdahl says:

    Does Facebook have designs on its own chip?
    http://news.cnet.com/8301-1001_3-57425667-92/does-facebook-have-designs-on-its-own-chip/

    Facebook, a chip designer? Never! Well, don’t be too quick to pooh-pooh the idea. Facebook, after all, is a Silicon Valley company.

    Facebook may venture into the rarified ranks of chip designers, a source told CNET.

    Sound crazy? Well, Facebook already makes its own servers.

    “They have chip designers,” the source said but admitted that it’s not clear what those designers are for. This person also said that it wasn’t clear if Facebook was using a design from ARM, referring to the most popular chip architecture for smartphones and tablets.

    Could the chip be for a future Facebook device, like a tablet or smartphone? Or maybe something more dull like silicon for a data center computer? Again, that is not known.

    Reply
  14. Tomi Engdahl says:

    Dell Loses Orders as Facebook Do-It-Yourself Servers Gain: Tech
    By Ian King and Dina Bass – Sep 12, 2011 7:01 AM GMT+0300
    http://www.bloomberg.com/news/2011-09-12/dell-loses-orders-as-facebook-do-it-yourself-servers-gain-tech.html

    When Facebook Inc. set out to build two new data centers, engineers couldn’t find the server computers they wanted from Dell Inc. (DELL) or Hewlett-Packard Co. (HPQ) They decided to build their own.

    “We weren’t able to get exactly what we wanted,” Frank Frankovsky, Facebook’s director of hardware design, said at a conference on data-center technology last month.

    Hewlett-Packard, Dell and companies that sell the computers off the shelf are losing sales in a key market because Facebook and larger rival Google Inc. (GOOG) are leading a switch among Internet companies to do-it-yourself servers. These customized machines now account for 20 percent of the U.S. market for servers, which generated $31.9 billion globally last year, said Jeffrey Hewitt, an analyst at Stamford, Connecticut-based Gartner Inc.

    As sales of personal computers slump and consumers shift to tablets such as Apple Inc.’s iPad, computer makers are becoming more dependent on servers. Dell and Hewlett-Packard lose out when they’re shunned by large customers such as Facebook, which are outfitting data centers with thousands of servers.

    “It’s definitely a threat to the traditional business model,” said Jim McGregor, chief technology strategist for researcher In-Stat in Scottsdale, Arizona. “Customers are finding solutions that the industry wasn’t ready to provide.”

    “People want to be able to build it their way,” Frankovsky said at the Dell-Samsung Chief Information Officer Forum in Half Moon Bay, California

    “It’s a completely different animal” than corporate servers, said Rejeanne Skillern, head of marketing at Intel’s cloud computing division.

    Google, Facebook and Microsoft Corp. (MSFT) have designed servers that contain the minimum amount of components required for their specific task. Facebook’s servers, for example, have custom power supplies and circuit boards in sheet-metal enclosures designed to maximize airflow with the minimum number of fans.

    Those and other tweaks — combined with a specially designed facility — boosted efficiency by 38 percent and reduced the cost of building a data center in Oregon by 24 percent, according to the company.

    Google’s servers are also built to the company’s specifications, with hardware limited to what is needed for applications to run. The machines run a stripped-down version of the Linux operating system that leaves out unnecessary code.

    For the data centers that underpin its cloud services and Bing search engine, Microsoft uses machines similar to the scaled-down, low-power ones Google uses. Instead of making the hardware itself or through contractors, though, the company has a team of engineers who create server designs. Microsoft then commissions companies like Dell and Hewlett-Packard to build the machines.

    Computers makers must come up with products that fit the new needs of data-center builders.

    “If you think you’re going to go in there with a generic off-the-shelf product, you’re going to lose.”

    Reply
  15. Tomi Engdahl says:

    HP reclaims top spot in PC sales, market as a whole climbs 21 percent
    http://www.engadget.com/2012/05/01/hp-reclaims-top-spot-in-pc-sales-market-as-a-whole-climbs-21-pe/

    Well, Apple’s reign atop the list of the world’s top PC makers was short lived. After clawing its way into the lead, if you counted the iPad as a PC, HP is back atop the heap — even with Cupertino’s tablet-inflated numbers. According to Canalys, the Palo Alto company shipped 15.8 million units in the first quarter of 2012, barely sneaking past Apple by 40,000 computers. Of course, remove Apple’s 11.8 million iPads, and it’s not even a competition. Lenovo, Acer and Dell rounded out the top five, with the total market shooting up 21 percent over the same time last year.

    Reply
  16. Tomi Engdahl says:

    Microsoft faces an Xbox 360 and Windows 7 ban in Germany
    http://www.theinquirer.net/inquirer/news/2171947/microsoft-xbox-360-windows-ban-germany

    IT ISN’T ALL bad news for Motorola today, as it has won an injunction against Microsoft in a German court.

    The court ruled that Microsoft infringed two Motorola patents relating to the H.264 compression standard, Reuters reports, deciding that the Redmond-based software house must cease distribution of its Xbox 360 console and Windows 7 operating system (OS) both online and in stores in Germany.

    This decision follows a ruling last week by the US International Trade Commission (ITC), which also found that Microsoft’s Xbox 360 console infringes Motorola’s patents.

    However, it’s unlikely that Microsoft’s products will be removed from shelves straight away, if at all.

    The ITC is investigating Motorola’s behaviour, but that hasn’t stopped it from boasting about the German court’s decision.

    Reply
  17. Tomi Engdahl says:

    Researchers develop new method to measure IT quality
    http://phys.org/news/2012-04-method-quality.html

    Researchers at the University at Buffalo School of Management have proposed a better way of measuring the capabilities of IT service providers in a study recently published in IEEE Transactions on Engineering Management.

    For many years, IT service providers have been judged using the Capability Maturity Model Integration (CMMI) framework developed at Carnegie Mellon University.

    Kishore and his fellow researchers have proposed a variation in the CMMI framework for thinking about and measuring capabilities of IT service provider firms. They call it the Quality Distinction (QD) Capability Model, as quality is its main theme.

    Building upon the CMMI framework, the QD model also acknowledges the high importance of regularly evaluating and adapting the development and delivery processes of an IT firm. This gives managers a better tool to measure the effectiveness of their firm’s information technology capabilities.

    “Our model can be used by managers in conjunction with CMMI to gain a more reliable understanding of their IT capabilities,” Kishore says. “The high focus on quality in the QD model makes it easier for an IT provider to measure and validate the quality of its products and services.”

    Carnegie Mellon’s CMMI framework is based on practical experience gained over the last two decades in actual software and IT project development work in a number of industries. But independent academic research has yielded inconsistent results across different studies with this model.

    Reply
  18. Tomi Engdahl says:

    Is the Age of Silicon Computing Coming to an End? Physicist Michio Kaku Says “Yes”
    http://www.dailygalaxy.com/my_weblog/2012/05/is-the-age-of-silicon-coming-to-an-end-physicist-michio-kaku-says-yes.html

    Traditional computing, with its ever more microscopic circuitry etched in silicon, will soon reach a final barrier: Moore’s law, which dictates that the amount of computing power you can squeeze into the same space will double every 18 months, is on course to run smack into a silicon wall due to overheating, caused by electrical charges running through ever more tightly packed circuits.
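
    The "double every 18 months" rule of thumb quoted above is easy to put in numbers; a minimal sketch (the function name is mine, not from the article) shows why exponential density growth runs into physical walls so quickly:

```python
# Moore's law as commonly paraphrased: transistor density doubles roughly
# every 18 months. Relative density after a given number of years:
def density_multiplier(years, doubling_period_months=18):
    return 2 ** (years * 12 / doubling_period_months)

for years in (3, 6, 9):
    print(f"after {years} years: {density_multiplier(years):.0f}x")
# 3 years -> 4x, 6 years -> 16x, 9 years -> 64x the transistors in the
# same area -- and, without fixes, roughly the same growth in heat per mm².
```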

    “In about ten years or so, we will see the collapse of Moore’s Law. In fact, already, already we see a slowing down of Moore’s Law,” says world-renowned physicist, Michio Kaku. “Computer power simply cannot maintain its rapid exponential rise using standard silicon technology.”

    Despite Intel’s recent advances with tri-gate processors, Kaku argues in a video interview with Big Think, that the company has merely delayed the inevitable: the law’s collapse due to heat and leakage issues.

    “So there is an ultimate limit set by the laws of thermodynamics and set by the laws of quantum mechanics as to how much computing power you can do with silicon,” says Kaku, noting “That’s the reason why the age of silicon will eventually come to a close,” and arguing that Moore’s Law could “flatten out completely” by 2022.

    Kaku sees several alternatives ready to take over after the demise of Moore’s Law: protein computers, DNA computers, optical computers, quantum computers and molecular computers.

    “If I were to put money on the table I would say that in the next ten years as Moore’s Law slows down, we will tweak it. We will tweak it with three-dimensional chips, maybe optical chips, tweak it with known technology pushing the limits, squeezing what we can. Sooner or later even three-dimensional chips, even parallel processing, will be exhausted and we’ll have to go to the post-silicon era,” says Kaku.

    Kaku concludes that when Moore’s Law finally collapses by the end of the next decade, we’ll “simply tweak it a bit with chip-like computers in three dimensions. We may have to go to molecular computers and perhaps late in the 21st century quantum computers.”

    We’ll place our bets on quantum computing.

    To leapfrog the silicon wall, we have to figure out how to manipulate the brain-bending rules of the quantum realm – an Alice in Wonderland world of subatomic particles that can be in two places at once. Where a classical computer obeys the well understood laws of classical physics, a quantum computer is a device that harnesses physical phenomenon unique to quantum mechanics (especially quantum interference) to realize a fundamentally new mode of information processing.

    The fundamental unit of information in quantum computing (called a quantum bit or qubit) is not binary but rather quaternary in nature, which differs radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1 as in a classical bit, but also in states corresponding to a blend or superposition of these classical states.

    In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state.
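
    That description can be made concrete with a toy simulation — a sketch of the standard textbook picture, not of any real quantum hardware; the function name is mine:

```python
import random

# A single qubit as a pair of amplitudes (a, b) for |0> and |1>, with
# |a|^2 + |b|^2 = 1. Measurement collapses the state: it yields 0 with
# probability |a|^2 and 1 with probability |b|^2.
def measure(a, b):
    p0 = abs(a) ** 2
    assert abs(p0 + abs(b) ** 2 - 1.0) < 1e-9, "amplitudes must be normalised"
    return 0 if random.random() < p0 else 1

# Equal superposition: "simultaneously both 0 and 1" until measured,
# with each outcome equally likely.
a = b = 2 ** -0.5
counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print(counts)  # roughly [5000, 5000]
```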

    Reply
  19. Tomi Engdahl says:

    IBM’s Watson Is a True Computing Star
    http://www.designnews.com/author.asp?section_id=1386&doc_id=242883&cid=NL_Newsletters+-+DN+Daily

    IBM’s latest celebrity computer gained fame when he (or is it she?) took on the two greatest winners in the history of the long-running game show Jeopardy in a three-day battle of the octagon with $1 million going to the winner.

    But like Deep Blue versus Kasparov years before, the consistent humming drive of Watson seemed to emotionally exhaust the human competitors over the final two days.

    Watson is now taking on the medical establishment — and what an impact it will make in utilizing the ever-increasing mountain of medical research that doctors just can’t realistically keep up with.

    And with approximately 20 percent of medical errors due to errors in diagnosis, one can’t imagine a physician not wanting Watson to provide assistance as he filters through the complex diagnostic options faced nearly every day in our busy hospitals and doctor’s offices.

    So far, no medical trials involving Watson seem to exist, but I am sure they are just around the corner. Can you imagine Watson, or the inevitable variants that will emerge from the tech community, becoming part of the standard of care for patients?

    Technology is already making life better for doctors and patients with remarkable technologies for medical imaging, robot-assisted surgery, and implantable defibrillators, as just a few examples.

    Reply
  20. Tomi Engdahl says:

    Open Compute Developing Wider Rack Standard
    http://hardware.slashdot.org/story/12/05/03/1417232/open-compute-developing-wider-rack-standard

    “Are you ready for wider servers? The Open Compute Project today shared details on Open Rack, a new standard for hyperscale data centers, which will feature 21-inch server slots, rather than the traditional 19 inches.” “We are ditching the 19-inch rack standard.”

    Reply
  21. Tomi Engdahl says:

    Consumerization Trend Driving IT Shops “Crazy,” Gartner Analyst Says
    http://www.cio.com/article/705448/Consumerization_Trend_Driving_IT_Shops_Crazy_Gartner_Analyst_Says

    IT managers who grapple with Bring Your Own Device (BYOD) policies can expect to see an explosion of different devices used by their workers in the next few years.

    “The number of devices coming in the next few years will outstrip IT’s ability to keep the enterprise secure,” he said. “IT can’t handle all these devices. They’re going crazy. They get into fights on whether users should get upgrades or not.”

    And because IT shops won’t be able to keep up, software vendors will be forced to innovate and create what Dulaney called “beneficial viruses” — software that will be embedded in sensitive corporate data, such as financial or patient information, that’s carried on a smartphone or other mobile device. These beneficial viruses would work like Digital Rights Management (DRM) software seen on music and video files, which require a license to play the file, Dulaney explained.

    In his conception, however, the beneficial viruses would take things a step further: sensitive data “would be smart enough to delete itself…,” Dulaney said.

    “It’s time for the SAPs and Oracles to begin thinking about doing that, and it’s a lot harder than we think,” Dulaney said. “Inside every piece of [corporate] data there would be a beneficial virus that whenever the data found itself in the wrong place [such as on an unauthorized device], it would say, ‘I don’t see a license to be here and I will delete myself.’”
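
    The “beneficial virus” Dulaney describes could be sketched as data that carries its own licence check and erases itself on an unauthorized device. This is purely illustrative — the class, the device-ID check and the payload are all invented here, and real DRM-style enforcement is far harder, as Dulaney himself notes:

```python
# Hypothetical sketch of Dulaney's "beneficial virus" idea: a record that
# refuses to be read without a licence and deletes its own payload when it
# finds itself on an unauthorized device. All names are invented.
class SelfProtectingRecord:
    def __init__(self, payload, licensed_devices):
        self._payload = payload
        self._licensed = set(licensed_devices)

    def read(self, device_id):
        if device_id not in self._licensed:
            # "I don't see a license to be here and I will delete myself."
            self._payload = None
            raise PermissionError("no licence on this device; payload erased")
        return self._payload

rec = SelfProtectingRecord("patient #1234 chart", {"corp-laptop-07"})
print(rec.read("corp-laptop-07"))  # authorized device: data is returned
```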

    Gartner’s current advice to IT shops in managing mobile devices is to consider setting up all or some of three different tiers of support — platform, appliance and concierge. In platform support, IT offers full PC-like support for a device and the device is chosen by IT, and will be used typically in vertical applications.

    With appliance-level support, IT supports a narrow set of applications on a mobile device, including server-based and Web-based application support on a wider set of pre-approved devices. Local applications are not supported.

    With concierge-level support, IT provides hands-on support, mainly to knowledge workers, for non-supported devices or non-supported apps on a supported device. The costs for support, which can be generous, are charged back to the users under this approach.

    Using a Web-based approach was “the easier, quicker and right thing to do, and we didn’t need to tap into the native device” to add a new application, Walton said. Down the road, she said ANICO might find the need to deploy native mobile apps used in the field by agents who handle sensitive data.

    “If we go that way, we’d definitely need to look at the security aspect,” she said. “Most agents are independent and we’d have to figure out how to handle the loss of a device.”

    Reply
  22. Tomi says:

    Low-Cost Indian Tablet Project Falls To Corruption
    http://linux.slashdot.org/story/12/05/06/121221/low-cost-indian-tablet-project-falls-to-corruption

    “The first Aakash tablet proposed for Indian schools has failed. Datawind managed to deliver the $45 Android tablet as reported here previously but, suffering a breach of faith by both its contract manufacturer and the accepting agency in India, had to put the project on hold. Facing a loss in revenue, it’s turning into a disaster for the small Canadian company, as they are now proving unable to deliver both the Aakash tablet and the parallel retail product.”

    Reply
  24. Tomi Engdahl says:

    Lenovo breaks ground on $800M mobile devices facility
    http://news.cnet.com/8301-1035_3-57428816-94/lenovo-breaks-ground-on-$800m-mobile-devices-facility/

    Continuing its mobile-devices push, the world’s second-largest PC maker announces a new base in central China focused on research, production, and sales of smartphones and tablets.

    Continuing its push into the mobile devices market, Chinese PC maker Lenovo broke ground Monday on a new base in the central Chinese city of Wuhan that will focus on research, production and sales of mobile devices.

    Lenovo, which in late 2011 became the world’s second-largest PC maker, issued a statement that the 5-billion-yuan (about $800 million) facility could eventually house as many as 10,000 employees and will focus on smartphones, tablets, and other mobile devices for Chinese and global markets. The facility is expected to begin operating in October 2013.

    Mobile devices still make up a small portion of Lenovo’s overall sales, but represent a lot of future growth as global PC sales lag behind mobile devices. Lenovo created a business unit last year called the Mobile Internet Digital Home.

    Reply
  25. Tomi Engdahl says:

    Today’s computing landscape
    http://www2.electronicproducts.com/Today_s_computing_landscape-article-FAJH_Agilent_Apr2112-html.aspx

    Meeting the challenges of mobile and cloud computing innovation

    Ten years ago or even five years ago the personal computer industry was the driver of technical innovation. Desktops and laptops up-deployed technology to servers, and down-deployed technology into embedded and consumer products.

    Today, technical innovation is driven by mobile and cloud computing. This innovation is feeding a worldwide industry that meets the demand for billions of mobile devices and the computing cloud that supports them. Innovation in size, power requirements, memory, battery life, and flexibility is the answer to the world’s thirst for small mobile devices that can do everything the desktop and laptop used to do, and even more. Add to that the huge increase in bandwidth now available via Wi-Fi and 3G/4G networks, which is driving innovation in the servers that enable the cloud.

    Today’s computing landscape has shifted test and measurement needs as well. This innovation shift has changed the conversation around test and measurement needs to address designers’ debug and compliance test requirements.

    “The cloud computing market is heading into the stratosphere as companies seek to offer services designed to serve tablets, smartphones and other mobile devices. …projected to surge to $110 billion in 2015, up from $23 billion in 2010.”

    Reply
  26. Tomi Engdahl says:

    Ubuntu Will Soon Ship On 5% of New PCs
    http://linux.slashdot.org/story/12/05/07/2340247/ubuntu-will-soon-ship-on-5-of-new-pcs

    “Chris Kenyon, the VP of sales and business development for Canonical, just spoke this afternoon at the Ubuntu 12.10 Developer Summit about what Canonical does with OEMs and ODMs. He also tossed out some rather interesting numbers about the adoption of Ubuntu Linux. Namely, Ubuntu will ship on 5% of worldwide PC sales with a number of 18 million units annually.”

    Canonical: Ubuntu To Soon Ship On 5% Of PCs
    http://www.phoronix.com/scan.php?page=news_item&px=MTA5ODM

    Here are some of the facts that Kenyon tossed out in his after-lunch keynote:

    - Eight to ten million units shipped last year world-wide.

    - Last year Ubuntu shipped on 7.5 billion dollars (presumably USD) worth of hardware.

    - Next year they expect to more than double these numbers to 18 million units world-wide, which Chris says would mean Ubuntu Linux shipping on 5% of PCs world-wide.

    - At more than 200 Dell stores in China, there is Ubuntu branding present and Dell China employees knowledgeable about Ubuntu Linux.

    - Ubuntu continues to do very well in the server space.
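
    Kenyon’s 5% claim implies a total market size that is easy to sanity-check against the figures above:

```python
# If 18 million Ubuntu units are 5% of worldwide PC shipments, the
# implied total market is 360 million PCs per year -- in line with
# industry shipment estimates for this period.
ubuntu_units = 18_000_000
share = 0.05
implied_market = ubuntu_units / share
print(f"implied PC market: {implied_market / 1e6:.0f} million units/year")
```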

    Reply
  27. Tomi Engdahl says:

    PC Gaming Hardware Market to Hit $23.6 Billion in 2012
    http://jonpeddie.com/press-releases/details/pc-gaming-hardware-market-to-hit-23.6-billion-in-2012/

    Premium quality “Enthusiast” and “Performance” class equipment contributes $3.2 billion in growth from 2011

    Jon Peddie Research estimates there are 54 million Performance and Enthusiast class PC gamers worldwide, with new entrants and console converts bolstering this to 72 million by 2015

    The recession is winding down and the Enthusiast and Performance class PC gamers (those who spend over $1000 on equipment) have spoken…with their wallets. With chips from AMD, Intel, and Nvidia, new machines from Alienware, HP, Lenovo and others, components and accessories from companies like ASUS, EVGA, Corsair, Logitech, and MadCatz, and new games in the pipe like Far Cry 3, BioShock Infinite, Crysis 3, ARMA 3, rFactor 2, and Interstellar Marines, the financial engine of the world’s most elite gaming platform is fully fueled and will drive the global market to $32 billion by 2015.

    This time the hardware suppliers will be ready for them with new machines, Ultra HD and 120 HZ stereo3D capable displays, new super power supplies, sound systems, cases, cooling, high performance memory, SSDs, keyboards, mice, the list goes on and on.

    Reply
  28. Tomi Engdahl says:

    WTF is… Intel’s Ivy Bridge
    http://www.reghardware.com/2012/05/08/wtf_is_intel_ivy_bridge_architecture/

    Intel’s latest processor architecture, codenamed Ivy Bridge, is its previous one, Sandy Bridge, shrunk. Sandy Bridge chips, marketed as second-generation Core i CPUs, were produced using a 32nm process. Ivy Bridge is 22nm.

    Actually, there’s a little bit more to it than that.

    The Ivy Bridge die layout is indeed much the same as Sandy Bridge’s. There are four 64-bit x86 processor cores, a memory controller and a graphics processor all integrated onto the same silicon die.

    The Sandy Bridge 3.5GHz Core i7-2700K has 1.16 billion transistors on a 216mm² die. The Ivy Bridge equivalent, the i7-3770K, measures just 160mm² and contains 1.4 billion transistors – 21 per cent more.
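
    The die figures quoted above translate into a substantial jump in transistor density, which is the real payoff of the 32nm-to-22nm shrink:

```python
# Transistor density from the figures quoted in the article:
sandy = {"transistors": 1.16e9, "die_mm2": 216}  # Core i7-2700K (32nm)
ivy   = {"transistors": 1.40e9, "die_mm2": 160}  # Core i7-3770K (22nm)

d_sandy = sandy["transistors"] / sandy["die_mm2"]
d_ivy = ivy["transistors"] / ivy["die_mm2"]
print(f"Sandy Bridge: {d_sandy / 1e6:.1f} M transistors/mm²")
print(f"Ivy Bridge:   {d_ivy / 1e6:.1f} M transistors/mm²")
print(f"density gain: {d_ivy / d_sandy:.2f}x")  # roughly 1.6x
```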

    At the heart of the new fabrication process is Intel’s Tri-Gate transistor technology, which the chip giant is calling the world’s first ‘3D’ transistor.

    The most significant change within Ivy Bridge’s architecture is its integrated graphics processor (IGP). Probably the most important upgrade: the IGP now supports DirectX 11.

    There are two versions of the IGP. First, there’s the HD 4000, which will feature in the high-end Ivy Bridge chips. It has 16 shader units and a core speed of up to 1.15GHz. Although the core is capable of running at 1.35GHz, this higher speed will only be available to mobile CPUs.

    The second Ivy Bridge IGP is the HD 2500, aimed at the mainstream market. It has a core speed of 1.15GHz too, but with just six shader units.

    Both IGPs are set to run at 650MHz, but can reach the higher speeds if the need arises. Again, this is all in the name of energy preservation.

    Intel has taken the opportunity to enhance its processors’ security adding a Digital Random Number Generator (DRNG), a high speed number generator to churn out cryptographic keys quickly and less predictably than basic, pseudo-random number generators in software.
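
    The distinction the DRNG addresses can be illustrated from Python — this is only an analogy for the software side, not the DRNG itself: a deterministic software PRNG is reproducible and therefore unfit for keys, whereas the OS cryptographic generator (which modern kernels can seed from hardware sources such as an on-chip DRNG) is not:

```python
import random
import secrets

# A software PRNG (Mersenne Twister) is fully determined by its seed:
# anyone who knows the seed can reproduce the "key".
random.seed(42)
predictable = random.getrandbits(128)

# `secrets` draws from the OS cryptographic RNG instead, which is the
# right source for keys.
key = secrets.randbits(128)
print(hex(predictable))
print(hex(key))
```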

    There’s also Supervisory Mode Execute Protection (SMEP) which helps to prevent Escalation of Privilege (EoP) attacks in both 32- and 64-bit operating modes.

    Reply
  29. Tomi Engdahl says:

    Intel Ivy Bridge Core i7-3770K quad-core CPU
    http://www.reghardware.com/2012/05/08/review_intel_ivy_bridge_core_i7_3770k_quad_core_procesor/

    The i7-3770K is the flagship of Intel’s Ivy Bridge desktop range and as such is the replacement for the very popular Sandy Bridge Core i7-2700K part.

    Like the 2700K, the 3770K is a quad-core design capable of processing eight threads thanks to HyperThreading. It has 8MB of “Smart” cache memory and is clocked at 3.5GHz which, with the aid of Intel’s Turbo Boost 2.0 technology, can be upped to 3.9GHz on the fly.

    Apart from the move to a 22nm process, the other important Ivy Bridge improvement is the new integrated GPU.

    Verdict: RH Recommended Medal

    The Ivy Bridge Core i7-3770K doesn’t make a giant leap ahead of the previous generation of Core i7 chippery. It’s more of a gentle step forward. But the die shrink down to 22nm does make for a far more power efficient chip than the previous generation – good for your leccy bills – and at last Intel’s integrated graphics supports DirectX 11, something it has needed to do for quite some time.

    Reply
  30. Tomi Engdahl says:

    TSMC zaps 3.1GHz ARM processor with 28nm shrink ray
    Dual-core Cortex-A9 turbocharged for microservers
    http://www.theregister.co.uk/2012/05/08/tsmc_28_nanometer_cortex_a9_arm/

    TSMC has put a dual-core 32-bit Cortex-A9 processor test chip through the fab dryer and brought it down from 40nm using its latest process (known as 28HPM). The silicon biz was able to crank up the clock speed on the A9 to a comfortable 1.5GHz to 2GHz in a thermal and power-draw band suitable for smartphones and tablets, and pushed the clocks up as high as 3.1GHz for other “high performance” and unnamed uses under “typical conditions” – like perhaps microservers, for instance.

    TSMC said that the 28nm part was “twice as fast” as its 40nm sibling “under the same operating conditions”, by which we presume it sucked on the same amount of juice and emitted the same amount of heat as a dual-core Cortex-A9 implemented in 40nm and running at 1.5GHz.

    It is not clear how much less current the 28nm part will burn at 1.5GHz and 2GHz compared to 40nm equivalents.

    Reply
  32. Tomi Engdahl says:

    AMD G series APUs support Windows Embedded Compact 7
    APUs for medical, retail and industrial devices
    http://www.theinquirer.net/inquirer/news/2173125/amd-series-apus-support-windows-embedded-compact

    CHIP DESIGNER AMD has announced that its G series Fusion accelerated processor units (APUs) now support Microsoft’s Windows Embedded Compact 7 operating system.

    AMD’s G series embedded APUs are the firm’s effort to get into the high-volume embedded chip market by offering significantly better GPU power than anything Intel and its Atom processor can offer. Now AMD has announced that its G series chips support Microsoft’s Windows Embedded Compact 7, an operating system that is pitched towards medical, retail and industrial automation systems.

    While the thought of Microsoft Windows running on medical devices might send your heart into cardiac arrest, for AMD it could be onto a winner, as Windows is popular in retail and industrial systems, while digital signage systems make heavy use of Windows. According to Microsoft, its Windows Embedded Compact 7 operating system will include Silverlight, a customised Internet Explorer web browser and support for Adobe’s Flash 10.1.

    Reply
  33. Tomi Engdahl says:

    AMD’s Hondo APUs ready for Windows 8 Q4 launch – report
    Chip giant’s tablet-friendly silicon on the way
    http://www.theregister.co.uk/2012/05/09/amd_hondo_tablet_windows8/

    Chip giant AMD is set to debut its 32nm Trinity APUs in notebooks later this month, while the firm’s tablet-friendly Hondo chips will hit the streets in the fourth quarter to coincide with the much-anticipated launch of Windows 8, Digitimes has learnt.

    Citing “sources from notebook players”, the Taiwan-based tech title said that AMD would delay a version of the A-Series Trinity APUs for desktops until August, with prices expected to come in under those of Intel’s Ivy Bridge processors.

    Among the desktop models will be the A10-5800K, A10-5700, A8-5600K and A8-5500 models, all made by former foundry GlobalFoundries.

    Trinity will be based on the Piledriver architecture, boosting overall performance by 25 per cent and graphics performance by an impressive 50 per cent over AMD’s current Llano chips, the sources blabbed to Digitimes.

    More interesting for many will be AMD’s play in the burgeoning tablet market, with the firm’s ultra low power (ULP) 40nm Hondo APUs slated for launch in Q4.

    The firm has high hopes for the tablet and ‘ultrathin’ market and it will be interesting to see whether its 4.5-watt Hondos can rise to the twin challenge of unsettling undisputed mobile chip champ ARM and edging out arch-rival Intel.

    Reply
  34. Tomi Engdahl says:

    Protocols to Go
    http://www.fpgagurus.edn.com/blog/fpga-gurus-blog/protocols-go-0?cid=Newsletter+-+EDN+on+Embedded+Processing

    At the turn of the millennium, a specialized communication chip called a network processor (which slowly got replaced by FPGAs and ASSPs) spawned a more specialized co-processing chip usually known as a “TCP Offload Engine”, or TOE.

    The idea was to hardwire support for middle-layer communication protocols into silicon. Depending on how your hardware design was configured and whether the operating system supported TCP acceleration, the TOE worked pretty well.

    I kept waiting for TOEs and similar devices to be turned into FPGA IP cores (IP in this case referring to Intellectual Property – though since it was designed for TCP/IP, you might whimsically call it “IP for IP”). There were a few protocol-stack hardware offerings for Gigabit Ethernet and Interlaken designs, but it’s always been a bit hit-or-miss for embedding these kinds of cores into FPGAs.

    Now, the IP core specialist PLDA Inc. has come up with what might be an optimal middleware core called QuickTCP.

    All the major TCP-related protocols are supported, such as Address Resolution Protocol. If I was going to chide PLDA on one feature, it would be for the support of IPv4 alone. Many OEMs and network operators are switching to IPv6 these days. It is important not merely to upgrade to IPv6, but to offer dual-stack support for system designs that have to support IPv4 and IPv6 simultaneously. But that’s a minor quibble. I’m just happy to see some ready-to-roll communication protocols being embedded in FPGA cores. Let the games begin.
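
    The dual-stack requirement mentioned above is the same one operating systems expose in their socket APIs. As a host-side sketch (unrelated to PLDA’s core, just the software analogue of the design goal), a single IPv6 listening socket can accept IPv4 clients too when the v6-only flag is cleared; OS support for this varies:

```python
import socket

# A dual-stack TCP listener: one AF_INET6 socket that also accepts IPv4
# clients as IPv4-mapped addresses, once IPV6_V6ONLY is cleared.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))  # port 0: let the OS pick a free port
srv.listen(5)
print("listening on", srv.getsockname()[:2])
srv.close()
```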

    Reply
  35. Tomi Engdahl says:

    Microsoft to bring full Internet Explorer browsing to Xbox 360 with Kinect controls
    http://www.theverge.com/2012/5/10/3012013/internet-explorer-browser-xbox-360-kinect

    Microsoft is currently testing a modified version of Internet Explorer 9 on its Xbox 360 console, according to our sources. The Xbox 360 currently includes Bing voice search, but it’s limited to media results. Microsoft’s new Internet Explorer browser for Xbox will expand on this functionality to open up a full browser for the console. We are told that the browser will let Xbox users surf all parts of the web straight from their living rooms.

    Microsoft has also integrated Kinect gestures and voice control heavily into the experience

    Reply
  36. Tomi Engdahl says:

    Twin-track development plan for Intel’s expansion into smartphones
    Android-optimization is key to beating Apple
    http://www.theregister.co.uk/2012/05/11/intel_smartphone_android/

    Intel is planning a two-pronged attack on the smartphone and tablet markets, with dual Atom lines going down to 14 nanometers and Android providing the special sauce to spur sales.

    Intel has speeded up the process technology shift for this sector: its Atom chips will shift from 32nm today to 22nm next year, with 14nm hardware scheduled for 2014.

    Later this year Intel will release the Atom Z2580 chip, which promises 2x processing and graphics performance within a good power envelope for smartphones and tablets. The next stage will be the 22nm Merrifield chip, which Bell predicted would change the game for Intel in the smartphone market.

    “This is a really big deal for us,” he said. “It’s not just a technology shrink to 22nm, it’s a fundamental change. There’s a brand new processor core, it has state of the art imaging and graphics and is a new part from the ground up.”

    Merrifield will ship next year in high-end smartphones and tablets.

    To address the more basic market, the Z2000 series of Atom processors, which operate at around 1GHz, will ship later this year, aimed at the low-end handset market along with 2G and 3G chipsets and HSPA+ connectivity.

    “Our phone efforts right now are concentrated on Android and we have thousands of engineers right now optimizing Android to be the best version on Intel architecture,” Bell said. “This is a fundamental advantage that not many other people have.” Eul said the same for Intel’s tablet range.

    Reply
  37. Tomi Engdahl says:

    ARM dominates 10B unit CPU core market
    http://www.edn.com/article/521772-ARM_dominates_10B_unit_CPU_core_market.php?cid=EDNToday_20120510

    Driven by the growth of mobile devices, merchant CPU cores shipped in more than 10 billion chips last year, up 25% over 2010, according to a new report. ARM Ltd commanded 78% of that market while Ceva and Imagination Technologies took even larger chunks of the smaller markets for DSP and graphics cores, said the report from the Linley Group (Mountain View, Calif).

    ARM’s success casts a shadow on its archrival, MIPS Technologies. MIPS recently said it may sell some of its patents, and is reportedly seeking an acquisition partner.

    “Despite a general industry need for a strong alternative to ARM, MIPS is slowly sinking below the threshold of viability,” the report said.

    “Without new customers, MIPS cannot survive, but the mere possibility that the company could collapse or be sold to an unknown bidder will make it difficult to sign new licensees,” the report said. “We expect major changes to occur within the next year,” it added.

    ARM’s dominance “has created an unbalanced market,” the report concluded. Indeed, at least one semiconductor executive said China’s mobile chip designers want an alternative to what they see as ARM’s high royalty rates with some already turning to the Power architecture.

    The report validated the group’s predictions in 2008 that the market of 5.3 billion chips with merchant CPU cores would double by 2012. Last year’s growth fell short of 2010, however, which saw a 30% increase over 2009.

    “We expect CPU IP to maintain a 10% compound annual growth rate through 2016 as the market matures and smartphone growth slows,” the report said.
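
    The report’s 10% compound annual growth rate is straightforward to project forward from 2011’s 10 billion units:

```python
# Projecting the Linley Group's forecast: 10 billion chips with merchant
# CPU cores in 2011, growing at 10% compound annually through 2016.
units = 10e9
for year in range(2012, 2017):
    units *= 1.10
    print(f"{year}: {units / 1e9:.1f}B units")
# Five years of 10% growth compounds to about 16.1 billion units by 2016.
```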

    Overall shipments of merchant DSP cores grew 44% in 2011 to see use in 1.16 billion chips in 2011, thanks mainly to their adoption as cellular baseband processors.

    Graphics cores represent the smallest but fastest growing of the processor core markets. Shipments exceeded 300 million units in 2011, up from less than 90 million in 2008, the report said.

    ARM took last place in this sector with a 4% share while Vivante was second and DMP third at 8% and 6%, respectively.

    “I’m still bullish on ARM’s Mali cores due to the company’s reach,” said Gardner. “They should do well, and their Samsung win was important,” he added.

    Reply
  39. Tomi Engdahl says:

    The Apple-Intel-Samsung Ménage à Trois
    http://www.mondaynote.com/2012/05/13/the-apple-intel-samsung-menage-a-trois/

    Fascinating doesn’t do justice to the spectacle, nor to the stakes. Taken in pairs, these giants exchange fluids – products and billion$ – while fiercely fighting with their other half. Each company is the World’s Number One in their domain: Intel in microprocessors, Samsung in electronics, Apple in failure to fail as ordained by the sages.

    The ARM-based chips in iDevices come from a foundry owned by Samsung, Apple’s mortal smartphone enemy. Intel supplies x86 chips to Apple and its PC competitors, Samsung included, and would like nothing more than to raid Samsung’s ARM business and make a triumphant Intel Inside claim for Post-PC devices. And Apple would love to get rid of Samsung, its enemy supplier, but not at the cost of losing the four advantages it derives from using the ARM architecture: cost, power consumption, customization and ownership of the design.

    Intel got stuck knitting one x86 generation after another. The formula wasn’t broken.

    These new ARM chips are great, but where’s the money? They’re too inexpensive: they bring in less than a third, sometimes even just a fifth, of the price of a tried and true x86 PC microprocessor.

    Then there’s the power consumption factor: x86 chips use more watts than an ARM chip. Regardless of price, this is why ARM chips have proliferated in battery-limited mobile devices. Year after year, Intel has promised, and failed, to nullify ARM’s power consumption advantage through their technical and manufacturing might.
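    The power figures quoted earlier in the post (Atom roughly in the 1–4 W range, a single Cortex-A9 core around 250 mW) make the battery-life argument easy to sketch. The battery capacity and the 1 W platform overhead below are illustrative assumptions of mine, not measured numbers; real runtime depends heavily on the display, radios and workload.

```python
# Rough battery-life estimate from average power draw.
# Battery capacity and platform overhead are assumed values for illustration.

def battery_life_hours(battery_wh, cpu_power_w, platform_overhead_w=1.0):
    """Hours of runtime for a given battery capacity and average draw."""
    return battery_wh / (cpu_power_w + platform_overhead_w)

BATTERY_WH = 25.0  # tablet-class battery (assumed)

for name, watts in [("Atom, low end", 1.0), ("Atom, high end", 4.0),
                    ("Cortex-A9 core", 0.25)]:
    print(f"{name:15s} ~{battery_life_hours(BATTERY_WH, watts):.1f} h")
```

    Even with everything else held equal, the CPU’s share alone swings the estimate by a factor of a few, which is the advantage described above.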

    2012 might be different. Intel claims “the x86 power myth is finally busted.” Android phones powered by the latest x86 iteration have been demonstrated. One such device will be made and sold in India, in partnership with a company called Lava International. Orange, the France-based international carrier, also intends to sell an Intel-based smartphone.

    Finally, Otellini’s “they can’t ignore us” could be decoded as “they won’t be able to ignore our prices”. Once concerned about what ARM-like prices would do to its business model, Intel appears to have seen the Post-PC light: Traditional PCs will continue to make technical progress, but the go-go days of ever-increasing volumes are gone. It now sounds like Intel has decided to cannibalize parts of its PC business in order to gain a seat at the smartphone and tablet table.

    Reply
  40. Tomi Engdahl says:

    AMD reveals Trinity specs, claims to beat Intel on price, multimedia, gaming
    http://www.engadget.com/2012/05/15/amd-trinity-apu-unveiled/

    Itching for the details of AMD’s latest Accelerated Processing Units (APUs)? Then get ready to scratch: Trinity has arrived and, as of today, it’s ready to start powering the next generation of low-power ultra-portables, laptops and desktops that, erm, don’t run Intel. The new architecture boasts up to double the performance-per-watt of last year’s immensely popular Llano APUs, with improved “discrete-class” integrated graphics and without adding to the burden on battery life. How is that possible? By how much will Trinity-equipped devices beat Intel on price?

    Reply
  41. Tomi Engdahl says:

    Drive interfaces require trade-offs
    http://www.edn.com/article/521761-Drive_interfaces_require_trade_offs.php?cid=EDNToday_20120514

    As the amount of data in networks continues to grow at an exponential rate, higher-performance storage at both the client side and the enterprise server is becoming a necessity. The client side must balance the performance with power and space constraints. The server side must balance capacity and throughput. These capacity issues drive a power constraint on a multi-unit rather than a single-unit basis.

    Disk drives have moved from power-inefficient and high-pin-count parallel interfaces to low-power, high-speed serial connections with low pin counts

    For newer, power-conscious and application-optimized systems, solid-state drives are making inroads, but they have smaller capacities due to their cost.

    The form factor for this client-side storage is a 2.5-in. or smaller drive. Primary-interface bandwidths are moving from approximately 3 Gbps for SATA II to 6 Gbps for SATA III; the related 5-Gbps USB 3.0 interface generally consists of SATA II or III devices behind a protocol converter and a buffer.
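    One detail worth remembering when comparing the quoted line rates: SATA and USB 3.0 use 8b/10b line coding, so ten bits on the wire carry one payload byte, and dividing the line rate by ten gives the peak payload throughput before protocol overhead. A quick sketch:

```python
# Peak payload throughput of an 8b/10b-coded serial link:
# 10 line bits per payload byte, so MB/s = Gbps * 1000 / 10.

def max_throughput_mb_s(line_rate_gbps):
    return line_rate_gbps * 1000 / 10

for iface, gbps in [("SATA II", 3.0), ("USB 3.0", 5.0), ("SATA III", 6.0)]:
    print(f"{iface:8s} {gbps} Gbps -> ~{max_throughput_mb_s(gbps):.0f} MB/s")
```

    This is why 6-Gbps SATA III is usually described as a roughly 600-MB/s interface.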

    With the arrival of the Internet of Things and the rise of embedded computing, the storage needs of new embedded clients have also changed. The embedded-system world has several storage formats to choose from. The eMMC (embedded multimedia card) is the most common format for cell phones, tablets, and other microcontroller-directed processing environments featuring directly mapped storage and I/Os.

    Just as in the industrial embedded-system market, enterprise storage focuses on an extended operating-temperature range as well as ECC, wear-leveling, data integrity, and a low BER (bit-error rate).

    The enterprise market splits between using long sequential read/writes and short random read/writes.

    Unlike consumer storage, these enterprise storage-area-network and direct-attached-storage products may scale into the petabyte (10^15 bytes) and exabyte (10^18 bytes) range.

    Density matters, but getting the correct data is the key.

    Reply
  42. Tomi Engdahl says:

    Four reasons why MIPS new cores may make it relevant again
    http://www.edn.com/article/521793-Four_reasons_why_MIPS_new_cores_may_make_it_relevant_again.php

    MIPS Technologies last week rolled out a new generation of microprocessor cores called Aptiv.

    With the new cores’ smaller die size and reduced energy consumption compared to ARM’s midrange core like A15, MIPS is hoping that the new family of cores can put the company back on track.

    MIPS is at a crossroads, however. Some analysts see Aptiv arriving in the market a little too late.

    MIPS is also facing an even bigger upheaval: the company’s potential sale. Recent speculation that “MIPS is up for sale” has not died down, and MIPS has neither confirmed nor denied the reports.

    Reply
  43. Tomi Engdahl says:

    NVIDIA CEO Jen-Hsun Huang announces cloud-based, virtualized Kepler GPU technology and GeForce GRID gaming platform
    http://www.engadget.com/2012/05/15/jen-hsun-huang-announces-cloud-based-virtualized-gpu/

    At NVIDIA’s GPU Technology Conference here in San Jose, California, CEO Jen-Hsun Huang just let loose that his company plans to put Kepler in the cloud. To make it happen, the company has created virtualized Kepler GPU technology, called VGX, so that no physical connections are needed to render and stream graphics to remote locations. So, just as Citrix brought CPU virtualization to put your work desktop on the device of your choosing, NVIDIA has put the power of Kepler into everything from iPads to netbooks and mobile phones.

    While the virtualized GPU has application in an enterprise setting, it also, naturally, can put some serious gaming power in the cloud, too.

    Reply
  44. Tomi Engdahl says:

    With the NVIDIA VGX platform in the data center, employees can now access a true cloud PC from any device — thin client, laptop, tablet or smartphone — regardless of its operating system, and enjoy a responsive experience for the full spectrum of applications previously only available on an office PC.

    NVIDIA VGX enables knowledge workers for the first time to access a GPU-accelerated desktop similar to a traditional local PC. The platform’s manageability options and ultra-low latency remote display capabilities extend this convenience to those using 3D design and simulation tools, which had previously been too intensive for a virtualized desktop.

    “NVIDIA VGX represents a new era in desktop virtualization,” said Jeff Brown, general manager of the Professional Solutions Group at NVIDIA. “It delivers an experience nearly indistinguishable from a full desktop while substantially lowering the cost of a virtualized PC.”

    NVIDIA VGX is based on three key technology breakthroughs:

    NVIDIA VGX Boards. These are designed for hosting large numbers of users in an energy-efficient way. The first NVIDIA VGX board is configured with four GPUs and 16 GB of memory, and fits into the industry-standard PCI Express interface in servers.

    NVIDIA VGX GPU Hypervisor. This software layer integrates into commercial hypervisors, such as the Citrix XenServer, enabling virtualization of the GPU.

    NVIDIA User Selectable Machines (USMs). This manageability option allows enterprises to configure the graphics capabilities delivered to individual users in the network, based on their demands.

    The NVIDIA VGX platform enables up to 100 users to be served from a single server powered by one VGX board, dramatically improving user density on a single server compared with traditional virtual desktop infrastructure (VDI) solutions.

    NVIDIA VGX boards are the world’s first GPU boards designed for data centers. The initial NVIDIA VGX board features four GPUs, each with 192 NVIDIA CUDA® architecture cores and 4 GB of frame buffer. Designed to be passively cooled, the board fits within existing server-based platforms.
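    The density figures above (up to 100 users per board, four GPUs with 4 GB of frame buffer each) imply a per-user resource split. The arithmetic below is my own back-of-the-envelope reading of the press release, not an NVIDIA specification:

```python
# Per-user resource split implied by the quoted VGX figures.
USERS_PER_BOARD = 100
GPUS_PER_BOARD = 4
FRAME_BUFFER_GB_PER_GPU = 4

users_per_gpu = USERS_PER_BOARD / GPUS_PER_BOARD
fb_mb_per_user = FRAME_BUFFER_GB_PER_GPU * 1024 / users_per_gpu

print(f"{users_per_gpu:.0f} users share each GPU, "
      f"~{fb_mb_per_user:.0f} MB of frame buffer per user")
```

    Around 160 MB of frame buffer per user is plenty for an office desktop, which is the workload the platform targets.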

    The NVIDIA VGX GPU Hypervisor is a software layer that integrates into a commercial hypervisor, enabling access to virtualized GPU resources. This allows multiple users to share common hardware and ensure virtual machines running on a single server have protected access to critical resources. As a result, a single server can now economically support a higher density of users, while providing native graphics and GPU computing performance.

    Source: Press release at
    http://www.engadget.com/2012/05/15/jen-hsun-huang-announces-cloud-based-virtualized-gpu/

    Reply
  45. Tomi Engdahl says:

    Reducing energy cost of intra-chip communications
    http://www.eetimes.com/design/smart-energy-design/4372987/Reducing-energy-cost-of-intra-chip-communications?Ecosystem=communications-design

    For high-performance systems, it is well known that the race towards higher clock frequencies has turned into a race for ever more cores.

    With the advent of new highly computing-intensive mobile applications (high throughput and software-defined radio, high-resolution video streaming, 3D image processing, augmented reality…), current system-on-chips (SoCs) are quickly moving towards many-cores for increasing parallelism. As a result, the number and distance of communications between these cores are growing exponentially.

    This explains the relative importance of communications, which can account for up to 30 percent of overall energy consumption in the highest-performing many-core architectures.

    Two important constraints have to be considered: time-to-market (TTM) and power consumption. Both are linked to communications between processing elements (PEs). Indeed, SoC design is clearly moving from the reuse of Intellectual Property (IP) blocks to platform reuse in order to minimize software development effort, for TTM reasons. Communications are the key point to master for platform reuse.

    Until the early 2000s, busses were mostly used as the communication infrastructure.

    In the late 1990s, the network-on-chip (NoC) concept was introduced. Keywords for defining NoCs are regularity, flexibility, throughput scalability and reduced power consumption. NoCs build on the multiprocessor-interconnect background but differ in their implementation, with different latency, area-cost and power-consumption requirements. As regular structures, they bring the flexibility and scalability needed for the platform concept. In terms of power consumption, they are more efficient than busses.

    Even if NoC-based architectures solve many issues linked to many-core architectures, the power consumption stays at a high level and tends to increase due to the increasing number of cores. Without innovation in this field, the communication alone could have accounted for more than 50 percent of the full SoC power consumption.

    NoCs are distributed all over the SoC, and the clock tree of a fully synchronous NoC typically represents 30 percent of its power consumption.
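    Taken together, the two figures quoted here (communications up to 30 percent of SoC power, and the clock tree about 30 percent of a synchronous NoC's own consumption) suggest why the clock tree is such an attractive optimization target. A back-of-the-envelope combination:

```python
# Combining the article's two 30% figures: the NoC clock tree's
# share of total SoC power is the product of the two ratios.
comm_share_of_soc = 0.30    # communications, upper bound from the article
clock_share_of_noc = 0.30   # clock tree within a fully synchronous NoC

clock_share_of_soc = comm_share_of_soc * clock_share_of_noc
print(f"NoC clock tree ~{clock_share_of_soc:.0%} of total SoC power")
```

    Roughly a tenth of total SoC power in the worst case, which is exactly what GALS and asynchronous schemes go after.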

    However, clock distribution is not the only problem. A more fundamental issue is the difficulty of predicting communication events, which are often performed as data bursts.

    GALS architectures are a solution to deal with multiple clocks domains. Consequently, it is a solution to solve the clock tree distribution issue in NoC-based architectures and has been widely used.

    Towards asynchronous communications?
    Asynchronous logic has been known in the design community to come with three issues delaying its adoption in industry: high area overhead; the need for specialized logic cells (the “Muller gates” or “C-elements” necessary for arbitrating signals); and the need for specialized synthesis and back-end tools.
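    The “Muller gate” or C-element mentioned above is the basic arbitration primitive of asynchronous logic: its output follows the inputs only when both agree, and holds its previous value otherwise. A minimal behavioral model (my own sketch, not from the article):

```python
# Behavioral model of a Muller C-element: the output follows the
# inputs when they agree, otherwise it holds its previous state.

class CElement:
    def __init__(self, initial=0):
        self.out = initial

    def step(self, a, b):
        if a == b:       # inputs agree -> output follows them
            self.out = a
        return self.out  # inputs disagree -> hold previous state

c = CElement()
print([c.step(a, b) for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]])
# -> [0, 0, 1, 1, 0]
```

    The hold behavior is what lets handshake signals wait for both parties, which is the arbitration role described above.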

    One of the main advantages of asynchronous design is its capacity to react on events. This feature can be used to further reduce the power consumption by implementing a voltage scaling scheme on the routers controlled by the arrival / departure of packets.

    As demonstrated in this article, communication quality is a key factor for low-power many-core development.

    For all these reasons, the selection of an adapted GALS technology has to be considered. The merits of mesochronous GALS, asynchronous GALS and fully asynchronous approaches have then been compared.

    Reply
  46. Tomi Engdahl says:

    This is why people pirate Windows
    http://www.networkworld.com/community/blog/why-people-pirate-windows

    “I’m pirating the next version of Windows” where the author wrote, “I will not buy another copy of Windows until the activation system is removed. Not another moment of my time will be wasted entering excessively long 100-digit activation keys into my telephone, only to have the key automatically rejected, then manually accepted after a few more minutes of inconvenience by someone on the phone. I have had enough.”

    Indeed, even Microsoft agreed, “Because Windows installed on your PC is genuine, enjoy the security, reliability and protection it provides.” Whew, ok this was like the third time but all is well that ends well, right? Eeenk! Two days later guess what showed up again on the bottom right of the monitor? “This copy of Windows is not genuine.”

    Well, I hope it doesn’t come back, but this had me thinking about the coming Windows 8. Granted, I’m out of practice at mass-fixing Windows, but you can surely see from the screenshot proof that it was indeed validated and genuine at least once.

    “There is mounting frustration about the Windows activation issue and I’ve also read a host of pages describing how to remove the offending ‘This copy is not genuine.’ I would like Microsoft to give the definitive how-to remove answer to customers who are not pirating, not a victim of counterfeiting, but continue to get error messages as if they are.”

    Reply
  47. Tomi Engdahl says:

    IT Workers Are Happy, But Will Still Leave for Something Better
    http://www.cio.com/article/706171/IT_Workers_Are_Happy_But_Will_Still_Leave_for_Something_Better?taxonomyId=3123

    Despite overall satisfaction with their current job situation, IT workers still show a readiness to jump ship when the next best thing comes along.

    The majority of IT employees are engaged at work, loyal to their employers and inspired to do their best every day, according to new survey findings from Randstad Technologies and Technisource. Despite that, however, more than half (53 percent) are open to new employment opportunities. Think of it as the IT sector’s version of The Five Year Engagement.

    “The takeaway for employers is that they use whatever means to create a strong bond with their employees by engaging, recognizing and empowering them in order to minimize attrition,”

    Reply
  48. Tomi Engdahl says:

    Obsolescence by design: Short-term gain, long-term loss, and an environmental crime
    http://www.edn.com/blog/Brian_s_Brain/41782-Obsolescence_by_design_Short_term_gain_long_term_loss_and_an_environmental_crime.php?cid=EDNToday_20120515

    Anyhoo … the battery in the iPhone 3GS still works passably, but holds notably less charge than it did when new.

    And I’ll also claim that this desired outcome defined an intentional design decision by Apple; when the battery inevitably fails, the company assumes that the affected consumer will just go out and buy a brand new handset.

    That same design decision (explained by the company as a necessity to enable slim system form factors…a claim which I frankly don’t buy) extends to the company’s iPods, none of which have ever offered an easily user-accessible battery. And it also extends to the company’s laptops

    One other related MacBook “feature” irks me, too. Apple makes a habit of regularly obsoleting various products (and generations of products) with each Mac OS X uptick.

    Here’s the thing; I pragmatically ‘get’ why Apple chose to chart these particular design courses, from a business standpoint.

    Non-removable batteries, as I’ve already mentioned, guarantee obsolescence and replacement of the entire system.

    And O/S obsolescence not only guarantees system obsolescence but also simplifies both O/S development (by limiting backwards-compatibility necessity) and subsequent O/S support.

    But Apple’s stubbornness also fills up lots of landfills with lots of otherwise perfectly good hardware. And it irks me every time I discover that some widget I’ve bought seemingly only a short time before is now archaic.

    Reply
  49. Tomi Engdahl says:

    Now it has been proven: money spent on information technology increases sales

    A new study shows that investments in information technology improve a company’s profitability.

    The study is good news particularly for Chief Information Officers, since in the best companies investments in IT can be as productive as money spent on marketing.

    The companies that benefited most were those that had increased their IT budgets.

    In growth projects, each additional dollar of IT spending per employee increased sales by 12 dollars per person.

    Source:
    http://www.tietoviikko.fi/cio/nyt+se+on+todistettu+tietotekniikkaan+laitettu+raha+lisaa+myyntia/a808810?s=r&wtm=tietoviikko/-16052012&

    Reply
  50. Tomi says:

    Hewlett-Packard Said to Consider Cutting Up to 25,000 Jobs
    http://www.bloomberg.com/news/2012-05-17/hewlett-packard-said-to-consider-cutting-as-many-as-25-000-jobs.html

    Hewlett-Packard Co. (HPQ) is considering cutting as many as 25,000 jobs, or 8 percent of its workforce, to reduce costs and help the company contend with ebbing demand for computers and services, people briefed on the plans said.

    The number to be cut includes 10,000 to 15,000 from Hewlett-Packard’s enterprise services group, which sells a range of information-technology services and has been beset by declining profitability, said these people, who asked not to be identified because the plans aren’t final and may change.

    Reply
