Computer technologies for 2012

The ARM processor is becoming more and more popular during 2012. Power and Integration—ARM Making More Inroads into More Designs article tells that it’s about power—low power, almost no power. A huge and burgeoning market is opening for devices that are handheld and mobile, have rich graphics, deliver 32-bit multicore compute power, include Wi-Fi, web and often 4G connectivity, and can last up to ten hours on a battery charge. The most obvious among these are smartphones and tablets, but an increasing number of industrial and military devices also fall into this category.

The rivalry between ARM and Intel in this arena is predictably intense because, try as it will, Intel has not been able to bring the power consumption of its Atom CPUs down to the level of ARM-based designs (Atom typically in the 1–4 watt range, a single ARM Cortex-A9 core in the 250 mW range). ARM’s East unimpressed with Medfield, design wins article tells that Warren East, CEO of processor technology licensor ARM Holdings plc (Cambridge, England), is unimpressed by the announcements made by chip giant Intel about the low-power Medfield system-chip and its design wins. On the other hand, Intel counters that Android will run better on its chips. Watch how this competition plays out.
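
To put those wattages in perspective (illustrative figures of my own, not from the articles): a tablet-class 25 Wh battery could feed a 250 mW processor for roughly 25 Wh / 0.25 W = 100 hours, but a 2 W processor for only about 12.5 hours, and that is before the display and radios, usually the biggest drains, consume anything at all. At this scale, every watt the CPU gives back goes directly into battery life or into a smaller, cheaper battery.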

Windows-on-ARM Spells End of Wintel article tells that brokerage house Nomura Equity Research forecasts that the emerging partnership between Microsoft and ARM will likely end the Windows–Intel duopoly. The long-term consequence for the world’s largest chip maker will likely be an exit from the tablet market as ARM makes inroads into notebook computers. As ARM is surely going to keep pointing out, it doesn’t have to beat Intel’s raw performance to make a big splash in this market: for these kinds of devices, speed isn’t everything, and ARM’s promised power consumption advantage will surely be a major selling point.

Windows 8 Release Expected in 2012 article says that Windows 8 will be with us in 2012, according to Microsoft roadmaps, and Microsoft is still hinting at an October Windows 8 release date. It remains to be seen what the ramifications of Windows 8, which is supposed to run on either the x86 or ARM architecture, will be. One analyst says Windows on ARM will not be terribly successful, but whether he is right remains to be seen. The ARM-based chip vendors that Microsoft is working with (TI, Nvidia, Qualcomm) are currently focused on mobile devices (smartphones, tablets, etc.), because this is where the biggest perceived advantages of ARM-based chips lie, and do not seem to be actively working on PC designs.

Work on engineering Windows 8 for mobile networks is under way. Windows 8 Mobile Broadband Enhancements Detailed article tells that using mobile broadband in Windows 8 will no longer require device-specific drivers and third-party software. This is thanks to the new Mobile Broadband Interface Model (MBIM) standard, which hardware makers are reportedly already beginning to adopt, and a generic driver in Windows 8 that can interface with any chip supporting that standard. Windows will automatically detect which carrier the device is associated with and download any available mobile broadband app from the Windows Store. MBIM 1.0 is a USB-based protocol for host and device connectivity for desktops, laptops, tablets and mobile devices. The specification supports multiple generations of GSM- and CDMA-based 3G and 4G packet data services, including the recent LTE technology.

Consumerization of IT is a hot trend that continues in 2012. Uh-oh, PC: Half of computing device sales are mobile. Mobile App Usage Further Dominates Web, Spurred by Facebook article tells that the era of mobile computing, catalyzed by Apple and Google, is driving one of the largest shifts in consumer behavior of the last forty years. Impressively, its rate of adoption is outpacing both the PC revolution of the 1980s and the Internet boom of the 1990s. By the end of 2012, Flurry estimates that the cumulative number of iOS and Android devices activated will surge past 1 billion, making the rate of iOS and Android smart device adoption more than four times faster than that of personal computers (over 800 million PCs were sold between 1981 and 2000). Smartphones and tablets come with broadband connectivity out of the box, and bring-your-own-device is becoming accepted business practice.

Mobile UIs: It’s developers vs. users article tells that the increased emphasis on distinctive smartphone UIs means even more headaches for cross-platform mobile developers. Whose UI will be the winner? Native apps trump the mobile Web. The increased emphasis on specialized mobile user interface guidelines also casts new light on the debate over Web apps versus native development.

The Cloud is Not Just for Techies Anymore article tells that cloud computing has achieved mainstream status, so we demand more from it. That’s because our needs and expectations for a mainstream technology and an experimental technology differ: once we depend on a technology to run our businesses, we demand minute-by-minute reliability and performance.

Cloud security is no oxymoron article estimates that in 2013 over $148 billion will be spent on cloud computing. Companies large and small are using the cloud to conduct business and store critical information. The cloud is now mainstream. The paradigm of cloud computing requires cloud consumers to extend their trust boundaries outside their current network and infrastructure to encompass a cloud provider. There are three primary areas of cloud security that apply to almost any cloud implementation: authentication, encryption, and network access control. If you are dealing with those issues in software design, read the Rugged Software Manifesto and the Rugged Software Development presentation.

Enterprise IT’s power shift threatens server-huggers article tells that as more developers take on the task of building, deploying, and running applications on infrastructure outsourced to Amazon and others, traditional roles of system administration and IT operations will morph considerably or evaporate.

Explosion in “Big Data” Causing Data Center Crunch article tells that global business has been caught off-guard by the recent explosion in data volumes and is trying to cope with short-term fixes such as buying in data centre capacity. Oracle also found that the number of businesses looking to build new data centres within the next two years has risen. Data centre capacity and data volumes should be expected to keep going up, which drives data centre capacity building. Most players active in the “Big Data” field seem to plan to use the Apache Hadoop framework for the distributed processing of large data sets across clusters of computers. At least EMC, Microsoft, IBM, Oracle, Informatica, HP, Dell and Cloudera are using Hadoop.

Cloud storage has lately been a very popular topic for handling large amounts of data. The benefits have been talked up a great deal, but now we can also see the risks being realized. Did the Feds Just Kill the Cloud Storage Model? article claims that Megaupload-type shutdowns and the Patriot Act are killing interest in cloud storage. Many innocent Megaupload users have had their data taken away from them. The MegaUpload seizure shows how personal files hosted on remote servers operated by a third party can easily be caught up in a government raid targeted at digital pirates. In the wake of the Megaupload crackdown, fear is forcing similar sites to shutter their sharing services. If you use any of these cloud storage sites to store or distribute your own non-infringing files, you are wise to have backups elsewhere, because they may be next on the DOJ’s copyright hit list.

Did the Feds Just Kill the Cloud Storage Model? article also tells that worries have been steadily growing among European IT leaders that the USA Patriot Act would give the U.S. government unfettered access to their data if it is stored on the cloud servers of American providers. Escaping the grasp of the Patriot Act may be more difficult than the marketing suggests: “You have to fence yourself off and make sure that neither you nor your cloud service provider has any operations in the United States, otherwise you’re vulnerable to U.S. jurisdiction.” Yet the cloud computing model is built on the argument that data can and should reside anywhere in the world, freely passing across borders.

Data centers to cut LAN cord? article mentions that 60 GHz wireless links are being tested in data centers to ease east–west traffic jams. According to a recent article in The New York Times, data center and networking techies are playing around with 60 GHz wireless networking for short-haul links to give rack-to-rack communications some extra bandwidth for when east–west traffic goes a bit wild. The University of Washington and Microsoft Research published a paper at the Association for Computing Machinery’s SIGCOMM 2011 conference late last year about their tests of 60 GHz wireless links in the data center. Their research used prototype links that bear some resemblance to the point-to-point, high-bandwidth technology known as WiGig (Wireless Gigabit), which among other things is being proposed as a means to support wireless links between Blu-ray players and TVs, replacing HDMI cables (Wilocity Demonstrates 60 GHz WiGig (Draft 802.11ad) Chipset at CES). The 60 GHz band is suitable for indoor, high-bandwidth use in information technology. There are still many places for physical wires, though: the wired connections used in a data center are highly reliable, so “why introduce variability in a mission-critical situation?”

820 Comments

  1. Tomi says:

    Resume Copy; Writing in IT Terms
    cio.com/article/706389/Resume_Copy_Writing_in_IT_Terms?taxonomyId=3123

    Perhaps you see your IT resume as a way to get a job interview. That’s the goal when we send out our resumes, after all. But thinking about resumes that way doesn’t really help you determine how to prepare one.

    Reply
  2. Tomi Engdahl says:

    CPU and GPU chips account for half of $111bn chip market
    Application specific cores are the next big thing
    http://www.theinquirer.net/inquirer/news/2175750/cpu-gpu-chips-account-half-usd111bn-chip-market

    ANALYST OUTFIT IMS Research claims that half of the processor market is made up of chips that have CPUs and GPUs integrated on the same die.

    The firm’s findings show that chips that include both CPU and GPU cores now account for half of the $111bn processor market. According to its report, this growth has all but destroyed the integrated graphics market, but the company said that discrete GPUs will continue to see growth.

    Tom Hackenberg, semiconductors research manager at IMS Research said, “Through the last decade the mobile and media consumption device markets have been pivotal for this hybridization trend; Apple, Broadcom, Marvell, Mediatek, Nvidia, Qualcomm, Samsung, St Ericsson, Texas Instruments and many other processor vendors have been offering heterogeneous application-specific processors with a microprocessor core integrating a GPU to add value within extremely confined parameters of space, power and cost.”

    Hackenberg made an interesting point as to why both AMD and Intel are pushing deeper into their respective CPU and GPU on-die strategies, suggesting that it is a way to easily design an embedded processor for use in handheld devices.

    He also claimed that in the future chip vendors will have to push application specific cores to sell chips rather than rely on higher frequencies.

    Reply
  3. Tomi Engdahl says:

    AMD plugs Trinity into embedded systems
    http://www.eetimes.com/electronics-news/4373315/-AMD-plugs-Trinity-into-embedded-systems?Ecosystem=communications-design

    AMD announced a family of embedded processors based on its new Trinity integrated x86 and graphics design

    The AMD Embedded R-Series includes as many as eight chips that merge x86 and graphics cores and two related I/O controllers.

    Like the Trinity parts, the R-Series sports members with up to four of AMD’s latest Piledriver x86 cores and 384 graphics cores. The embedded devices are unique in that AMD will guarantee their availability for at least five years and support them with a dedicated embedded service group.

    A handful of embedded systems designers said they will use the chips.
    Advantech-Innocore and Quixant will use them in casino game machines, iBase will use them in digital signs and Congatec and DFI will use them in COM Express modules.

    Reply
  4. Tomi Engdahl says:

    Microsoft expects to sell 350 million Windows 7 based devices this year.

    Microsoft CEO Steve Ballmer said Windows 7 is the all-time best-selling operating system.

    Windows 7 sales helped the world’s largest software company post results that exceeded expectations in the third quarter of its financial year, which ended in April.

    Now expectations are placed on the next version of Windows. Windows 8 is scheduled to become available in October.

    Source:
    http://www.digitoday.fi/bisnes/2012/05/22/ballmer-lupaa-myyda-350-miljoonaa-win7-laitetta/201229822/66?rss=6

    Reply
  5. Tomi Engdahl says:

    Core Wars: Inside Intel’s power struggle with NVIDIA
    http://www.theregister.co.uk/2012/05/21/intel_v_nvidia_core_battle/

    GPU Technology Conference Intel and NVIDIA are battling for the hearts and minds of developers in massively parallel computing.

    Intel has been saying for years that concurrency rather than clock speed is the future of high performance computing, yet it has been slow to provide the mass of low-power, high-efficiency CPU cores needed to take full advantage of that insight.

    Another angle on this is that GPUs are already designed for power-efficient massively parallel computing, and back in 2006 NVIDIA exploited its potential for general-purpose computing with its CUDA architecture, adding shared memory and other features to the GPU and providing supporting libraries and the CUDA SDK. CUDA is primarily a set of extensions to C, though there are wrappers for other languages.
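
    To give a concrete feel for those C extensions, here is a minimal CUDA sketch (my own illustration, not code from the article; names and sizes are arbitrary). The __global__ qualifier marks a function that runs on the GPU, and the <<<blocks, threads>>> launch syntax fans it out across thousands of parallel threads:

    // Minimal CUDA vector add (illustrative). Each GPU thread handles one
    // element; the host allocates device memory, copies data over, launches
    // the kernel, then copies the result back.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;  // device (GPU) buffers
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // 4096 blocks of 256 threads

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }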

    Power efficiency, which is the true limitation on supercomputer performance, has also been a focus, and NVIDIA states a three times improvement in performance per watt, compared to the previous “Fermi” generation.

    Another GK110 advance is Hyper-Q, which provides 32 simultaneous connections between CPU and GPU, compared to just one in Fermi. The result is that multiple CPUs can launch work on the GPU simultaneously, greatly improving utilisation.

    NVIDIA now projects that by 2014, 75 per cent of HPC customers will use GPUs for general purpose computing.

    The rise of GPU computing must be troubling to Intel, especially as the focus on power efficiency raises interest in combining ARM CPUs with GPUs, though implementation is unlikely until we have 64-bit ARM on the market.

    Intel’s response is an initiative called Many Integrated Core (MIC, pronounced Mike). It has similarities with GPU computing, in that MIC boards are accelerator boards with their own memory, and developers need to understand that parts of an application will execute on the CPU, parts on MIC, and that data has to be copied between them.

    Knights Ferry is the MIC prototype, available now to some Intel partners, and has 32 cores and up to 128 threads (four Hyper Threads per core). Knights Corner will be the production MIC and has more than 50 cores and over 200 threads.

    Intel is supporting MIC with its existing suite of tools for concurrent programming: Parallel Studio XE and Cluster Studio XE. Key components are Threading Building Blocks (TBB), a C++ template library, and Cilk Plus which extends C/C++ with keywords for task parallelism. Intel is also supporting OpenMP, a standardised set of directives for parallel programming, on MIC, though in doing so it is getting ahead of the standard since OpenMP does not yet support accelerators. Intel’s Math Kernel Library (MKL) will also be available for C and Fortran. OpenCL, a standard language for programming accelerators, will also be supported on MIC.

    Intel’s line is that if you have an application that takes advantage of parallel programming on the CPU today, it can easily be adapted for MIC, since the MIC processors use the familiar x86 instruction set and programming model.

    “We are trying to provide the common tools and programming models for the Xeon and x86 architecture and for the MIC architecture so you can use C++, Fortran, OpenMP, TBB, Cilk Plus, MKL; not only for Xeon but for MIC as well,” said Intel technical consulting engineer Levent Akyil at the company’s Software Conference last month in Istanbul. “You can develop for Xeon today and scale your investment to the future Intel MIC architecture.”

    The advantage over CUDA is that developers do not have to learn a new language. Intel quotes Dan Stanzione, deputy director at TACC (Texas Advanced Computing Center). “Moving a code to MIC might involve sitting down and adding a couple of lines of directives [which] takes a few minutes. Moving a code to a GPU is a project,” says Stanzione.

    That said, NVIDIA has partnered with CAPS, Cray and PGI to create a directive-based approach to programming GPU accelerators, called OpenACC. Compiler support is limited currently to those from the above companies, but the expectation is that OpenACC will eventually merge with OpenMP. Adding directives to their C or C++ code is easier for programmers than learning CUDA C or OpenCL.
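
    As a rough sketch of what this directive style looks like (my own illustration of a simple reduction loop, not code from the article), the same serial loop can be parallelised by a single pragma, with OpenMP targeting CPU threads and OpenACC targeting a GPU:

    // The same dot product, parallelised by directives alone (illustrative).
    // Strip the pragmas and both functions are still plain, valid serial C/C++.
    double dot_openmp(const double *a, const double *b, int n) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)  // spread across CPU cores
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    double dot_openacc(const double *a, const double *b, int n) {
        double sum = 0.0;
        // The compiler generates GPU code and copies a and b to the device.
        #pragma acc parallel loop reduction(+:sum) copyin(a[0:n], b[0:n])
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    This is what Stanzione means by adding “a couple of lines of directives”: the annotated source remains an ordinary program, whereas a CUDA port restructures it around kernels and explicit memory transfers.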

    Why use NVIDIA GPUs rather than Intel MIC? Jack Wells, director of science at the Oak Ridge National Laboratory (ORNL) in Tennessee, is doubtful that Intel’s “same code” approach will deliver optimum results.

    “But this is a delicate issue. In supercomputing, just porting codes and getting them to run is not the goal. If it doesn’t run well, it’s a bug. So our best judgment is that the same process one needs to go through to get the codes running on a GPU hybrid machine would be similar to what you would do on a MIC hybrid machine, if it’s in a hybrid mode… It is not credible to me that, even if MIC delivers good performance, that just compiling your code and running it will be satisfactory.”

    “NVIDIA has embraced OpenACC, and that’s a development that we’re thrilled about,” he says

    Another factor mitigating the proprietary nature of CUDA is NVIDIA’s support for the open-source LLVM compiler project. The LLVM compiler for CUDA opens up the possibility of both supporting other languages on NVIDIA GPUs and compiling CUDA code to target other GPUs or x86 CPUs.

    The key question is: Will MIC or Knights Corner offer the best performance per watt?

    Reply
  6. Tomi Engdahl says:

    NVIDIA VGX VDI: New tech? Or rehashed hash?
    http://www.theregister.co.uk/2012/05/21/nvidia_vgx_vdi_comment/

    The suggestion that NVIDIA’s new VGX virtualised GPU could be a holy grail for task- and power-user desktop virtualisation inspired reader comments that are well worth addressing. They also brought out a few details that I didn’t cover in the article.

    “21st century X-terminals then. Didn’t SGI (Silicon Graphics back then) push this sort of stuff a couple of decades ago?”

    Absolutely right

    “This is hardly new. Virtualising a GPU is already possible under Windows Server 2008 R2 and Hyper-V Server 2008 R2 with RemoteFX. I think it was HP who put up a demo where someone was playing Crysis on a low-end thin client.”

    This is different than what we’ve seen before. From what I can tell, the closest thing to it is what SGI did in their uber-high-end visualisation HPC boxes several years ago. Reader Phil Dalbeck contributed a technical response…

    “This is very different than the virtualised hardware GPU offered under RemoteFX or the software 3d GPU offered in Vmware View 5.”

    “The Virtualised GPU in RemoteFX is an abstraction layer that presents a virtual GPU to the VM, with a very limited set of capability (DirectX9 level calls, no hardware OpenGL, no general purpose compute); not only does this not fully leverage the capabilities of the GPU, but it is less efficient due to having to translate all Virtual > Physical GPU calls at a hypervisor level.”

    “Contrary to some comments above – VGX is a real game changer for MANY industries – my only hope is that Nvidia doesn’t strangle the market by A) Vastly overcharging for a card that is essentially a £200 consumer GPU B) Restrict[ing] competition by tying virtualisation vendors into a proprietary API to interface with the GPU, thus locking AMD out of the market which is to the longer term detriment of end users (eg, CUDA vs OpenCL).”

    Reply
  7. Tomi Engdahl says:

    Half Of PC Users Are Pirates, Says Study
    One in four UK computer users have installed unlicensed software, says BSA
    http://www.techweekeurope.co.uk/news/half-of-pc-users-are-pirates-says-study-78879

    Over half of PC users worldwide admitted to using pirated software last year, according to a study by the trade group Business Software Alliance (BSA).

    The study discovered that more than three quarters (77 percent) of UK PC users surveyed do not think the risk of getting caught is an effective deterrent to software piracy.

    According to UK law, the maximum amount of damages software developers can claim is equivalent to the cost of the software license. The BSA is calling for a stronger damages law, including double damages, to stop the increase in illegal software use.

    According to BSA, on average only 20 percent of software pirates consider current enforcement measures a sufficient deterrent to their activities.

    “It is clear that the fight against software piracy is far from over. Although emerging markets are of the greatest concern, the problem is still persisting in mature markets, in which one in four admit to using pirated software. One of the more troubling issues is that business decision makers purchase some legitimate copies but then turn a blind eye to further (illegal) installations for new users, locations and devices,” said Robin Fry, commercial services partner at DAC Beachcroft.

    “It is all very well having the IP rights in place, but unless we can improve the practical enforcement measures, the effectiveness of the laws will be blunted,” he added.

    Reply
  8. Tomi Engdahl says:

    Summer 2012 preview: tablets, tablets, and more tablets
    http://www2.electronicproducts.com/Summer_2012_preview_tablets_tablets_and_more_tablets-article-fajb_summer2012_tablets_may2012-html.aspx

    The upcoming months will see plenty of new tablets introduced to the market

    It seems like every electronics company nowadays makes its own tablet. Apple, Microsoft, Samsung . . . the list goes on and on.

    With so many names out there, the rumor mill is churning at high speed now, as it is anticipated that the market is about to be flooded with several new and updated tablets over the coming months.

    To make sense of all the hearsay (and no doubt put myself in a position of being wrong on more than one of the following stories I’m about to present), here’s a preview of everything that’s expected to take place over the summer.

    Reply
  9. Tomi Engdahl says:

    Up to 500 million Windows 8 users by end of 2013, says Ballmer
    http://www.neowin.net/news/up-to-500-million-windows-8-users-by-end-of-2013-says-ballmer

    Earlier today, we reported that Microsoft CEO Steve Ballmer said that 350 million devices with Windows 7 would be sold in 2012.

    In a speech to the Seoul Digital Forum in South Korea today, Ballmer said that up to 500 million users will be using a Windows 8 device by the end of 2013.

    Ballmer also said that Microsoft will “soon” launch a version of Skype for Windows 8.

    Ballmer also spoke about the cloud computing industry today, predicting that in a few years there will only be a few companies that will dominate that industry. He stated, “The number of core (cloud) platforms, around which software developers will do their innovation, is not ever-broadening. It’s really a quite smaller and focused number — Windows, various forms of Linux, the Apple ecosystem.”

    Reply
  10. Tomi Engdahl says:

    CIOs Don’t Need to Be Business Leaders
    http://www.cio.com/article/706650/CIOs_Don_t_Need_to_Be_Business_Leaders

    Given the complexity of today’s applications, it’s folly to suggest that the future role of the CIO is less technical and more businesslike, columnist Bernard Golden writes. If anything, it’s the opposite — the business side of the enterprise should embrace technology.

    CIO — It seems like every week I come across an article stating that being a CIO means thinking more like a business person and less like an engineer. Often I see articles that say that CIOs need to talk the language of business, not technology. Occasionally I’ll see one that says that CIOs need to be business leaders and stop focusing on technology.

    I have seen pieces asserting that future heads of IT will be from disciplines such as marketing or finance, since technology really isn’t that important anymore. I’ve even seen analyses that say that CIOs no longer need to manage technically capable organizations because infrastructure is being offloaded to outsourcers and on-premise applications are being displaced by SaaS applications.

    The implication of all these viewpoints is that technology qua technology is no longer significant and that, overall, it’s so standardized and commoditized that it can be treated like any other area of the business. In fact, it can be managed by someone with no technical background at all.

    Notion of CIO as Business Leader Just Plain Wrong

    The shorthand version of this argument is the CIO needs to be a business leader, not a technologist. The implication is clear: The CIO leaves the technical details to others and focuses on the big picture.

    There’s only one thing wrong with this perspective. It’s wrong. In fact, nothing could be further from the truth.

    Technical skills in IT management are important today like never before—and that fact is becoming increasingly evident. In the future, CIOs will need deep technical skills. A CIO with even average technical skills will be not only inadequate for his or her job, he or she will represent a danger to the overall health of the company.

    IT, too, is becoming increasingly complex. Ten years ago, a company’s website was primarily a display application designed to deliver static content. Today, a website is a transaction and collaboration application that supports far higher workloads. Websites commonly integrate external services that deliver content or data that is mixed with a company’s own data to present a customized view to individual users. The application may expose APIs to allow other organizations to integrate it with their applications, and those same APIs may be used to support a mobile website. Finally, the site probably experiences high variability of load throughout the year as seasonal events or specific business initiatives drive large volumes of traffic.

    The complexity of these applications is an order of magnitude higher than that of a decade ago.

    You Can’t Discuss Tech Without Knowing Tech

    Here’s the thing: Complex as they are, these new applications are critical to the success of the overall business.

    Now, do you think a CIO can get by without understanding the key elements of these types of applications? Without recognizing the weak aspects of the application where failure or performance bottlenecks can ruin successful user engagement with the application?

    It’s critical that the CIO possess a sufficiently deep technical background to feel comfortable discussing current technology matters.

    Believe me, there is a world of difference between someone who understands technology—and as a result has to weigh alternatives and disputes among different groups involved in a technology discussion—and someone who doesn’t really have any technology background and arbitrates by non-technical criteria. The difference between them is the difference between an organization that gets things right on technology—or, when it gets things wrong, can recognize the issue and quickly correct it—and one that makes poor decisions that result in fragile, constrained applications.

    The fact is, businesses today are technology businesses. Information technology is core to what they do. Something so critical to a company’s success imposes an obligation on a CEO to comprehend it.

    Reply
  11. Tomi Engdahl says:

    Nvidia reveals Kai: a platform for building quad-core Android 4.0 tablets priced at $199
    http://www.theverge.com/2012/5/23/3038125/nvidia-reveals-kai-199-quad-core-reference-design

    When Nvidia CEO Jen-Hsun Huang said we might see $199 Tegra 3 tablets this summer, he wasn’t speculating idly. Nvidia has revealed that it’s working on just such a tablet: Kai. At the company’s annual meeting of investors last week, VP Rob Csongor revealed the idea, and explained that Kai isn’t just a piece of hardware, but a plan to democratize its quad-core Tegra 3 system-on-chip. Nvidia wants to offer Android 4.0 tablets that are more powerful than the Kindle Fire at the same price point.

    Does that make Kai a reference design? Probably yes, but we wouldn’t be surprised if it’s also a consumer product. Nvidia and Asus teamed up at CES 2012 to introduce a quad-core tablet with a then-unheard-of $250 price point, the ME370T, and it’s a dead ringer for the Kai in the picture above.

    Reply
  12. Tomi Engdahl says:

    Microsoft set to release Office for iOS and Android tablets in November
    http://www.bgr.com/2012/05/23/microsoft-office-ipad-android-launch/

    BGR has learned from a reliable source that Microsoft is currently planning to release the company’s full Office suite for not only Apple’s iPad, but for Android tablets as well. The company is targeting November of this year for both launches. Additionally, our source has seen Microsoft Office running on an iPad first-hand and has said that it looks almost identical to the previous leak from The Daily a few months back

    Reply
  13. Tomi Engdahl says:

    HP cuts 27,000 workers
    http://www.channelregister.co.uk/2012/05/23/hp_layoffs_restructuring/

    As rumored last week, IT giant Hewlett-Packard is slashing its employee count worldwide to squeeze more profits from its revenue stream. The job cuts are not as deep as some had been expecting, but are still going to be tough on the company.

    In a statement put out ahead of its conference call with Wall Street analysts, HP said that it was looking to cut between $3bn and $3.5bn in annualized costs from its restructuring as it exited its fiscal 2014, which ends in October of that year. To accomplish this savings goal, HP will need to shed approximately 27,000 workers, which is 8.1 per cent of its 349,600 worldwide workforce.

    Cathie Lesjak, HP’s CFO, said on the call that the company expects to shed 9,000 workers in fiscal 2012, which ends this October

    HP expects the remaining 16,000 employees to be shed over the next two years

    Sure to annoy HPers is the admission by Lesjak that the vendor will rehire employees to cover growth markets and is also to hire employees in lower-cost regions to replace those who are getting the sack.

    HP has singled out three key areas – wait for it – for future investment: cloud, big data, and security. HP is trying to come up with server, storage, services, and software angles for each of these three growth areas and did not provide much in the way of specific investments it would make.

    Reply
  14. Tomi Engdahl says:

    HP plans to lay off 27,000 people — 8 percent of the workforce
    http://venturebeat.com/2012/05/23/hp-plans-to-lay-off-27000-people-8-percent-of-the-workforce/

    Stung by its failures in the mobile computing business, Hewlett-Packard said it would cut 27,000 jobs today. That amounts to 8 percent of its 325,000-person work force.

    The cuts will be implemented by October 2014,

    “We are making progress in our multi-year effort to make HP simpler, more efficient and better for customers, employees, and shareholders,” said Meg Whitman, HP chief executive. “This quarter we exceeded our previously provided outlook and are executing against our strategy, but we still have a lot of work to do.”

    HP will continue to invest in core research and development, enterprise servers, software, and services. This year, HP expects to shed about 8,000 employees.

    HP may not be as bad off as Yahoo, the sick man of the valley, but it hasn’t kept up with the times. Its rival, Apple, has taken over HP’s old properties in Cupertino and is building a futuristic, spaceship-like headquarters on top of them.

    HP Launches Multi-Year Restructuring to Fuel Innovation and Enable Investment
    http://h30261.www3.hp.com/phoenix.zhtml?c=71087&p=irol-newsArticle&ID=1699268&highlight=

    Reply
  15. Tomi Engdahl says:

    Google 7-inch tablet imminent, says report
    http://news.cnet.com/8301-1001_3-57440566-92/google-7-inch-tablet-imminent-says-report/

    The Google 7-inch tablet is on the way, according to a fresh report — the latest in a long line of reports dating back to the beginning of the year.

    The tablet will hit the market in July, according to a Thursday report from DigiTimes. Shipments — about 600,000 initially — are set to begin in June, the Taipei-based publication claimed.

    Asus is expected to manufacture the device.

    That said, total production for 2012 of about two million units, cited in today’s report, is in keeping with NPD DisplaySearch’s original estimate.

    The tablet’s details are not known but there has been speculation about Android 4.0 running on top of a quad-core chip.

    If Google’s 7-inch tablet materializes and a rumored 7.85-inch “iPad Mini” tablet from Apple also surfaces, that would add to a growing collection of smaller tablets from first-tier suppliers.

    Amazon is already a major force in the 7-inch market segment and Samsung has recently begun selling its 7-inch Galaxy Tab 2.

    Reply
  16. Tomi Engdahl says:

    Browser choice: A thing of the past?
    http://news.cnet.com/8301-1023_3-57439936-93/browser-choice-a-thing-of-the-past/

    Devices using iOS and the future Windows RT hobble third-party browsers. Despite some good reasons for doing so, the change could undermine browser competition.

    Like to pick your browser? Beware, because new mobile devices threaten to stifle the competitive vigor of the market for Web browsers on PCs.

    On personal computers running Windows, Macs, and Linux, you can pick from a variety of browsers, finding the best combination of user interface, performance, expansion, customization, and other attributes.

    But on a host of devices ranging from today’s iPhones to tomorrow’s Windows RT tablets, things are very different. The idea that the browser is a feature of the operating system — an idea Microsoft floated to defend against an antitrust attack in the 1990s regarding the link between Internet Explorer and Windows — has boomeranged back.

    Although many new devices technically can accommodate other browsers besides those that come with the operating system, those third-party browsers won’t always get the full privileges and thus power of the built-in browser.

    The organization that cares the most about this matter is Mozilla, whose founding purpose is to keep the Web open and whose Firefox browser is today at a significant disadvantage when it comes to spreading beyond personal computers

    There are real technical reasons why curtailing browser privileges can make sense.

    Thus, when an underlying operating system such as Windows, iOS, or Linux grants full privileges to a browser, it’s extending a lot of trust when it comes to security, reliability, power consumption, and other factors.

    That’s why Microsoft clamped down with its new Metro interface that debuts with Windows 8 and its sibling for ARM-based devices, Windows RT. “The Metro style application model is designed from the beginning to be power-friendly,” Microsoft said in one blog post, and for IE, Microsoft is working to head off security problems.

    The sticking point for Mozilla and Google is that Microsoft’s own browser gets the deeper privileges.

    And because plenty of software, especially mobile device software, uses browser engines as a part of its user interface, browsers are really becoming part of the operating system. Metro, for example, offers Microsoft’s browser engine as a way to let programmers more easily write software that works on multiple devices.

    Apple’s iOS and Google’s Android also permit apps to use Web technology behind the scenes.

    For Google to bring Chrome to iOS, it likely would have to either use Apple’s version of WebKit rather than its own or, less likely, rely on a proxy server.

    Windows Phone 7.x shares similar restrictions: other browsers are technically possible but not permitted to spread their wings fully, and programmers can build new browser interfaces on Internet Explorer.

    Greg Sullivan, senior product manager for Windows Phone 7, said last year that Microsoft doesn’t bar other browsers from Windows Phone but said it would have to be written using Microsoft’s higher-level programming techniques.

    “If you can write a browser in Silverlight or XAML, you could submit it to the market,” Sullivan told

    One of the central issues of browser restrictions is a programming approach called just-in-time compilation. Modern browsers running JavaScript applications use JIT compilers to convert high-level JavaScript software into fast, low-level native instructions for a particular processor.

    To do that, however, the browser must have an important power: to be able to create the low-level code then tell the computer that it can execute the code. Marking the memory where the code is stored as executable, though, is a significant step when it comes to security.
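
    To make that concrete, here is roughly what the core of a JIT does on a POSIX system (a simplified sketch of the general technique, not code from the article): emit native instructions into a writable buffer, then ask the OS to mark that buffer executable, which is the exact privilege at issue here.

    /* JIT in miniature (illustrative, x86-64/POSIX): generate machine code at
     * runtime, mark its memory executable, then call into it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 encoding of "mov eax, 42; ret": a function returning 42. */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        /* 1. Obtain a writable page and copy the generated code into it. */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(buf, code, sizeof(code));

        /* 2. The sensitive step: flip the page to executable. A platform
         * that refuses this to third-party code makes JIT impossible. */
        mprotect(buf, 4096, PROT_READ | PROT_EXEC);

        /* 3. Call the freshly generated native code. */
        int (*fn)(void) = (int (*)(void))buf;
        printf("JIT-generated function returned %d\n", fn());

        munmap(buf, 4096);
        return 0;
    }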

    Windows 8 and Windows RT use a newer interface called WinRT that doesn’t come with the ability to mark code as executable. On x86 machines, Microsoft made an exception, so third-party browsers running on Metro will be able to use Win32. But on ARM-based systems running Windows RT, it hasn’t made that exception, so only IE gets access to Win32.

    That means no JIT for third-party browsers on Windows RT, which in turn means undoing much of the progress that’s been made in recent years in accelerating JavaScript. And nobody wants to use today’s JavaScript-heavy Web pages and Web applications on a browser without it.

    The reason JavaScript program size has exploded is that browsers can shoulder a heavier burden.

    As Nokia Chief Executive Stephen Elop is fond of saying these days, competition has become a war of ecosystems. He’s right.

    It’s not enough to have a nice piece of hardware or a nice operating system. Today, there’s a rush toward vertical integration,

    A browser is one key piece of this stack of technology, so it’s no surprise it’s become so competitive.

    A browser is the gateway to online services, for starters. Google has those services in abundance, with Microsoft charging as fast as it can down the same path. Controlling the browser can ensure people have the Web technologies — Google’s Native Client or Dart programming languages, for example — a company believes necessary to make those services work well.

    That’s where Mozilla gets worried, though

    Google’s Chrome OS and Mozilla’s Boot to Gecko (B2G) don’t even give the option of another browser, of course, because their operating system is the browser.

    And the Web simply is too broad today for developers to target any single browser except in some unusual conditions.

    Now, in addition to real browser competition on PCs, programmers must contend with Safari on iOS, Chrome on Android, IE on Windows, and other contenders such as Amazon’s Silk on Kindle Fire tablets.

    That diversity ensures that, even if you’re locked into a particular device’s browser, at least the Web overall won’t be.

    Reply
  17. Tomi Engdahl says:

    Power: a significant challenge in EDA design
    http://www.edn.com/article/521840-Power_a_significant_challenge_in_EDA_design.php

    Power has become a primary design consideration over the past decade and is causing some big changes in the way that engineers design and verify systems. Physics no longer provides a free ride.

    Power is the rate at which energy is consumed—not a hot topic 10 years ago but a primary design consideration today. A system’s consumption of energy creates heat, drains batteries, strains power-delivery networks, and increases costs.

    The rise in mobile computing initially drove the desire to reduce energy consumption, but the effects of energy consumption are now far-reaching and may cause some of the largest structural changes in the industry. This issue is important for server farms, the cloud, automobiles, chips, and ubiquitous sensor networks relying on harvested energy.

    Reply
  18. Tomi Engdahl says:

    Funds Pour into Big-Data Vendors
    http://www.cio.com/article/706746/Funds_Pour_into_Big_Data_Vendors?taxonomyId=3028

    Investors jump on the big-data bandwagon. Many see it as a good bet, but others warn about hype.

    Investors have taken note of the surging enterprise demand for tools that can manipulate and analyze massive volumes of structured and unstructured data.

    In recent months, top venture capital firms have poured hundreds of millions of dollars into companies that make products designed to manage so-called big data, generally defined as very large and diverse sets of structured and unstructured data gathered from websites, clickstreams, email messages, social media interactions and the like.

    “Big data has become big business,” McDowell said. “Companies are looking for tools to store, manage, manipulate, analyze, aggregate, combine and integrate data.”

    A key driver of the data explosion is the spread of cloud computing, mobile computing and social media technologies, along with business globalization, he said.

    McDowell estimated that the market for big data tools will rise from last year’s $9 billion to $86 billion in 2020, when spending on big data tools will account for some 11% of all enterprise IT spending.
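
    For scale (my arithmetic, not the article’s): growing from $9 billion in 2011 to $86 billion in 2020 implies roughly (86/9)^(1/9) ≈ 1.29, i.e. about a 28–29% compound annual growth rate sustained for nine years, an aggressive assumption for any market.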

    Reply
  19. Tomi Engdahl says:

    http://www.theregister.co.uk/2012/05/25/quotw_ending_may_25/

    Over at Microsoft, there are probably a few staffers suffering from whiplash after the about-turn the bigwigs did on the Aero interface first introduced with Windows Vista.

    Despite having once championed the design, which is still being used with Windows 7, Redmond has had a change of heart. It is apparently now of the view that Aero is lame.

    Jensen Harris, director of program management for the Windows user experience team said in a blog post:

    This style of simulating faux-realistic materials (such as glass or aluminum) on the screen looks dated and cheesy now, but at the time, it was very much en vogue.

    Reply
  20. dda says:

    I will put a link to your blog on my Facebook account in Poland, and I strongly recommend it: Computer technologies for 2012 – Tomi Engdahl’s ePanorama blog. I also have many friends who are interested in medicine; I’m a doctor. I cordially greet you and welcome the blog to our country.

    Reply
  21. Tomi Engdahl says:

    Empowering Choice in Collaboration
    http://blogs.cisco.com/collaboration/empowering-choice-in-collaboration/

    we are facing a workplace that is no longer a physical place, but a blend of virtual and physical environments; where employees are bringing their preferences to work and BYOD (“Bring Your Own Device” to work) is the new norm; where collaboration has to happen beyond a walled garden; and any-to-any connectivity is a requirement, not a “nice to have.”

    As we announced last week, findings from the Cisco IBSG Horizons Study on virtualization and BYOD show that 95% of organizations surveyed allow employee-owned devices in some way, shape or form in the office, and 36% of surveyed enterprises provide full support for employee-owned devices. These stats underscore a major shift in the way people are working, in the office, at home and on the go, a shift that will continue to gain momentum.

    Based on these market transitions, Cisco will no longer invest in the Cisco Cius tablet form factor, and no further enhancements will be made to the current Cius endpoint beyond what’s available today.

    Reply
  22. Tomi Engdahl says:

    Aged Windows XP costs 5x more to manage than Windows 7
    As XP’s life wanes, Microsoft talks dollars to get businesses to ditch 11-year-old OS
    http://www.computerworld.com/s/article/print/9227490/Aged_Windows_XP_costs_5x_more_to_manage_than_Windows_7

    Microsoft yesterday added ammunition to its increasingly aggressive battle to get users off the nearly-11-year-old Windows XP by citing a company-sponsored report that claims annual support costs for the older OS are more than five times that of Windows 7.

    Microsoft has been banging the Windows XP upgrade drum for years, but stepped up the campaign in 2012, including starting a “two-year countdown” to the demise of security support. Last month, Microsoft was blunt, saying “If your organization has not started the migration to a modern PC, you are late.”

    Windows XP exits all support, including monthly security patches, in April 2014.

    “The bottom line…[is that] businesses that migrate from Windows XP to Windows 7 will see significant return on investment,” said Visser.

    According to IDC, an amazing 42% of the Windows “commercial” installed base, or anything other than consumers’ home machines, was Windows XP, making Microsoft’s job of moving everyone off the old OS by its April 2014 retirement nearly impossible.

    In fact, IDC projected that if current trends continue, 11% of the enterprise and educational Windows installed base will still be running XP when Microsoft stops patch delivery in 23 months

    “The migration from Windows XP to Windows 7 yields a 137% return on investment over a three-year period,” claimed IDC.

    Microsoft has been dissing Windows XP for some time, but the ROI report was its first argument that stressed dollars and cents.

    Reply
  23. Tomi Engdahl says:

    Lack of Women Hurting IT Industry
    http://www.cepro.com/article/lack_of_women_hurting_it_industry/

    Female integrators are often ineligible for government contracts because federal law requires at least two women-owned businesses to submit bids. Only 11% of all IT firms are female owned.

    Wendy Frank, founder of Accell Security Inc. in Birdsboro, Pa., wishes she had more competitors.

    It’s not often you hear any integrator say that, but in Frank’s case, she has good reason.

    The current Women-Owned Small Business (WOSB) Federal Contract program authorizes five percent of Federal prime and subcontracts to be set aside for WOSBs. While that might sound fair on the surface, in order to invoke the money set aside for this program, the contracting officer at an agency has to have a reasonable expectation that two or more WOSBs will submit offers for the job.

    “Involving yourself in these political issues, right now, can make great changes that improve your business.”

    Reply
  24. Tomi Engdahl says:

    Why don’t the best techies work in the channel?
    ‘Cos the channel doesn’t want them
    http://www.channelregister.co.uk/2012/05/28/channel_techy/

    “It’s the client’s fault, of course, because they negotiate day rates, not skill levels, erasing the difference between mediocre people like me and stars like you. Of course that’s not the same as saying there aren’t good techies in the channel, because I know some.”

    Even if you didn’t know, it tells you what the firm believes is important: sales, followed by sales support – with actually delivering the system and keeping it running some way behind.

    Resellers and consultancies are run by and for the benefit of the sales team because in the words of an ex-boss of mine “they are the ones that bring in the money” and you’re just a cost, not part of the business.

    Money is the best revenge

    That cliche applies to outsourcing. If you’re outsourced you look good by pushing costs up, not down, and you’re in a good position as you know the IT at your ex-employer and of course they don’t.

    “I hadn’t done the best job I could, or even a competent one. I felt embarrassed when I bumped into the users, especially when they thanked me for the work, not knowing that it wasn’t going to end well.”

    Few good geeks actually enjoy ripping people off.

    Of course there are some good people in consultancies and resellers; I know a good number of them. But that’s not to say they are happy there.

    Reply
  25. Tomi Engdahl says:

    Sony Patent Will Stop Game to Play Advertisement
    http://www.tomsguide.com/us/Patent-PlayStation-advertisement-in-game-NeoGAF,news-15373.html

    Instead of using in-game ad placement, this patent will actually stop the game and play an advertisement as if it were a network TV show.

    The patent is called “Advertisement Scheme for use with interactive content,” and is geared towards Sony’s PlayStation games, maybe more.

    Up until now, gamers have dealt with advertisements as in-game banners, or as actual placements within the virtual real-estate such as billboards, virtual storefronts, virtual products and more.

    Most free-to-play games depend on in-game advertisement to help offset the expense of hosting gamers who don’t purchase virtual goods.

    Gamers will argue that their $60 purchase shouldn’t contain any kind of adverts at all.

    But Sony looks to copy network TV by completely halting a game in progress to display full-screen advertisements. The patent describes a system that will slow down gameplay and display a warning to the end-user, stating that an advertisement is about to take place. The system then halts the game, plays the advertisement, and then warns that the game is about to resume. After that, it’s business as usual. Rewinding to the state of pre-slowdown may even be an option.

    Eurogamer points out that the patent is a continuation of a patent filed back in 2006. That said, Sony’s plans to halt your gameplay may never see the light of day.

    Reply
  26. Tomi Engdahl says:

    AMD says APUs need to have balanced CPU and GPU performance
    http://www.theinquirer.net/inquirer/news/2180387/amd-apus-balanced-cpu-gpu-performance

    CHIP DESIGNER AMD has said that accelerated processor units (APUs) need to have a balance between CPU and GPU performance.

    AMD’s desktop Trinity parts are still some ways off, but the firm desperately needs to address the imbalance that exists in its Llano APUs in terms of CPU and GPU power.

    Now AMD has told The INQUIRER that APUs need to have a balance of CPU and GPU performance, with CPU performance remaining a vital part of overall performance.

    Robinson continued, “The key is the balance between the two cores. Balance between the two [CPU and GPU], not just some monstrous GPU and very little CPU or a monstrous CPU and very little GPU. To me it makes sense to have that balance.”

    Reply
  27. Tomi Engdahl says:

    AMD admits it has to work on improving Linux OpenCL support
    Windows 8 is not the only party in town
    http://www.theinquirer.net/inquirer/news/2180336/amd-admits-improving-linux-opencl-support

    CHIP DESIGNER AMD has admitted it has work to do in improving OpenCL support in Linux.

    AMD’s considerable effort in releasing its Llano and Trinity accelerated processor units (APUs) has been offset by stumbling support from applications for its GPGPU architecture. Now AMD has admitted that it needs to beef up support for Linux.

    Although AMD works with Microsoft to provide OpenCL support in Windows 8, Neal Robinson, senior director of Consumer Developer Support at AMD told The INQUIRER that the firm has “more work to do in the Linux environment”.

    Robinson explained some of the open-source work AMD has been involved in with developers on projects such as the GNU Image Manipulation Program (GIMP), x264, Handbrake and Videolan, with Robinson saying it will work with FFmpeg in the near future to develop OpenCL support. However, when firms such as Dell ship workstations with Linux but only offer GPGPU support for users running Microsoft Windows operating systems, it clearly sends a discouraging message about the work AMD and Nvidia have been doing to drive Linux support for GPGPUs.
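
    For readers who have not seen it, the opening move of any OpenCL host program looks roughly like this (a minimal sketch using the standard Khronos API, not AMD-specific code): enumerate platforms and devices, which is precisely the step that reveals whether a driver stack exposes the GPU at all.

    /* Minimal OpenCL device enumeration (illustrative). On a machine whose
     * drivers lack GPGPU support, the GPU simply never shows up here. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint np = 0;
        clGetPlatformIDs(8, platforms, &np);

        for (cl_uint p = 0; p < np; p++) {
            char name[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            printf("Platform: %s\n", name);

            cl_device_id devices[8];
            cl_uint nd = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &nd);
            for (cl_uint d = 0; d < nd; d++) {
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                printf("  Device: %s\n", name);
            }
        }
        return 0;
    }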

    As for Microsoft’s upcoming Windows 8 operating system, Robinson said, “You’ll see heterogeneous compute on Windows 8 in the Metro interface or in the traditional desktop interface, whichever the user wants to use. There’s a lot of opportunity [for support] there for sure, between DirectX 11 and C++ AMP.”

    Reply
  28. Tomi Engdahl says:

    Are you handcuffed to the rails of your disk array’s sinking ship?
    How your data could end up tied down to a supplier
    http://www.theregister.co.uk/2012/05/29/large_array_migration_impossibility/

    Are your storage arrays now so big, you can’t easily migrate your data off them? If so, you’ve handcuffed yourself to your supplier, open interfaces or not.

    We’ll suppose you’re going to move your data off a, say, fully-loaded VMAX 40K array. How long will it take?

    It will take 2.22 days to move 4PB off the storage vault onto the destination array with the array’s controllers’ normal workload slowed down by all the data moving.

    That’s quite a long time and, in reality, we couldn’t use half the array’s links to do the job. Instead we’d have to use fewer, say ten, so as not to affect the array’s normal workload too much. In that case the data transfer rate would be 0.55PB per day and the transfer would need 7.27 days at best – and probably a couple of weeks in real life.
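
    The arithmetic behind those figures is easy to reproduce (my own back-of-the-envelope with assumed link speeds, not numbers from the article): ten 8 Gbit/s Fibre Channel links at about 80 per cent efficiency give 6.4 GB/s aggregate, which works out to roughly the 0.55PB/day and 7.27 days quoted above.

    // Back-of-the-envelope migration time (assumed link speeds, illustrative).
    #include <cstdio>

    int main() {
        double links = 10.0;
        double usable_gbit = 8.0 * 0.8;                  // per-link Gbit/s at ~80% efficiency
        double gbyte_per_s = links * usable_gbit / 8.0;  // 6.4 GB/s aggregate
        double pb_per_day = gbyte_per_s * 86400.0 / 1e6; // ~0.55 PB/day
        printf("%.2f PB/day, %.2f days for 4 PB\n", pb_per_day, 4.0 / pb_per_day);
        return 0;
    }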

    If the incoming array can virtualise third-party arrays, including the one you are migrating from, then things get better.

    Over time the contents of the old array are drip-fed into the new one until the old array is empty and can be moved out. Thus only buying new arrays that can virtualise your existing array looks like a good idea, and that means using EMC’s VMAX, HDS’ VSP, IBM’s SVC as a virtualising front-end, or NetApp’s V-Series.

    What about the idea of using 100GbitE? The El Reg storage desk understands that using a single big link is more expensive than using multiple smaller links that, collectively, equal the big fat link’s bandwidth. Thus ten 10GbitE links are cheaper than one 100GbitE link.

    Data growth is around 50 per cent per year, so in three years that will be a touch over 50PB. The Isilon scale-out filer doesn’t virtualise third-party arrays and may not be virtualisable by them. Migration means a straight data transfer, up to 50PB of it off 144 nodes. This looks to be a potentially horrendous task.

    The net result of this particular puzzle is that your big data scale-out filer supplier may effectively be your supplier for life. You will be joined at the hip and unable, practically speaking, to throw them off.

    The prospect here is that massive data tubs could become a permanent fixture in your data centres because it is effectively impossible to move the data off them onto a new array. Open systems? Not really.

    Reply
  29. Tomi Engdahl says:

    HP’s net sales last quarter were $30.7 billion (approximately EUR 24 billion). Profit was $1.6 billion (approximately EUR 1.3 billion).

    HP has announced a giant redundancy programme: the company is expected to cut about 27,000 employees.

    HP has more than 300,000 employees worldwide, so 27,000 people represents a reduction of about eight per cent.

    HP revealed at the same time that the company will release a Windows 8 tablet.

    Source:
    http://www.tietokone.fi/uutiset/hp_lta_jattimaiset_potkut_27000_lahtee

    Reply
  30. Tomi Engdahl says:

    Lowering Sandy Bridge prices expected to boost PC demand in emerging markets
    http://www.digitimes.com/news/a20120525PD212.html

    Intel has begun to launch Ivy Bridge processors but has not yet lowered prices of Sandy Bridge models as it did before when replacing platforms; therefore, sources from motherboard players pointed out that if Intel lowers prices for Sandy Bridge processors in June-July, it will be conducive to boosting demand for PCs in emerging markets.

    Reply
  31. Tomi Engdahl says:

    Hey! Put down that Cat6 and plug the cloud app gap instead
    Channel MSPs should bone up on new skills … or else
    http://www.channelregister.co.uk/2012/05/29/walsh_on_cloud_managed_services/

    Cloud computing isn’t the future; services are the future – and vendors are increasingly talking about “managed services” opportunities in the cloud.

    When managed services hit the channel a decade ago, the delivery model was seen as the saviour of value-added resellers (VARs), with their business models heavily dependent on increasingly commoditised hardware and software sales. Through services, VARs could transform their break/fix repair services into predictable, recurring revenue providing stability and sustained profitability. And they did.

    Through automation, MSPs monitor and maintain thousands of servers, endpoints, networking switches, Cat5e and Cat6 cabling, and storage arrays; some will also provide email and backup support, but that’s basically the extent of their services.

    Missing from the MSP equation are software and applications.

    Simply put, the average MSP doesn’t have the application skills to sell, set up and support the emerging demand for advanced cloud services.

    When vendors and cloud companies talk about “managed services”, they’re really talking about the class of service providers – hosting companies, carriers, cloud providers – that have the capability to integrate and manage applications in private and hybrid cloud environments.

    The complexity of managed cloud cannot be overstated, as enterprises and mid-market businesses are looking to cloud providers to help them finally make use of their shelfware and under-utilised software licences.

    A next generation of service provider is emerging to capitalise on this opportunity by developing APIs and integration templates that make it easier to port conventional licences to cloud environments.

    In the near future, cloud managed services will go beyond the integration and support of complex business applications in hosted environments. The category is already evolving to include the brokerage of cloud services and the portability of data across multiple hosting domains.

    If conventional MSPs want to engage the evolving cloud services market, they need to invest in higher-level application skills and develop the ability to provide comprehensive application and data management services on behalf of their customers.

    Reply
  32. Tomi Engdahl says:

    Windows XP Costs Firms Five Times More To Maintain Than Windows 7, Says Microsoft
    http://digg.com/newsbar/topnews/windows_xp_costs_firms_five_times_more_to_maintain_than_windows_7_says_microsoft

    Of the many upcoming entries to the technology fray, Windows 8 is by far one of the most anticipated. The Consumer Preview dropped in February to critical acclaim, and although there’s nothing particularly amiss with the current Windows 7, consumers are still pretty eager to sink their teeth into the Metro interface.

    Still, Microsoft is having quite a bit of trouble convincing businesses to make the transition from Windows XP to Windows 7, so to help persuade the stragglers that moving to 7 is the right move, the Redmond-based software maker has sponsored a white paper by analyst firm IDC, which assesses the costs incurred by running the two different operating systems. As it happens, using XP can be significantly more costly for medium-to-large companies, and the difference in some cases isn’t just minor, either.

    According to IDC’s report, in which nine separate companies were interviewed, the costs of IT maintenance services were up to five times higher with XP than Windows 7, which is a pretty damning statistic. Of course, it’s only natural that older software requires more maintenance, but such significant differences should be cause for alarm for those larger businesses running the older operating system.

    With Microsoft advocating the report, we’ll leave it up to you to decide whether the outcome has been doctored to suit the company’s own ends.

    Support for Windows XP and Office 2003 is set to end on April 8th, 2014, so the motives are clear.

    Reply
  33. Tomi Engdahl says:

    RedSleeve does RHEL-ish clone for ARM
    http://www.theregister.co.uk/2012/05/29/redsleeve_enterprise_linux_arm/

    If you are tired of waiting for Red Hat to do an official port of its Enterprise Linux commercial distribution to the ARM architecture, then RedSleeve Linux has just what you are looking for.

    Like many of you, El Reg had no idea that RedSleeve Linux, a port of the upstream RHEL reworked to run on ARM RISC processors instead of x86, Power, and mainframe processors, even existed, but its existence came to light in the comment section of the story about Dell’s new ARM-based microservers, which are running the latest Ubuntu 12.04 LTS release from Canonical.

    Fedora and RHEL cloner CentOS are working on ARM ports.

    RedSleeve can’t actually say that it is a clone of RHEL, of course, but you have to admit it was pretty clever with the naming and branding to convey that message.

    Reply
  34. Tomi Engdahl says:

    Dell ARMs up for hyperscale servers
    But are they dangerous to anyone except HP?
    http://www.theregister.co.uk/2012/05/29/dell_copper_arm_server/

    Look out Intel. Here comes another ARM box to the microserver party.

    If people didn’t want ARM-based servers, Dell wouldn’t build them

    “This is not only real, but it is going out the door,” says Cumings, with a jab at rival Hewlett-Packard’s “Project Moonshot” hyperscale server effort and its “Redstone” many-ARMed server nodes launched last November. Those servers made a big splash, but neither HP nor Calxeda have said much since then about who is using these concept machines – or what the plan is to ramp up the Redstone product line with other ARM chips with more oomph and memory capacity.

    Those HP Redstone servers are based on Calxeda’s 32-bit ECX-1000 processors, which also debuted last fall, and put four processors on a card, 18 cards in a tray, and 288 nodes in a 3U space – all interconnected by the EnergyCore Fabric Switch embedded on each Calxeda chip.

    The reason why it has taken Dell so long to get a production-grade ARM server to market has little to do with hardware – 32-bit and 40-bit ARM chips that could be used in servers have been around for a while – but rather is related to the fact that the ecosystem of software had to evolve around the machines to give them something useful to do. Even the most hyper of the hyperscale data centers does not want to roll its own Linux and software stacks unless it absolutely has to.

    Now that Ubuntu 12.04 LTS is out with an ARM variant, there is a commercial-grade Linux and application stack you can run on an ARMv7-compatible chip. And the Fedora Project is also doing tweaks to support ARM chips, allowing those who like RHELish Linux to experiment as well.

    The LAMP stack and Hadoop will run on these Linuxes on ARM processors, and the OpenStack cloud controller was demoed running on LARM (what else are we going to call it?) this month, too. The KVM hypervisor is expected to be ready soon for the Cortex-A15 ARM processor, and Java works on the chip as well.

    The important thing is not that all of this software is running or will soon be running, but that the software is getting architecture-specific optimizations as people see the hardware and software ecosystem for ARM-based servers coming into being.

    Cumings says that the Copper sleds have been shipping to selected seed customers for some time and that Dell will be standing up racks of the ARM-based servers in its solution centers as well as in the Texas Advanced Computing Center (TACC) at the University of Texas.

    A single C5000 chassis can hold 48 ARM processors, for a total of 192 cores. That works out to 2,688 cores in a rack if you fill it top to bottom with C5000s.
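
    The closing density claim is simple multiplication; in this quick sketch, the 14-chassis rack count is inferred from the quoted 2,688-core total rather than stated in the article.

        # Core density of a rack of Dell C5000 chassis, per the figures above.
        chips_per_chassis = 48
        cores_per_chip = 4                         # quad-core ARM parts
        chassis_per_rack = 14                      # inferred: 2688 / 192

        cores_per_chassis = chips_per_chassis * cores_per_chip          # 192
        print(cores_per_chassis, cores_per_chassis * chassis_per_rack)  # 192 2688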

    Reply
  35. Tomi Engdahl says:

    What would a post-e-mail world look like?
    http://www.itworld.com/unified-communications/277519/what-would-post-e-mail-world-look

    E-mail may be on the decline, but its archival abilities can’t be matched by any current contender

    As headlines seem to almost gleefully declare the death of e-mail, it may be worth taking a pause and asking the most obvious follow-up question:

    What would a post-e-mail world look like?

    For all of the declarations on the demise of e-mail, not much attention has been given to how communications would function in such a world.

    It is a demonstrable fact that e-mail use is plummeting amongst younger people

    “Possibly part of the story can be around the death of e-mail as a method of sharing,” said Matt Richards, VP of Products at ownCloud, a storage service start-up.

    “Enter file sync and share,”

    “Problem is, IT needs the same sort of control they had with e-mail in this new era, and is struggling to find it,” Richards said.

    To see the problems with e-mail alternatives, Geck explained, it’s important to define all of the functionality that e-mail currently provides: the messaging has to be robust and relatively unlimited, secure, and able to document conversations and message threads completely. In Geck’s opinion, these are functions that simply must be provided by any e-mail replacement, and currently no one service will provide them.

    Twitter’s character limitations prevent robust communications, and Facebook and Google offer businesses much poorer security for exchanging business documents. And no alternative provides complete and unalterable archival capabilities.

    “Neither Facebook nor Twitter will replace e-mail,”

    Reply
  36. Tomi Engdahl says:

    VOICE RECOGNITION is becoming commonplace in the information technology industry. Once a novelty known more for being dysfunctional and irritating than for being helpful and reliable, voice recognition has advanced to the point where it is increasingly integrated into handheld gadgets and other electronics.

    These days, it is not only Apple iPhones deploying speech recognition with relative success; we’ve also got Microsoft’s Xbox 360 Kinect voice commands that can control the console. Then there’s Samsung, which launched a range of voice-activated TVs only last week. Speech recognition technology is slowly becoming identified as less of a gimmick and more of a valuable resource.

    Source: The Inquirer (http://s.tt/1cVWE)

    Reply
  37. Tomi Engdahl says:

    Apple’s Crystal Prison and the Future of Open Platforms
    https://www.eff.org/deeplinks/2012/05/apples-crystal-prison-and-future-open-platforms

    Two weeks ago, Steve Wozniak made a public call for Apple to open its platforms for those who wish to tinker, tweak and innovate with their internals.

    EFF supports Wozniak’s position: while Apple’s products have many virtues, they are marred by an ugly set of restrictions on what users and programmers can do with them. This is most especially true of iOS, though other Apple products sometimes suffer in the same way. In this article we will delve into the kinds of restrictions that Apple, phone companies, and Microsoft have been imposing on mobile computers; the excuses these companies make when they impose these restrictions; the dangers this is creating for open innovation; and why Apple in particular should lead the way in fixing this mess. We also propose a bill of rights that needs to be secured for people who are purchasing smartphones and other pocket computers.

    Reply
  38. Tomi Engdahl says:

    Google and Samsung announce a Chromebox desktop PC
    Mac Mini-style desktop runs Chrome OS

    GOOGLE AND SAMSUNG have announced a Chromebox desktop PC and an overhauled Chromebook laptop running the Chrome operating system (OS).

    The Chromebox is a Mac Mini-style desktop box and was revealed in a Google blog post on Tuesday. It will plug into any TV or monitor and enable punters to use Chrome OS.

    Google said that both the Chromebox and the refreshed Series 5 Chromebook are based on Intel Core series processors.

    Chrome OS has been popular with schools and businesses due to its automatic upgrades to the latest version and reduction in the need for IT departments to manage anti-virus systems.

    Source: The Inquirer (http://s.tt/1cZnC)

    Reply
  39. Tomi Engdahl says:

    Unix, mainframes drag down servers in Q1
    x86 machines can’t fill the gaps
    http://www.theregister.co.uk/2012/05/30/gartner_servers_q1_2012/

    The server market is starting to run out of steam, and there’s plenty of blame to go around as to why.

    According to the box counters over at Gartner, the Unix market is in a slump, IBM is at the tail end of its System zEnterprise mainframe line and customers are awaiting new machines for later this year. The later-than-expected rollout of the Xeon E5 processors from Intel also had a dampening effect on server sales in the first quarter of 2012. You can also lay some blame on the faltering economies in Europe, skittish companies in the United States, and slowing economies in Asia.

    During the three months ended in March, Gartner reckons revenues fell 1.8 per cent on a year ago, with the world consuming $12.44bn worth of server iron of all shapes and sizes. Shipments across all processor architectures didn’t go into negative territory, but with only 1.5 per cent growth to 2.35 million machines, this is not exactly a sign of strength.

    All of the geographic regions in the world at least posted some shipment growth in the quarter – except Western Europe, which had a 6.4 per cent shipment decline. Eastern Europe posted a 16 per cent shipment bump, the best on Earth in the quarter.

    As has been the case so many times in the past, x86 iron defied at least some of the gravity. Shipments of boxes based on processors from Intel and Advanced Micro Devices sporting the x86 instruction set rose by 1.7 per cent, to just under 2.3 million boxes, and revenues for these machines rose at nearly three times that rate, up 5.6 per cent to $8.95bn.

    HP pushed out 678,874 of its ProLiant x86 machines (up two-tenths of a point)

    IBM was the third biggest x86 server shipper, with its relevant System x and BladeCenter

    Server upstart Cisco Systems, which sells both rack and blade servers using Xeon processors, pushed 40,498 machines, up 70.9 per cent, making it the fifth largest shipper, ahead of Oracle. Cisco did not make the top five ranking of x86 server sales by vendor, but Gartner analyst Hewitt told El Reg that Cisco’s revenues grew even faster, rising 72.4 per cent to $335m.

    In the RISC/Itanium server market, HP and Oracle continue to decline.
    The RISC/Itanium market has shrunk considerably – only 45,719 boxes went out the door across all vendors in Q1, according to Gartner.

    IBM has been raiding the Solaris and HP-UX bases for years now, generating hundreds of millions of dollars per quarter in sales by displacing HP and Oracle machinery with Power Systems iron.

    What is obvious is that IBM is growing its AIX business.

    I suspect that Moore’s Law, competition, virtualization, and server consolidation have all combined to crunch Unix server revenues, even as the number of containers and partitions running an instance of Unix has continued to grow.

    Reply
  40. Tomi Engdahl says:

    D10: Oracle’s Ellison on cloud computing, sailing and more
    http://seattletimes.nwsource.com/html/technologybrierdudleysblog/2018321185_d10_oracles_ellison_on_cloud_c.html

    The personal computer became a complex device that people used to access the complex network that is the Internet.

    Ellison recounted how the direction things were heading was obvious to him. He came up with the concept of a “network computer” or simple terminal connected to the Internet 20 years ago.

    The network computer vision is becoming a reality as consumers are increasingly using simple devices such as smartphones and tablets to connect to the Internet, he said. (Although today’s phones and tablets are as powerful and technically complex as PCs were a decade ago …).

    “It’s taken a long, long time for the technologies to mature, the software and hardware technologies to mature, to where the Internet has become just that – enormously complex on one side but on the consumer side, very simple,” he said.

    “We migrated the complexity off the desktop, away from the PC and moved that complexity into Internet servers,” he continued.

    Reply
  41. Tomi Engdahl says:

    Sony Rejects Web-Based PlayStation Console
    http://online.wsj.com/article/SB10001424052702303640104577436261084921778.html

    Sony Corp. considered but ultimately rejected a download-only plan for its next videogame console, people familiar with the matter said, opting to include an optical disk drive rather than break with decades-old industry practice.

    The Japanese electronics maker’s flirtation with dropping the optical drive underscores the rising importance of online networks in the videogame industry, which allow console users to download games, television shows and music without the need for disks or cartridges.

    Sony is planning a 2013 release for the successor to its PlayStation 3 console, people familiar with the matter said.

    Consoles without optical drives would likely add to pressures on brick-and-mortar and online retailers that sell game disks.

    But Sony decided against a download-only model largely because Internet connections are too inconsistent around the world, one of the people familiar with Sony’s thinking said. Because game files are large, customers in countries where Internet connections are relatively slow would be hobbled by a requirement to download games, the person said.
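
    A hypothetical illustration of that bandwidth concern: the game size and line speeds below are assumptions for the sake of the calculation, not figures from the article.

        # Hours to download a disc-sized game at various line speeds.
        game_gb = 25                               # roughly a Blu-ray-sized title

        for mbps in (2, 10, 50):
            hours = game_gb * 8 * 1000 / (mbps * 3600)   # GB -> megabits -> hours
            print(f"{mbps:>3} Mbps: {hours:5.1f} h")     # 27.8 h, 5.6 h, 1.1 h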

    The success of a new PlayStation is especially critical for Sony.

    Microsoft Corp. is planning to include an optical disk drive in the successor to its Xbox 360 console, according to a person familiar with the matter. The software company also had concerns about access to Internet bandwidth, the person said.

    While hardware makers offer online options, only a few have been successful at a download-only model.

    Apple Inc., for example, in 2008 launched an app store that provides games and other software for its mobile devices.

    Others, including Google Inc., have followed in Apple’s footsteps.

    Some retailers have already sensed this shift and have moved to mitigate it.
    GameStop CEO Paul Raines said his company expects a transition to online-only consoles to happen at some point.

    Reply
  42. Tomi Engdahl says:

    IT Desktop Support To Be Wiped Out Thanks To Cloud Computing
    http://tech.slashdot.org/story/12/05/31/0332220/it-desktop-support-to-be-wiped-out-thanks-to-cloud-computing
    “Tech industry experts are saying that desktop support jobs will be declining sharply thanks to cloud computing. Why is this happening? A large majority of companies and government agencies will rely on the cloud for more than half of their IT services by 2020, according to Gartner’s 2011 CIO Agenda Survey.”

    IT Desktop Support To Be Wiped Out Thanks To Cloud Computing
    http://www.lazytechguys.com/featured/it-desktop-support-will-be-wiped-out-by-cloud-computing/

    Makes tons of sense considering the advantages of cloud computing: easier updates, synched and secured files.

    Does this mean that IT support will become obsolete? Will IT people become jobless now that desktop support is a dinosaur? Not exactly. But we can expect IT people to extend their skills to cloud management.

    So what happens to server admins? Is their time up? John Rivard, a Gartner research director, said that while there will still be roles for people who want to specialize in general IT support, those professionals are going to need a grasp of corporate demands “or the business will bypass them”.

    “The cloud is an ability to commoditise the non-differentiating aspects of IT, and increasingly IT’s role in differentiating the business is bigger and bigger,” says Rivard. “The kinds of roles are definitely going to change: you’re going to see much more automation, more cloud capabilities and less hands-on administration. Across the board, every organisation that I talk to is asking: How can I use less of the resources that I have on the run, and more of them on driving the business?”

    This will mean that more IT people will get into business analysis, as more companies will need to know which apps will work best not only for the employees but for the business itself.

    “There are not going to be fewer people involved in IT, but they will be involved in IT in different ways,” says COO Howard Elias at storage giant EMC. “If you are a server, storage or network admin, there may be fewer of those dedicated – what I call siloed component – skillsets needed.”

    IT professionals looking to transition into one of these new, more business-orientated roles will also face competition not just from other techies, but from business analysts, programmers, database engineers and user interface designers who’ve trained to fill these positions as well as those who’ve amassed experience over the years.

    IT is in a constant state of flux with technologies coming and going every year, said Rivard, and so expects IT professionals to be able to handle the coming change.

    Reply
  43. Tomi Engdahl says:

    Intel’s Medfield finally tips up in Orange San Diego
    An interesting processor in a forgettable phone
    http://www.theinquirer.net/inquirer/news/2181351/intels-medfield-finally-tips-orange-san-diego

    MOBILE OPERATOR Orange has announced the San Diego, the first smartphone to arrive in the UK with an Intel processor.

    Intel’s Medfield Atom processor and its reference design were shown off at this year’s CES trade show; however, it took Orange and its partners a further six months to release a smartphone using Intel’s chip.

    Orange’s San Diego smartphone is a middle-of-the-road handset featuring an Intel Atom Z2460 single-core processor that supports Hyperthreading with access to 16GB of storage, covered by a 4.03in screen that has a seemingly random 600×1024 resolution. While the Orange San Diego might have a pedestrian processor and screen, its photographic capabilities are impressive with an 8MP camera that can shoot 10 frames per second and capture HD 1080p video, while the front-facing camera is 1.3MP.
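
    For what it’s worth, the “seemingly random” resolution works out to a respectable pixel density; a quick check, assuming the quoted 4.03in figure is the screen diagonal.

        # Pixels per inch from resolution and diagonal size.
        w, h, diag_in = 1024, 600, 4.03
        ppi = (w ** 2 + h ** 2) ** 0.5 / diag_in
        print(f"{ppi:.0f} ppi")                    # ~295 ppi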

    The biggest problem for Orange is that it has decided to ship the San Diego with Android 2.3.7 Gingerbread, though an Orange representative told us that Android 4.0 Ice Cream Sandwich (ICS) will arrive, but only in October, at which point Android 4.0 ICS will have been out for almost a year.

    Intel’s decision to showcase its Atom processor in a relatively unimpressive smartphone is a questionable strategy.

    Intel has a major disadvantage, as Graham Palmer, country manager for Intel UK and Ireland, admitted. Palmer said that Android phones with Intel chips can run 70 per cent of the apps available, and while that is a lot, there is always the chance that the app users want most simply won’t run on Intel-powered Android phones.

    Both Orange and Intel claim good battery life figures for the San Diego.

    If the San Diego can compete with devices of similar screen size in the battery life stakes, then that will vindicate the firm’s perseverance with the x86 architecture.

    Reply
  44. Tomi Engdahl says:

    Windows 8 due ‘for the holidays,’ but will biz bite?
    http://news.cnet.com/8301-1001_3-57444964-92/windows-8-due-for-the-holidays-but-will-biz-bite/

    Windows 8 is heading toward retail, but businesses may not be that interested in the upgrade yet.

    Microsoft dropped some hints today that the commercial release of Windows 8 could come sooner rather than later, but critics are worried that it’s not very business friendly.

    “If the feedback and telemetry on Windows 8 and Windows RT match our expectations, then we will enter the final phases of the RTM (release to manufacturing) process in about 2 months,” Sinofsky wrote. (Windows RT refers to the version that runs on ARM chips. Windows 8 runs on Intel and AMD processors.)

    He continued. “If we are successful in that, then we are tracking to our shared goal of having PCs with Windows 8 and Windows RT available for the holidays.”

    That’s the good news. On the other hand, businesses may not find the upgrade to be satisfying, say observers.

    “Virtually all of the major new features in Windows 8 — the new Windows Runtime, the Metro environment with its full-screen apps, and the all-new developer APIs that drive it all — are derived solely from the mobile world and Microsoft’s experiences building Windows Phone for smartphones,” wrote Paul Thurrott at SuperSite for Windows.

    Thurrott continued. “It’s become increasingly clear that Microsoft doesn’t actually expect businesses to upgrade to this new system in any meaningful way.”

    Thurrott went on to say that it’s “a calculated risk” that allows Microsoft to focus on the consumer market, which it risks losing to Apple and, to some extent, Android.

    Reply
  45. Tomi Engdahl says:

    Announcing the Release Candidate (RC) of Visual Studio 2012 and .NET Framework 4.5
    http://blogs.msdn.com/b/jasonz/archive/2012/05/31/announcing-the-release-candidate-rc-of-visual-studio-2012-and-net-framework-4-5.aspx

    In conjunction with today’s Visual Studio release, Windows has made available a Windows 8 Release Preview. Please visit the Building Windows 8 blog for the official announcement by Steven Sinofsky.

    What’s new in the RC since Beta
    Logo & Branding
    - You’ll notice that we’ve updated our product branding from the “11” version number to the 2012 year. This means that the RTM version will release this calendar year!
    Setup
    Performance
    User Interface
    IDE
    Metro style apps
    Metro style apps using XAML
    - There are some new Metro style app templates, including a new Windows Runtime Component template for C# and VB developers, and a new DLL project template for C++ developers.
    Metro style apps using JavaScript
    - When it comes to editing Metro style JavaScript applications, Blend has introduced a host of new features for the RC release.
    ASP.NET 4.5
    - ASP.NET Web Forms has been updated to fully support the new async “await” keyword. Page events and control events can now be marked as “async” and utilize the new async support added in .NET 4.5.
    Web tools
    LightSwitch
    Team Foundation Server
    Architectural Tools
    “Go Live” License
    - As with the beta, Visual Studio 2012 RC ships with a “Go Live” license. This means that you can use the product to build apps that run in production. For more information on the “Go Live” terms and how to get support if you need it, please visit the Visual Studio 2012 RC website.

    Reply
  46. Tomi Engdahl says:

    Designing with 10GBase-T transceivers
    http://www.edn.com/article/521923-Designing_with_10GBase_T_transceivers.php

    Take advantage of 10GBase-T board layout and routing guidelines, power distribution and decoupling requirements, and EMI reduction design concepts to employ best practices in network designs.

    As was the case with three prior generations of Ethernet, the ubiquity, the ready and familiar management tools, and the compelling cost structure are allowing 10G Ethernet to quickly dominate the computer networking scene.

    Crehan Research, a leading industry analyst of data center technologies, estimates that by 2014, 10G Ethernet will overtake 1G Ethernet as the preferred network connectivity option in computer servers. And in one of its most recent reports on the subject, The Linley Group, another leading industry analyst, predicted robust 10GbE growth and estimated that 10GbE NIC/LAN-on-motherboard (LOM) shipments alone will surpass 16 million ports in 2014.

    Several standards-based options exist for 10G Ethernet and span the gamut, from single-mode fiber to twin-ax cable. But of all the options available, 10GBase-T, which is also known as IEEE 802.3an, is arguably the most flexible, economical, backwards compatible, and user friendly 10G Ethernet connectivity option available. It was designed to operate with the familiar unshielded twisted-pair cabling technology, which is already pervasive for 1G Ethernet and can interoperate directly with it.

    10GBase-T is capable of covering, with a single cable type, any distance up to 100 meters, thereby meeting 99% of the distance requirements in data centers and enterprise environments.

    Reply
  47. Tomi Engdahl says:

    Acer, Toshiba to Take on IPads With Windows 8 Tablets
    http://www.bloomberg.com/news/2012-05-31/acer-toshiba-to-take-on-ipads-with-windows-8-tablets.html

    Acer Inc., Toshiba Corp. (6502) and Asustek Computer Inc. (2357) will unveil tablets running Microsoft Corp.’s Windows 8 operating system next week, people with knowledge of the matter said, challenging the dominance of Apple Inc. (AAPL)’s iPad.

    Acer will display a tablet based on Microsoft’s new software at the Computex show in Taipei, while Toshiba will show a tablet and a notebook-type device, said the people, who asked not to be identified because the plans haven’t been made public. Asustek will present tablets with detachable keyboards similar to its current Transformer model, the people said.

    Asustek will demonstrate tablets based on an Nvidia ARM-based chip called Tegra and another powered by an Intel chip, the people said. The Tegra-based device, which is similar to the one that will go on sale, will be displayed publicly, setting up an opportunity for direct comparisons between a Windows computer running on ARM and one using Intel technology.

    The Acer (2353) tablet is built around an Intel chip, while Toshiba is using Texas Instruments for its processors.

    Qualcomm, the world’s largest maker of mobile-phone chips, will demonstrate a test device running Windows 8 based on its Snapdragon processor, said a person familiar with its plans.

    The June 5-9 Taipei show will highlight a limited number of ARM-based devices as Microsoft seeks to ensure that when Windows 8 is released later this year, the products will stand up to comparisons with the iPad. The limited debut will be followed by a second wave of computer and phone makers lined up for next year, two of the people said.

    Reply
  48. Tomi Engdahl says:

    Swiss data center operator Green is taking advantage of ABB’s HVDC technology at its newest data center in Lupfig, west of Zurich.

    The hope is to cut energy consumption by up to one fifth. Industrial company ABB has supplied the new HVDC technology to Green’s data center.

    ABB and Green calculate that DC technology lets the data center save 10-20 per cent in electricity consumption and as much as 25 per cent in physical space requirements.

    Tarak Mehta, head of ABB’s Low Voltage Products division, said at the press conference that converting an old data center from AC to DC power is unlikely to be economically viable.

    Mehta told 3T magazine that it will take years before direct current technology spreads more widely in data centers.

    Ron Noblett, a server unit director at computer manufacturer HP, stated that DC-powered servers have existed for nearly ten years. There was no demand for them, however, because support in the rest of the data center environment has been lacking.

    The data center’s energy efficiency is top class, with an Uptime Institute rating of 1.4.

    The DC technology affects the entire data center, from the mains connection down to individual UPS devices, servers, backup batteries and power supplies.

    Source:
    http://www.3t.fi/artikkeli/uutiset/teknologia/datakeskus_saastaa_tasasahkolla

    Reply
  49. Tomi Engdahl says:

    Energy risk and cost will shape data center landscape
    - Predicted growth in large data centers raises supply and location questions in EMEA
    http://www.canalys.com/newsroom/energy-risk-and-cost-will-shape-data-center-landscape

    Market analyst firm Canalys anticipates that the EMEA (Europe, Middle East and Africa) region will face major challenges in meeting the rising energy requirements of data centers over the next decade. It estimates that the combined base of server closets, server rooms, and small, medium-sized and large data centers accounts for 1% of all electricity consumption and 5% of commercial electricity use across the region.

    It predicts electricity use across all these facilities will be 15% higher by 2016, driven in particular by a 40% rise in consumption by large data centers. Some countries are much better placed than others to cope with this increased demand. Vendors, service providers and their customers therefore need to evaluate their location choices carefully.
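
    One implied consequence is that everything outside the large data centers barely grows. A sketch, where the 30 per cent large-data-center share of consumption is a hypothetical figure; only the 15 and 40 per cent growth rates come from Canalys.

        # If large DCs grow 40% but the total only 15%, the rest is nearly flat.
        base = 100.0               # index for all facility electricity use today
        large_share = 0.30         # assumed share held by large data centers

        large_2016 = base * large_share * 1.40     # 42
        rest_2016 = base * 1.15 - large_2016       # 73, vs 70 today (~4% growth)
        print(f"large: {large_2016:.0f}, rest: {rest_2016:.0f}")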

    ‘Virtualization and workload acceleration technologies, as well as increasing network bandwidth, enable organizations to invest in data centers sited far beyond their traditional geographic borders. As this trend continues and the need for server proximity diminishes, energy supply risk and cost become key factors in determining the best locations for those data centers,’

    Companies building and utilizing data centers will seek opportunities to capitalize on lower-cost, more dependable and greener energy supply, as long as legislation regarding restrictions on data movement allows such freedom.

    The 2012 Energy Watch report highlights Norway, Switzerland, France, Sweden and Denmark as the top five countries providing the necessary energy foundation for medium-sized and large data centers in the coming years. The countries most at risk of not meeting energy demands are Greece, Italy, Portugal and Hungary.

    Reply
