The Greenpeace report How Clean is Your Cloud?, which I saw mentioned in 3T magazine news, is actually quite interesting reading. This year’s report provides a look at the energy choices of some of the largest and fastest growing IT companies. The report analyzes 14 IT companies and the electricity supply chains behind more than 80 of their data centers.
The report also contains lots of interesting background information on both IT and telecom energy consumption. I recommend checking it out. Here are some points picked from the How Clean is Your Cloud? report:
Facebook, Amazon, Apple, Microsoft, Google, and Yahoo – these global brands and a host of other IT companies are rapidly and fundamentally transforming the way in which we work, communicate, watch movies or TV, listen to music, and share pictures through “the cloud.”
The growth and scale of investment in the cloud is truly mind-blowing, with estimates of a 50-fold increase in the amount of digital information by 2020 and nearly half a trillion US dollars in investment in the coming year, all to create and feed our desire for ubiquitous access to infinite information from our computers, phones and other mobile devices, instantly.
The engine that drives the cloud is the data center. Data centers are the factories of the 21st century information age, containing thousands of computers that store and manage our rapidly growing collection of data for consumption at a moment’s notice. Given the energy-intensive nature of maintaining the cloud, access to significant amounts of electricity is a key factor in decisions about where to build these data centers. Industry leaders estimate nearly US$450bn is being spent annually on new data center space.
Since electricity plays a critical role in the cost structure of companies that use the cloud, dramatic strides have been made in improving the energy-efficient design of the facilities and of the thousands of computers that go inside. However, despite significant improvements in efficiency, the exponential growth in cloud computing far outstrips these energy savings.
How much energy is required to power the ever-expanding online world? What percentage of global greenhouse gas (GHG) emissions is attributable to the IT sector? Answers to these questions are very difficult to obtain with any degree of precision, partially due to the sector’s explosive growth, a wide range of devices and energy sources, and rapidly changing technology and business models. The estimates of the IT sector’s carbon footprint performed to date have varied widely in their methodology and scope. One of the most recognized estimates of the IT sector’s footprint was conducted as part of the 2008 SMART 2020 study, which established that the sector is responsible for 2% of global GHG emissions.
The combined electricity demand of the internet/cloud (data centers and telecommunications network) globally in 2007 was approximately 623bn kWh (if the cloud were a country, it would have the fifth largest electricity demand in the world). Based on current projections, the demand for electricity will more than triple to 1,973bn kWh (an amount greater than combined total demand of France, Germany, Canada and Brazil).
The report indicates that, due to the economic downturn and continued energy efficiency and performance improvements, global energy demand from data centers increased by 56% from 2005 to 2010. Estimates of data center electricity demand come in at 31GW globally, with an increase of 19% in 2012 alone. At a time when global electricity consumption was otherwise essentially flat due to the global recession, that is still a staggering rate of growth.
Given the scale of predicted growth, the source of electricity must be factored into a meaningful definition of “green IT”. Energy efficiency alone will, at best, slow the growth of the sector’s footprint. The replacement of dirty sources of electricity with clean renewable sources is still the crucial missing link in the sector’s sustainability efforts according to the report.
The global telecoms sector is also growing rapidly. Rapid growth in the use of smartphones and broadband mobile connections means mobile data traffic in 2011 was eight times the size of the entire internet in 2000. It is estimated that global mobile data traffic grew 133% in 2011, with 597 petabytes of data sent by mobiles every month. In 2011, there were an estimated 6 billion mobile subscriptions, equal to 86.7% of the entire global population. By the end of 2012, the number of mobile-connected devices is expected to exceed the global population. Electronic devices and the rapidly growing cloud that supports our demand for greater online access are clearly a significant force in driving global energy demand.
What about telecoms in the developing and newly industrialized countries? The report has some details from India (by the way it is expected that India will pass China to become the world’s largest mobile market in terms of subscriptions in 2012). Much of the growth in the Indian telecom sector is from India’s rural and semi-urban areas. By 2012, India is likely to have 200 million rural telecom connections at a penetration rate of 25%. Out of the existing 400,000 mobile towers, over 70% exist in rural and semi-urban areas where either grid-connected electricity is not available or the electricity supply is irregular. As a result, mobile towers and, increasingly, grid-connected towers in these areas rely on diesel generators to power their network operations. The consumption of diesel by the telecoms sector currently stands at a staggering 3bn liters annually, second only to the railways in India.
What is the situation in other developing and newly industrialized countries? I don’t actually know.
NOTE: Please note that many figures given in the report are just estimates based on quite little actual data, so they might be somewhat off from the actual figures. Given the source of the report, I would guess that if the figures are off, they are most probably off in the direction that makes the environmental effect look bigger than it actually is.
608 Comments
Tomi Engdahl says:
Greenpeace calls out cloud names on green claims
http://www.theregister.co.uk/2012/04/17/greenpeace_cloud_report/
In spite of claims that cloud computing is getting “greener”, Greenpeace has launched a campaign calling on Apple, Amazon and Microsoft to improve their performance.
Yahoo! and Google get back-patted as leading the industry by giving renewable energy priority in their strategy, along with Facebook, which is building a 100 percent renewable-powered data centre in Sweden.
A key problem, the report states, is that the “green” claims surrounding cloud computing are almost impossible to assess. Both transparency and actual data are lacking.
The Greenpeace study highlights one of the key issues facing the cloud business: the location of infrastructure. The location of a data centre, the report notes, becomes a key consideration in how electricity utilities manage their networks – something that can lock utilities into “dirty” energy choices.
However, the cloud business isn’t the only sector under attack. Greenpeace also notes that the huge growth in mobile networks is also problematic, especially in developing countries where mobile infrastructure is leapfrogging inadequate fixed networks.
Tomi Engdahl says:
Apple: Greenpeace’s Cloud Math is Busted
http://www.datacenterknowledge.com/archives/2012/04/17/apple-greenpeaces-cloud-math-is-busted/
Apple says it will use 20 megawatts of power at full capacity in its North Carolina data center, about one-fifth the amount estimated by Greenpeace in a report that is sharply critical of Apple and other data center operators for relying upon “dirty” energy sources to power their cloud computing operations.
Apple’s statement raises serious issues about the credibility of the estimates in the Greenpeace report, and illustrates the difficulty of seeking to estimate data center power usage – a detail that many companies are unwilling to disclose on their own.
Greenpeace’s Gary Cook used that estimate to downplay the significance of Apple’s substantial investment in on-site renewable power in Maiden, which includes a 20 megawatt solar array and a biogas-powered fuel cell with a 5 megawatt capacity.
“Our data center in North Carolina will draw about 20 megawatts at full capacity, and we are on track to supply more than 60% of that power on-site from renewable sources including a solar farm and fuel cell installation which will each be the largest of their kind in the country,”
Apple would clearly receive a much higher score if Greenpeace used a 20 megawatt base to evaluate its coal-sourced power. In effect, the current score whacks Apple for 80 megawatts of “dirty” power that it’s not using.
So how could Greenpeace have been so far off base?
Commenter:
The Greenpeace report is very credible. It may ruffle a few feathers of organizations that are not used to answering questions, but they’ll get over it and grow.
If these public companies practiced greater transparency and let their stakeholders know more about what they’re doing, then the problems with obtaining accurate information would disappear. Shareholders would certainly like to know the facts about the companies that they own.
Apple is a public company; if a strong statement like this is found not to be true in the future, it will eventually only cause more damage to their image. I see no motivation for Apple to lie and a strong motivation not to lie.
I see a fair amount of error in many of their reports, and they don’t seem to be interested in cleaning that up.
Outside estimates of these numbers are very problematic.
Tomi Engdahl says:
Why Microsoft’s data centre future is based on the point of ordure
http://www.cloudpro.co.uk/cloud-essentials/public-cloud/3399/why-microsofts-data-centre-future-based-point-ordure
It sounds like the punchline to a thousand jokes but Microsoft gets serious about sewage to meet cloud demand.
It sounds like a cue for a thousand jokes from open source adherents but Microsoft has found a new source to power its data centres. In the week that Greenpeace castigated vendors for their dirty data centres, Microsoft has dug even further into the dirt and is looking to power data centres by using biogas, produced by decaying sewage.
Writing in a Microsoft blog, Christian Belady, Microsoft’s general manager of data centre services, said the company had to prepare to reduce its carbon footprint, while at the same time coping with increasing demand.
Thinking Off the Grid: Independence for today’s Data Centers via Data Plants?
http://www.globalfoundationservices.com/posts/2012/april/18/thinking-off-the-grid-independence-for-today%E2%80%99s-data-centers-via-data-plants.aspx
Simplified Design. Ironically, the unreliability of the grid requires us to install a complex array of UPS, back-up generators, maintenance bypass circuits, power conditioning, etc., that adds additional potential sources of failure. Keep in mind, a data center does not actually have to incur a utility outage for there to be problems. Spikes and sags at the millisecond level can result in component damage downstream. The reality is data centers are constantly bombarded with power quality events and transients that over time degrade the built-in protection infrastructure of the data center, lowering confidence in older, well-engineered data center electrical plants.
As the demand for cloud services grows, we are looking at new methods for maintaining the high availability of our applications, while becoming more sustainable, efficient and cost effective so we can pass those benefits on to our customers and our shared environment.
Clearly, the industry is going to face some challenges with respect to power, carbon, and water as a resource. Within Microsoft, we are working to proactively address these issues. You can expect massive integration in cloud infrastructure, and it’s purely driven by sustainability and a total cost of ownership advantage. Remember, lower costs generally mean better sustainability too. If you drive your costs down, you are actually using less material and energy. That also makes it more sustainable.
Currently, our team is researching the first-ever grid-independent fuel cell data center that is fueled directly from biogas. The experiment is small scale, so we can demonstrate and measure the benefits of it.
Tomi Engdahl says:
Is a “Net Zero” Data Center Possible?
http://hardware.slashdot.org/story/12/05/31/0024228/is-a-net-zero-data-center-possible
“HP Labs is developing a concept for a “net zero” data center — a facility that combines on-site solar power, fresh air cooling and advanced workload scheduling to operate with no net energy from the utility grid. HP is testing its ideas in a small data center in Palo Alto with a 134kW solar array and four ProLiant servers. ”
The main thing they’re testing is the scheduling of workloads to get the maximum benefit from their solar array.
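To make this concrete, here is a minimal sketch of solar-aware scheduling (my own illustration, not HP’s actual scheduler; the forecast values and job list are invented): deferrable batch jobs are greedily placed into the forecast hours with the most unclaimed solar capacity.

    # Illustrative sketch only, not HP's scheduler. Assumes an hourly solar
    # output forecast (kW) and a set of deferrable batch jobs.
    def schedule_jobs(solar_forecast_kw, jobs):
        """Greedily place each job (name, draw in kW) into the hour with
        the most unclaimed solar capacity remaining."""
        remaining = list(solar_forecast_kw)
        plan = {}
        for name, draw_kw in sorted(jobs, key=lambda j: -j[1]):
            hour = max(range(len(remaining)), key=lambda h: remaining[h])
            plan[name] = hour
            remaining[hour] -= draw_kw  # may go negative; the grid covers the rest
        return plan

    forecast = [0, 20, 80, 130, 110, 40]  # predicted solar output in kW per hour
    jobs = [("backup", 30.0), ("reindex", 50.0), ("analytics", 40.0)]
    print(schedule_jobs(forecast, jobs))  # e.g. {'reindex': 3, 'analytics': 4, 'backup': 2}

A real scheduler would also have to handle job durations, deadlines and cooling load, but the greedy placement shows where the benefit comes from.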
Tomi Engdahl says:
Energy risk and cost will shape data center landscape
- Predicted growth in large data centers raises supply and location questions in EMEA
http://www.canalys.com/newsroom/energy-risk-and-cost-will-shape-data-center-landscape
Market analyst firm Canalys anticipates that the EMEA (Europe, Middle East and Africa) region will face major challenges in meeting the rising energy requirements of data centers over the next decade. It estimates that the combined base of server closets, server rooms, and small, medium-sized and large data centers accounts for 1% of all electricity consumption and 5% of commercial electricity use across the region.
It predicts electricity use across all these facilities will be 15% higher by 2016, driven in particular by a 40% rise in consumption by large data centers. Some countries are much better placed than others to cope with this increased demand. Vendors, service providers and their customers therefore need to evaluate their location choices carefully.
‘Virtualization and workload acceleration technologies, as well as increasing network bandwidth, enable organizations to invest in data centers sited far beyond their traditional geographic borders. As this trend continues and the need for server proximity diminishes, energy supply risk and cost become key factors in determining the best locations for those data centers,’
Companies building and utilizing data centers will seek opportunities to capitalize on lower-cost, more dependable and greener energy supply, as long as legislation regarding restrictions on data movement allows such freedom.
The 2012 Energy Watch report highlights Norway, Switzerland, France, Sweden and Denmark as the top five countries providing the necessary energy foundation for medium-sized and large data centers in the coming years. The countries most at risk of not meeting energy demands are Greece, Italy, Portugal and Hungary.
Tomi Engdahl says:
ABB and Green calculate that DC technology lets a data center save 10-20 percent on electricity consumption and as much as 25 percent on physical space requirements.
According to Ron Noblett, a server unit director at computer manufacturer HP, servers with DC power technology have existed for nearly 10 years. There was, however, no demand for them, because support in the rest of the data center environment has been lacking.
DC technology affects the entire data center, from the mains connection through UPS devices and backup power batteries all the way to individual servers and their power supplies.
Source:
http://www.3t.fi/artikkeli/uutiset/teknologia/datakeskus_saastaa_tasasahkolla
Tomi Engdahl says:
Sorry Global Warming Alarmists, The Earth Is Cooling
http://www.forbes.com/sites/peterferrara/2012/05/31/sorry-global-warming-alarmists-the-earth-is-cooling/
Climate change itself is already in the process of definitively rebutting climate alarmists who think human use of fossil fuels is causing ultimately catastrophic global warming. That is because natural climate cycles have already turned from warming to cooling, global temperatures have already been declining for more than 10 years, and global temperatures will continue to decline for another two decades or more.
That is one of the most interesting conclusions to come out of the seventh International Climate Change Conference sponsored by the Heartland Institute, held last week in Chicago.
The conference featured serious natural science, contrary to the self-interested political science you hear from government financed global warming alarmists seeking to justify widely expanded regulatory and taxation powers for government bodies, or government body wannabees, such as the United Nations. See for yourself, as the conference speeches are online.
What you will see are calm, dispassionate presentations by serious, pedigreed scientists discussing and explaining reams of data.
The Heartland Institute has effectively become the international headquarters of the climate realists, an analog to the UN’s Intergovernmental Panel on Climate Change (IPCC). It has achieved that status through these international climate conferences, and the publication of its Climate Change Reconsidered volumes, produced in conjunction with the Nongovernmental International Panel on Climate Change (NIPCC).
Check out the 20th century temperature record, and you will find that its up and down pattern does not follow the industrial revolution’s upward march of atmospheric carbon dioxide (CO2), which is the supposed central culprit for man caused global warming (and has been much, much higher in the past). It follows instead the up and down pattern of naturally caused climate cycles.
For example, temperatures dropped steadily from the late 1940s to the late 1970s. The popular press was even talking about a coming ice age.
In the late 1970s, the natural cycles turned warm and temperatures rose until the late 1990s, a trend that political and economic interests have tried to milk mercilessly to their advantage. The incorruptible satellite measured global atmospheric temperatures show less warming during this period than the heavily manipulated land surface temperatures.
Central to these natural cycles is the Pacific Decadal Oscillation (PDO). Every 25 to 30 years the oceans undergo a natural cycle where the colder water below churns to replace the warmer water at the surface, and that affects global temperatures by the fractions of a degree we have seen.
Tomi says:
IaaS providers: how to select the right company for your cloud
http://www.cloudpro.co.uk/iaas/4113/iaas-providers-how-select-right-company-your-cloud?page=0,0
There’s more choice than ever when it comes to IaaS providers, how does a company negotiate the minefield
While Amazon isn’t the only player when it comes to IaaS, it is by far the biggest, and this gives it a sense of gravity, pulling more customers into it. But there are other vendors too. In the last few months Google and Microsoft have joined the IaaS party, with Windows Azure and Google’s Compute Engine offering organisations similar products to Amazon’s.
And that’s not forgetting that other hosting providers (such as Rackspace) and telcos (BT, Telefonica, et al) are itching to sweep up customers with their public cloud offerings. With research firm Gartner predicting that the global IaaS market will grow to $24.4 billion by 2016, many more vendors will join, creating such an array of choice that it can be hard to know where to start.
Tomi says:
IT helps Australian bank achieve carbon-neutrality
National Australia Bank has adopted tri-generation, private cloud, modular data centres
http://www.theregister.co.uk/2012/08/20/nab_carbon_neutral_it/
National Australia Bank (NAB), one of Australia’s big four banks, has detailed how changes to its data centres helped the organisation to become carbon neutral in a white paper (PDF) issued by the Open Data Center Alliance.
Tri-generation, the practice of generating electricity, heating and cooling from one device, is one of the Bank’s most important carbon-cutting initiatives.
A private cloud is also important to the Bank’s efforts
The new data centre will almost match commercial rivals for efficiency, as modelling shows it will achieve a Power Usage Effectiveness (PUE) score of at least 1.35, which means for every watt consumed by kit another 0.35 watts goes to keeping the data center running. By contrast, HP’s newest data centre in Sydney is only a little better with a PUE of 1.3.
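The PUE arithmetic quoted above is easy to check. A minimal sketch in Python, using the figures from the article:

    # PUE = total facility power / IT equipment power.
    # 1.0 would mean every watt reaches the IT kit; lower is better.
    def pue(total_facility_kw, it_kw):
        return total_facility_kw / it_kw

    print(pue(1350.0, 1000.0))  # NAB's modelled 1.35: 0.35 W of overhead per IT watt
    print(pue(1300.0, 1000.0))  # HP Sydney's 1.3 is slightly better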
“Business units need to understand the cost of requesting test environments, for example, and need to apply due diligence to reducing their impact as much as possible.”
Tomi Engdahl says:
How Much Less Coal Must Be Burned to Slow Climate Change? [Video]
http://www.scientificamerican.com/article.cfm?id=future-climate-how-much-less-coal-must-be-burned-slow-climate-change
In terms of global warming, a simple model reveals that the sooner and more completely the world transitions away from the dirtiest fossil fuel, the better
Coal-burning remains the single largest source of all the greenhouse gases dumped into Earth’s atmosphere.
Tomi Engdahl says:
Best practices for deploying high power to IT racks
http://www.cablinginstall.com/index/display/article-display/3478536532/articles/cabling-installation-maintenance/news/data-center/data-center-power-cooling/2012/june/white-paper_covers.html
A new white paper from Raritan addresses considerations surrounding the deployment of high power to IT equipment racks.
The paper contends that, with average rack power consumption still increasing, the deployment of high power to racks is becoming more of a necessity for data center managers. Increased efficiency means more power is available for servers to support data center growth.
Tomi Engdahl says:
White paper compares AC vs. DC power distribution for data centers
http://www.cablinginstall.com/index/display/article-display/1665688902/articles/cabling-installation-maintenance/news/data-center/data-center-power-cooling/2012/june/white-paper_compares.html
A new white paper from APC by Schneider Electric provides a quantitative comparison of high efficiency AC vs. DC power distribution platforms for data centers.
The latest high efficiency AC and DC power distribution architectures are shown by the analysis to have virtually the same efficiency, suggesting that a move to a DC-based architecture is unwarranted on the basis of efficiency
A Quantitative Comparison of High Efficiency AC vs. DC Power Distribution for Data Centers
http://www.apcmedia.com/salestools/NRAN-76TTJY_R3_EN.pdf
The data in this paper demonstrates that the best AC power distribution systems today already achieve essentially the same efficiency as hypothetical future DC systems, and that most of the quoted efficiency gains in the popular press are misleading, inaccurate, or false. And unlike virtually all other articles and papers on this subject, this paper includes citations and references for all of the quantitative data.
There are five methods of power distribution that can be realistically used in data centers, including two basic types of alternating current (AC) power distribution and three basic types of direct current (DC) power distribution. These five types are explained and analyzed.
Two of the five distribution methods, one AC and one DC, offer superior electrical efficiency. This paper focuses on comparing only those two highest-efficiency distribution methods. Unless there is a major change in data center power technology, one of these two methods is very likely to become the preferred method for distributing power in future data centers.
In many published articles, expected improvements of 10% to 30% in efficiency have been claimed for DC over AC.
The data in this paper demonstrates that the best AC power distribution systems today already achieve essentially the same efficiency as hypothetical future DC systems.
The introduction explained that two alternative power distribution systems have emerged as candidates for building future high efficiency data centers.
One system is based on the existing predominant 400/230 V AC distribution system currently used in virtually all data centers outside of North America and Japan.
The other system is based on a conceptual 380 V DC distribution system supplying IT equipment that has been modified to accept DC power.
A consensus in the literature has developed around 380 V as a preferred standard.
In the proposed international ETSI standard for DC distribution for data centers, the 380V DC system is actually created with the midpoint at ground potential to keep the maximum system voltage to ground to within +/- 190 V.
Tomi Engdahl says:
Laugh all you want at ‘the cloud’ – it’ll be worth ‘$100bn by 2016′
Public-facing services to coin it, predicts IDC
http://www.channelregister.co.uk/2012/09/11/idc_public_cloud_forecast/
Some $100bn will be slurped up by public IT cloud services by 2016, according to the crystal-ball gazers at IDC.
Spending is set to reach $40bn this year but is forecast to expand more than 26 per cent on a compound annual growth basis over the next four years – five times faster than the total industry average.
The public cloud specifically is predicted to account for 16 per cent of global IT revenues in five categories: applications, Software-as-a-Service (SaaS), platform-as-a-service, servers and basic storage.
IDC reckons cloud services will generate 41 per cent of all growth in the five segments by the end of the forecast period in 2016. “Quite simply, vendor failure in cloud services will mean stagnation,” said Gens.
Tomi Engdahl says:
How DCIM Can Help Better Manage Data Centers
http://www.fedtechmagazine.com/article/2012/08/how-dcim-can-help-better-manage-data-centers
As agencies make progress toward the goals of the Federal Data Center Consolidation Initiative, they are arriving at an interesting conclusion: Running an efficient data center is as much about the systems that power and cool the room as it is about the servers themselves.
The House of Representatives learned this lesson after implementing data center infrastructure management (DCIM) tools.
Tomi Engdahl says:
Emerson unveils data center energy consumption e-book, calculator
http://www.cablinginstall.com/index/display/article-display/5785751040/articles/cabling-installation-maintenance/news/data-center/data-center-power-cooling/2012/august/emerson-unveils_data.html?cmpid=$trackid
Energy Logic 2.0, an e-book in PDF format billed as “a vendor-neutral roadmap of 10 strategies that can reduce a data center’s energy use by up to 74 percent.”
Energy Logic 2.0: New Strategies for Cutting Data Center Energy Costs and Boosting Capacity
http://www.emersonnetworkpower.com/en-US/Latest-Thinking/EDC/Documents/White%20Paper/IS03947_2012_EnergyLogic_FIN.pdf
Tomi Engdahl says:
DCIM: Where the physical and virtual can meet
http://www.cablinginstall.com/index/display/article-display.articles.cabling-installation-maintenance.volume-20.issue-9.features.dcim-where-the-physical-and-virtual-can-meet.html
A single data center infrastructure management platform enables a holistic approach to managing physical and virtual assets.
Technological developments demand that data center managers plan for future growth. This includes the physical layer as much as everything else. As Forrester recommended in its “Market Overview: Data Center Infrastructure Management Solutions” (April 2012), “The ability to understand the impact of future workloads and infrastructure changes on potential capacity requirements is extremely valuable for many organizations.”
Increasingly over the last few years more IT managers have been turning to data center infrastructure management (DCIM) solutions in an effort to address these needs. DCIM provides a consolidated infrastructure repository (configuration management database, or CMDB) and consolidates information from various equipment and appliances in the data center, such as sensors, power consumption, cooling and others.
Forrester defines DCIM as “the convergence of previous generations of purely facilities-oriented power management, physical asset management, network management and financial management and planning solutions for data centers.” Or simply, it is a “comprehensive approach to managing the physical aspects of the data center.”
As Forrester explains, “To effectively propose an optimal set of allocations, the software needs to understand the behavior of the system at a granular level, the potential workloads and the details of the power and cooling environment.”
Tomi Engdahl says:
Intel predicts ubiquitous, almost-zero-energy computing by 2020
http://www.extremetech.com/computing/136043-intel-predicts-ubiquitous-almost-zero-energy-computing-by-2020
This year, the company has discussed the shrinking energy cost of computation as well as a point when it believes the energy required for “meaningful compute” will approach zero and become ubiquitous by the year 2020. The company didn’t precisely define “meaningful compute.”
The idea that we could push the energy cost of computing down to nearly immeasurable levels is exciting. It’s the type of innovation that’s needed to drive products like Google Glass or VR headsets like the Oculus Rift. Unfortunately, Intel’s slide neatly sidesteps the greatest problems facing such innovations — the cost of computing already accounts for less than half the total energy expenditure of a smartphone or other handheld device.
Intel’s decision to present on the zero cost future of computing is disappointing because it flies in the face of everything the company has said in the past year and ignores the previously-acknowledged difficulty of scaling all the various components that go into a modern smartphone. The idea that 2020 will bring magical improvements or suddenly sweep neural interfaces to the forefront of technology is, in a word, folly.
This doesn’t mean technology won’t advance, but it suggests a more deliberate, incremental pace as opposed to an upcoming revolution. Smartphones of 2018-2020 may be superior to top-end devices of the present day in much the same way that modern computers are more powerful than desktops from the 2006 era.
Tomi Engdahl says:
Green data centre market to reach £28bn by 2016
http://news.techworld.com/data-centre/3381772/green-data-centre-market-reach-28bn-by-2016/
Rising costs and increasing demand are forcing the data centre industry to undergo major changes
Green data centres may have gone out of fashion as a topic of conversation, but rising energy costs, increasing demand for computing power, environmental concerns, and economic pressures are continuing to drive the market.
The data centre industry currently consumes around 1.5 percent of the world’s energy. The result is that the industry is undergoing major changes as it struggles to keep energy demand in check while maintaining growth.
A new report by Pike Research predicts that the worldwide market for green data centres will grow from $17.1 billion (£10.5bn) in 2012 to $45.4 billion (£28bn) by 2016, at a compound annual growth rate of nearly 28 percent.
Tomi Engdahl says:
Microsoft Pollutes To Avoid Fines
http://slashdot.org/story/12/09/25/2118207/microsoft-pollutes-to-avoid-fines
“Microsoft’s Quincy data center, physical home of Bing and Hotmail, was fined $210,000 last year because the data center used too little electricity.”
You get fined for saving electricity now?
Where is this world going…
Basically large companies need to know what their costs are going to be long term. They enter into Power Purchase Agreements [wikipedia.org] with electricity generators, much like leasing a building.
Microsoft deliberately wasted energy at data center to avoid fine, says NY Times
http://www.engadget.com/2012/09/24/microsoft-deliberately-wasted-energy-to-avoid-fine/
Redmond’s Quincy data center, which houses Bing, Hotmail and other cloud-based servers, had an agreement in place with a Washington state utility containing clauses which imposed penalties for under-consumption of electricity. A $210,000 fine was levied last year, since the facility was well below its power-use target, which prompted Microsoft to deliberately burn $70,000 worth of electricity in three days “in a commercially unproductive manner” to avoid it, according to its own documents.
Tomi Engdahl says:
Power, Pollution and the Internet
http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?ref=technology&_r=0
A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.
Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid.
Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.
“A single data center can take more power than a medium-size town.”
Energy efficiency varies widely from company to company.
on average, they were using only 6 percent to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling and ready in case of a surge in activity that could slow or crash their operations.
“This is an industry dirty secret, and no one wants to be the first to say mea culpa,”
The inefficient use of power is largely driven by a symbiotic relationship between users who demand an instantaneous response to the click of a mouse and companies that put their business at risk if they fail to meet that expectation.
To support all that digital activity, there are now more than three million data centers of widely varying sizes worldwide, according to figures from the International Data Corporation.
Nationwide, data centers used about 76 billion kilowatt-hours in 2010, or roughly 2 percent of all electricity used in the country that year.
The industry has long argued that computerizing business transactions and everyday tasks like banking and reading library books has the net effect of saving energy and resources. But the paper industry, which some predicted would be replaced by the computer age, consumed 67 billion kilowatt-hours from the grid in 2010, according to Census Bureau figures reviewed by the Electric Power Research Institute for The Times.
In many facilities, servers are loaded with applications and left to run indefinitely, even after nearly all users have vanished or new versions of the same programs are running elsewhere.
“At a certain point, no one is responsible anymore, because no one, absolutely no one, wants to go in that room and unplug a server.”
“Data center operators live in fear of losing their jobs on a daily basis,” Mr. Tresh said, “and that’s because the business won’t back them up if there’s a failure.”
“You look at it and say, ‘How in the world can you run a business like that,’ ” Mr. Symanski said. The answer is often the same, he said: “They don’t get a bonus for saving on the electric bill. They get a bonus for having the data center available 99.999 percent of the time.”
Data centers are among utilities’ most prized customers.
Using the cloud “just changes where the applications are running,” said Hank Seader, managing principal for research and education at the Uptime Institute. “It all goes to a data center somewhere.”
“When somebody says, ‘I’m going to store something in the cloud, we don’t need disk drives anymore’ — the cloud is disk drives,” Mr. Victora said. “We get them one way or another. We just don’t know it.”
“That’s what’s driving that massive growth — the end-user expectation of anything, anytime, anywhere,” said David Cappuccio, a managing vice president and chief of research at Gartner, the technology research firm. “We’re what’s causing the problem.”
Tomi Engdahl says:
The cloud’s black lining
Server farms and data centers exposed as terrible energy wasters and polluters. Where is the efficiency strategy?
http://www.controleng.com/single-article/the-clouds-black-lining/7d0dbe064fdc4dbf48d63a82a68a7a65.html
How long would your company survive if your energy usage efficiency was less than 20%? How about 10%? In our high-tech world, that is the reality of most data centers. Your ability to send an email or read this article depends on thousands of data centers around the world, constantly consuming around 30 GW of power off the grid. The problem is that most of those data centers are terribly inefficient.
This news comes from the New York Times in a series of articles
Power, Pollution and the Internet
http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?ref=technology
Data Barns in a Farm Town, Gobbling Power and Flexing Muscle
http://www.nytimes.com/2012/09/24/technology/data-centers-in-rural-washington-state-gobble-power.html?ref=us&_r=0
Tomi Engdahl says:
Hotter Might Be Better at Energy-Intensive Data Centers
http://www.sciencedaily.com/releases/2012/09/120925143759.htm
As data centres continue to come under scrutiny for the amount of energy they use (about 1 percent of global electricity usage), researchers at University of Toronto Scarborough (UTSC) have a suggestion: turn the air conditioning down.
Data centres typically operate at temperatures from 20C to 22C. Estimates show that just a 1-degree increase in temperature could save 2 to 5 percent of the energy the centres consume.
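As a back-of-the-envelope illustration (my own arithmetic with an assumed baseline, not figures from the study), a linear reading of that estimate gives:

    # Linear approximation: 2-5 percent saved per +1C on the setpoint.
    # The 10 GWh/year baseline is an assumption for illustration only.
    def savings_kwh(baseline_kwh, degrees_raised, pct_per_degree):
        return baseline_kwh * degrees_raised * pct_per_degree / 100.0

    baseline = 10_000_000.0  # assumed annual consumption in kWh
    print(savings_kwh(baseline, 3, 2.0))  # conservative: 600,000 kWh/year
    print(savings_kwh(baseline, 3, 5.0))  # optimistic: 1,500,000 kWh/year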
To conduct the study, the researchers collected data from data centres run by Google, Los Alamos National Labs, and others.
“We see our results as strong evidence that most organizations could run their data centers hotter than they currently are without making significant sacrifices in system reliability,” says Bianca Schroeder, a UTSC assistant professor of computer science.
Tomi Engdahl says:
Controlling Heat In Large Data Centers With Improved Techniques
http://www.sciencedaily.com/releases/2009/06/090602161940.htm
Approximately a third of the electricity consumed by large data centers doesn’t power the computer servers that conduct online transactions, serve Web pages or store information. Instead, that electricity must be used for cooling the servers, a demand that continues to increase as computer processing power grows.
Five years ago, a typical refrigerator-sized server cabinet produced about one to five kilowatts of heat. Today, high-performance computing cabinets of about the same size produce as much as 28 kilowatts, and machines already planned for production will produce twice as much.
“The growth of cooling requirements parallels the growth of computing power, which roughly doubles every 18 months. That has brought the energy requirements of data centers into the forefront.”
“Some people have called this the Moore’s Law of data centers,”
Most existing data centers rely on large air conditioning systems that pump cool air to server racks. Data centers have traditionally used raised floors to allow space for circulating air beneath the equipment, but cooling can also come from the ceilings. As cooling demands have increased, data center designers have developed complex systems of alternating cooling outlets and hot air returns throughout the facilities.
“How these are arranged is very important to how much cooling power will be required,” Joshi said. “There are ways to rearrange equipment within data centers to promote better air flow and greater energy efficiency, and we are exploring ways to improve those.”
Before long, centers will likely have to use liquid cooling to replace chilled air in certain high-powered machines. That will introduce a new level of complexity for the data centers, and create differential cooling needs that will have to be accounted for in the design and maintenance.
Joshi believes there’s potential to reduce data center energy consumption by as much as 15 percent by adopting more efficient cooling techniques
Tomi Engdahl says:
Temperature management in data centers: why some (might) like it hot
http://dl.acm.org/citation.cfm?doid=2254756.2254778
Tomi Engdahl says:
Greenhouse Emissions Drop Less During Economic Downturn Than Expected
http://slashdot.org/story/12/10/09/0120201/greenhouse-emissions-drop-less-during-economic-downturn-than-expected
“The contribution of economic decline in reducing greenhouse gas emissions is very low, reveals a new study.”
Tomi Engdahl says:
Boffins build program to HUNT DOWN CO2 polluters where they LIVE
Mapping shows Barry and Clive did climate change
http://www.theregister.co.uk/2012/10/10/carbon_emission_mapping/
The program, named Hestia after the Greek goddess of the home, can map CO2 emissions across urban landscapes and narrow them down to streets or certain homes using public databases, traffic simulation and building-by-building energy consumption modelling.
“Cities have had little information with which to guide reductions in greenhouse gas emissions – and you can’t reduce what you can’t measure,” said Kevin Gurney, an associate professor in Arizona State University’s School of Life Sciences and Hestia’s chief scientist.
“With Hestia, we can provide cities with a complete, three-dimensional picture of where, when and how carbon dioxide emissions are occurring.”
Local air pollution reports, traffic counts and tax assessor information have all been pulled into the modelling system, which the researchers hope will help towns and cities to do what they can about climate
Tomi Engdahl says:
Power and cooling: The Oak Ridge way
25 Megawatts, 6,600 tons of cooling, more on the way
http://www.theregister.co.uk/2012/10/16/power_and_cooling_oak_ridge_case_study/
You think you have power and cooling issues? Slip into the shoes of Arthur ‘Buddy’ Bland, Project Director for the Oak Ridge Leadership Computing Facility, and learn how they keep one of the largest computing facilities in the world powered up, yet cool enough to prevent melting.
Oak Ridge is one of the world’s most efficient large data centers with a PUE (Power Usage Effectiveness) score of 1.25.
By comparison, Google has one of the lowest PUE averages at around 1.12 – 1.13, and estimates for typical data centers run from 1.6 to more than 2.0.
Watch the video…
Tomi Engdahl says:
Optimizing network servers would reduce energy consumption by at least as much as ten Olkiluoto 3 sized reactor units generate, calculates Jari Soininen in his doctoral thesis, which also presents a new test method for measuring server performance.
According to Soininen, significant energy and environmental savings are possible if servers are sized to provide just the right amount of performance.
“Server electric power is estimated to be about 22 GW. Their utilization is about 6-12 per cent, while it could easily be at least 60 per cent.”
If servers have too much performance compared to the real need, the result is unnecessary energy consumption, increased maintenance costs and excessive investment in equipment. If, however, performance is inadequate, at worst it can slow down or crash critical network services and result in losses.
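A rough sanity check of the ten-reactor claim (my own arithmetic, assuming ideal consolidation and Olkiluoto 3’s rated 1.6 GW output):

    # 22 GW of servers at ~9% average utilization (midpoint of the quoted
    # 6-12%) consolidated onto hardware running at 60% utilization.
    server_power_gw = 22.0
    needed_gw = server_power_gw * 0.09 / 0.60   # ~3.3 GW would suffice
    saved_gw = server_power_gw - needed_gw      # ~18.7 GW saved
    olkiluoto3_gw = 1.6                         # rated output of one unit
    print(round(saved_gw / olkiluoto3_gw))      # -> 12, i.e. "at least ten"

Real consolidation never reaches the ideal, but the order of magnitude matches the thesis claim.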
Server capacity testing is currently often lacking, since each service has to draw up its own tests. Test preparation is hard work and requires a lot of expertise. Therefore, testing is often done only at deployment, and then merely to ensure that performance is sufficient.
However, software updates and changes in how a service is used, among other things, affect the system’s ability to continue serving its users. That is why situations where a network service is down or has slowed down considerably are frequently encountered.
Jari Soininen, Master of Science in Information Technology, will defend his doctoral thesis, Network performance evaluation and prediction of e-business environment, at Tampere University of Technology (TUT) in Pori on Wednesday, 31 October.
Source: http://m.tietoviikko.fi/Uutiset/Palvelinten+optimointi+s%C3%A4%C3%A4st%C3%A4isi+kymmenen+ydinreaktorin+verran+energiaa
Tomi Engdahl says:
Data center census: Investment growing 22 percent this year
http://www.cablinginstall.com/articles/2012/10/data-center-growth-22-percent.html
A global data center census conducted by DCD Intelligence (DatacenterDynamics Intelligence) found that worldwide investment in data centers is growing from approximately US$86 billion in 2011 to $105 billion this year.
Perhaps the most eye-popping figure coming from the census is a 63.3-percent increase in power requirements over the past 12 months, to a worldwide total of 38 gigawatts. By comparison, the anticipated 17-percent demand increase for the coming year looks modest. Looking a little more closely at power-consumption numbers, DCD Intelligence found that the percentage of racks housing more than 10kW of power rose from 15 percent last year to 18 percent this year.
To reiterate: while white space increased 8.3 percent over a year, the requirement for power increased 63.3 percent over the same period.
Tomi Engdahl says:
Report examines how the most-energy-efficient data centers do it
http://www.cablinginstall.com/articles/2012/10/451-group-efficiency-report.html
The new report from 451 Research entitled “Highly Energy-Efficient Data Centers in Practice” gets into detail on exactly how the highlighted facilities capitalize on developments in design, energy management, cooling and new deployment approaches to improve their energy efficiency. The report “includes case studies on 24 of the world’s most highly energy-efficient data centers—some large and well-known, others smaller and more obscure,” the organization said when announcing the report’s availability. “The majority of data center profiles included in the report were Uptime Institute Green Enterprise IT (GEIT) Award winners or honorees,” it continued. 451 Research and Uptime Institute are divisions of The 451 Group.
Highly Energy-Efficient Datacenters in Practice
Executive Summary
https://451research.com/report-long?icid=2269
Tomi Engdahl says:
How to avoid costs from oversized data center, network room infrastructure
http://www.cablinginstall.com/articles/2012/10/oversized-data-center-infrastructure-costs.html
“The physical infrastructure of data centers and network rooms is typically oversized by five times the actual capacity at start-up and more than one and a half times the ultimate actual capacity,” contends a recent white paper from APC-Schneider Electric.
“The single largest avoidable cost associated with typical data center and network room infrastructure is oversizing,” asserts Neil Rasmussen, the document’s author. “The utilization of the physical and power infrastructure in a data center or network room is typically around 50-60%.”
Tomi Engdahl says:
Panduit unveils new energy efficient data center cabinet system
http://www.cablinginstall.com/articles/2012/10/panduit-energy-efficient-data-center-cabinets.html
As the need for energy efficiency in the data center increases, so does the demand for new cabinet designs that can support multiple cooling architectures, including hot aisle/cold aisle, cold aisle containment, and vertical exhaust.
“Complete separation of cold air and hot air is the best way to improve cooling energy utilization and reduce energy consumption,” notes Dennis Renaud, Panduit’s vice president of datacomm products. “Gaps in the cabinet and poor airflow direction allow cool air to bypass equipment and mix with exhaust air, reducing cooling capacity and efficiency.”
Panduit has announced the availability of its new energy efficient data center cabinet system, which features the company’s latest advancements in air sealing, ducting and containment technology. The new sealed cabinet system eliminates bypass air and the mixing of cold air and exhaust air within the cabinet and the data center, optimizing cooling capacity and reducing cooling energy consumption by over 40%, claims Panduit.
To ensure that thermal efficiency is maintained over time, the Panduit Physical Infrastructure Management (PIM) software solution provides active monitoring and management of the power and cooling environment.
Tomi Engdahl says:
Uptime Institute professional services VP Vince Renaud steps forward and “debunks the myth that typical data center deployments are 10kW-20kW per rack.” For years, the data center industry has been warned by vendors that high density racks are coming, but according to the results of a recent survey, “the reality on the raised floor is far different,” contends Uptime Institute’s Renaud.
Source: http://www.cablinginstall.com/articles/2012/05/data-center-density.html
Tomi Engdahl says:
Data centers: The new global landfill?
http://www.cablinginstall.com/articles/2012/10/data-centers-global-landfill.html
A blog article at Wired.com asks the question: “Are data centers really just the new global landfill?”
“According to Emerson Network Power, there are 509,147 data centers worldwide with 285.8 million square feet of space. In more familiar terms, there’s enough data center space in the world to fit 5,955 football fields — or landfills, with landfills being a more appropriate analogy in this case.”
“Nationwide, in 2010 alone, data centers used about 76 billion kilowatt-hours, or roughly 2 percent of all electricity used in the US that year. It’s worth noting that all of that energy is mostly used to power idle servers.”
“Gartner found that typical data center utilizations ran from 7-12 percent.”
Tomi Engdahl says:
Breakthrough Promises Smartphones that Use Half the Power
http://hardware.slashdot.org/story/12/11/01/0021213/breakthrough-promises-smartphones-that-use-half-the-powe
“Powering cellular base stations around the world will cost $36 billion this year—chewing through nearly 1 percent of all global electricity production. Much of this is wasted by a grossly inefficient piece of hardware: the power amplifier,”
” If you’ve noticed your phone getting warm and rapidly draining the battery when streaming video or sending large files, blame the power amplifiers. As with the versions in base stations, these chips waste more than 65 percent of their energy”
Tomi Engdahl says:
Green500 Supercomputer List Released: Intel Takes Top Spot, Followed By AMD, NVIDIA
http://www.anandtech.com/show/6457/green500-supercomputer-list-released-intel-takes-top-spot-followed-by-amd-nvidia
Coinciding with the publication of the Top500 supercomputer list earlier this week, the Top500’s sister list, the Green500, was published earlier this morning. The Green500 is essentially to power efficiency what the Top500 is to total performance, being composed of the same computers as the Top500 list sorted by efficiency in MFLOPS per Watt. Often, but not always, the most powerful supercomputers are among the most power efficient, which can at times lead to surprises.
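For clarity, the Green500 metric is simply sustained performance divided by power draw. A minimal sketch with made-up numbers (not actual list entries):

    # Efficiency = sustained MFLOPS / power in watts; sort descending.
    systems = [
        ("SystemA", 17_590_000_000, 8_209_000),   # (name, MFLOPS, watts)
        ("SystemB", 2_897_000_000, 1_227_000),
        ("SystemC", 10_510_000_000, 12_660_000),
    ]
    for name, mflops, watts in sorted(systems, key=lambda s: s[1] / s[2], reverse=True):
        print(name, round(mflops / watts), "MFLOPS/W")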
Tomi Engdahl says:
How to avoid costs from oversized data center, network room infrastructure
http://www.cablinginstall.com/articles/2012/10/oversized-data-center-infrastructure-costs.html
The same white paper estimates the TCO costs associated with oversizing to be in excess of 30%.
Tomi Engdahl says:
Report: 4 suppliers account for half of all containerized data center shipments
http://www.cablinginstall.com/articles/2012/10/ims-containerized-data-center-report.html
Tomi Engdahl says:
Microsoft building clean powered data center at waste water plant
http://gigaom.com/cleantech/microsoft-building-clean-powered-data-center-at-waste-water-plant/
Microsoft has unveiled details of an experimental small data center that it’s building next to a waste water treatment plant in Cheyenne, Wyoming. The tiny data center will be powered by a fuel cell that uses biogas from the water facility, and Microsoft will use the test project to learn how it can scale clean power resources for its other large data centers, and also to figure out how to enable its data centers to become less reliant on the power grid.
Microsoft’s Data Plant is a 200 kW data center — about 10 feet by 20 feet in size in a container — that is being built literally feet from the Dry Creek Wastewater Reclamation Facility in Cheyenne, Wyoming. A system of pipes sequesters methane that is created by the waste water, cleans it, and then automatically pipes it into the data center’s fuel cell, which powers the entire container.
Waste water treatment plants produce biogas — which is gas that is produced by the breakdown of organic matter. In many cases at these treatment plants, the biogas is just burned away, because it’s usually uneconomical to collect, transport and use.
Microsoft is using a 300 kW fuel cell from FuelCell Energy for the Data Plant. Fuel cells take a fuel — usually natural gas or biogas — and run it over plates covered in a catalyst, to chemically produce electricity.
Microsoft’s experiment will help the company work on alternative ways to power the rest of its data centers and enable its Internet architecture to rely less on grid power. Microsoft will use the knowledge it has learned at this first Data Plant to potentially build fuel cells and clean power at other larger sites.
Tomi Engdahl says:
Greenhouse gases hit record high
http://www.eetimes.com/design/smart-energy-design/4401942/Greenhouse-gases-hit-record-high?Ecosystem=communications-design
After heat-trapping greenhouse gases reached record highs in 2011, the World Meteorological Organization (WMO) said there has been a 30-percent increase in radiative forcing – the warming effect on climate – between 1990 and 2011.
In its annual Greenhouse Gas Bulletin, the WMO noted that about 375 billion tons of carbon have been emitted by humans into the atmosphere as carbon dioxide (CO2) since the industrial revolution. Atmospheric measurements indicate that about half of this CO2 remains in the atmosphere and that the ocean and terrestrial sinks have consistently increased.
Finally, N2O atmospheric concentration in 2011 was about 324.2 parts per billion, which is 1.0 ppb above the previous year and 120 percent of the pre-industrial level.
“Until now, carbon sinks have absorbed nearly half of the carbon dioxide humans emitted in the atmosphere, but this will not necessarily continue in the future,” said WMO Secretary-General Michel Jarraud.
Tomi Engdahl says:
Fast Five Ways to a High-Efficiency Data Center
http://www.datacenterjournal.com/it/fast-five-ways-to-a-high-efficiency-data-center/
The “Fast Five Ways to a High-Efficiency Data Center” represents a series of best practices and technologies involved in operating a high-efficiency data center.
According to Gartner analysts Rakesh Kumar and Lydia Leong, “Demand for business agility is pushing organizations toward data center solutions that have greater flexibility, enabling them to expand or contract their data centers according to business needs.”
Implement strategies for sun-setting technology: Straggler hardware, legacy apps and their antiquated operating systems exist in astonishing numbers in every organization. Almost every data center that’s 5 to 10 years old deals with systems that waste energy and cost many times more to operate than they may be worth.
Rationalize the data center network: IT professionals should review their data center network designs with experts who can ensure the network is built to run according to a modern data center operating model.
Optimize the data center: IT professionals should ask whether their data center is running at optimum performance and energy efficiency. They should consider advanced passive containment options for airflow, cabling and cooling that will streamline efficiency, maximize usage of current capacity and better manage the data center environment. Uninterruptible Power Supply (UPS) efficiency, passive cooling containment and various free-cooling options should all be considered (see the PUE sketch after this list).
Move strategic aspects of the data center environment to a cloud model: Leveraging cloud-based technologies gives organizations the flexibility to expand or contract capacity as business needs change.
Outsource for efficiency with managed services and IT automation: Mundane tasks are part of many IT administrators' everyday lives, including backup management, LAN and WAN management, and operating system (OS)-level management. Many of these manual operations take valuable time from the IT department.
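Most of the measures above ultimately show up in one number: Power Usage Effectiveness (PUE), the ratio of total facility power to the power that actually reaches IT equipment. A minimal sketch, with readings invented purely for illustration:

```python
# Power Usage Effectiveness (PUE): total facility power / IT power.
# Lower is better; 1.0 would mean zero overhead. Readings are made up.

def pue(it_kw: float, cooling_kw: float, ups_loss_kw: float,
        other_kw: float) -> float:
    """Total facility power divided by IT power."""
    total = it_kw + cooling_kw + ups_loss_kw + other_kw
    return total / it_kw

before = pue(it_kw=500, cooling_kw=350, ups_loss_kw=50, other_kw=50)  # legacy room
after = pue(it_kw=500, cooling_kw=150, ups_loss_kw=25, other_kw=25)   # containment
print(f"PUE before: {before:.2f}, after: {after:.2f}")  # 1.90 -> 1.40
```

In this invented example, better containment and a more efficient UPS cut overhead power nearly in half for the same IT load, which is exactly the kind of gain the list above is chasing.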
Tomi Engdahl says:
The Green Grid issues data center environmental impact framework
http://www.cablinginstall.com/articles/2012/11/green-grid-data-center-impact-framework.html
The Green Grid has, in its latest report, introduced a framework that organizations can use to identify and uniformly describe the key elements of their data centers so that diverse methodologies can be employed to evaluate a facility’s environmental impact.
Tomi Engdahl says:
Movable data centers are popular in Japan, where activity is concentrated in large cities and land is expensive.
Japanese company NEC is developing a data center module that relies on convection cooling and uses 30 percent less energy, even in a hot and humid climate.
Use of convection cooling is not a new invention.
In practice, the old standards forced data centers in Japan to be completely sealed: the operating temperature had to stay between 20 and 25 degrees Celsius and relative humidity between 40 and 50 percent. The new guidelines from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), however, extend the allowed operating temperature to 15-31 °C and relative humidity to 20-80 percent.
Source: http://www.tietoviikko.fi/kaikki_uutiset/japanilaiskeksinto+siirrettava+datakeskus+kayttaa+30+prosenttia+vahemman+energiaa/a859895?s=r&wtm=tietoviikko/-28112012&
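Expressed as a trivial check, using the limits quoted above (real ASHRAE classes also constrain dew point and rate of change, which this sketch ignores):

```python
# Check whether a sensor reading falls inside the old or the widened
# operating envelope described above.

OLD_ENVELOPE = {"temp_c": (20, 25), "rh_pct": (40, 50)}
NEW_ENVELOPE = {"temp_c": (15, 31), "rh_pct": (20, 80)}

def in_envelope(temp_c: float, rh_pct: float, env: dict) -> bool:
    t_lo, t_hi = env["temp_c"]
    h_lo, h_hi = env["rh_pct"]
    return t_lo <= temp_c <= t_hi and h_lo <= rh_pct <= h_hi

reading = (28.0, 65.0)  # a warm, humid aisle
print("old envelope:", in_envelope(*reading, OLD_ENVELOPE))  # False
print("new envelope:", in_envelope(*reading, NEW_ENVELOPE))  # True
```

A reading that would have forced mechanical cooling under the old rules is acceptable under the new ones, which is what lets a convection-cooled module work in a hot, humid climate.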
Tomi Engdahl says:
Global warming still stalled since 1998, WMO Doha figures show
‘We’re looking into this thing which is NOT HAPPENING’
http://www.theregister.co.uk/2012/11/29/wmo_global_temp_figures_2012_doha_ninth_hottest/
Figures released by the UN’s World Meteorological Organisation indicate that 2012 is set to be perhaps the ninth hottest globally since records began – but that planetary warming, which effectively stalled around 1998, has yet to resume at the levels seen in the 1980s and early 1990s.
The WMO figures are produced by averaging those from the three main climate databases: those of NASA, the US National Oceanic and Atmospheric Administration (NOAA) and the British one compiled jointly by the Met Office and the University of East Anglia.
The official position of the climate establishment is that global warming is still definitely on, and that the flat temperatures seen for the last 14 years or so are just a statistical fluke of the sort to be expected when trying to measure such a vast and noisy signal as world temperatures with such precision. (The global warming since 1950 is assessed at just half a degree, a difficult signal to pick out when temperatures everywhere move by many degrees every day and even more over a year.)
Tomi Engdahl says:
How higher temperature, humidity affects data center, IT equipment
http://www.cablinginstall.com/articles/2012/11/green-grid-temperature-humidity-paper.html
A new white paper from The Green Grid looks at how environmental parameters of increased temperature and humidity can affect IT equipment, examining reliability and energy usage as the data center’s operating range is extended.
The paper examines the hypothesis that data center efficiency can be further improved by employing a wider operational range without substantive impacts on reliability or service availability.
Tomi Engdahl says:
ASHRAE updates data center thermal guidelines for energy efficiency
http://www.cablinginstall.com/articles/2012/10/ashrae-updates-data-center-guidelines.html
Since its first edition in 2004, ASHRAE’s Thermal Guidelines for Data Processing Environments, published by the organization’s Technical Committee (TC) 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment, has become the de-facto reference material for unbiased and vendor-neutral information on the design and operational parameters for the entire datacom (i.e. data centers and telecommunications) industry.
"This third edition creates more opportunities to reduce energy and water consumption, but it is important to provide this information in a manner that empowers the ultimate decision makers with regards to their overall strategy and approach," says Don Beaty of ASHRAE TC 9.9.
Highlights of this third edition include new air- and liquid-cooled equipment classes, plus expanded thermal envelopes for facilities willing to weigh the additional cooling-system energy savings from increased economizer usage against the impact on IT equipment attributes such as reliability, internal energy, cost, performance and contamination.
“The most valuable update to this edition is the inclusion of IT equipment failure rate estimates based on inlet air temperature,” Beaty adds. “These server failure rates are the result of the major IT original equipment manufacturers (OEM) evaluating field data, such as warranty returns, as well as component reliability data. This data will allow data center operators to weigh the potential reliability consequences of operating in various environmental conditions vs. the cost and energy consequences.”
Tomi Engdahl says:
Is it worth investing in a high-efficiency power supply?
http://www.extremetech.com/extreme/143029-empowered-can-high-efficiency-power-supplies-cut-your-electricity-bill
If you've gone shopping for a power supply any time over the last few years, you've probably noticed the explosive proliferation of various 80 Plus ratings. As initially conceived, an 80 Plus certification was a way for PSU manufacturers to validate that their power supply units were at least 80% efficient at 20%, 50%, and 100% of full load.
The 80 Plus program has expanded significantly since the first specification was adopted. Valid levels now include Bronze, Silver, Gold, Platinum, and a currently unused Titanium specification level.
In the pre-80 Plus days, PSU prices normally clustered around a given wattage output. The advent of the various 80 Plus levels has created a second variable that can have a significant impact on unit price. This leads us to three important questions: How much power can you save by moving to a higher-efficiency supply, what's the premium for doing so, and how long does it take to make back your initial investment?
Basic 400W-600W units are quite cheap these days, even from top vendors like Antec, Corsair, OCZ, and Silverstone. Prices start to climb by the 700W range; 1200W units are several hundred dollars.
The price premium for greater-than-80 Plus certification can be substantial.
Even generic PSUs are far more than 50% efficient; in fact, 75-77% is fairly common. This means the amount of money you save from upgrading to a high-efficiency PSU is minimal if you don’t actually draw much power to start with.
Clearly the efficiency of a top-end PSU can save you some scratch over the long term. Exactly how much depends on what you’re doing.
At constant idle, the 750W 80 Plus Silver saves $4.38 to $6.56 over the course of a year. Upgrading to the 80 Plus Platinum drops between $18.63 and $27.87 back in your pocket.
At constant load, even the modest upgrade offered by the 750W 80 Plus Silver is worth $27-$40. The Toughpower XT 1275W saves you $80-$120 in power costs per year.
The good news is that power supplies with better 80 Plus ratings really do deliver what they claim — there is a net reduction in total power consumption. If you burn a lot of power, Platinum units could be good investments and pay back their premiums in a year or two.
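The payback arithmetic behind figures like these is easy to reproduce. In the sketch below, the load, efficiencies, tariff and price premium are all assumptions for illustration, not the article's test data:

```python
# Payback period for a higher-efficiency PSU. For a fixed DC load, the wall
# draw is load / efficiency, so a better PSU cuts the wall draw directly.
# All numbers below are illustrative assumptions.

LOAD_W = 300             # steady DC load drawn by the components
HOURS_PER_YEAR = 8760    # running 24/7
PRICE_PER_KWH = 0.12     # assumed electricity tariff, $/kWh
EFF_BASELINE = 0.82      # ~80 Plus baseline efficiency at this load point
EFF_PLATINUM = 0.92      # ~80 Plus Platinum efficiency at this load point
PREMIUM_USD = 60         # assumed extra purchase price of the Platinum unit

def annual_cost(load_w: float, eff: float) -> float:
    """Yearly electricity cost of supplying load_w at the given efficiency."""
    wall_kwh = load_w / eff / 1000 * HOURS_PER_YEAR
    return wall_kwh * PRICE_PER_KWH

savings = annual_cost(LOAD_W, EFF_BASELINE) - annual_cost(LOAD_W, EFF_PLATINUM)
print(f"Annual saving: ${savings:.2f}")            # ~$42/year
print(f"Payback: {PREMIUM_USD / savings:.1f} years")  # ~1.4 years at 24/7 load
```

The same arithmetic shows why the savings collapse for light use: a machine idling at 60 W for a few hours a day may never pay back the premium at all.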
The cost premiums, however, only pay for themselves at the highest end. Most of us would be best served by turning the machine off or dropping into hibernation: the best way to save power is simply not to use it, and manufacturers currently charge large premiums for marginal efficiency gains.
It may not make much sense to buy a unit at a significant premium, but if you get a good deal, we recommend taking it.
Tomi Engdahl says:
How much power does it take to keep the Internet running?
http://www2.electronicproducts.com/How_much_power_does_it_take_to_keep_the_Internet_running-article-fajb_power_internet_dec2012-html.aspx
Infographic breaks down the web's electric bill