The article “Google’s Latest Data Center Is Cooled Entirely With Ocean Water” describes Google’s newest data center.
Google has a new video showing how it’s using sea water to cool its new data center in Hamina, Finland. The water is drawn in through granite tunnels (the site used to be a paper mill, and the tunnels were built for it decades ago). The water is then pumped through the data center in pipes and run through heat exchangers that dissipate the heat from the servers.
22 Comments
How Clean is Your Cloud and Telecom? « Tomi Engdahl’s ePanorama blog says:
[...] the cloud, access to significant amounts of electricity is a key factor in decisions about where to build these data centers. Industry leaders estimate nearly US$450bn is being spent annually on [...]
Tomi Engdahl says:
Google Mimics Amazon Cloud With ‘Google Compute Engine’
http://www.wired.com/wiredenterprise/2012/06/google-cloud-platform/
Google has unveiled a service akin to Amazon’s Elastic Compute Cloud, letting developers and businesses hoist applications atop virtual machines running on the same sweeping infrastructure that underpins Google’s own applications and web services.
Unveiled on Thursday morning by Urs Hölzle — the man who oversees Google’s infrastructure — at the company’s annual developer conference, the new service is known as Google Compute Engine.
The company already offers a service for building and running applications atop its infrastructure — Google App Engine — but this service does not offer access to raw virtual machines. With App Engine, you must code applications for specific APIs, or application programming interfaces, that place certain restrictions on what programming languages, libraries, and frameworks can be used.
With raw virtual machines, developers can pretty much run whatever software they want, just as they can with Amazon EC2, the undisputed king of the cloud computing game.
Google’s new service is currently in the beta testing stage, and it’s available to only a limited number of users.
Hölzle claimed that next to competitors — presumably Amazon — the service would offer 50 percent more compute power per dollar. During his keynote, the Google man said that the service lets applications scale to hundreds of thousands of processor cores, showing one genetics-related application running on about 600,000 cores.
Like these competitors, Google Compute Engine is essentially a way of building and hosting applications without setting up computing hardware in your own data center. Amazon pioneered the idea of a public service
Tomi Engdahl says:
Google Compute Engine: Computing without limits
http://googledevelopers.blogspot.fi/2012/06/google-compute-engine-computing-without.html
We’re introducing Google Compute Engine, an Infrastructure-as-a-Service product that lets you run Linux Virtual Machines (VMs) on the same infrastructure that powers Google. This goes beyond just giving you greater flexibility and control; access to computing resources at this scale can fundamentally change the way you think about tackling a problem.
At launch, we have worked with a number of partners – such as RightScale, Puppet Labs, OpsCode, Numerate, Cliqr and MapR – to integrate their products with Google Compute Engine. These partners offer management services that make it easy for you to move your applications to the cloud and between different cloud environments.
Tomi Engdahl says:
Google launches alleged Amazon Web Services killer, but lacks maturity, options
http://www.zdnet.com/blog/btl/google-launches-alleged-amazon-web-services-killer-but-lacks-maturity-options/81276
Summary: Google has thrown down an Amazon Web Services challenge, but has a way to go to match its rival on breadth and depth.
Tomi Engdahl says:
A rare look inside Facebook’s Oregon data center [photos, video]
http://gigaom.com/cleantech/a-rare-look-inside-facebooks-oregon-data-center-photos-video/
Facebook is currently in the middle of an infrastructure investment boom. It now has an estimated 180,000 servers for its more than 900 million users
While that’s tiny compared to Google’s estimated 1 million-plus servers, Facebook, like Google did years before it, is now learning how to be an infrastructure company.
Tomi Engdahl says:
Review: Google Compute Engine rocks the cloud
http://www.infoworld.com/d/cloud-computing/review-google-compute-engine-rocks-the-cloud-200591
Google’s new compute cloud offers a crisp and clean way to spin up Linux instances and easily tap other Google APIs
Ten years ago, you would ask your boss to buy a rack or two of computers to churn through the data. Today, you just call up the cloud and rent the systems by the minute. This is the market that Google is now chasing by packaging up time on its racks of machines and calling it the Google Compute Engine.
Google took its sweet time entering this corner of the cloud. While Amazon, Rackspace, and others started off with pay-as-you-go Linux boxes and other “infrastructure” services, Google began with the Google App Engine, a nice stack of Python that held your hand and did much of the work for you. Now Google is heading in the more general direction and renting raw machines too. The standard distro is Ubuntu 12.04, but CentOS instances are also available. And you can store away your own custom image once you configure it.
Why rent machines from Google instead of Amazon or Rackspace or some other IaaS provider? Google claims its raw machines are cheaper. This is a bit hard to determine with any precision because not everyone is selling the same thing despite claims of computing becoming a commodity.
Google sells its machines by the Google Compute Engine Unit (GCEU), which it estimates is about a 1GHz to 1.2GHz Opteron from 2007.
All of Google’s machines rent for 5.3 cents per GCEU per hour, but that isn’t really what you pay.
Is 5.3 cents per GCEU a good deal? It depends upon what you want to do with your machine.
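The GCEU arithmetic above can be sketched as a quick cost estimate. This is a back-of-the-envelope sketch using only the 5.3-cent rate quoted in the review; the 2-GCEU instance size is a hypothetical example, not a real machine type.

```python
# Sketch: estimate monthly cost from the quoted 5.3 cents per GCEU-hour.
# The GCEU rating of the instance is an assumed figure for illustration.

GCEU_HOURLY_USD = 0.053  # rate quoted in the review

def monthly_cost(gceus: float, hours: float = 730.0) -> float:
    """Cost of running a machine rated at `gceus` GCEUs for a month (~730 h)."""
    return gceus * GCEU_HOURLY_USD * hours

# e.g. a hypothetical 2-GCEU instance running continuously:
print(f"${monthly_cost(2):.2f}")  # 2 * 0.053 * 730 = $77.38
```

Whether that is a good deal still depends on how the GCEU maps to the vCPU units Amazon and Rackspace sell, which is exactly the commodity-comparison problem the review points out.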
After you get past the differences over RAM and disk space, the Google machines are meant to be essentially the same as the machines from Amazon or Rackspace — or even the machines you might buy on your own.
Like Amazon and Rackspace, Google makes it easy to start off with Ubuntu; after that, you’re talking to Ubuntu, not Google’s code. There are differences in the startup and shutdown mechanisms, but these aren’t substantial. More substantial is Google’s inability to snapshot persistent storage, as you can in Amazon, but Google promises this is coming soon.
Google doesn’t charge for ingress, but it has a fairly complicated model for egress. Shipping data to a machine in the same zone in the same region is free, but shipping it to a different zone in the same region is one penny per gigabyte.
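The egress model described above can be captured in a few lines. This sketch models only the two rates the article quotes (same-zone free, cross-zone same-region at one cent per gigabyte); cross-region and Internet egress rates are not covered here.

```python
# Sketch of the quoted egress model: ingress is free, same-zone egress is
# free, and cross-zone egress within a region costs $0.01 per gigabyte.

def egress_cost_usd(gigabytes: float, same_zone: bool) -> float:
    """Cost of shipping data between two machines in the same region."""
    if same_zone:
        return 0.0           # same zone, same region: free
    return gigabytes * 0.01  # different zone, same region: 1 cent per GB

print(egress_cost_usd(500, same_zone=False))  # 5.0
```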
In general, Google is doing a good job of making some of the dangers of the cloud apparent. Like compute instances in Amazon, Rackspace, and other IaaS clouds, each Google instance comes with “ephemeral disk,” a name that makes the storage sound more fragile than it really is. Keep in mind that the file system that comes with your cloud computer — be it on Amazon, Rackspace, or Google — is not backed up in any way unless you code some backup routines yourself. You can run MySQL on your cloud box, but the database won’t survive the failure of your machine, so you better find a way to keep a copy somewhere else too.
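The "code some backup routines yourself" advice above might look something like the following. This is a minimal sketch assuming a MySQL database on the instance and `mysqldump` on the path; the database name and output directory are placeholders.

```python
# Minimal sketch of a do-it-yourself backup for a database living on an
# instance's ephemeral disk, as the review recommends. The database name
# and output directory are placeholders.
import datetime

def dump_command(db: str, out_dir: str = "/tmp") -> list:
    """Build a mysqldump command that writes a timestamped SQL dump."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    return ["mysqldump", "--single-transaction", db,
            "--result-file={}/{}-{}.sql".format(out_dir, db, stamp)]

# Run the dump, then copy the file to durable storage *outside* the
# instance (object storage, another region), e.g.:
# subprocess.run(dump_command("appdb"), check=True)
```

The crucial part is the second step: the dump must leave the machine, since the ephemeral disk it was written to dies with the instance.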
Google is also making the dangers of location apparent. One section of the documentation addresses just how you should design your architecture around potential problems.
Tomi Engdahl says:
CloudConnect Conference Talks Infrastructure in the Cloud
http://www.designnews.com/author.asp?section_id=1365&doc_id=251365&cid=NL_Newsletters+-+DN+Daily
Craig McLuckie, the lead product manager for Google Compute Engine, was also a keynote speaker. There was a lot of interest in what he had to say, so a specific breakout session on this material was held. I had a chance to talk to him after that additional session.
Google has had its App Engine, a platform for developing cloud-based applications, for a while. What is new is the Google Compute Engine, which is infrastructure-as-a-service (IaaS), as opposed to the App Engine platform-as-a-service (PaaS) offering. With the Compute Engine, Google is making infrastructure available for customers to build very large and complex cloud applications. These are Linux virtual machines. You have persistent disk resources, local disks, and cloud storage available. There is access to the Internet and to private networks.
The Linux kernel is provided by Google, which uses it to deliver a number of very important services, including built-in business continuity. Unlike some other IaaS vendors, Google does not let you upload your own kernel; on the other hand, you can provide your own business continuity solutions on top. Google offers CentOS and Ubuntu flavors, and there is a lot you can customize around them, so this should be no problem. This service can be used to build very large parallel applications and can scale up to 10,000 cores, McLuckie said.
Since this is the infrastructure on which Google runs, the services it uses are available to Compute Engine customers.
Tomi says:
Google’s expansion brought 20 new jobs to Finland.
Google ships the necessary hardware in ready-made containers from abroad, so computer dealers in Finland get no business from it.
Finnish electricity is consumed by foreign companies.
Source: http://www.tietoviikko.fi/kaikki_uutiset/suomessa+on+liikaa+konesaleja/a844599?s=r&wtm=tietoviikko/-14102012&
Tomi Engdahl says:
Google Data Center Gallery: ”Where the Internet Lives”
http://www.google.com/about/datacenters/gallery/#/tech
Google Data Center Gallery « Tomi Engdahl’s ePanorama blog says:
[...] earlier posted some limited details of Google data centers in Google Efficient Data Centers and Google Hamina data center details, but now some rare images of Google data centers became [...]
Tomi Engdahl says:
Hamina
http://www.google.com/intl/fi/about/datacenters/gallery/index.html#/locations/hamina
Tomi Engdahl says:
Google swaps out MySQL, moves to MariaDB
‘They’re moving it all,’ says MariaDB Foundation headman
By Jack Clark, 12th September 2013
http://www.theregister.co.uk/2013/09/12/google_mariadb_mysql_migration/
Google is migrating its MySQL systems over to MariaDB, allowing the search company to get away from the Oracle-backed open source database.
The news came out at the Extremely Large Databases (XLDB) conference in Stanford, California on Wednesday, one month after El Reg reported that Google had assigned one of its engineers to the MariaDB Foundation. News of the swap was not an official announcement by Google, it came out during a presentation by Google senior systems engineer Jeremy Cole on the general state of the MySQL ecosystem.
It turns out that far from being a minor initiative to keep MariaDB alive, Google is actively patching and upgrading MariaDB 10.0 to be fit enough so that Google can migrate all of its thousand-plus MySQL instances onto the technology.
“We’re running primarily on [MySQL] 5.1, which is a little outdated, and so we’re moving to MariaDB 10.0 at the moment,” Cole said in a presentation he gave on the general state of the MySQL ecosystem.
By moving to MariaDB, Google can free itself of any dependence on technology dictated by Oracle – a company whose motivations are unclear, and whose track record for working with the wider technology community is dicey, to say the least. Oracle has controlled MySQL since its acquisition of Sun in 2010, and the key InnoDB storage engine since it got ahold of Innobase in 2005.
MariaDB is an open source database backed by Monty Widenius, who spearheaded the original development of MySQL. It is designed to replace Oracle-backed MySQL. Right now, Google has about five people working part-time on MariaDB bug fixes and patches, our sources tell us.
Google’s widespread MariaDB push may be an attempt by the Chocolate Factory to shift developer allegiance from MySQL to MariaDB, and in doing so dilute Oracle’s influence over the open source database ecosystem.
“I’d really love to see a single MySQL community, I think that’s more or less impossible under Oracle, I don’t know if that’s possible under MariaDB,” Cole said.
Though more attention has been paid to Google’s flashier next-generation SQL systems such as Spanner and the back-to-the-future F1 database, Cole confirmed to El Reg that MySQL is running across “thousands of instances” at Google upon legions of flash-based servers. And it’s on the move.
Google is not alone in its shift to MariaDB: Red Hat is ditching MySQL for MariaDB in Red Hat Enterprise Linux 7.
Tomi Engdahl says:
Google to invest 450 million euros in Hamina data center in Finland.
In 2009, Google acquired Stora Enso’s former Summa paper mill. The company said at the time that it wanted to build a data center there, one that could also be expanded. It has already invested 350 million euros in the project.
The Hamina data center serves users all over the world. It currently employs about 125 people full-time.
The new 450 million euros will be added to this investment.
Source: http://www.iltalehti.fi/talous/2013110417681820_ta.shtml
Tomi Engdahl says:
IT’S ALIVE! IT’S ALIIIVE! Google’s secretive ‘living-being’ Omega cloud
‘Biological’ signals ripple through massive cluster management monster
http://www.theregister.co.uk/2013/11/04/google_living_omega_cloud/
One of Google’s most advanced data center systems behaves more like a living thing than a tightly controlled provisioning system. This has huge implications for how large clusters of IT resources are going to be managed in the future.
“Emergent” behaviors have been appearing in prototypes of Google’s Omega cluster management and application scheduling technology since its inception, and similar behaviors are regularly glimpsed in its “Borg” predecessor, sources familiar with the matter confirmed to The Register.
Emergence is a property of large, distributed systems. It can lead to unforeseen behavior arising out of sufficiently large groups of basic entities.
Just as biology emerges from the laws of chemistry; ants give rise to ant colonies; and intersections and traffic lights can bring about cascading traffic jams, so too do the ricocheting complications of vast fields of computers allow data centers to take on a life of their own.
The kind of emergent traits Google’s Omega system displays means that the placement and prioritization of some workloads is not entirely predictable by Googlers. And that’s a good thing.
“Systems at a certain complexity start demonstrating emergent behavior, and it can be hard to know what to do with it,” says Google’s cloud chief Peter Magnusson. “When you build these systems you get emergent behavior.”
Omega was created to help Google efficiently parcel out resources to its numerous applications. It is unclear whether it has been fully rolled out, but we know that Google is devoting resources to its development and has tested it against very large Google cluster traces to assess its performance.
Tomi Engdahl says:
Google Data Center Investment in Finland Tops $1 Billion USD
http://www.datacenterknowledge.com/archives/2013/11/04/google-data-center-investment-in-finland-tops-1-billion-usd/
Google’s data center spending and investment continues to soar. The Internet giant announced a EUR450 million (about US$608 million) expansion at its Hamina data center in Finland. This comes in addition to an already announced EUR350 million (about US$473 million) investment.
Worldwide, the company recorded a whopping $2.29 billion in capital expenditures in the third quarter of 2013 alone, driven primarily by massive expansion projects, and there is no sign of the spending subsiding.
“As demand grows for our products, from YouTube to Gmail, we’re investing hundreds of millions of euros in expanding our European data centres,” says Anni Ronkainen, Google Finland Country Manager. “This investment underlines our commitment to working to help Finland take advantage of all the economic benefits from the Internet.”
Finland Project Is Very Efficient and Green
The data center is one of Google’s most advanced and efficient worldwide, employing seawater from the Bay of Finland in its high-tech cooling system.
Starting in 2015, the data center will be primarily powered by wind energy via a new onshore wind park, which was announced last June. The company will sign additional agreements as it grows, so as to power the data center with 100 percent renewable energy.
Economic Impacts to Finnish Community
The initial construction work turning the paper mill’s first machine hall into a data center lasted just over 18 months. At peak, the new construction will provide work for approximately 800 engineering and construction workers. Currently, the facility employs 125 people in full-time and contractor roles, a number set to expand alongside the facility.
This investment comes at an advantageous time for Finland, as the paper industry has been hit hard. Locating one of its most advanced data centers in the Kotka-Hamina Region helps boost the tech industry at large.
Data Center Energy Retrofits « Tomi Engdahl’s ePanorama blog says:
[...] energy efficiency of these giants. According Pervilä study, large data centers such as Google in Hamina, Yahoo in Lockport and Facebook in Prineville resemble each other: they try to minimize the extra [...]
Tomi Engdahl says:
The power requirement of Google’s Summa data center after the expansion is estimated at 72 MW.
Source: Tekniikka ja Talous
http://www.tekniikkatalous.fi/ict/suomeen+kaavaillaan+uutta+isoa+datakeskusta+ndash+millainen+vaikutus+suomen+taloudelle/a974436
Tomi Engdahl says:
Google Preps Virtual Network
Andromeda service described in keynote
http://www.eetimes.com/document.asp?doc_id=1323666&
Google described Andromeda, its latest effort to turn its massive data centers into virtual cloud computing systems customers can create and use on the fly. The company also called for smarter switch silicon to help optimize its efforts.
“The future of cloud computing is about delivering new capabilities we can’t deliver now, not delivering old capabilities cheaper,” Amin Vahdat, a distinguished engineer at Google, said in a keynote at the Hot Interconnects conference here. “The network is the fundamental barrier to delivering new features.”
Andromeda is Google’s current effort to overcome those barriers. It is essentially a central network controller and protocol, running on servers, that creates virtual systems of computers, networking, and storage as needed.
Google’s internal programmers have been able to request such virtual systems for some time. Now the company is “setting out to support external users with Andromeda, giving them the illusion of running their own networks with different address spaces and dedicated performance.”
Tomi Engdahl says:
Google buys into new Finnish wind energy in renewables search
https://www.reuters.com/article/us-alphabet-renewables-finland/google-buys-into-new-finnish-wind-energy-in-renewables-search-idUSKCN1LR1OG?feedType=RSS&feedName=topNews&utm_medium=Social&utm_source=twitter
HELSINKI (Reuters) – Google (GOOGL.O) said it has signed a 10-year deal to buy renewable energy from three new wind farms that are being built in Finland and which will power one of its data centers.
Tomi Engdahl says:
Google to invest €600m in new data centre in Finland
https://yle.fi/uutiset/osasto/news/google_to_invest_600m_in_new_data_centre_in_finland/10803700
The new server farm will be located in Hamina, where the American tech giant previously opened a data centre in 2011.
Tomi Engdahl says:
https://www.uusiteknologia.fi/2019/09/20/google-investoi-lisaa-haminaan-myos-tuulivoimaa/
Tomi Engdahl says:
Ping under 5 ms – a direct route opens to Google’s Finland cloud
Tivi, 25.12.2020
Latencies of even under five milliseconds to the Hamina data center make it possible to move ever more applications to the cloud.
https://www.tivi.fi/uutiset/tv/ac6eb96d-a143-4458-ad6b-a03b93fa894d
In the summer, Google began offering companies the option of connecting from their corporate network directly to the Finnish Google Cloud Platform region. This is a so-called interconnect connection.
“Previously, all traffic to Hamina was routed back and forth through Stockholm. This caused a latency of about 18 milliseconds measured from Helsinki,” says Kalle Alppi, CIO of Mehiläinen.
With the new connection, the latency from Helsinki to Mehiläinen’s server in Hamina has dropped to 3–5 milliseconds at best. From Tampere the latency is about 7 milliseconds and from Oulu about 15 milliseconds.
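Latency figures like the ones quoted above can be approximated without ICMP by timing a TCP handshake. This is a rough sketch, not how the quoted measurements were made; the target host and port are placeholders, and a single handshake overstates steady-state latency slightly.

```python
# Rough sketch: measure round-trip latency to a host by timing a single
# TCP handshake. The host and port are placeholders for illustration.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time one TCP connection setup to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# e.g. rtt = tcp_rtt_ms("example.com")  # requires network access
```

Taking the minimum over several samples, as the article's "at best" phrasing suggests, filters out transient queuing delay.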