Google Data Center Gallery

Google has 13 data centers around the world (7 in the Americas, 3 in Asia, and 3 in Europe) running 24 hours a day, 7 days a week, and it is fascinating to see how the world’s biggest Internet company is powered. I have earlier posted some limited details of Google data centers in Google Efficient Data Centers and Google Hamina data center details, but now some rare images of Google data centers have become available.

Yesterday Google released a massive gallery of pictures of its data centers around the world. The Google Data Center Gallery is titled ”Where the Internet Lives”. The images show the servers, technology, and personnel inside the centers, presented in an artistic way. The photos were taken by the Chinese photographer Connie Zhou, who specializes in architecture and interior photography.

Here are some small images from the gallery showing the Google Hamina data center in Finland. Click the image to get to the ”Where the Internet Lives” gallery.

[Image: photos from Google’s ”Where the Internet Lives” gallery]

The photos can also be seen on the A Photo Tour of Google’s Data Centers Around the World web page if you prefer all photos on one page instead of browsing through the gallery picture by picture.

You can also take a walk through a Google data center using Street View technology. Watch the following Explore Google Data center on Google Maps Street View video to get an idea of what to expect.

A Reg reader described the whole experience as “like a boring version of Doom”, and I pretty much agree with this description. But you can take the walkthrough and decide for yourself whether it is fascinating or boring.

31 Comments

  1. Jeff in Texas says:

    My company provided the electrical power apparatus for several Google data centers in the US. I was lucky enough to spend a couple of days debugging our systems at their center outside of Atlanta. They were very clear about not taking pictures (we were constantly supervised), but there was no proviso against making sound recordings. I recorded what it sounds like inside one of the biggest data centers in the world — an eerie buzzing noise, not unlike what I imagine it would be like inside a bio-mechanical beehive.

  2. Jeff in Texas says:

    By the way, I love your site.

  3. Tomi Engdahl says:

    Nov 13, 2012 – 4:31PM PT
    Google opens up on seven years of its data center history
    http://gigaom.com/cloud/google-opens-up-on-seven-years-of-its-data-center-history/

    Google opened up about its data center operations today at an industry event in Phoenix. It shared how its thinking and practices have changed as it seeks to lower the costs and environmental impact of its servers and IT infrastructure.

  4. Tomi Engdahl says:

    Google’s 10 rules for designing data centers
    http://gigaom.com/2013/03/05/googles-10-rules-for-designing-data-centers/

    Google’s vice president of data centers, Joe Kava, outlines how the search giant’s approach to data center design corresponds nicely to the company’s ten governing rules. Well, almost.

  5. Tomi Engdahl says:

    Google: ‘We’ll track EVERY task on EVERY data center server’
    Chip-level performance tracking in thousand-server Googly clusters
    http://www.theregister.co.uk/2013/04/12/google_cpi2_processor_monitoring/

    Google has wired its worldwide fleet of servers up with monitoring technology that inspects every task running on every machine, and eventually hopes to use this data to selectively throttle or even kill processes that cause disruptions for other tasks running on the same CPU.

    “Performance isolation is a key challenge in cloud computing. Unfortunately, Linux has few defenses against performance interference in shared resources such as processor caches and memory buses, so applications in a cloud can experience unpredictable performance caused by other programs’ behavior,” the researchers write.

    “Our solution, CPI2, uses cycles-per-instruction (CPI) data obtained by hardware performance counters to identify problems, select the likely perpetrators, and then optionally throttle them so that the victims can return to their expected behavior. It automatically learns normal and anomalous behaviors by aggregating data from multiple tasks in the same job.”

    CPI2 lets Google gather information on the expected CPU cycles-per-instruction (CPI) of any particular task.

    Google gathers this data for a 10-second period once every minute via the perf_event tool in counting mode rather than sampling mode. The total CPU overhead of the system is less than 0.1% and causes no visible latency impact.
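
    The detection step described in the paper can be sketched in a few lines. The following is a minimal illustration of the idea, not Google’s implementation: the baseline numbers, the two-standard-deviation threshold, and all names are invented for the example.

    import statistics

    # Historical CPI samples pooled from all tasks of one job (the learned baseline).
    baseline = [1.1, 1.0, 1.2, 1.0, 1.1, 1.1, 0.9, 1.2, 1.0]

    # CPI measured over the most recent window for each running task (invented).
    recent = {"task-0": 1.1, "task-1": 1.0, "task-2": 3.4}

    # Flag a task when its CPI exceeds mean + 2*stddev of the job's baseline.
    limit = statistics.fmean(baseline) + 2 * statistics.pstdev(baseline)

    for task, cpi in recent.items():
        if cpi > limit:
            print(f"{task}: CPI {cpi:.2f} exceeds limit {limit:.2f}; "
                  "find the antagonist sharing its CPU and consider throttling it")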

  6. Tomi Engdahl says:

    Google platform cloud now takes PHP apps
    Google closes gap with cloud competitors
    http://www.theregister.co.uk/2013/05/15/google_gae_gets_php/

    Google I/O Google is adding PHP to Google App Engine as the company tries to appeal to developers of the widely-used language.

    The addition was announced on Wednesday at Google’s developer jamboree Google I/O. It means GAE now supports three widely used web languages – Python, Java, and PHP – and Go, a Google-sponsored language designed for apps with massive scale.

    App Engine “has thriving communities around Python, Java, Go, but it’s missing one of the most popular languages on the web, one that powers three quarters of all websites – starting today PHP devs can get the benefits of App Engine,” said Google’s senior veep of technical infrastructure, Urs Hölzle.

    App Engine is Google’s platform-as-a-service and competes with Windows Azure, AWS Elastic Beanstalk, AWS-based Heroku and Engine Yard, App Fog, and others.

    Azure, Elastic Beanstalk, Heroku and, as of Tuesday, Engine Yard all also support PHP. But Google’s infrastructure is profoundly different to those of Microsoft and Amazon.

  7. Tomi Engdahl says:

    Google takes on AWS, Azure virty servers with micro billing and fat disks
    Compute Engine open for all
    http://www.theregister.co.uk/2013/05/15/google_compute_engine_enhancements/

    Google I/O Google is done dabbling with raw compute and storage infrastructure and has thrown the doors wide open on its Compute Engine services, while at the same time offering finer-grained pricing and fatter persistent storage for its virtual machines than is available from Amazon Web Services, Microsoft Windows Azure, and other public clouds.

    “Many of you want and need access to raw VMs, and that’s important,” he said. “We rolled out Compute Engine a year ago, and we have rolled out a ton of improvements since then.”

    The biggest and most important change is a sensible, even radical, shift in pricing as Compute Engine becomes generally available. Instead of charging per-hour for compute capacity for raw virty server slices, as everyone in the industry does, Google is to charge by the minute.

    “Today, the standard unit of measure for compute is an hour, regardless of how much time you actually use,”

    Google has a caveat and qualifier on those per-minute charges for Compute Engine instance capacity. The charges are indeed billed in one-minute increments, but you have to pay for a minimum of ten minutes.
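
    To make the difference concrete, here is a small sketch of the stated billing rule (one-minute increments, ten-minute minimum) against conventional hourly rounding. The instance price is an arbitrary placeholder, not a real GCE rate.

    import math

    def gce_billed_minutes(runtime_minutes):
        """Per-minute billing with a ten-minute minimum, as described above."""
        return max(10, math.ceil(runtime_minutes))

    def hourly_billed_minutes(runtime_minutes):
        """Traditional per-hour billing: every started hour is charged in full."""
        return 60 * math.ceil(runtime_minutes / 60)

    price_per_minute = 0.10 / 60  # placeholder: a $0.10/hour instance

    for runtime in (5, 11, 61, 90):
        per_minute_cost = gce_billed_minutes(runtime) * price_per_minute
        hourly_cost = hourly_billed_minutes(runtime) * price_per_minute
        print(f"{runtime:3d} min job: ${per_minute_cost:.4f} per-minute "
              f"vs ${hourly_cost:.4f} hourly")

    For short-lived jobs, the hourly scheme can cost several times more.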

    Even so, in one fell swoop, Google has wiped out a whole chunk of profits that all of the other cloud providers have been counting on – like Bitcoins.

    Another change as Compute Engine moves into general availability is shared-core instances. These are small, sub-core instances, like those available on other clouds; unlike those tiny VMs, according to Hölzle, the two instance types from Google have predictable and consistent performance rather than best-effort performance.

    The g1-small instance has one virtual core, 1.7GB of virtual memory, and no persistent disk; it costs 5.4 cents per hour. The f1-micro instance has a single virtual core, 600MB of virtual memory, and no persistent disk and costs 1.9 cents per hour.

    “Our persistent disks are limited to 1TB, and even though it matches what other providers have, it wasn’t really enough,”

    Google has also decided to offer 10TB of persistent storage as an option for Compute Engine instances.

    There is also an Advanced Routing feature that creates a virtual private network linking your on-premises infrastructure to the cloud over a secure tunnel.

    He also bragged that with everyone yammering about OpenFlow and software-defined networks, Google has operated an OpenFlow backbone with several terabits per second of aggregate bandwidth that connects its data centers in North America, Europe, and Asia for the past two years.

  8. Tomi Engdahl says:

    Everything you need to know about physical security and cybersecurity in the Google data center

    The following video from Google covers most major aspects of physical security and cybersecurity measures for the Internet search monolith’s data center operations. Topics including IP-based key access and video monitoring are covered, as are fire-suppression and data backup technologies.

    Google data center security YouTube
    http://www.youtube.com/watch?feature=player_embedded&v=wNyFhZTSnPg

  9. Tomi Engdahl says:

    Developers Can Now Ship Hard Drives To Google To Import Large Amounts Of Data To Cloud Storage
    http://techcrunch.com/2013/06/18/developers-can-now-ship-hard-drives-to-google-to-import-large-amounts-of-data-to-cloud-storage/

    Google just added a new service to Google Cloud Storage that will allow developers to send their hard drives to Google to import very large data sets that would otherwise be too expensive and time-consuming to import. For a flat fee of $80 per hard drive, Google will take the drive and upload the data into a Cloud Storage bucket. This, Google says, can be “faster or less expensive than transferring data over the Internet.” The service is now in limited preview for users with a U.S.-based return address.

    Platforms like AWS and Google’s Cloud Platform are obviously great for analyzing large data sets.

    “transferring large data sets (in the hundreds of terabytes and beyond) can be expensive and time-consuming over the public network.” Uploading 5 terabytes of data over a 100Mbps line could easily take a day or two and most developers may not even have these kinds of connections.
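
    A quick sanity check on that estimate (pure arithmetic; real transfers rarely sustain full line rate, so actual times are longer):

    data_bits = 5 * 10**12 * 8   # 5 terabytes in bits
    line_rate = 100 * 10**6      # 100 Mbps in bits per second
    days = data_bits / line_rate / 86400
    print(f"{days:.1f} days")    # ~4.6 days even at 100% utilization

    So “a day or two” is optimistic; even at full line rate the transfer takes about 4.6 days, which makes the $80 drive-import option attractive for multi-terabyte data sets.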

  10. Tomi Engdahl says:

    Google gets AGILE to increase IaaS cloud efficiency
    http://www.theregister.co.uk/2013/06/26/google_agile/

    Google has instrumented its infrastructure to the point where it can predict future demand 68 per cent better than previously, giving other cloud providers a primer for how to get the most out of their IT gear.

    The system was outlined in an academic paper, AGILE: Elastic Distributed Resource Scaling for Infrastructure-as-a-Service, which was released by the giant on Wednesday at the USENIX conference in California.

    AGILE lets Google predict future resource demands for workloads through wavelet analysis, which uses telemetry from across the Google stack to look at resource utilization in an application and then make a prediction about likely future resource use. Google then uses this information to spin up VMs in advance of demand, letting it avoid downtime.

    Though some of our beloved commentards may scoff at this and point out that such auto-workload assigning features have been available on mainframes for decades, Google’s approach involves the use of low-cost commodity hardware at a hitherto unparalleled scale, and wraps in predictive elements made possible by various design choices made by the giant.

    AGILE works via a Slave agent which monitors resource use of different servers running inside local KVM virtual machines, and it feeds this data to the AGILE Master, which predicts future demand via wavelet analysis and automatically adds or subtracts servers from each application.

    The system can make good predictions when looking ahead for one or two minutes, which gives Google time to clone or spin-up new virtual machines to handle workload growth. The AGILE slave imposes less than 1 per cent CPU overhead per server, making it lightweight enough to be deployed widely.
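
    The real system’s wavelet-based predictor is considerably more elaborate, but the general shape is easy to sketch: denoise recent demand with a coarse Haar approximation, extrapolate the trend a couple of minutes ahead, and pre-spin capacity to match. Everything below (the data, the horizon, the 16-cores-per-server figure) is invented for illustration.

    import math

    def haar_coarse(signal):
        """One level of the Haar wavelet transform, keeping the coarse averages."""
        return [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]

    def predict_demand(history, horizon_steps=2):
        """Toy predictor: linear extrapolation of the denoised (coarse) signal."""
        coarse = haar_coarse(history)
        trend = coarse[-1] - coarse[-2]
        return max(0.0, coarse[-1] + trend * horizon_steps)

    # CPU demand samples (cores) for one application, one sample per 30 seconds.
    history = [40, 42, 41, 45, 48, 47, 52, 55]
    needed = predict_demand(history)
    servers = math.ceil(needed / 16)  # assumed 16 cores per server
    print(f"predicted demand ~{needed:.0f} cores -> pre-spin {servers} servers")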

  11. Tomi says:

    Is Google building a hulking floating data center in SF Bay?
    http://news.cnet.com/8301-1023_3-57608585-93/is-google-building-a-hulking-floating-data-center-in-sf-bay/

    It looks like Google has been working on an oversize secret project on San Francisco’s Treasure Island. A water-based data center? Could well be.

    Something big and mysterious is rising from a floating barge at the end of Treasure Island, a former Navy base in the middle of San Francisco Bay. And Google’s fingerprints are all over it.

    It’s unclear what’s inside the structure, which stands about four stories high and was made with a series of modern cargo containers. The same goes for when it will be unveiled, but the big tease has already begun. Locals refer to it as the secret project.

    It seems certain that Google is the entity that is building the massive structure, which stands in plain sight but behind tight security.

    Could the structure be a sea-faring data center? One expert who was shown pictures of the structure thinks so, especially because being on a barge provides easy access to a source of cooling, as well as an inexpensive source of power — the sea. And even more tellingly, Google was granted a patent in 2009 for a floating data center, and putting data centers inside shipping containers is already a well-established practice.

    Whether the structure is in fact a floating data center is hard to say for sure, of course, since Google’s not talking. But Google, understandably, has a history of putting data centers in places with cheap cooling, as well as undertaking odd and unexpected projects like trying to bring Internet access to developing nations via balloons and blimps.

  12. Tomi Engdahl says:

    IT’S ALIVE! IT’S ALIVE! Google’s secretive Omega tech just like LIVING thing
    ‘Biological’ signals ripple through massive cluster management monster
    http://www.theregister.co.uk/2013/11/04/google_living_omega_cloud/

    Exclusive One of Google’s most advanced data center systems behaves more like a living thing than a tightly controlled provisioning system. This has huge implications for how large clusters of IT resources are going to be managed in the future.

    “Emergent” behaviors have been appearing in prototypes of Google’s Omega cluster management and application scheduling technology since its inception, and similar behaviors are regularly glimpsed in its “Borg” predecessor, sources familiar with the matter confirmed to The Register.

    Emergence is a property of large distributed systems. It can lead to unforeseen behavior arising out of sufficiently large groups of basic entities.

    Just as biology emerges from the laws of chemistry; ants give rise to ant colonies; and intersections and traffic lights can bring about cascading traffic jams, so too do the ricocheting complications of vast fields of computers allow data centers to take on a life of their own.

    The kind of emergent traits Google’s Omega system displays means that the placement and prioritization of some workloads is not entirely predictable by Googlers. And that’s a good thing.

    “Systems at a certain complexity start demonstrating emergent behavior, and it can be hard to know what to do with it,” says Google’s cloud chief Peter Magnusson. “When you build these systems you get emergent behavior.”

    Omega handles the management and scheduling of various tasks, placing apps onto the best infrastructure for their needs in the time available.

    “There’s a lot of complexity involved, and one of the things that distinguishes companies like Google is the degree to which these kinds of issues are handled,” said John Wilkes, who is one of the people at Google tasked with building Omega. “Our goal is to provide predictable behaviors to our users in the face of a huge amount of complexity, changing loads, large scale, failures, and so on.”

    First in the firing line

    Omega matters because the problems Google runs into tend to trickle down soon after to Facebook, Twitter, eBay, Amazon, and others, and then into general businesses. Google’s design approaches tend to crop up in subsequent systems, either through direct influence or independent development.

    “Too much production band stuff will just fight with each other. You can get very unstable behavior. It’s very strange – it behaves like biological systems from time to time,” he says. “We’ll probably wind up moving in some of those directions – as you get larger you need to get into it.”

    Though Omega is obscured from end users of Google’s myriad services, the company does have plans to use some of its capabilities to deliver new types of cloud services, Magnusson confirmed. The company could use the system as the foundation of spot markets for virtual machines in its Compute Engine cloud, he said.

    “Spot markets for VMs is a flavor of trying to adopt that,” he said.
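
    Neither Borg nor Omega is public, but the core job of such a system, placing tasks onto machines that fit them, can be sketched in a few lines. The toy greedy scheduler below is purely illustrative; Omega’s shared-state, optimistically concurrent design is far more involved, which is exactly where the emergent behavior comes from.

    # Toy cluster scheduler: place each task on the machine with the most free
    # CPU that still satisfies the request (a simple worst-fit heuristic).
    machines = {"m1": 16.0, "m2": 16.0, "m3": 16.0}  # free cores per machine

    tasks = [("websearch", 6.0), ("gmail", 4.0), ("maps", 9.0),
             ("batch-index", 12.0), ("logs", 3.0)]

    placements = {}
    for name, cores in tasks:
        candidates = [m for m, free in machines.items() if free >= cores]
        if not candidates:
            print(f"{name}: pending (no machine has {cores} free cores)")
            continue
        chosen = max(candidates, key=machines.get)
        machines[chosen] -= cores
        placements[name] = chosen

    print(placements)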

  13. Tomi Engdahl says:

    Google’s Compute Engine Hits General Availability, Drops Instance Prices 10%, Adds 16-Core Instances & Docker Support
    http://techcrunch.com/2013/12/02/googles-compute-engine-hits-general-availability-drops-instance-prices-10-adds-new-16-core-instance-types-and-docker-support/

    Google today announced the general availability of the Google Compute Engine, the cloud computing platform it launched in the summer of 2012. As part of the GA launch, Google also announced expanded support for new operating systems, a 10 percent drop in pricing for standard instances, new 16-core instances for applications that need a lot of computation power and a new logo to update its branding.

    Compute Engine is the cloud platform Google has developed on top of the vast infrastructure it manages to run its own search engine and its other properties. The company offers 24/7 support and promises 99.95 percent uptime in its SLA.

    Until now, Compute Engine supported Debian and CentOS, customized with a Google-built kernel. Starting today, developers will also be able to use any out-of-the-box Linux distribution, including SELinux and CoreOS, the Y Combinator alum with the OS designed to mimic Google’s cloud infrastructure. The company is also announcing official support for SUSE, FreeBSD and Red Hat Enterprise Linux

    As part of this update, Google is also announcing support for Docker, the increasingly popular tool for creating virtual containers from any application. With Docker, developers can build and test an application on their laptops and then move this container to a production server for deployment. The company submitted Docker as an open-source project last month.

  14. Tomi Engdahl says:

    Google makes its Compute Engine official
    Googlified VMs now run SUSE or RHEL on up to 16 cores with 99.95% SLA
    http://www.theregister.co.uk/2013/12/03/google_compute_engine_launches/

    Google has cut the ribbon on its Compute Engine, bringing it into the world of virtual machines by the hour and into combat with the likes of Amazon Web Services, Rackspace and VMware.

    Google’s been talking up the Compute Engine for a while now, recently teasing on the topic of automatic failover for virtual machines. But everything still had The Chocolate Factory’s BETA stamp on it.

    Google says it has 1,000 people working on the cloud service and that their days are spent toiling on Google’s own infrastructure as well as the bits used to run Compute Engine.

    Uptime of 99.95 per cent is offered under a new service level agreement.
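
    For scale, this is what a 99.95 per cent uptime promise allows in downtime (simple arithmetic, assuming a 30-day month):

    for period_days, label in ((30, "month"), (365, "year")):
        allowed = (1 - 0.9995) * period_days * 24 * 60
        print(f"99.95% uptime allows ~{allowed:.0f} minutes of downtime per {label}")

    That works out to roughly 22 minutes a month, or about 4.4 hours a year.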

  15. Tomi Engdahl says:

    Google’s Data Center Empire Expands to Asia
    http://www.wired.com/wiredenterprise/2013/12/google-asia/

    The Asian arm of Google’s data center empire is up and running.

    On Wednesday, Asia time, the web giant announced that its new data centers in Singapore and Changhua County, Taiwan, are now live.

    The company first announced plans to build in Asia two years ago. Previously, the company had leased data center space in Asia, but these new facilities are custom data centers designed and operated by Google itself — part of the company’s mission to streamline the operation of its massively popular web services.

    “While we’ve been busy building, the growth in Asia’s Internet has been amazing,” reads the company’s blog post. “The number of Internet users in India doubled, from 100 million to 200 million. It took six years to achieve that milestone in the U.S. Between July and September of this year alone, more than 60 million people in Asia landed on the mobile Internet for the first time.”

  16. Tomi Engdahl says:

    Google Said to Mull Designing Chips in Threat to Intel
    http://www.bloomberg.com/news/2013-12-12/google-said-to-mull-designing-chips-in-threat-to-intel.html

    Google Inc. (GOOG) is considering designing its own server processors using technology from ARM Holdings Plc (ARM), said a person with knowledge of the matter, a move that could threaten Intel Corp. (INTC)’s market dominance.

    By using its own designs, Google could better manage the interactions between hardware and software, said the person, who asked not to be identified because the matter is private. Google, among the largest buyers of server processors, has made no decision and plans could change, said the person.

    “We are actively engaged in designing the world’s best infrastructure,” said Liz Markman, a spokeswoman for Google. “This includes both hardware design (at all levels) and software design.” Markman declined to say whether the company may develop its own chips.

    Google has been designing its own data centers around the world with servers to power search, video, online communications and other features. Moving into chip design could take away revenue from Intel, which has counted on Internet companies to help drive processor sales.

    Job openings at Google include one for a “digital design engineer” with qualifications in ASICs, or application-specific integrated circuits, a commonly used type of chip.

  17. Tomi Engdahl says:

    Google Announces Massive Price Drops For Its Cloud Computing Services And Storage, Introduces Sustained-Use Discounts
    http://techcrunch.com/2014/03/25/google-drops-prices-for-compute-and-app-engine-by-over-30-cloud-storage-by-68-introduces-sustained-use-discounts/

    At its Cloud Platform event in San Francisco today, Google announced a number of massive price drops for virtually all of its cloud-computing services. The company has also decided to reduce the complexity of its pricing charts by eliminating some charges and consolidating others.

    Google Compute Engine is seeing a 32 percent reduction in prices across all regions, sizes and classes. App Engine prices are down 30 percent

  18. Tomi Engdahl says:

    Google gets green light for second Dublin data centre
    New council-approved facility in west Dublin to create 300 construction jobs
    http://www.irishtimes.com/business/sectors/technology/google-gets-green-light-for-second-dublin-data-centre-1.1776429

    Tech giant Google is set to expand its presence in Ireland, following the approval of its proposed €150 million new data centre in west Dublin by Dublin City Council.

    As reported in February, the internet search engine provider will construct a new 30,361 sqm, two storey data storage facility and additional outbuildings, beside its existing facility in Clondalkin, Dublin 22. The development will create up to 300 construction jobs, as well as 60 full-time jobs once it is operational.

  19. Tomi Engdahl says:

    Google brings futuristic Linux software CoreOS onto its cloud
    A container-based operating system on a virtualized cloud on a container-based distributed system = M.C. Escher’s cloud
    http://www.theregister.co.uk/2014/05/23/google_loads_coreos_onto_its_cloud/

    Fans of new Linux operating system “CoreOS” can now run the lightweight tech on Google’s main cloud service.

    This means developers who want a Linux OS that takes up just 168MB of RAM, runs all of its applications within containers, and is designed for marshaling mammoth clusters of computers, can now do so on top of Google’s cloud.

    “In the next few days, CoreOS will become available as a default image type in the GCE control panel, making running your first CoreOS cluster on GCE an easy, browser-based experience,” wrote CoreOS’s chief technology officer Brandon Philips in a blog post. “CoreOS is an ideal host for distributed systems and Google Compute Engine is a perfect base for CoreOS clusters.”

  20. Tomi Engdahl says:

    Google: ‘EVERYTHING at Google runs in a container’
    Ad giant lifts curtain on how it uses virtualization’s successor TWO BILLION TIMES A WEEK
    http://www.theregister.co.uk/2014/05/23/google_containerization_two_billion/

    Google is now running “everything” in its mammoth cloud on top of a potential open source successor to virtualization, paving the way for other companies to do the same.

    Should VMware be worried? Probably not, but the tech pioneered by Google is making inroads into a certain class of technically sophisticated companies.

    That tech is called Linux Containerization, and is the latest in a long line of innovations meant to make it easier to package up applications and sling them around data centers. It’s not a new approach – see Solaris Zones, BSD Jails, Parallels, and so on – but Google has managed to popularize it enough that a small cottage industry is forming around it.

    Google’s involvement in the tech is significant because of the mind-boggling scale at which the search and ad giant operates, which in turn benefits the tech by stress-testing it.

    “Everything at Google runs in a container,” Joe Beda, a senior staff software engineer at Google, explained in some slides shown at the Gluecon conference this week. “We start over two billion containers per week.”

    The company is able to do this because of how the tech works: Linux containerization is a way of sharing parts of a single operating system among multiple isolated applications, as opposed to virtualization, which supports multiple apps each with their own OS on top of a single hypervisor.

    This means that where it can take minutes to spin up a virtual machine, it can take seconds to start a container, because you don’t have to fire up the OS as well.

    Google began its journey into containerization in the mid-2000s, when some of its engineers contributed a technology named cgroups to the Linux kernel.

    The cgroups kernel feature became a crucial component of LXC (LinuX Containers), which combines cgroups with kernel namespaces to make it easy to create containers and wire them up to the rest of the infrastructure.
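
    To give a flavor of what cgroups provide, here is a minimal sketch that caps a process group at half a CPU by writing to the cgroup filesystem. It assumes a Linux host with the cgroup v1 CPU controller mounted at the conventional path, plus root privileges; the group name and values are illustrative.

    import os

    CG = "/sys/fs/cgroup/cpu/demo"  # cgroup v1 CPU controller (assumed mount)
    os.makedirs(CG, exist_ok=True)

    # Allow this group 50% of one CPU: 50 ms of CPU time per 100 ms period.
    with open(os.path.join(CG, "cpu.cfs_period_us"), "w") as f:
        f.write("100000")
    with open(os.path.join(CG, "cpu.cfs_quota_us"), "w") as f:
        f.write("50000")

    # Move the current process into the group; its children inherit the limit.
    with open(os.path.join(CG, "tasks"), "w") as f:
        f.write(str(os.getpid()))

    Container runtimes such as LXC automate exactly this kind of bookkeeping, plus namespace isolation, on the application’s behalf.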

    Even so, containers remained awkward for ordinary developers to use. That is, until startup Docker came along.

    “The technology was not accessible/useful for developers, containers were not portable between different environments, and there was no ecosystem or set of standard containers,”

    “Docker’s initial innovation was to address these issues,” he writes, “by providing standard APIs that made containers easy to use, creating a way for the community to collaborate around libraries of containers, working to make the same container portable across all environments, and encouraging an ecosystem of tools.”

    Docker’s approach has been remarkably successful and has led to partnerships with companies like Red Hat, the arrival of Docker containers on Amazon’s cloud, and integration with open source data analysis project Hadoop.

    Google’s take on containerization is slightly different, as it places more emphasis on performance and less on ease of use. To try to help developers understand the difference, Google has developed a variant of LXC named, charmingly, lmctfy, short for Let Me Contain That For You.

  21. Tomi Engdahl says:

    Containers At Scale
    by Joe Beda
    https://speakerdeck.com/jbeda/containers-at-scale

    How Google uses containers internally and how you can apply those lessons on the Google Cloud Platform and beyond

  22. Tomi Engdahl says:

    Machine learning optimizes Google data centers’ PUE
    http://www.cablinginstall.com/articles/2014/05/machine-learning-google-datacenters-pue.html

    Did you know that Google has been calculating its data centers’ PUE every five minutes for over five years, logging 19 different variables such as cooling tower speed, processing water temperature, pump speed, outside air temperature, and humidity?

    “neural networks are the latest way Google is slashing energy consumption from its data centers”
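
    Google’s model itself is not public beyond a white paper, but the shape of the approach, regressing PUE on operating variables with a small neural network, is easy to sketch with plain NumPy. The synthetic features below stand in for Google’s 19 real inputs; the network size, learning rate, and fake PUE formula are all arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in features: [cooling tower speed, water temp, pump speed, outside temp]
    X = rng.uniform(0, 1, size=(500, 4))
    y = (1.1 + 0.2 * X[:, 1] + 0.1 * X[:, 3] - 0.05 * X[:, 0])[:, None]  # fake PUE

    W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)  # one hidden layer, 8 units
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

    lr = 0.1
    for _ in range(2000):  # plain batch gradient descent on squared error
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y
        dW2 = h.T @ err / len(X); db2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    print("mean absolute PUE error:", float(np.abs(pred - y).mean()))

    A model like this, trained on real telemetry, lets operators ask what happens to PUE if a setpoint changes, before touching the actual plant.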

  23. Tomi Engdahl says:

    Google Cloud Platform Gets SSD Persistent Disks And HTTP Load Balancing
    http://techcrunch.com/2014/06/16/google-cloud-platform-gets-ssd-persistent-disks-and-http-load-balancing/

    Google’s I/O developer conference may just be a few days away, but that hasn’t stopped the company from launching a couple of new features for its Cloud Platform ahead of the event. As Google announced today, the Cloud Platform is getting two features that have long been on developers’ wish lists: HTTP load balancing and SSD-based persistent storage. Both of these features are now in limited preview

    Developers whose applications need the high number of input/output operations per second that SSDs make possible can now get this feature for a flat fee of $0.325 per gigabyte per month. It’s worth noting that this is significantly more expensive than the $0.04 Google charges for regular persistent storage. Unlike Amazon Web Services, Google does not charge any extra fees for the actual input/output requests.

    Amazon also offers SSD-based EC2 instances, but those do not feature any persistent storage.

    While the standard persistent disks offer about 0.3 read and 1.5 write IOPS per GB, the SSD-based service delivers up to 30 read and write IOPS per GB.
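
    Spelled out for a concrete volume size, using the prices and ratios quoted above:

    size_gb = 500
    disks = {"standard": (0.04, 0.3, 1.5), "ssd": (0.325, 30, 30)}

    for name, (price, read_per_gb, write_per_gb) in disks.items():
        print(f"{name}: ${price * size_gb:.2f}/month, "
              f"{read_per_gb * size_gb:.0f} read IOPS, "
              f"{write_per_gb * size_gb:.0f} write IOPS")

    A 500GB SSD volume costs about eight times more per month but delivers two orders of magnitude more read IOPS.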

    As for the HTTP load balancing feature, Google says it can scale up to more than 1 million requests per second without any warm-up time. It supports content-based routing, and Google especially notes that users can load balance across different regions.

    As of now, however, HTTP load balancing does not support the SSL protocol. Developers who want to use this feature will have to use Google’s existing protocol-based network load balancing system.
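
    Content-based routing here simply means picking a backend pool by inspecting the request, for example its URL path. A toy sketch (the rules and backend names are invented):

    import itertools

    # Each path prefix maps to a pool of backends, cycled round-robin.
    pools = {
        "/video":  itertools.cycle(["video-1", "video-2"]),
        "/static": itertools.cycle(["cdn-1"]),
        "/":       itertools.cycle(["web-1", "web-2", "web-3"]),
    }

    def route(path):
        # Longest matching prefix wins; "/" is the catch-all.
        prefix = max((p for p in pools if path.startswith(p)), key=len)
        return next(pools[prefix])

    for path in ("/video/cat.mp4", "/index.html", "/video/dog.mp4"):
        print(path, "->", route(path))

    A real load balancer adds health checking, weighting, and cross-region awareness on top of this basic dispatch.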

  24. Tomi Engdahl says:

    Google’s Innovative Data Center Cooling Techniques
    http://www.strategicdatacenter.com/52/google%E2%80%99s-innovative-data-center-cooling-techniques

    In years gone by, data centers kept temperatures at 68 to 70 degrees Fahrenheit and within narrow humidity ranges. According to a recent article in CIO.com, the American Society of Heating, Refrigeration and Air Conditioning Engineers (ASHRAE), the organization that issues de facto standards for data center climate, has moved the top of the recommended temperature range to 80.6°F and increased the peak humidity threshold as well.

    This shift to higher temperatures and humidity levels has pushed manufacturers to expand the operating climate range of their equipment. It’s also made data center managers think about employing innovative cooling methods, such as using outside air, water and evaporative techniques.

    Google drives down the cost and environmental impact of running data centers by designing and building its own facilities. The company employs “free-cooling” techniques like using outside air or reused water for cooling. Google claims its data centers use 50 percent less energy than the typical data center and are among the most efficient in the world.

  25. Tomi Engdahl says:

    Google Open Sources Its Secret Weapon in Cloud Computing
    http://www.wired.com/2014/06/google-kubernetes/

    When Google engineers John Sirois, Travis Crawford, and Bill Farner left the internet giant and went to work for Twitter, they missed Borg.

    Borg was the sweeping software system that managed the thousands of computer servers underpinning Google’s online empire. With Borg, Google engineers could instantly grab enormous amounts of computing power from across the company’s data centers and apply it to whatever they were building–whether it was Google Search or Gmail or Google Maps. As Sirois, Crawford, and Farner created new web services at Twitter, they longed for the convenience of this massive computing engine.

    Unfortunately, Borg was one of those creations Google was loath to share with the outside world–a technological trade secret it saw as an important competitive advantage. In the end, urged by that trio of engineers, Twitter went so far as to build its own version of the tool. But now, the next wave of internet companies has another way of expanding their operations to Google-like sizes. This morning, Google open sourced a software tool that works much like Borg, freely sharing this new creation with the world at large.


    Google is releasing Kubernetes as a way of encouraging people to use its cloud computing services, known as Google Compute Engine and Google App Engine.

    But the new tool isn’t limited to the Google universe. It also lets you oversee machines running on competing cloud services–from Amazon, say, or Rackspace–as well as inside private data centers. Yes, today’s cloud services already give you quick access to large numbers of virtual machines, but with Kubernetes, Google aims to help companies pool processing power more effectively from a wide variety of places. “It’s a way of stitching together a collection of machines into, basically, a big computer,” says Craig McLuckie, a product manager for Google’s cloud services.

  26. Tomi Engdahl says:

    Inside a Google data center
    http://www.cablinginstall.com/articles/2015/01/inside-google-datacenter.html

    Below, Joe Kava, VP of Google’s data center operations, gives a tour inside a Google data center, and shares details about the security, sustainability and the core architecture of Google’s infrastructure. “A data center is the brains of the Internet, the engine of the Internet,” he says.

    Inside a Google data center VIDEO
    https://www.youtube.com/watch?v=XZmGGAbHqa0

  27. Tomi Engdahl says:

    Revealed: The Secret Gear Connecting Google’s Online Empire
    http://www.wired.com/2015/06/google-reveals-secret-gear-connects-online-empire/

    Three-and-a-half years ago, a strange computing device appeared at an office building in the tiny farmland town of Shelby, Iowa.

    It was wide and thin and flat, kind of like a pizza box. On one side, there were long rows of holes where you could plug in dozens of cables. On the other, a label read “Pluto Switch.” But no one was quite sure what it was. The cable connectors looked a little strange. The writing on the back was in Finnish.

    It was a networking switch, a way of moving digital data across the massive computing centers that underpin the Internet. And it belonged to Google.

    Google runs a data center not far from Shelby, and apparently, someone had sent the switch to the wrong place. After putting two and two together, those IT guys shipped it back to Google and promptly vanished from the ‘net. But the information they posted to that online discussion forum, including several photos of the switch, opened a small window into an operation with implications for the Internet as a whole—an operation Google had never discussed in public. For several years, rather than buying traditional equipment from the likes of Cisco, Ericsson, Dell, and HP, Google had designed specialized networking gear for the engine room of its rapidly expanding online empire. Photos of the mysterious Pluto Switch provided a glimpse of the company’s handiwork.

    Seeing such technology as a competitive advantage, Google continued to keep its wider operation under wraps. But it did reveal how it handled the networking links between its data centers, and now, as part of a larger effort to share its fundamental technologies with the world at large, it’s lifting the curtain inside its data centers as well.

    According to Vahdat, Google started designing its own gear in 2004, under the aegis of a project called Firehose, and by 2005 or 2006, it had deployed a version of this hardware in at least a handful of data centers. The company not only designed “top-of-rack switches” along the lines of the Pluto Switch that turned up in Iowa. It created massive “cluster switches” that tied the wider network together. It built specialized “controller” software for running all this hardware. It even built its own routing protocol, dubbed Firepath, for efficiently moving data across the network. “We couldn’t buy the hardware we needed to build a network of the size and speed we needed to build,” Vahdat says. “It just didn’t exist.”

    The aim, Vahdat says, was twofold. A decade ago, the company’s network had grown so large, spanning so many machines, it needed a more efficient way of shuttling data between them all. Traditional gear wasn’t up to the task. But it also needed a way of cutting costs. Traditional gear was too expensive. So, rather than construct massively complex switches from scratch, it strung together enormous numbers of cheap commodity chips.

    Google’s online empire is unusual. It is likely the largest on earth. But as the rest of the Internet expands, others are facing similar problems. Facebook has designed a similar breed of networking hardware and software. And many other online operations are moving in the same direction, including Amazon and Microsoft. AT&T, one of the world’s largest Internet providers, is now rebuilding its network in similar ways. “We’re not talking about it,” says Scott Mair, senior vice president of technology planning and engineering at AT&T. “We’re doing it.”

    Unlike Google and Facebook, the average online company isn’t likely to build its own hardware and software. But many startups are now offering commercial technology that mimics The Google Way.

    Basically, they’re fashioning software that lets companies build complex networks atop cheap “bare metal” switches, moving the complexity out of the hardware and into the software. People call this software-defined networking, or SDN, and it provides a more nimble way of building, expanding, and reshaping computer networks.

    “It gives you agility, and it gives you scale,” says Mark Russinovich, who has helped build similar software at Microsoft. “If you don’t have this, you’re down to programming individual devices—rather than letting a smart controller do it for you.”
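
    The “smart controller” model can be sketched as a controller that computes match-action flow rules and pushes them down to dumb switches, instead of operators configuring each device by hand. This is an OpenFlow-flavored toy model, not any real controller API:

    # Toy SDN: the controller owns the logic and pushes flow rules;
    # switches just look packets up in their local tables.
    switches = {"s1": [], "s2": []}

    def install_rule(switch, match, action):
        switches[switch].append((match, action))  # simulated southbound push

    def handle_packet(switch, packet):
        for match, action in switches[switch]:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"               # table miss

    # Controller policy: web traffic to port 1, everything else dropped.
    for sw in switches:
        install_rule(sw, {"dst_port": 80}, "forward:1")
        install_rule(sw, {}, "drop")              # catch-all, matched last

    print(handle_packet("s1", {"dst_port": 80}))  # -> forward:1
    print(handle_packet("s2", {"dst_port": 22}))  # -> drop

    Changing network behavior then means updating one program on the controller rather than logging into every switch.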

    It’s a movement that’s overturning the business models of traditional network vendors such as Cisco, Dell, and HP. Vahdat says that Google now designs 100 percent of the networking hardware used inside its data centers, using contract manufacturers in Asia and other locations to build the actual equipment. That means it’s not buying from Cisco, traditionally the world’s largest networking vendor. But for the Ciscos of the world, the bigger threat is that so many others are moving down the same road as Google.

  28. Tomi Engdahl says:

    Google Reveals Data Center Net
    Openflow rising at search giant and beyond
    http://www.eetimes.com/document.asp?doc_id=1326901&

    Software-defined networks based on the OpenFlow standard are beginning to gain traction, with new chip, system and software products on display at the Open Networking Summit here. At the event, Google revealed its data center networks are already using OpenFlow, and AT&T said (in its own way) it will follow suit.

    Showing support for the emerging approach, systems giants from Brocade to ZTE participated in SDN demos for carriers, data centers and enterprises running on the show floor.

    SDN aims to cut through a rat’s nest of existing protocols implemented in proprietary ASICs and application programming interfaces. If successful, it will let users more easily configure and manage network tasks using high-level programs run on x86 servers.

    The rapid rise of mobile and cloud traffic is driving the need for SDN. For example, Google has seen traffic in its data centers rise 50-fold in the last six years, said Amin Vahdat, technical lead for networking at Google.

    In a keynote here, Vahdat described Jupiter, Google’s data center network built internally to deal with the data flood. It uses 16x40G switch chips to create a 1.3 Petabit/second data center Clos network, and is the latest of five generations of SDN networks at the search giant.

    “We are opening this up so engineers can take advantage of our work,” Vahdat said, declining to name any specific companies adopting its Jupiter architecture.
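
    The scale of a Clos fabric follows from simple combinatorics: a three-stage folded Clos (fat-tree) built from identical k-port switches supports k³/4 hosts at full bisection bandwidth. A quick check with 40G ports (a sketch that ignores how Jupiter aggregates its 16x40G chips into larger logical switches):

    def fat_tree_hosts(k):
        """Hosts supported by a 3-stage folded Clos of k-port switches."""
        return k ** 3 // 4

    for k in (16, 32, 64):
        hosts = fat_tree_hosts(k)
        print(f"k={k}: {hosts} hosts, {hosts * 40 / 1000:.0f} Tb/s to hosts")

    A radix of 16 yields only about 41 Tb/s, which is why reaching 1.3 Pb/s means building much larger effective switches out of many 16x40G chips.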

  29. Tomi Engdahl says:

    Google loses data as lightning strikes
    http://www.bbc.com/news/technology-33989384

    Google says data has been wiped from discs at one of its data centres in Belgium – after it was struck by lightning four times.

    Some people have permanently lost access to their files as a result.

    A number of disks damaged following the lightning strikes did, however, later become accessible.

    Generally, data centres require more lightning protection than most other buildings.

    While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.

    Justin Gale, project manager for the lightning protection service Orion, said lightning could strike power or telecommunications cables connected to a building at a distance and still cause disruptions.

    “The cabling alone can be struck anything up to a kilometre away, bring [the shock] back to the data centre and fuse everything that’s in it,” he said.

    In an online statement, Google said that data on just 0.000001% of disk space was permanently affected.

    “Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain,” it said.

    The company added it would continue to upgrade hardware and improve its response procedures to make future losses less likely.

    Google Compute Engine Incident #15056
    https://status.cloud.google.com/incident/compute/15056#5719570367119360

    Google Compute Engine Persistent Disk issue in europe-west1-b

    From Thursday 13 August 2015 to Monday 17 August 2015, errors occurred on a small proportion of Google Compute Engine persistent disks in the europe-west1-b zone. The affected disks sporadically returned I/O errors to their attached GCE instances, and also typically returned errors for management operations such as snapshot creation. In a very small fraction of cases (less than 0.000001% of PD space in europe-west1-b), there was permanent data loss.

    ROOT CAUSE:

    At 09:19 PDT on Thursday 13 August 2015, four successive lightning strikes on the local utilities grid that powers our European datacenter caused a brief loss of power to storage systems which host disk capacity for GCE instances in the europe-west1-b zone. Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain. In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.

    This outage is wholly Google’s responsibility.

  30. Tomi Engdahl says:

    Cade Metz / Wired:
    Software needed to run every Google internet service spans 2B lines of source code, all in a single repository available to all 25K engineers — Google Is 2 Billion Lines of Code—And It’s All in One Place — How big is Google? We can answer that question in terms of revenue or stock price …

    Google Is 2 Billion Lines of Code—And It’s All in One Place
    http://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/

    How big is Google? We can answer that question in terms of revenue or stock price or customers or, well, metaphysical influence. But that’s not all. Google is, among other things, a vast empire of computer software. We can answer in terms of code.

    Google’s Rachel Potvin came pretty close to an answer Monday at an engineering conference in Silicon Valley. She estimates that the software needed to run all of Google’s Internet services—from Google Search to Gmail to Google Maps—spans some 2 billion lines of code. By comparison, Microsoft’s Windows operating system—one of the most complex software tools ever built for a single computer, a project under development since the 1980s—is likely in the realm of 50 million lines.

    So, building Google is roughly the equivalent of building the Windows operating system 40 times over.

    The comparison is more apt than you might think. Much like the code that underpins Windows, the 2 billion lines that drive Google are one thing. They drive Google Search, Google Maps, Google Docs, Google+, Google Calendar, Gmail, YouTube, and every other Google Internet service, and yet, all 2 billion lines sit in a single code repository available to all 25,000 Google engineers. Within the company, Google treats its code like an enormous operating system. “Though I can’t prove it,” Potvin says, “I would guess this is the largest single repository in use anywhere in the world.”

    Google is an extreme case. But its example shows how complex our software has grown in the Internet age—and how we’ve changed our coding tools and philosophies to accommodate this added complexity. Google’s enormous repository is available only to coders inside Google. But in a way, it’s analogous to GitHub, the public open source repository where engineers can share enormous amounts of code with the Internet at large.

    The two internet giants are working on an open source version control system that anyone can use to juggle code on a massive scale. It’s based on an existing system called Mercurial. “We’re attempting to see if we can scale Mercurial to the size of the Google repository,” Potvin says, indicating that Google is working hand-in-hand with programming guru Bryan O’Sullivan and others who help oversee coding work at Facebook.

    That may seem extreme. After all, few companies juggle as much code as Google or Facebook do today. But in the near future, they will.

  31. Tomi Engdahl says:

    Google Joins Facebook’s Open Compute Project
    http://hardware.slashdot.org/story/16/03/10/0038246/google-joins-facebooks-open-compute-project

    Google has elected to open up some of its data center designs, which it has — until now — kept to itself. Google has joined the Open Compute Project, which was set up by Facebook to share low-cost, no-frills data center hardware specifications. Google will donate a specification for a rack that it designed for its own data centers. Google’s first contribution will be “a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers.”

    Google joins Facebook’s Open Compute Project, will donate rack design
    Google pulls back the curtain from some of its data center equipment.
    http://arstechnica.com/information-technology/2016/03/google-joins-facebooks-open-compute-project-will-donate-rack-design/

    Google today said it has joined the Open Compute Project (OCP), and the company will donate a specification for a rack that it designed for its own data centers.

    Google’s first contribution will be “a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers,” the company said. Google will also be participating in this week’s Open Compute Summit.

    “In 2009, we started evaluating alternatives to our 12V power designs that could drive better system efficiency and performance as our fleet demanded more power to support new high-performance computing products, such as high-power CPUs and GPUs,” Google wrote. “We kicked off the development of 48V rack power distribution in 2010, as we found it was at least 30 percent more energy-efficient and more cost-effective in supporting these higher-performance systems.”

    OCP Summit: Google joins and shares 48V tech
    http://www.datacenterdynamics.com/power-cooling/ocp-summit-google-joins-and-shares-48v-tech/95835.article

    Google has joined the Open Compute Project, and is contributing 48V DC power distribution technology to the group, which Facebook created to share efficient data center hardware designs.

    Urs Hölzle, Google’s senior vice president of technology, made the surprise announcement at the end of a lengthy keynote session on the first day of the Open Compute event. The 48V direct current “shallow” data center rack has long been a part of Google’s mostly secret data center architecture, but the giant now wants to share it.

    Hölzle said Google’s 48V rack specifications had increased its energy efficiency by 30 percent, through eliminating the multiple transformers usually deployed in a data center.
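
    The physics behind higher-voltage distribution is simple: for the same delivered power, raising the voltage from 12V to 48V cuts current to a quarter, and resistive loss in the distribution path scales with the square of the current. A rough illustration (the resistance figure is an arbitrary assumption):

    power_w = 10_000    # one rack drawing 10 kW
    resistance = 0.002  # assumed distribution path resistance, in ohms

    for volts in (12, 48):
        amps = power_w / volts
        loss = amps ** 2 * resistance
        print(f"{volts:2d} V: {amps:6.0f} A, I^2R loss ~{loss:,.0f} W")

    That is a 16x reduction in conduction loss; the 30 percent figure Hölzle quotes comes mainly from removing conversion stages, but the same square-law effect is why the industry keeps pushing distribution voltage up.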

    Google is submitting the specification to OCP, and is now working with Facebook on a standard that can be built by vendors, and which Google and Facebook could both adopt, he said.

    “We have several years of experience with this,” said Hölzle, as Google has deployed 48V technology across large data centers.

    As well as using a simplified power distribution, Google’s racks are shallower than the norm, because IT equipment can now be built in shorter units. Shallower racks mean more aisles can fit into a given floorspace.

    Google is joining OCP because there is no need for multiple 48V distribution standards, said Hölzle, explaining that open source is good for “non-core” technologies, where “everyone benefits from a standardized solution”.

