Data center backbone design

The article “Cells vs. packets: What’s best in the cloud computing data center?” from a few years back argues that resource-constrained data centers cannot afford to waste anything in their pursuit of efficiency. One important piece of this is choosing the right communications technology between the different parts of the data center.

In the late 1990s and early 2000s, proprietary switch fabrics were developed by multiple companies to serve the telecom market with features for lossless operation, guaranteed bandwidth, and fine-grained traffic management. During this same time, Ethernet fabrics were relegated to the LAN and enterprise, where latency was not important and quality of service (QoS) meant adding more bandwidth or dropping packets during congestion.

Over the past few years, 10Gb Ethernet switches have emerged with congestion management and QoS features that rival proprietary telecom fabrics. With the emergence of more feature-rich 10GbE switches, InfiniBand no longer has a monopoly on low-latency fabrics. It’s important to find the right 10GbE switch architecture that can function effectively in a 2-tier fat tree.
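
As a rough illustration of what “functioning effectively in a 2-tier fat tree” means in practice, the sketch below sizes a simple two-tier leaf-spine fabric from a few switch parameters. This is a minimal, hypothetical calculation; the port counts and the 3:1 oversubscription target are assumptions for illustration, not figures from the article.

# Minimal sketch: sizing a 2-tier leaf-spine (fat-tree) fabric from port counts.
def size_leaf_spine(leaf_ports=64, uplinks_per_leaf=16, spine_ports=32,
                    oversubscription_target=3.0):
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    spines = uplinks_per_leaf                 # one uplink from each leaf to each spine
    max_leaves = spine_ports                  # each spine gives one port to each leaf
    max_servers = max_leaves * server_ports_per_leaf
    oversubscription = server_ports_per_leaf / uplinks_per_leaf
    return {
        "spines": spines,
        "max_leaves": max_leaves,
        "max_server_ports": max_servers,
        "oversubscription": oversubscription,
        "meets_target": oversubscription <= oversubscription_target,
    }

print(size_leaf_spine())   # 16 spines, 32 leaves, 1536 server ports at 3:1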

137 Comments

  1. Tomi Engdahl says:

    Tray-mounted cabinet saves data center rack space
    http://www.cablinginstall.com/articles/2013/12/tde-tml-duo.html

    trans data elektronik GmbH (tde) recently introduced tML-Duo, a 19-inch cabinet that mounts directly underneath a mesh cable tray. “This space-saving and multifunctional mounting can be easily attached to the mesh tray above the server rack without tools,”

    The company said tML-Duo’s introduction follows the trend of relocating network cabling from a raised floor to a cable tray system. “This way, it is easier to reach the cabling, to rearrange it if necessary, and it is more energy-efficient because the air in the raised floor can circulate better,” tde noted.

    “Time-saving recabling, easy access to the network cabling system as well as energy efficiency are requirements which modern data centers have to meet.”

  2. Tomi Engdahl says:

    Infonetics: NSA spygate underscores need for multi-layered data center security
    http://www.cablinginstall.com/articles/2013/12/infonetics-nsa-datacenter-security.html

    Infonetics Research has released its latest Data Center Security Products report, which tracks data center security appliances and virtual security appliances.

    “The most recent revelation that the NSA has been secretly siphoning data from Google and Yahoo! data centers worldwide has put a laser focus on the need for security at all levels of the data center, from layer 1 transport all the way up to individual applications and data,” asserts Jeff Wilson, principal analyst for security at Infonetics Research. “The world’s never been more tuned into privacy and security.”

    Significantly, while software-defined networks (SDN) and network functions virtualization (NFV) are forcing networking vendors to offer new form factors or to re-architect solutions, the report also states that working with OpenFlow and other SDN technologies is an evolutionary change for security vendors, who have been adapting products for virtualized environments for over 5 years.

    The report states that global revenue for the ported virtual security appliances segment of the larger data center security appliance market grew 4% between the first and second quarters of 2013, to $107 million. Additionally, the market for purpose-built virtual security appliances is forecasted to grow at a strong compound annual growth rate (CAGR) of 25% from 2012 to 2017. The virtual appliance vendor landscape is crowded with a mix of established security players, virtualization platform vendors, and specialist vendors, adds the analysis.

  3. Tomi Engdahl says:

    DCIM integrates IT, facilities management
    http://www.cablinginstall.com/articles/2013/12/manageengine-dcim.html

    As business units put growing pressure on IT to deliver non-stop services, the data center plays an increasingly important role in the enterprise, contends ManageEngine. Yet many data center admins struggle to manage their entire data center infrastructures, including IT and non-IT resources. Manual processes, multiple tools and consoles, inefficient power management and other challenges interfere with optimal management of the data center and underlying components. For these companies, an automated DCIM solution that supports IT and facilities management from a single console promises to eliminate many of the challenges to data center optimization.

    “Energy management rivals IT management as one of the major challenges in running an efficient data center,” asserts Sridhar Iyengar, vice president, product management at ManageEngine. “Typically, the IT management and facilities infrastructure management are separate worlds, which prevent data center operations teams from readily correlating facilities and IT events. Now, IT360 unifies the facilities and IT management in a unified DCIM console, so data center admins can easily see facilities and IT relationships and manage the complete data center proactively.”

  4. Tomi Engdahl says:

    Study: Downtime for U.S. data centers costs $7900 per minute
    http://www.cablinginstall.com/articles/2013/12/ponemon-downtime-study.html

    A study recently conducted by Ponemon Institute and sponsored by Emerson Network Power (ENP) shows that on average, an unplanned data center outage costs more than $7,900 per minute. That number is a 41-percent increase over the $5,600-per-minute quantification put on downtime from Ponemon’s similar 2010 study. “Data center downtime proves to remain a costly line item for organizations,” ENP said when announcing the study’s results.

  5. Tomi Engdahl says:

    Re-defining data center economics
    http://www.cablinginstall.com/articles/2013/12/oracle-data-center-economics.html

    “Many data center teams are turning to cloud-style deployments to rapidly deploy new services and consolidate existing infrastructure for the best return on investment.”

    “Implementing successful cloud infrastructure requires more than state-of-the art data center technologies such as fast wide-area networks, powerful servers, immense storage capacities, and pervasive high-performance virtualization — it requires an end-to-end technology vision,” states the paper’s introduction.

  6. Tomi Engdahl says:

    Schneider Electric acquires prefab data center provider AST Modular
    http://www.cablinginstall.com/articles/2014/01/schneider-ast-modular.html

    Schneider Electric recently announced the acquisition of AST Modular, a Barcelona, Spain-based provider of prefabricated data center modules, services, manufacturing expertise and project-engineering support.

    Schneider said with this acquisition, it adds capabilities to its position in the prefabricated data center solutions market. “With projects deployed in over 30 countries, AST Modular has successfully executed over 450 data center projects worldwide,”

  7. Tomi Engdahl says:

    Building Russia’s cloud computing platform
    http://www.cablinginstall.com/articles/2013/12/russia-cloud-uptime-video.html

    Rostelecom is Russia’s largest national telecommunications operator, with presence in all Russian regions. In 2011, the organization launched a project to develop a national cloud computing platform. In the following video from Uptime Institute, Rostelecom’s Data Center Project Manager Alexander Martynyuk discusses how his organization is meeting priorities, deadlines and budget on this complex project.

    Rostelecom started building its main data center in Moscow, which will become the largest in Russia, around 37 megawatts and 40,000 square meters. The organization will build smaller, 3 MW data centers in regional locations.

  8. Tomi Engdahl says:

    Not-so-shocking truths about UPS safety
    http://www.cablinginstall.com/articles/2014/01/eaton-ups-safety.html

    Featuring “a candid conversation about UPS service and safety,” a new white paper from Eaton answers common questions about UPS maintenance and addresses how to reduce the risks associated with servicing UPS systems and batteries.

  9. Tomi Engdahl says:

    Averting a data center legal crisis
    Engineers involved in data center design can limit their potential exposure by taking a few simple steps.
    http://www.controleng.com/single-article/averting-a-data-center-legal-crisis/844cfee5cba1f07d90e4ca7d0e3b29bf.html

  10. Tomi Engdahl says:

    Schneider Electric, HP collaborate on DCIM
    http://www.cablinginstall.com/articles/2014/01/schneider-hp-dcim.html

    Schneider Electric has announced a collaboration with HP to deliver a new converged data center and IT management platform that offers customers the ability to link physical infrastructure assets to business processes for improved analysis capabilities. The joint solution will feature HP’s Converged Management Consulting Services (CMCS) combined with Schneider Electric’s data center infrastructure management (DCIM) system, StruxureWare for Data Centers.

    “By collaborating with HP to provide a holistic approach to managing IT business process assets and workloads, we are continuing to bridge the gap between IT and facilities,” comments Soeren Jensen, vice president, Enterprise Management and Software, Schneider Electric. “Enabling IT service providers to instantly view the impact of any changes in their data center, as well as the operational costs associated with these changes is an important step towards improving energy efficiency in data centers and IT.”

  11. Tomi Engdahl says:

    Intelligent module monitors for data center circuit overloads
    http://www.cablinginstall.com/articles/2014/01/snake-current-monitoring-module.html

    Snake Tray has announced the availability of its new “IT-addressable” current monitoring power receptacle module. The module lets data center managers monitor the current on four independent power receptacles, either on a numeric display on the device itself or remotely via a web (HTTP) interface.

  13. Tomi Engdahl says:

    Book Review: The Art of the Data Center
    http://books.slashdot.org/story/14/02/03/1435220/book-review-the-art-of-the-data-center

    The book takes a holistic view of how world-class data centers are designed and built. Many of the designers were able to start with a greenfield approach without any constraints; while others were limited by physical restrictions.

  15. Tomi Engdahl says:

    Intel’s 800Gbps cables headed to cloud data centers and supercomputers
    64 fibers pushing 25Gbps apiece stuffed into one cable connector.
    http://arstechnica.com/information-technology/2014/03/intels-800gbps-cables-headed-to-cloud-data-centers-and-supercomputers/

    Intel and several of its partners said they will make 800Gbps cables available in the second half of this year, bringing big speed increases to supercomputers and data centers.

    The new cables are based on Intel’s Silicon Photonics technology that pushes 25Gbps across each fiber. Last year, Intel demonstrated speeds of 100Gbps in each direction, using eight fibers. A new connector that goes by the name “MXC” holds up to 64 fibers (32 for transmitting and 32 for receiving), enabling a jump to 800Gbps in one direction and 800Gbps in the other, or an aggregate of “1.6Tbps” as Intel prefers to call it. (In case you’re wondering, MXC is not an acronym for anything.)
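
    The headline numbers follow directly from the lane count and per-fiber rate; a quick sketch of that arithmetic, restating only the figures quoted above:

    # Per-direction and aggregate capacity of the MXC connector described above.
    fibers_per_direction = 32        # 32 transmit fibers, 32 receive fibers
    gbps_per_fiber = 25
    per_direction_gbps = fibers_per_direction * gbps_per_fiber   # 800 Gbps
    aggregate_gbps = 2 * per_direction_gbps                      # 1600 Gbps, Intel's "1.6 Tbps"
    print(per_direction_gbps, aggregate_gbps)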

    That’s a huge increase over the 10Gbps cables commonly used to connect switches and other equipment in data centers today.

    “MXC cable assemblies have been sampled by Corning to customers and will be in production in Q3 2014,” an Intel presentation said. “US Conec announced that it will sell MXC connector parts to Corning and other connector companies.”

    Microsoft and the Facebook-led Open Compute Project are among the organizations already testing out the MXC-based cables.

    Providing faster connections between top-of-rack switches and core switches, and connecting servers to extra storage or GPUs are among the expected use cases.

    Longer-term, Intel wants Silicon Photonics inside racks.

  16. Tomi Engdahl says:

    3-D Printers Could Help Build Tomorrow’s Massive Data Centers
    http://www.wired.com/wiredenterprise/2014/03/io-data-center-3d-printing/

    It should come as no surprise, then, that the tech world has now applied 3-D printing to data centers

    The idea of a modular data center took root at Google about a decade ago.

    In the years since, others have taken a similar approach to data center design.

    various hardware vendors now sell modules that any business can use in building its own data centers. These vendors include tech giants such as Dell and HP as well as IO

    To prototype even the tiny bracket for a light fixture at the top of the module, the company might spend hundreds of dollars and wait two or three weeks for the prototype to arrive. But in recent months, it has started using a Makerbot 3-D printer to speed the process.

    The company can also 3-D print the basic design of the module itself, producing a physical model of the frame from digital blueprints mapped out by CAD
    These aren’t full-size prototypes.

    In the future, 3-D printing could be used to build not just prototypes but actual data center equipment, including computer motherboards and other circuitry, but this won’t happen for years — if it happens at all. “You can print things on a small scale to play with them. Architects do this all the time,”

    “But building a complete container would be a stupid thing to do.”

  17. Tomi Engdahl says:

    Integration: The future of data centers
    At the NEBB Annual Conference, two data center experts pointed to integration as the key to success in data center design and commissioning.
    http://www.csemag.com/single-article/integration-the-future-of-data-centers/24983e700d203ae09c36b0dee6fd6e58.html

    Modularity can be the answer.

    How much redundancy is normal?
    A: Refer to the Uptime Institute for information about data center tiers.

    Is there free cooling in northern climates?
    A: Yes, and carefully consider humidity.

  18. Tomi Engdahl says:

    Pick the right UPS for your data center
    http://www.cablinginstall.com/articles/2014/04/emerson-datacenter-ups-paper.html

    A recent white paper from Emerson Network Power (NYSE: EMR) seeks to provide data center managers with a clearer understanding of key factors and considerations involved in selecting the right uninterruptible power supply (UPS)

  19. Tomi Engdahl says:

    Electrical distribution equipment in the data center
    http://www.cablinginstall.com/articles/2014/03/apc-data-center-electrical.html

    A new white paper from APC-Schneider Electric explains electrical distribution terms and equipment types most commonly found in data centers.

    Electrical Distribution Equipment in Data Center Environments
    http://www.apcmedia.com/salestools/VAVR-8W4MEX/VAVR-8W4MEX_R0_EN.pdf

    IT professionals who are not familiar with the concepts, terminology, and equipment used in electrical distribution can benefit from understanding the names and purposes of the equipment that supports the data center, as well as the rest of the building in which the data center is located. This paper explains electrical distribution terms and equipment types, and is intended to provide IT professionals with a useful vocabulary and frame of reference.

  20. Tomi Engdahl says:

    Data center tier classifications
    Tier classifications have been established to address several issues within data centers.
    http://www.csemag.com/single-article/data-center-tier-classifications/8d65647f135a9013c54684ab5c13634c.html

  21. Tomi Engdahl says:

    Report: Data center security buyers may switch vendors for better performance
    http://www.cablinginstall.com/articles/2014/04/infonetics-datacenter-security-survey.html

    According to Infonetics, 77 percent of survey respondents said they need security solutions with increased session-handling performance, while about 70 percent need products with throughput and interfaces to match new high-speed networks, i.e. those with 40G and 100G interfaces and 200G+ throughput.

    “The most significant transformation affecting enterprise data centers today is the adoption of server virtualization technology,”

  22. Tomi Engdahl says:

    9 design goals for cloud data center networks
    http://www.cablinginstall.com/articles/2014/04/scaling-datacenter-cloud-designs.html

    Directly quoted, these goals are as follows:

    1. No proprietary protocols or vendor lock-ins.
    2. Fewer Tiers is better than more Tiers.
    3. No protocol religion.
    4. Modern infrastructure should be run active/active.
    5. Designs should be agile and allow for flexibility in port speeds.
    6. Scale-out designs enable infrastructure to start small and evolve over time.
    7. Large Buffers can be important.
    8. Consistent features and OS.
    9. Interoperability.

  23. Tomi Engdahl says:

    The hidden costs of top-of-rack in the data center
    http://www.cablinginstall.com/articles/2014/04/siemon-tor-hidden-costs.html

    “We want to manage that distribution area with our cable plant, so that we have access to all of those switch ports via that cable,” explains Higbie. “If we do direct-attach copper in top-of-rack, we’re limited to 1, 3 or 5 meters. With 10GBase-T, we have 100 meters we can use — but under 30 meters, we have power savings; so we want to try to keep it under that 30 meter [range]. We can either do a chassis-based switch there, or we can do a stack of top-of-row switches.”

    She adds, “Just because they’re sold as ‘top-of-rack’ switches, doesn’t mean they physically have to sit in the top of a rack. If we use 10GBase-T [switches], we now have that 30 meters to be able to reach multiple server cabinets.”
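
    A hedged sketch of the distance-driven choice Higbie describes: direct-attach copper tops out at 5 meters, 10GBase-T reaches 100 meters, and staying under roughly 30 meters keeps 10GBase-T in its lower-power range. The helper below simply restates those thresholds; its name and structure are illustrative, not from the article.

    # Illustrative helper reflecting the reach/power trade-offs quoted above.
    def pick_10g_copper_link(distance_m):
        if distance_m <= 5:
            return "direct-attach copper (typical 1/3/5 m assemblies)"
        if distance_m <= 30:
            return "10GBase-T, short reach (power-saving range)"
        if distance_m <= 100:
            return "10GBase-T, full 100 m reach (higher PHY power)"
        return "beyond copper reach -- consider fiber"

    for d in (3, 25, 80, 150):
        print(d, "m ->", pick_10g_copper_link(d))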

  24. Tomi Engdahl says:

    Explained: Cisco’s 40G BiDi optics for the data center
    http://www.cablinginstall.com/articles/2014/04/cisco-40g-bidi-video.html

    In a new video, Cisco’s director of product management, Thomas Scheibe, thoroughly explains the operation of the company’s recently released 40 Gbps BiDi transceiver optics for the data center.

  25. Tomi Engdahl says:

    Cable trays’ capabilities grow along with responsibilities
    http://www.cablinginstall.com/articles/print/volume-22/issue-4/features/cable-trays-capabilities-grow-along-with-responsibilities.html?cmpid=$trackid

    Now visible in data centers that run their cabling overhead, cable trays combine evolution and revolution to serve their intended purposes.

    As data centers continue to grapple with the most-efficient way to do everything, cabling-system design is coming under increasing scrutiny. In many cases, whether by choice or necessity, data centers include overhead rather than underfloor cabling runs. In these cases, where cable is visible rather than hidden, the aesthetics of cable-conveyance systems becomes a consideration–but in fairness, function trumps form almost universally. So an overhead cable-conveyance system must perform and, like other components of a data center network, should do so efficiently.

  26. Tomi Engdahl says:

    Enclosures play a role in high-density cable management
    http://www.cablinginstall.com/articles/print/volume-18/issue-10/features/enclosures-play-a-role-in-high-density-cable-management.html

    As part of an overall cable-management system, the use of enclosures can either help or hurt airflow in high-density data centers.

  27. Tomi Engdahl says:

    Five Basic Steps for Efficient Space Organization within High Density Enclosures
    http://www.apcmedia.com/salestools/VAVR-7BQRS4/VAVR-7BQRS4_R1_EN.pdf

    Organizing components and cables within high-density enclosures need not be a stressful, time-consuming chore. In fact, thanks to the flexibility of new enclosure designs, a standard for organizing enclosure space, including power and data cables, can be easily implemented. This paper provides a five-step roadmap for standardizing and optimizing organization within both low- and high-density enclosures, with special emphasis on how to plan for higher densities.

  30. Tomi Engdahl says:

    Analyst: Market for close-coupled data center cooling is, well, cool
    http://www.cablinginstall.com/articles/2014/06/close-coupled-cooling-market.html

    The report “Data Center Cooling – World – 2014” authored by IHS senior analyst of data center and critical infrastructure Elizabeth Cruz, says that “despite years of high hopes for growth of the in-row and in-rack data center cooling segment, sales for these close-coupled cooling products are not producing the double-digit growth once hoped for.” The study found that revenues for these close-coupled cooling technologies fell 5 to 6 percent in 2013. It also said the perimeter-cooling market declined 2 to 3 percent last year.

    The reason these cooling units haven’t taken off is that “row/rack products offer significant energy savings once rack densities approach the 8- to 10-kW range,” IHS added. “At that point, it becomes more efficient to install a row/rack product than to increase the flow of air from a CRAC or CRAH unit to cool down a hotspot in a data center. However, with average rack densities still in the sub-5kW range, the operational savings from a row/rack product are not realized, and the justification for the higher investment cost is difficult to make. There are a number of data centers that operate at higher densities, and these are the facilities helping to underpin the moderate growth projections for the next five years. But these are comparatively few relative to the large population of low-power-density data centers.”

    IHS and Cruz further explain that the data center market overall is sluggish.

  31. Tomi Engdahl says:

    Investigating the use of ceiling-ducted air containment in data centers
    http://www.cablinginstall.com/articles/2014/06/apc-ceiling-ducted-datacenter-containment.html

    A new graphical white paper from APC-Schneider Electric investigates how ducted air containment can simultaneously improve the energy efficiency and reliability of data centers.

    “Ducting hot IT-equipment exhaust to a drop ceiling can be an effective air management strategy,” states the paper’s executive summary. “Typical approaches include ducting either individual racks or entire hot aisles and may be passive (ducting only) or active (includes fans).”

  32. Tomi Engdahl says:

    Paper compares Top-of-Rack vs. End-of-Row switching configurations
    http://www.cablinginstall.com/articles/2014/05/juniper-tor-vs-eor.html

    A new white paper from Juniper Networks is entitled, Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking: Important Considerations When Selecting Top-of-Rack Switches. The company says the 6-page guide “will help [readers] take the next steps toward a ToR data center evolution.”

    “Once you have accepted the need to move to 10 GbE switching [in the data center], your next decision involves choosing switching architecture: top of rack (ToR) or end of row (EoR),” states the document. “At the access layer, ToR has an advantage over EoR in terms of power consumption, ease of scale when growing server PoDs [points of delivery], and a reduction of cable management complexity. It’s easy to run cable from new compute or storage equipment to an existing switch and bring the new equipment online — that is, it should be easy.”

    “Often, however, cabling at an EoR Layer 2 (L2) switch is so messy and complicated that it undermines the agility advanced technologies were supposed to bring.”

  33. Tomi Engdahl says:

    Analyst asks (and answers): Can Huawei shake up the data center industry?
    http://www.cablinginstall.com/articles/2014/06/huawei-data-center-play.html

    Huawei may not exactly be a household name in the United States, but it is a massive company. In 2013 it had global sales of $39.5 billion

    Huawei has two groups of offerings for data centers residing within their “enterprise” segment, which turned over $2.5 billion itself last year. The first group is their IT product line that includes servers, storage, and networking. The second group, called Network Energy, offers power and cooling systems. The name of the latter section bears a striking resemblance to Emerson’s Network Power division, a longtime leader in critical power systems.

    The fact that Huawei has these two groups relevant to data centers makes it unique. Their presence in IT puts them in competition with heavyweights like HP, IBM and Cisco, while the DCI market is dominated by the likes of Schneider Electric, Emerson and Eaton.

  36. Tomi Engdahl says:

    The flattening of data center networks
    http://www.cablinginstall.com/articles/print/volume-22/issue-9/features/data-center/the-flattening-of-data-center-networks.html?cmpid=Enl_CIM_Sep-23-2014&cmpid=EnlDataCentersSeptember232014

    The ToR-EoR discussion is just one consideration in a changing landscape for the design of data center networks and cabling systems.

    Over the past several months the concepts of top-of-rack (ToR) and end-of-row (EoR) data center network layouts have been the source of many seminar presentations, articles in this magazine and others, and technical papers in the cabling industry as well as the wider networking industry. Considerations of when and where to use each approach include management of network moves, adds and changes; cooling the data center facility; network scalability and, of course, cost among others. But the question of whether to use ToR or EoR-while very close to the heart of professionals in the cabling industry-is one part of a broader shift taking place in networking.

    Specifically, data center networks are getting flatter in terms of their switching architectures.

    In essence, the flattening of data center network architectures eliminates at least one “hop” that data makes when moving from one server to another server. The traditional switching architecture is commonly called “three-tier.” Working backward from the servers, those switch tiers include access switches, aggregation switches, and core switches.

    Higbie points out that the well-intentioned three-tier architecture has been shown to have flaws, including the latency and energy-use issues that Bernstein also discusses, as well as others. “Three-tier was supposed to be a big problem-solver,” Higbie says, “but most data centers have found there is a lot of wasteful port spend.” Among that inefficient spend is the necessity to establish inactive links, particularly between the access and aggregation switches. “As you set out primary and secondary networks, only one of these can be active at a time,”

    Alternative architectures-network fabrics-are now regularly being deployed rather than three-tier architectures.

    In a fat-tree architecture, the volume of north-south traffic flow is significantly reduced compared to what takes place in three-tier architectures. Fat-tree achieves more-efficient east-west, server-to-server communication.

    Ties to virtualization

    The architecture comprises two layers of switching-access switches, which connect to servers, then interconnection switches, which connect to the access switches. Within a TIA-942-A-based data center arrangement, servers reside in the equipment distribution area (EDA), while access switches can reside in either the horizontal distribution area (HDA) for end-of-row setups or in the EDA for top-of-rack setups. Interconnection switches reside in the main distribution area (MDA), or potentially in the intermediate distribution area (IDA) when an IDA exists.

    Many data centers deployed a spine-leaf/fat-tree architecture before Cisco did so using its own equipment this summer. And many more are destined to do so in the future.
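
    One way to see why the flatter fabric favors east-west traffic: in a two-tier design every access (leaf) switch connects to every interconnection (spine) switch, so any two leaves are separated by a single spine (three switches on the path), and the number of equal-cost paths between them equals the number of spines. A small illustrative sketch of that reasoning (the counts are assumptions, not measurements):

    # Illustrative path counts: two-tier leaf-spine vs. traditional three-tier.
    def leaf_spine(num_spines):
        # leaf -> spine -> leaf: one equal-cost path through each spine
        return {"switches_on_path": 3, "equal_cost_paths": num_spines}

    def three_tier():
        # worst case: access -> aggregation -> core -> aggregation -> access,
        # with redundant links typically held inactive by spanning tree
        return {"switches_on_path": 5, "equal_cost_paths": 1}

    print("leaf-spine:", leaf_spine(num_spines=4))
    print("three-tier:", three_tier())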

  37. Tomi Engdahl says:

    Density vs. Isolation in the data center
    http://www.itworld.com/article/2832715/density-vs-isolation-in-the-data-center.html

    Less is more. Or maybe more is more.

    Filling a server rack with hardware can be done in an infinite number of ways. Unless you’re Google, Amazon, or Microsoft, however, you don’t have an infinite amount of space or money. That being the case, what is the best approach to utilizing your limited space and budget most efficiently?

    So here’s the question, if we are going to add additional capacity, should we stick with another beefed up 1U or 2U server? Or break that out into several physical servers?

    So is it better to have a bunch of isolated servers which reduces the VM domino effect in exchange for increased hardware maintenance? Or just a few massive servers and be ready for the 4 am call to replace a CPU at any given moment?
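
    One way to frame that trade-off is the “blast radius” of a single host failure against the number of boxes to maintain; a toy comparison with made-up numbers:

    # Toy comparison of consolidation vs. isolation for a fixed VM count.
    def blast_radius(total_vms, num_hosts):
        return {"hosts_to_maintain": num_hosts,
                "vms_lost_per_host_failure": total_vms / num_hosts}

    print("few big hosts:  ", blast_radius(total_vms=120, num_hosts=2))
    print("many small hosts:", blast_radius(total_vms=120, num_hosts=12))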

  38. Tomi Engdahl says:

    Frequency-flexible clock generators for Cloud data centers
    http://www.edn.com/design/analog/4436596/Frequency-flexible-clock-generators-for-Cloud-data-centers?elq=86a5fe3fe668409daa2f1e71b45af093&elqCampaignId=19905

    Skyrocketing network bandwidth demands driven by consumer mobile devices and cloud-based streaming services, such as Netflix, Hulu, YouTube, Spotify, Pandora, online gaming and others, are pushing Internet Infrastructure suppliers to develop data center systems that support dramatically higher data rates, such as 10G, 40G and 100 Gbps. In addition, the increasing popularity of commercial cloud computing, which offers network-based computing and storage as a service, is further accelerating the demand for application-flexible, high-bandwidth networks in today’s data centers.

    Figure 1 illustrates the impact of these popular cloud-based streaming services on the growth of Internet traffic bandwidth. Cisco’s Visual Networking Index (VNI) Forecast (June 2014) projects the following market trends:

    Cloud applications and services such as Netflix, YouTube, Pandora, and Spotify, will account for 90 percent of total mobile data traffic by 2018.
    Global network traffic will be three times larger in 2018 than in 2013, equivalent to streaming 33B DVDs/month, or 46M DVDs/hour.
    By 2018, consumer online gaming traffic will be four times higher than it was in 2013.

    To reliably deliver a Netflix video or a Spotify high-quality audio stream, service providers must be equipped with data center hardware that supports three primary networks, as shown in Figure 2:

    LAN/WAN networks commonly comprise 1 Gb, 10 Gb, and/or 100 Gb Ethernet switches connected in a mesh switch fabric for the data center LAN, and OTN (Optical Transport Networking) interconnects to the WAN. These networks deliver the content from the data center to the cloud and, ultimately, to the user.

    Compute networks comprise many server and switch “blades” interconnected using copper cables, PCB backplanes or optical links. These interconnects use a combination of 1 Gb, 10 Gb Ethernet, PCIe, and in some cases, InfiniBand. Network interfaces in compute networks must support not only high data rates but also very low latency, which is critical for streaming video and audio service quality.

    Storage networks are primarily based on Fibre Channel, Gb or 10Gb Ethernet switches and direct connections to storage subsystems using PCIe. These networks store considerable amounts of content, requiring multi-gigabit capable protocols.

    To meet the rapidly expanding Internet bandwidth demands of content providers, compute and storage networks for data centers must become flatter and more horizontally interconnected. Known as the “converged data center,” this flatter architecture is required to improve server-to-server and server-to-storage communication within the data center, which directly impacts latency and the quality of streaming services.
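
    As a rough sanity check on the Cisco VNI figures quoted above, the sketch below converts “33 billion DVDs per month” into an average bit rate; the 4.7 GB single-layer DVD capacity is an assumption used only for illustration.

    # Back-of-the-envelope: average traffic implied by "33B DVDs/month".
    dvds_per_month = 33e9
    dvd_bytes = 4.7e9                      # assumed single-layer DVD capacity
    seconds_per_month = 30 * 24 * 3600

    avg_tbps = dvds_per_month * dvd_bytes * 8 / seconds_per_month / 1e12
    dvds_per_hour = dvds_per_month / (30 * 24)
    print(f"~{avg_tbps:.0f} Tbps average, ~{dvds_per_hour/1e6:.0f}M DVDs/hour")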

  39. Tomi Engdahl says:

    Facebook’s Newest Data Center Is Now Online In Altoona, Iowa
    http://techcrunch.com/2014/11/14/facebooks-newest-data-center-comes-online-in-altoona-iowa/

    Facebook today announced that its newest data center in Altoona, Iowa, is now open for business. The new facility complements the company’s other centers in Prineville, Ore; Forest City, N.C. and Luleå, Sweden (the company also operates out of a number of smaller shared locations). This is the first of two data centers the company is building at this site in Altoona.

    What’s actually more interesting than the fact that the new location is now online is that it’s also the first data center to use Facebook’s new high-performance networking architecture.

    With Facebook’s new approach, however, the entire data center runs on a single high-performance network. There are no clusters, just server pods that are all connected to each other. Each pod has 48 server racks — that’s much smaller than Facebook’s old clusters — and all of those pods are then connected to the larger network.

  40. Tomi Engdahl says:

    Facebook’s New Data Center Is Bad News for Cisco
    http://www.wired.com/2014/11/facebooks-new-data-center-bad-news-cisco/

    Facebook is now serving the American heartland from a data center in the tiny town of Altoona, Iowa. Christened on Friday morning, this is just one of the many massive computing facilities that deliver the social network to phones, tablets, laptops, and desktop PCs across the globe, but it’s a little different from the rest.

    As it announced that the Altoona data center is now serving traffic to some of its 1.35 billion users, the company also revealed how its engineers pieced together the computer network that moves all that digital information through the facility. The rather complicated arrangement shows, in stark fashion, that the largest internet companies are now constructing their computer networks in very different ways—ways that don’t require expensive networking gear from the likes of Cisco and Juniper, the hardware giants that played such a large role when the foundations of the net were laid.

    From the Old to the New

    Traditionally, when companies built computer networks to run their online operations, they built them in tiers. They would create a huge network “core” using enormously expensive and powerful networking gear. Then a smaller tier—able to move less data—would connect to this core. A still smaller tier would connect to that. And so on—until the network reached the computer servers that were actually housing the software people wanted to use.

    For the most part, the hardware that ran these many tiers—from the smaller “top-of-rack” switches that drove the racks of computer servers, to the massive switches in the backbone—was provided by hardware giants like Cisco and Juniper. But in recent years, this has started to change. Many under-the-radar Asian operations and other networking vendors now provide less expensive top-of-rack switches, and in an effort to further reduce costs and find better ways of designing and managing their networks, internet behemoths such as Google and Facebook are now designing their own top-of-rack switches.

    This is well documented. But that’s not all that’s happening. The internet giants are also moving to cheaper gear at the heart of their massive networks. That’s what Facebook has done inside its Altoona data center. In essence, it has abandoned the hierarchical model, moving away from the enormously expensive networking gear that used to drive the core of its networks.

  41. Tomi Engdahl says:

    How to control fire in the data center
    http://www.cablinginstall.com/articles/print/volume-22/issue-11/features/data-center/how-to-control-fire-in-the-data-center.html?cmpid=EnlCIMNovember242014

    Prevention, detection, suppression and containment combine to make up a holistic approach to this disastrous circumstance.

    Outside of the physical assets, the system downtime can also be costly and lengthy based on the types of extinguishing materials used and the severity of the fire. In fact, recent study results indicate the average cost of data center downtime per hour is $163,674 (Source: Aberdeen Group, “Downtime and Data Loss: How Much Can You Afford?”). This cost to the business has increased by approximately 18 percent in the past year, according to the same Aberdeen Group study.

    Several recent high-profile data center fires have been reported around the world. In October 2013, a fire at the U.S. National Security Agency (NSA) data center storage facility in Utah destroyed equipment and delayed the facility’s opening (Source: The Fiscal Times, “$2 Billion NSA Spy Center is Going Up in Flames,” October 2013). Storage and information management firm Iron Mountain experienced a fire at their facility in Buenos Aires, Argentina in February 2014. In addition to the damage to the building and lost information, several first responders were killed trying to extinguish the fire (Source: Technology Advice, “Fire Destroys Iron Mountain Data Warehouse in Buenos Aires,” February 2014).

    Only a few months later, a fire at the Samsung SDS building in Gwacheon, South Korea significantly affected data center operations. Consumers around the world reported “error messages on Samsung phones or tablets” and the Samsung.com website was down (Source: Engadget, “Samsung data center fire causes outages, errors on smart TVs and phones,” April 2014). The results of each of these events demonstrate the importance of effective fire management solutions to save data and equipment.

  42. Tomi Engdahl says:

    Designing modular data centers
    http://www.csemag.com/single-article/designing-modular-data-centers/4a8e1ac71f49b4814889f6de994dfedd.html

    Modular data centers offer quick turnaround and clear project costs. Like traditional data centers, they require carefully engineered electrical and cooling systems.

    When people in the data center industry hear the term “modular data center” (MDC), each individual will have a different impression of what an MDC really is. The use of MDCs is on the rise: IDC reported that of the 74% of all respondents who stated they had undertaken major rebuilds, approximately 87% had deployed modular, prefabricated, or container data center components in the past 18 months. But this has not changed the reality that most of the data center industry—whether focusing on IT or facilities—still does not have a consistent understanding of the different types of modular data centers.

    The authors prefer to use the term “flexible facility” because not all of the facilities are truly modular, but still offer many of the advantages of modularization. These data center facilities come in the form of containers (located indoors and outdoors), industrialized or prefabricated, and brick-and-mortar buildings. Yes, brick-and-mortar buildings can be considered a flexible facility if they are designed using the same scalability and agility concepts as a prefabricated or containerized data center. For the purpose of this article, we will focus on three primary types of flexible facilities: containerized solutions, industrialized or prefabricated data centers, and enterprise brick-and-mortar data centers.

    consider the following IDC data on the reasons there is an increasing proliferation of MDCs:

    Quick turnaround from contract signing to going live, which enables enterprises to quickly scale up computing solutions when they need them
    Greater clarity into project costs, which enables enterprises to have greater control over budgets and estimates on associated return on investment
    An ability to deploy solutions very rapidly and almost anywhere in the world, whether enterprises need additional data center capacity in an existing building or at a remote location, such as for redundancy or in support of a satellite location.

    There is an important foundational belief regarding flexible facilities: the facility is built around the IT vs. designing the IT around the facility, the latter being the typical model in the data center industry. Dynamic IT operations require a facility that can respond and adjust to changing needs without significant cost and downtime

    Other important, albeit not widely known, aspects of a flexible facility have to do with fabrication and on-site construction work streams. Efficiency gains are obtained from reallocating field labor and focusing it on the prefabrication of mechanical and electrical modules. In this model, supply chain management becomes one of the primary activities to ensure on-time schedule delivery and installation.

    Different flexible facility types can be fitted with different types of cooling systems. The type that is ultimately chosen depends on many factors

    In general terms, the cooling systems for a flexible facility will be of the same construct as a traditional data center, but will be in clear, discrete modules that have a one-to-one correspondence to a data hall

  43. Tomi Engdahl says:

    Re-Architecting the Data Center
    http://www.cio.com/article/2868312/data-center/re-architecting-the-data-center.html

    “Disruption” has been celebrated in the IT industry for at least a decade now, despite the definition of the word being “to throw into turmoil or disorder.” Anyone who has experienced disruption knows that it can be challenging to view it in light of the opportunity that it provides. Yet, whenever disruptions have occurred throughout the history of business, amazing opportunities and surprising growth have sprung up as a result of the disruptions.

    In 2014 at GigaOM Structure, Diane Bryant went on stage with Tom Krazit to talk about the next wave of disruption the IT industry is experiencing. To quote from Diane’s blog about Intel’s vision: “We are in the midst of a bold industry transformation as IT evolves from supporting the business to being the business. This transformation and the move to cloud computing calls into question many of the fundamental principles of data center architecture. Two significant changes are the move to software defined infrastructure (SDI) and the move to scale-out, distributed applications.”

    It can be a challenge to determine the right place to start when implementing “software defined” architectures, given there are multiple stress points within today’s data center architectures – from storage, network, and compute all the way through data center operations. One specific example mentioned in Diane’s blog was the challenge related to the CAGR (compound annual growth rate) of data – structured and unstructured – estimated by IDC to be on the order of 40%.

    What type of innovation is necessary for emerging data center software solutions like Cloudera to operate in an increasingly efficient way? Intel is going under the hood with the NVM and storage industries to break out of the higher latency, more expensive box, making today’s “impossible” tomorrow’s possibility.

    There are three major components to Intel’s next generation NVM strategy today: Intel SSDs, the NVM Express I/O protocol, and Intel® Cache Acceleration Software. All of these work together to remove workload latency, shift the industry to a more standards-based plug and play I/O protocol, and drive improved TCO in a solution. Each one of these components offers a step towards the responsiveness and agility necessary to support “software defined” infrastructure.

  44. Tomi Engdahl says:

    BSRIA data center cabling research shows upcoming migration to 40/100G, ToR’s popularity
    http://www.cablinginstall.com/articles/2015/01/bsria-data-center-cabling-survey-2015.html?cmpid=EnlDataCentersFebruary32015

    BSRIA has released data from a recent survey it conducted focused on data center cabling. Among the high-level conclusions that can be drawn from the survey are that the top-of-rack (ToR) architecture is more popular in colocation facilities than it is in enterprise data centers, the shift from 1G and 10G to 40G and 100G for switch-to-switch connections is on, and a significant number of data center operators consider their cabling to be permanent—with no history of replacing it nor any plans to do so.

    As a predicate to the survey results, BSRIA explained that approximately 19 percent of the global market for structured cabling is installed in data centers, while the remaining 81 percent is installed in LAN applications. Based on its yearly research, BSRIA has determined the structured cabling installed in data centers is estimated at $1.2 billion in 2014. “The data centre segment is expected to continue its increase with huge demand for backup data, video storage, peer-to-peer file sharing, cloud computing and the uptake of the Internet of Things with numerous devices being connected in the future,” BSRIA commented.

    This latest research covers five countries—U.S., U.K., Germany, China, and Brazil—which combine to account for approximately 68 percent of worldwide data center cabling, according to BSRIA.

    “The study highlighted quite significant levels of 1G and 10G both in switch-to-switch and switch-to-server links and expected progression to 40G and 100G planned for 2016,”

    “The uptake of ToR is higher for the colocation segment, at 61 percent, than for the enterprise segment, with 42 percent opting for ToR,” BSRIA reported. “A third of the enterprise data centres use centralized switching.”

    “The usage of non-structured cabling point-to-point links was significantly higher for the colocation segment, which also has a higher usage of ToR architecture,”

    In total, the survey yielded dead-even results with 50 percent using point-to-point and the other 50 percent using structured cabling.

    “The main reasons for the use of point-to-point links have remained largely unchanged from 2011,” BSRIA said. “They are: the ability to reduce the amount of cabling used, higher-density port count, the ability of SFP+ for fiber, Cisco’s recommendations, and the ability to use cheaper cable than structured category cable.”

    “The typical replacement rates are very similar for copper and fibre cable and connectivity. Around a quarter replace the cabling every 4 to 5 years, and around 15 percent every 8 to 10 years. A significant portion do not replace their cabling.”

  45. Tomi Engdahl says:

    Critical factors to consider when specifying a cabinet solution
    http://www.cablinginstall.com/articles/print/volume-23/issue-1/features/data-center/critical-factors-to-consider-when-specifying-a-cabinet-solution.html?cmpid=EnlDataCentersFebruary32015

    Design components, industry regulations, and environmental factors all come into play when determining the best 19-inch electronics cabinet solution for your data center.

    With today’s powerful electronics generating more heat and increased power consumption, it is essential to specify and deploy an effective 19-inch cabinet solution that ensures consistent performance and protection. Preferably, these include cabinet solutions that are specifically designed to accommodate a variety of unique environmental conditions, such as extreme heat, high dust/moisture, radio-frequency/electromagnetic interference (RFI/EMI), and high shock and vibration.

    Using 19-inch cabinets that are designed for robust protection of sensitive electronics is an effective way to ensure continuous operation while minimizing the risk of downtime resulting from operational failure. With a wide variety of available cabinet configurations and protection features, design engineers may tailor an ideal cabinet solution that meets or exceeds their specific application requirements.

    When selecting a 19-inch cabinet, it is important to factor in essential mechanical-structure and protection standards and design considerations. This article will discuss critical design considerations, industry-recognized standards, thermal management, seismic, shock and vibration, and electromagnetic compatibility.

    Nineteen-inch cabinets provide a standardized frame or enclosure for mounting various types of electronics equipment. Each piece of equipment is typically 19 inches (482.6 mm) wide, including edges or mounting ears, which allow for mounting to the rack frame. Cabinet height is identified in “Units” (U), each unit equal to an industry standard of 1.75 inches (44.45 mm).

    Rack-mountable equipment is usually designed to occupy a specified number of U; most rack-mountable computers are between 1U and 5U.
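
    A quick conversion of those unit figures (purely illustrative):

    # Rack-unit height conversion, using 1U = 1.75 in = 44.45 mm as stated above.
    U_MM = 44.45
    for u in (1, 5, 42):
        print(f"{u}U = {u * U_MM:.1f} mm ({u * 1.75:.2f} in)")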

    For 19-inch cabinets, the standard IEC 60297 (Mechanical Structures for Electronic Equipment – Dimensions of Mechanical Structures of the 482,6mm/19-inch Series), provides crucial information for designing 19-inch cabinets. The standard specifies the basic dimensions of front panels, subracks, chassis, racks and cabinets of the 19-inch series. Subsequent standards of the associated IEC 60297-3 series provide detailed dimensions for specific parts of the equipment practice, where the basic dimensions are used as interface to other associated parts.

    Network Equipment-Building System (NEBS) is not a regulatory requirement, but more of a best-practices standard that became widely referenced in the telecom industry. It is the most common set of safety, spatial and environmental design guidelines applied to telecommunications equipment in the United States.

    With today’s electronics’ increase in power density and heat, thermal management and cooling is critical.

    An ideal formula for calculating heat dissipation requirements within a cabinet is Watts = 0.316 x CFM x ΔT (CFM = cubic feet per minute).
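
    Rearranged, the formula answers the more common design question: how much airflow a given heat load needs for an acceptable temperature rise. A minimal sketch, assuming (as the 0.316 constant implies) that ΔT is expressed in degrees Fahrenheit:

    # Airflow needed for a given heat load, from Watts = 0.316 x CFM x dT(F).
    def required_cfm(heat_load_watts, delta_t_f):
        return heat_load_watts / (0.316 * delta_t_f)

    # Example: a 5 kW cabinet with a 20 F (about 11 C) allowable air temperature rise.
    print(f"{required_cfm(5000, 20):.0f} CFM")   # ~791 CFM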

  46. Tomi Engdahl says:

    Rack PDU market to grow more than 5 percent this year
    http://www.cablinginstall.com/articles/2015/01/rack-pdu-market-2015.html?cmpid=EnlDataCentersFebruary32015

    Global revenue from rack power distribution units is forecast to grow 5.6 percent in 2015, according to a recently published report from IHS titled “Rack Power Distribution Units – 2015.” “This is twice as fast as the forecast unit shipment growth, highlighting the continued shift toward higher-priced rack PDU products,” IHS said.

    “The shift occurs for several reasons,” the firm continued. “The growth of intelligent products, for example, is driven by the need to monitor power usage, report efficiency metrics, decrease power use in the data center, and enable capacity planning. Three-phase rack PDUs and those with higher power ratings also command higher prices. Increased adoption of these products is driven by increasing rack densities.”

    Last year intelligent PDUs accounted for 19 percent of unit shipments globally and 58 percent of revenue, IHS noted.
