Better-performing, cheaper clouds ahead in 2016, IEEE predicts | ZDNet

http://www.zdnet.com/article/better-performing-cheaper-clouds-ahead-in-2016-ieee-predicts/

Posted from WordPress for Android

2 Comments

  1. Tomi Engdahl says:

    Same time, same server, next Tuesday? AWS can do that now
    Who wins with the new Scheduled Reserved Instances: you or Amazon?
    http://www.theregister.co.uk/2016/01/14/same_time_same_server_next_tuesday_aws_can_do_that_now/

    Amazon Web Services has just done something rather interesting, in the form of making it possible to reserve servers in advance for short bursts of computing.

    The new “Scheduled Reserved Instances” caper does what it says on the tin: you can decide what kind of server you want to run, how long to run it for and what you’re prepared to pay. You can even add a schedule to get the same instance type each week. Or each day. Or whatever, provided you commit for a year.
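
    For anyone who wants to try it, here is a minimal sketch of the flow using boto3. The underlying EC2 calls (DescribeScheduledInstanceAvailability, PurchaseScheduledInstances) are the real API actions; the region, search window and Tuesday schedule are illustrative assumptions.

    ```python
    import datetime
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

    # 1. Search for weekly slots on Tuesdays. For weekly schedules,
    #    OccurrenceDays uses 1 = Sunday, so Tuesday = 3.
    avail = ec2.describe_scheduled_instance_availability(
        FirstSlotStartTimeRange={
            "EarliestTime": datetime.datetime(2016, 2, 1),
            "LatestTime": datetime.datetime(2016, 2, 29),
        },
        Recurrence={"Frequency": "Weekly", "Interval": 1, "OccurrenceDays": [3]},
    )

    # 2. Purchase the first offered slot -- this is the one-year commitment.
    offer = avail["ScheduledInstanceAvailabilitySet"][0]
    purchase = ec2.purchase_scheduled_instances(
        PurchaseRequests=[{"PurchaseToken": offer["PurchaseToken"],
                           "InstanceCount": 1}]
    )
    print(purchase["ScheduledInstanceSet"][0]["ScheduledInstanceId"])
    ```

    The purchased capacity is then launched inside each slot with the separate RunScheduledInstances call.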

    AWS is talking up the new plan as ideal for those who predictably run certain jobs at certain times.

    Of course there’s also an upside for AWS in this new feature. Running a big public cloud means spending astounding amounts of cash on kit and operations. The more AWS and its ilk can predict usage rates, the easier it gets to plan future purchases, cashflow and capacity.

    Reply
  2. Tomi Engdahl says:

    Chip Advances Play Big Role In Cloud
    Semiconductor improvements add up to big savings in power and performance.
    http://semiengineering.com/chip-advances-play-larger-role-in-data-center/

    Semiconductor engineering teams have been collaborating with key players in the data center ecosystem in recent years, resulting in unforeseen and substantial changes in how data centers are architected and built. That includes everything from which boxes, boards, cards and cables go where, to how much it costs to run them.

    The result is that bedrock communication technology and standards like serializer/deserializer (SerDes) and Ethernet are getting renewed attention. Technology that has been taken for granted is being improved, refined, and updated on a grand scale.

    Some of this is being spurred by the demands and deep pockets of Facebook and Google and peers, with their billions of server hits per hour.

    “There has been a relentless progression with performance and power scaling to the point where computation almost looks like an infinite resource these days,” said Steven Woo, distinguished inventor and vice president of enterprise solutions technology at Rambus. “And there is a lot more data. You need to drive decisions on what you put, where, based on that data.”

    In the context of today’s cutting-edge IEEE 802.3by standard, which uses 25-Gbps lanes to achieve 100 Gigabit throughput, this is one place where chipmakers get involved.
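
    As a quick back-of-the-envelope check on those figures (my arithmetic, not the article’s): Ethernet uses 64b/66b line coding, so the on-wire signaling rate per lane sits slightly above the nominal data rate, and four 25-Gbps lanes add up to the 100 Gigabit aggregate.

    ```python
    # Lane arithmetic for the quoted Ethernet speeds. 64b/66b coding puts
    # 66 bits on the wire for every 64 data bits.
    def line_rate_gbd(data_gbps):
        return data_gbps * 66 / 64

    for lanes, per_lane in [(1, 10), (1, 25), (4, 25)]:
        print(f"{lanes} x {per_lane}G = {lanes * per_lane} Gbps aggregate, "
              f"{line_rate_gbd(per_lane):.5f} GBd per lane")
    # 1 x 10G = 10 Gbps aggregate, 10.31250 GBd per lane
    # 1 x 25G = 25 Gbps aggregate, 25.78125 GBd per lane
    # 4 x 25G = 100 Gbps aggregate, 25.78125 GBd per lane
    ```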

    “A lot of these are concepts and waves of thinking in data flow architectures of the 1980s, and they’re making their way back,” said Woo. “But they’re very different now. Technologies have improved relative to each other and the ratios against each other are all different. Basically, what you’re doing is taking the data flow perspective and optimizing everything.”

    Minor considerations, big impact
    Optimizing everything is how Marvell Semiconductor sees it, as well. Marvell continues to churn out Ethernet switch and PHY silicon, but performance demands are rising, and the payoff for meeting those demands is greater. The cabling between the top-of-rack Ethernet switches and the array of servers beneath them may seem like a minor consideration, but it has a big impact on data center design, cost and operation. The best SerDes enable 25Gbps throughput, but they also have long-reach capability that allows for ‘direct attach’ without supplemental power.

    This potential brought together a worldwide “meeting of the minds” among power users like Google, the rest of the industry, and the IEEE to create a 25Gbps standard rather than jumping directly from 10Gbps to 40Gbps. Not only is supplemental power removed within the rack but, equally important, the backplane can be copper rather than fiber.
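
    The rack-level arithmetic behind that choice is easy to sketch (the 48-server rack is my own illustrative assumption; the lane counts per port are standard, since 10G and 25G links are single-lane while 40G bonds four 10G lanes):

    ```python
    # Bandwidth and lane (cable) counts per rack for different uplink choices.
    SERVERS = 48  # assumed servers per rack
    options = [("10G", 10, 1), ("25G", 25, 1), ("40G", 40, 4)]

    for name, gbps, lanes in options:
        total_gbps = SERVERS * gbps
        total_lanes = SERVERS * lanes
        print(f"{name}: {total_gbps} Gbps/rack over {total_lanes} lanes "
              f"({total_gbps / total_lanes:.1f} Gbps per lane)")
    ```

    25G delivers 2.5x the per-server bandwidth of 10G with the same lane and cable count, whereas 40G quadruples the lanes to get its 4x.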

    Engineering teams are working overtime to develop 802.3by-capable silicon and systems in light of all of this.

    “We also just introduced something called ‘link training,’ where the Ethernet transceivers at each end of a communications link tune the connection, which can then run at either 10Gbps or 25Gbps.”

    Marvell uses ARM cores in many of its switch families, which helps keep the silicon power consumption low. ARM has spent decades perfecting that.

    “The CPU must use DDR,” said Amit Avivi, senior product line manager at Marvell. “But the switch-level bandwidth is way too high to use DDR. Advanced switch (silicon) within the switch (device) optimizes the traffic to minimize the memory needs. There is lots of prioritization, and there are lots of handshakes to optimize that traffic.”
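
    A rough comparison shows why the switch data path cannot simply go through DDR the way the CPU does (the port count and DDR4 speed grade here are my own illustrative assumptions):

    ```python
    import math

    # A modest 32-port 25G top-of-rack switch versus one DDR4-2400 channel.
    ports, port_gbps = 32, 25
    switch_gbps = ports * port_gbps            # 800 Gbps aggregate

    ddr4_gbps = 2400e6 * 8 * 8 / 1e9           # 64-bit channel = 153.6 Gbps

    # Buffering a packet in DRAM costs one write plus one read.
    needed = 2 * switch_gbps
    print(f"Switch aggregate: {switch_gbps} Gbps")
    print(f"DRAM bandwidth to buffer it: {needed} Gbps, i.e. "
          f"{math.ceil(needed / ddr4_gbps)} DDR4-2400 channels")
    ```

    Hence the buffering and prioritization have to live in on-chip memory inside the switch silicon itself.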

    Reply
