Software-Defined Data Centers

“Software-defined” seems to be the hype buzzword of the moment in many technology areas: we have software-defined radio, software-defined networking, software-defined storage and the software-defined data center.

A few days ago AllThingsD ran a piece endorsing the idea of the software-defined data center, titled What Is the Software Defined Data Center and Why Is It Important? A Slashdot posting on it considers the article good enough to explain the relevant technology terms in ways that even a non-technical audience can understand, but it also finds the piece problematic for several reasons.

Fortunately, there are a number of resources online to help tell hype from reality. The short Slashdot article Software-Defined Data Centers: Seeing Through the Hype is a good starting point. It tries to find a little truth behind the buzz surrounding software-defined data centers (SDDCs). Some parts of the SDDC are already in place, while others are being added to existing products. More than a single technology, the SDDC is the culmination of many other efforts at abstracting, consolidating, managing, provisioning, load balancing and distributing data center assets. Software-defined data centers are still in their very early stages, which means the true benefits of the platform will not arrive for quite some time.

Comments

  2. Tomi Engdahl says:

    Google gets AGILE to increase IaaS cloud efficiency
    http://www.theregister.co.uk/2013/06/26/google_agile/

    Google has instrumented its infrastructure to the point where it can predict future demand 68 per cent better than previously, giving other cloud providers a primer for how to get the most out of their IT gear.

    The system was outlined in an academic paper, AGILE: Elastic distributed resource scaling for Infrastructure-as-a-Service, which was released by the giant on Wednesday at the USENIX conference in California.

    Agile lets Google predict future resource demands for workloads through wavelet analysis, which uses telemetry from across the Google stack to look at resource utilization in an application and then make a prediction about likely future resource use. Google then uses this information to spin up VMs in advance of demand, letting it avoid downtime.

    Though some of our beloved commentards may scoff at this and point out that such auto-workload assigning features have been available on mainframes for decades, Google’s approach involves the use of low-cost commodity hardware at a hitherto unparalleled scale, and wraps in predictive elements made possible by various design choices made by the giant.

    AGILE works via a Slave agent which monitors resource use of different servers running inside local KVM virtual machines, and it feeds this data to the AGILE Master, which predicts future demand via wavelet analysis and automatically adds or subtracts servers from each application.

    The system can make good predictions when looking ahead for one or two minutes, which gives Google time to clone or spin-up new virtual machines to handle workload growth. The AGILE slave imposes less than 1 per cent CPU overhead per server, making it lightweight enough to be deployed widely.
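
    To make the scaling loop concrete, here is a toy Python sketch of the general idea – forecast near-future utilization from recent telemetry and pre-provision VMs before the demand arrives. It deliberately uses a simple linear trend as a stand-in for the wavelet analysis AGILE actually performs, and the function names, thresholds and sampling interval are illustrative assumptions, not anything from the paper.

    ```python
    # Toy sketch (not the AGILE algorithm): forecast near-future CPU demand from
    # recent telemetry and decide how many VMs to pre-provision. AGILE itself uses
    # wavelet analysis; a simple linear trend stands in for it here.
    import numpy as np

    def forecast_utilization(samples, lookahead_s=120, interval_s=10):
        """Extrapolate a utilization time series (0.0-1.0 per sample) forward."""
        t = np.arange(len(samples)) * interval_s
        slope, intercept = np.polyfit(t, samples, 1)      # crude trend fit
        future_t = t[-1] + lookahead_s
        return float(np.clip(slope * future_t + intercept, 0.0, 1.0))

    def vms_needed(predicted_util, current_vms, target_util=0.6):
        """Scale the VM pool so predicted load lands near the target utilization."""
        return max(current_vms, int(np.ceil(current_vms * predicted_util / target_util)))

    # Example: utilization climbing over the last few minutes
    recent = [0.42, 0.45, 0.47, 0.52, 0.55, 0.58, 0.63, 0.66]
    pred = forecast_utilization(recent)
    print(f"predicted utilization in 2 min: {pred:.2f}, "
          f"VMs to run: {vms_needed(pred, current_vms=10)}")
    ```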

  4. Tomi says:

    Software-Defined Data Centers Might Cost Companies More Than They Save
    http://hardware.slashdot.org/story/13/07/29/0247253/software-defined-data-centers-might-cost-companies-more-than-they-save

    “As more and more companies move to virtualized, or software-defined, data centers, cost savings might not be one of the benefits. Sure, utilization rates might go up as resources are pooled, but if the end result is that IT resources become easier for end users to access and provision, they might end up using more resources, not less.”

  5. Tomi Engdahl says:

    Tier 3 debuts DIY cloud networks
    Network administration with a robot touch
    http://www.theregister.co.uk/2013/08/14/tier_3_cloud_updates/

    Cloud operator Tier 3 has added reconfigurable networking to its cloud technology in an attempt to make it easier for resellers to massage the tech to suit their needs.

    The company announced the upgrades on Wednesday. They see Tier 3 implement self-service networking capabilities for its ESX-based cloud that will let customers and resellers create and manage load balancers, virtual LANs, site-to-site VPNs, and custom IP ports for edge firewalls – and all without having to contact the company’s network operations centre.

    This upgrade is designed to give Tier 3 customers greater reconfigurability for their networks and follows the introduction of a global object store based on the well-thought-of Riak CS technology.

    “What we’ve spent a lot of time doing is trying to raise the bar on self service,” Jared Ruckel, Tier 3’s director of product marketing, says.

    Though the company is built on top of VMware, it does not use VMware’s software-defined networking “Nicira” technology to deliver this service.

    “NSX (VMware Nicira) is something that we are currently looking into.”

    Though Tier 3’s press materials bill it as a “leading public cloud,” it is rather slight: the company has some 30TB of RAM capacity spread across an undisclosed quantity of Dell servers in 9 colocation facilities in North America and Western Europe.

  6. Tomi Engdahl says:

    VMware aims to define software-defined data center with new portfolio
    http://www.zdnet.com/vmware-aims-to-define-software-defined-data-center-with-new-portfolio-7000019813/

    Summary: UPDATED: The buzz around software-defined data centers is still cloudy (so to speak) for some, but it defines VMware’s game plan for the near future.

  7. Tomi Engdahl says:

    Report: Software-defined networking (SDN) market worth > $3.5 billion by 2018
    http://www.cablinginstall.com/articles/2013/08/sdn-market-report.html

    The interest in software-defined networking (SDN) will translate into a global market worth $3.52 billion by 2018, says a new study by Transparency Market Research. The increasing need for efficient infrastructure and mobility, as well as the popularity of cloud services, will drive this growth, according to the report. The market research firm predicts SDN spending worldwide will grow at a compound annual growth rate of 61.5% from 2012 to 2018. Transparency Research cites three main markets for SDN: enterprises, cloud services providers, and telecommunications services providers.

    Enterprises represented 35% of the SDN market in 2012. However, cloud service providers are expected to be the fastest growing market segment throughout the years the report covers. Transparency Research says that SDN’s ability to reduce opex and capex while enabling the delivery of new services will spearhead its use by cloud service providers.

    Cloud provisioning and orchestration products currently dominate the global SDN market, the report states. SDN switching held the second largest revenue share of the SDN market in 2012. SDN products and applications also will be used to design, optimize, secure, and monitor the network, the market research firm predicts.

  9. Tomi Engdahl says:

    Review: Puppet vs. Chef vs. Ansible vs. Salt
    The leading configuration management and orchestration tools take different paths to server automation
    http://www.infoworld.com/d/data-center/review-puppet-vs-chef-vs-ansible-vs-salt-231308

    The proliferation of virtualization coupled with the increasing power of industry-standard servers and the availability of cloud computing has led to a significant uptick in the number of servers that need to be managed within and without an organization. Where we once made do with racks of physical servers that we could access in the data center down the hall, we now have to manage many more servers that could be spread all over the globe.

    This is where data center orchestration and configuration management tools come into play. In many cases, we’re managing groups of identical servers, running identical applications and services. They’re deployed on virtualization frameworks within the organization, or they’re running as cloud or hosted instances in remote data centers.

    Puppet, Chef, Ansible, and Salt were all built with that very goal in mind: to make it much easier to configure and maintain dozens, hundreds, or even thousands of servers. That’s not to say that smaller shops won’t benefit from these tools, as automation and orchestration generally make life easier in an infrastructure of any size.
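
    All four tools are, at heart, engines for converging machines onto a declared desired state. The following minimal Python sketch illustrates that idempotent “describe the end state, only change what differs” idea; it is purely illustrative and is not how Puppet, Chef, Ansible or Salt are implemented (the file path and content are assumptions for the example).

    ```python
    # Minimal sketch of the idempotent "desired state" idea behind configuration
    # management tools: describe what a host should look like, check it, and only
    # change what differs. Purely illustrative; not how Puppet/Chef/Ansible/Salt
    # work internally.
    from pathlib import Path

    DESIRED_FILES = {
        "/etc/motd": "Managed by config management - do not edit by hand\n",
    }

    def ensure_file(path, content, dry_run=True):
        """Converge one file to its desired content; report what (if anything) changed."""
        p = Path(path)
        current = p.read_text() if p.exists() else None
        if current == content:
            return f"{path}: ok"
        if not dry_run:
            p.write_text(content)
        return f"{path}: would change" if dry_run else f"{path}: changed"

    if __name__ == "__main__":
        for path, content in DESIRED_FILES.items():
            print(ensure_file(path, content))
    ```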

  10. Tomi Engdahl says:

    Project Mystic’s Potential Competitors To VMware: Bring It On
    http://www.crn.com/news/storage/300072030/project-mystics-potential-competitors-to-vmware-bring-it-on.htm

    As first reported by CRN, VMware and EMC are teaming up to develop “Project Mystic,” an EMC-branded converged infrastructure appliance based on software VMware is developing that could be integrated by distributors on industry-standard server hardware.

    Converged infrastructure combines server, storage, networking and virtualization technologies from multiple vendors in such a way that they can be managed as if they were a single appliance.

    The best-known converged infrastructure offerings to date are those from multiple vendors

    Other vendors, including Hewlett-Packard, IBM, Dell and Oracle, have converged infrastructure offerings based almost exclusively on their own technologies.

    Project Mystic’s biggest potential target, however, could be the market held by developers of hyper-converged infrastructure technology, which differs from converged infrastructure in that the server, storage, networking and virtualization technology is all software-defined rather than coming from separate hardware components.

  11. Tomi Engdahl says:

    Don’t believe the hyper-converged hype: Why are we spending stupid amounts on hardware?
    Isn’t it called the software-defined data centre?
    http://www.theregister.co.uk/2014/06/09/hyper_converged_kit_what_for/

    In a software-defined data centre, why are some of the hottest properties hardware platforms?

    There are plenty of newly formed startups that will come to mind: highly converged, sometimes described as hyper-converged, servers.

    I think that it demonstrates what a mess our data centres have got into that products such as these are attractive. Is it the case that we have built-in processes that are so slow and inflexible that a hardware platform that resembles a games console for virtualisation becomes attractive?

    Surely the value has to be in the software: so have we got so bad at building our data centres that it makes sense to pay a premium for a hardware platform? There is certainly a large premium for some of them.

    Now I don’t doubt that deployment times are quicker, but my real concern is why we have got to this situation.

    It really doesn’t matter how quickly you can rack, stack and deploy your hypervisor if it takes you weeks to cable it up to talk to the outside world or give it an IP address.

  12. Tomi Engdahl says:

    Unified Networking from Intel: Virtualize Network, Storage & Compute for Optimal Cloud Performance
    https://www.youtube.com/watch?v=oYrlRWMbpO0

  13. Tomi Engdahl says:

    IBM releases Software-Defined Storage For Dummies – no joke
    Plus Big Blue sexes up its boring old GPFS line with new name
    http://www.theregister.co.uk/2014/06/13/ibm_writes_software_defined_storage_for_dummies_book/

    IBM has written its own Software Defined Storage for Dummies book (PDF) focusing on – you guessed it – its home-brewed General Parallel File System Elastic Storage

    GPFS is currently being sexed up as Elastic Storage, which has trendy, cloud-like, pay-for-usage connotations.

  14. Tomi Engdahl says:

    VMware puts a price on NSX and tells partners to open fire
    Indoctrination phase complete: let the selling begin!
    http://www.theregister.co.uk/2014/06/16/vmware_puts_a_price_on_nsx_and_tells_partners_to_open_fire/

    VMware’s NSX network virtualisation software has been added to the company’s price list, a small-but-important milestone that sees the product available to resellers for the first time.

    Without further ado, NSX comes in three cuts:

    A subscription version priced at $AUD550 ($US517 or £304) per virtual machine, per year;
    A licence for an add-on to the vCloud Suite, at $AUD4700 ($US4,420 or £2,600) per CPU;
    NSX for vSphere at $AUD8,065 per CPU ($US7,585 or £4,464).

    Lest those prices raise eyebrows, VMware is at pains to point out that its vision for NSX is that it will deliver without the need for new networking hardware. That’s in contrast to other network virtualisation frameworks that suggest new boxen built for purpose are the best way to hand the control plane over to servers.

  15. Tomi Engdahl says:

    Disrupting the Data Center to Create the Digital Services Economy
    https://communities.intel.com/community/itpeernetwork/datastack/blog/2014/06/18/disrupting-the-data-center-to-create-the-digital-services-economy

    We are in the midst of a bold industry transformation as IT evolves from supporting the business to being the business. This transformation and the move to cloud computing calls into question many of the fundamental principles of data center architecture. Two significant changes are the move to software defined infrastructure (SDI) and the move to scale-out, distributed applications. The speed of application development and deployment of new services is rapid. The infrastructure must keep pace. It must move from statically configured to dynamic, from manually operated to fully automated, and from fixed function to open standard.

    As a first step, we start with a commitment to deliver the best technology for all data center workloads – spanning servers, network and storage.

    But what we find even more exciting is our next innovation in processor design that can dramatically increase application performance through fully custom accelerators. We are integrating our industry leading Xeon processor with a coherent FPGA in a single package, socket compatible to our standard Xeon E5 processor offerings.

    Our new Xeon+FPGA solution provides yet another customized option, one more tool for customers to use to improve their critical data center metric of “Performance/TCO”.

  16. Tomi Engdahl says:

    Speaking in Tech: ‘Software-defined’ anything makes me BARF in my MOUTH
    Plus: Open source is a MYTH – look at OpenStack…
    http://www.theregister.co.uk/2014/07/02/speaking_in_tech_episode_116/

  17. Tomi Engdahl says:

    Data Centers to World: Not Dead Yet!
    Anticipating the next-gen data center
    http://www.networkworld.com/article/2459721/data-center/data-centers-to-world-not-dead-yet.html

    Agility and flexibility are two of the most popular words to describe the attributes expected from IT in helping achieve future business objectives. But how do you apply those attributes to what many large enterprises still consider the linchpin of IT infrastructure – the data center?

    There are not, yet, many companies like Condé Nast, which recently shuttered its data center to go “all in with the cloud.” Let’s face it, if you’re a content company, albeit one of the select few with a still thriving print business, transforming to an all-cloud strategy makes a lot of sense.

    For just about any other industry, cloud may drive new growth and innovation, but the bulk of business is still dependent on heavy-duty data center servers and applications to run the daily operations.

  18. Tomi Engdahl says:

    VMware’s high-wire balancing act: EVO might drag us ALL down
    Get it right, EMC, or there’ll be STORAGE CIVIL WAR. Mark my words
    http://www.theregister.co.uk/2014/08/26/vmwares_high_wire_balancing_act/

    In the battle for the software-defined data centre, one of VMware’s challenges is how to deliver software-defined/controlled storage without screwing up parent EMC’s hardware-based storage revenues.

    VMware is an overall EMC Federation member along with Pivotal and the EMC Information Infrastructure (EMC II) unit. The three are allowed to compete, but what will EMC’s chairman and overall CEO Joe Tucci say if one federation member screws up another’s revenues and strategy?

    EMC revenues are largely based on hardware storage arrays with software over-pinnings and connective products. VMware revenues are based on server virtualisation software with growing seedlings for software-defined networking (NSX) and storage (VSAN).

    But VMware has now made it official – it is in the converged appliance business with the EVO range announced at VMworld.

  19. Tomi Engdahl says:

    WTH: Once we have delivered on the software-defined data centre promise, what will be the next thing in IT infrastructure?

    AB: Well the notion of the software-defined data centre is a vision we have laid out, but very few traditional companies have been able to execute on this. Only companies like Amazon, Google and Facebook have succeeded; these are large companies with vast resources that have done everything internally.

    I think many traditional enterprises are quickly becoming software companies that need to learn to do rapid software development. IT infrastructure should be an enabler of this, not an inhibitor.

    That said, most companies have yet to implement the fundamental SDDC building blocks in the compute, network and storage layers. We have a long way to go.

    Source: http://www.theregister.co.uk/2014/09/01/battery_ventures_vc_looks_at_sandisk/

  20. Tomi Engdahl says:

    Tech kingpins: Your kit would be tastier with a spot of open source
    Come on EMC, open-source ViPR
    http://www.theregister.co.uk/2014/09/01/an_opening_is_needed/

    As storage infrastructure companies try to move to a more software-oriented world, they are having to try different things to grab our business.

    In today’s world, tin is not the differentiator and they need to compete head-on with open source – which might mean they have to take a more open-source type approach. Of course, they will argue that they have been moving this way with some of their products for some time, but said products have tended to be outside of their key infrastructure market.

    The only way I can see software-defined products like EMC’s ViPR gaining any kind of penetration will be for the big companies that make them to actually open-source them. There is a strong demand for a ViPR-like product, especially in the arena of storage management, but it is far too easy for EMC’s competitors to ignore it and subtly block it. So for it to gain any kind of traction, it will need open-sourcing.

    The same goes for ScaleIO, which is competing against a number of open-source products.

    If EMC is not quite ready for such a radical step, perhaps the first move could be a commercial free-to-use licence

  21. Tomi Engdahl says:

    Webcast follow-up: HP makes the case for convergence
    Moonshot on the launchpad
    http://www.theregister.co.uk/2014/09/05/virtualisation_convergence/

    One of the things that surprised many in our recent Regcast on converged infrastructure (CI) was the claim by HP’s Clive Freeman that hardware optimisation for well-defined workloads far outstrips the performance you would get for the same price using software optimisation on a standard platform.

    HP’s internal data concentrates on Moonshot, the company’s new range of ultra high-density, low-power servers.

    There are 45 Moonshot cartridges in a 4.3U chassis and each cartridge has either one or four servers, depending on the model. Each server consumes 10W.

    HP refers to Moonshot as “software-defined servers”: the software load defines the specialised cartridge you choose. HP plans specialised systems for applications such as big data or desktop virtualisation across its entire range of converged systems.

    The internal research shows that Moonshot’s low-power servers, based on Intel Atom or AMD Opteron CPUs, require 89 per cent less energy, 80 per cent less space and 77 per cent less cost than generic servers.

    But this is not a general rule: HP anticipates that this market will account for 19 per cent of the volume servers sold between now and 2016.

    What workloads can benefit? Converged systems are useful if you can define a predictable, specific workload or type of workloads. Optimising the hardware for these workloads, say the advocates of converged systems, offers both performance and management advantages.

    Converged architecture is not so useful for compute-intensive apps or virtualisation

  22. Tomi Engdahl says:

    It’s Getting Hot in Here: Titans Clash over SDN Standards
    Cloud providers’ hyperscale operations driving network architecture advances
    http://www.networkworld.com/article/2460170/sdnt-s-getting-hot-in-here-titans-clash-over-sdn-s/sdn/it-s-getting-hot-in-here-titans-clash-over-sdn-standards.html

    It’s going to get hot at the IEEE Hot Interconnects conference in late August. That’s when, according to EE Times, Facebook and Google will face off with “similar and competing” visions of software-defined networking.

    As Network World’s Jim Metzler reported last year, “Software-defined networking (SDN) is the hottest thing going today, but there is considerable confusion surrounding everything from the definition of the term to the different architectures and technologies suppliers are putting forward.”

    SDN, according to Brocade’s definition, “is an emerging concept that proposes to disaggregate traditional, vertically integrated networking stacks to improve network service velocity and customize network operations for specialized environments.”

    Well said, but although the technology is maturing and real world use cases are rea

    Facebook and Google are intent on leading the way, (whether separately or together remains to be seen) and they have a lot of clout.

    “Facebook has taken networking into its own hands, building a switch to link servers inside its data centers, and wants to make the platform available to others,”

    Google has developed an SDN architecture for a data center WAN interconnect, known as B4, that ties together its data centers globally.

    OpenFlow seems like it’s a bit further along commercially, with probably more industry support than what seems like a very Facebook-centric approach at OCP. But that’s not the point here; rather, all data center owners/operators should be thrilled to see these two giants pushing the pedal to the metal. Google, Facebook, Amazon Web Services and Microsoft Azure all have a vested stake in SDN as a foundational element of the data centers of tomorrow.
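
    Underneath the competing visions, the core SDN idea is the same: a software control plane computes match/action rules and programs switches, which then just match packets against those rules. The sketch below is a conceptual Python illustration of that split – it is not OpenFlow, OCP or any vendor API, and the addresses, ports and actions are made up.

    ```python
    # Conceptual sketch of the SDN split: a controller computes match/action rules
    # in software and pushes them to switches, which only match and forward.
    # An illustration of the idea, not OpenFlow or any vendor's API.

    class Switch:
        def __init__(self, name):
            self.name = name
            self.flow_table = []            # list of (match_dict, action) rules

        def install_flow(self, match, action):
            self.flow_table.append((match, action))

        def handle_packet(self, pkt):
            for match, action in self.flow_table:
                if all(pkt.get(k) == v for k, v in match.items()):
                    return action
            return "send_to_controller"     # table miss: punt to the control plane

    class Controller:
        """Central control plane: decides policy and programs the switches."""
        def program(self, switch):
            switch.install_flow({"dst_ip": "10.0.0.5", "dst_port": 80}, "forward:port2")
            switch.install_flow({"dst_ip": "10.0.0.5"}, "drop")   # default-deny that host

    sw = Switch("edge-1")
    Controller().program(sw)
    print(sw.handle_packet({"dst_ip": "10.0.0.5", "dst_port": 80}))  # forward:port2
    print(sw.handle_packet({"dst_ip": "10.0.0.5", "dst_port": 22}))  # drop
    print(sw.handle_packet({"dst_ip": "10.0.0.9"}))                  # send_to_controller
    ```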

  23. Tomi Engdahl says:

    Dell, Emerson, HP, Intel join to create new data center management specification
    http://www.cablinginstall.com/articles/2014/09/redfish-datacenter-spec-development.html

    Dell, Emerson Network Power (NYSE: EMR), HP and Intel (NASDAQ: INTC) have announced the creation of Redfish, a specification under development for data center and systems management that the companies say delivers comprehensive functionality, scalability and security. In a joint press release issued by the companies, Redfish is billed as “one of the most comprehensive specifications since the Intelligent Platform Management Interface (IPMI) was launched in 1998.”

    Redfish reportedly uses “a modern network interface style, allowing access to data using even simple, script-based programming methods.” The companies say that, going forward, “the specification will be designed to improve scalability and expand data access and analysis, help lower costs, and further enable feature-rich remote management while ensuring a secure solution that protects investment.”

    The specification effort leveraged the combined experience of the collaborating companies in IT systems hardware, microprocessors and data center infrastructure management (DCIM) technologies.
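
    The “simple, script-based programming methods” point is the heart of Redfish: management data is plain JSON over HTTPS REST. The hypothetical Python snippet below walks the standard /redfish/v1/ service root to its Systems collection and prints each system’s power state; the BMC address and credentials are placeholders, and certificate verification is disabled only for the sake of a lab example.

    ```python
    # Minimal sketch of the "simple, script-based" access Redfish enables: plain
    # HTTPS + JSON, no vendor agent. The host address and credentials below are
    # placeholders; /redfish/v1/ and its Systems collection are the standard
    # entry points defined by the specification.
    import requests

    BMC = "https://bmc.example.com"          # hypothetical management controller
    AUTH = ("admin", "password")             # placeholder credentials

    def get(path):
        r = requests.get(BMC + path, auth=AUTH, verify=False)  # lab use only; verify certs in production
        r.raise_for_status()
        return r.json()

    root = get("/redfish/v1/")
    systems = get(root["Systems"]["@odata.id"])          # usually /redfish/v1/Systems
    for member in systems["Members"]:
        system = get(member["@odata.id"])
        print(system.get("Name"), system.get("PowerState"))
    ```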

  24. Tomi Engdahl says:

    Datacenter Manageability fit for the 21st Century
    http://www.redfishspecification.org/

    Redfish is a modern intelligent manageability interface and lightweight data model specification that is scalable, discoverable and extensible.
    Redfish is suitable for a multitude of end-users, from the datacenter operator to an enterprise management console.

  25. Tomi Engdahl says:

    SDI wars: WTF is software defined infrastructure?
    This time we play for ALL the marbles
    http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure/

    The Software Defined Infrastructure (SDI) war is coming, and it will reshape the information technology landscape like nothing has since the invention of the PC itself.

    It consists of sub-wars, each important in their own right, but the game is bigger than any of them.

    We have just been through the worst of the storage wars. The networking wars are almost in full swing. The orchestration and automation wars are just beginning and the predictive analytics wars can be seen on the horizon.

    Each of these wars would be major events unto themselves. Billions upon billions of dollars will change hands. Empires will rise and startups will fall. Yet despite all of that, each of those wars is a tactical skirmish compared to the strategic – and tactical – war that is only just beginning.

    The SDI war is to be the net result of all of the sub-wars listed above, as well as several other smaller ones that are mostly irrelevant. The SDI war is the final commoditisation of servers – and entire datacenters – in one last gasp to counter the ease of use of public cloud computing and the inflated expectations brought about by the proliferation of walled garden smartphone and tablet technology.

    The SDI wars will not focus on storage, networking or compute, but on radically changing the atomic element of computing consumed. Instead of buying “a server” or “an array”, loading it with a hypervisor, then backups, monitoring, WAN acceleration and so forth, we will buy an “omni-converged” compute unit. I shall dub this an SDI block until someone comes up with a better marketing buzzword.

    The ultimate goal is that of true stateless provisioning. This would be similar to the “golden master” concept so familiar to those employing Virtual Desktop Infrastructure (VDI) brought to all workloads.

    So you want a MySQL database tuned for the SDI block you are running? It will deploy a golden master from the orchestration software pre-configured and pre-tested to run optimally on that hardware. Your data and customizations are separate from the OS and the application itself. When the OS and app are updated, the image will be altered by the vendor; you simply restart the VM and you’re good to go.
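
    A rough Python sketch of that golden-master flow, under the assumptions above: the OS-plus-application image is vendor-maintained and disposable, the data and customisations live on a separate volume, and an update is just an image swap plus restart. All names and objects here are hypothetical.

    ```python
    # Sketch of the stateless "golden master" flow described above: the OS+app image
    # is vendor-maintained and disposable, while data and customisation live on a
    # separate volume. All object and image names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        image: str          # golden master: OS + application, owned by the vendor
        data_volume: str    # customer data/config, survives image updates

    def deploy(name, image, data_volume):
        print(f"cloning golden master {image} -> {name}, attaching {data_volume}")
        return VM(name, image, data_volume)

    def update(vm, new_image):
        """Updating = swap the image and restart; the data volume is untouched."""
        print(f"replacing {vm.image} with {new_image} on {vm.name}, restarting")
        vm.image = new_image
        return vm

    db = deploy("mysql-01", "mysql-golden-2014.10", data_volume="vol-mysql-01-data")
    db = update(db, "mysql-golden-2014.11")   # data and tuning carried over unchanged
    ```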

  26. Tomi Engdahl says:

    Software-Defined Storage: The Next-Generation Data Platform for the Software-Defined Datacenter
    http://www.csc.com/infrastructure_services/insights/112933-software_defined_storage_the_next_generation_data_platform_for_the_software_defined_datacenter?utm_campaign=0914-GDC-Outbrain30&utm_source=outbrain&utm_medium=ocpc

    Enterprise IT is evolving into the software-defined datacenter as a result of cheaper commodity hardware, faster connectivity, and the continued rise of virtualization, Big Data, mobility, and social computing. Applications and infrastructure are no longer dependent on physical resources; rather, they can reside virtually anywhere in the corporate network or cloud.

    Rapidly following this trend is the move to software-defined storage (SDS), in which data storage also is less dependent on physical infrastructure and can be located where it is most needed.

  27. Tomi Engdahl says:

    If It Ain’t Automated, You’re Doing It Wrong
    http://www.thenewip.net/author.asp?section_id=289&doc_id=710937&cid=oubtrain&wc=4

    In all the excitement over virtualization and the impact that NFV and SDN will have on telecom networks, one stark reality remains for every IP network operator: However you are evolving your network, if you aren’t automating the back-end processes, you’re doing it wrong.

    This has been a reality for telecom network operators for years now, and most have been working very hard at this task, not only because automation leads to higher service quality and faster service delivery but because it also generally means lower costs of operation.

    As those engaged in this process know all too well, introducing automation means extracting people, reducing the human error factor in the process, and enabling flow-through processes that start with the customer input.

  28. Tomi Engdahl says:

    Horizon View and Virtual SAN Reference Architecture
    http://blogs.vmware.com/vsphere/2014/07/horizon-view-virtual-san-reference-architecture.html

    The VMware Software-Defined Storage group and the VMware End-User Computing group have teamed up to create an in-depth Reference Architecture detailing the performance and configuration of Horizon View on Virtual SAN.

  29. Tomi Engdahl says:

    Data-center upstart grabs Wozniak, jumps into virtual storage fight
    Primary Data launches at EMC and Quantum with ‘data hypervisor’
    http://www.theregister.co.uk/2014/11/19/primary_data/

    A fresh startup called Primary Data reckons it will reinvent “file virtualization” for software-defined data centers – and thus take on EMC’s ViPR and Quantum’s StorNext.

    Primary Data, cofounded by David Flynn, made the boast as Apple cofounder Steve Wozniak announced he has left Fusion-io to join Flynn at Primary Data as chief scientist.

    Now Flynn and Woz are back on the same team – and Primary Data is emerging from stealth mode to reveal an outline of what it’s developing.

    Flynn, Primary Data’s CTO, said: “Data virtualization is the inevitable next step for enterprise architectures, as it seamlessly integrates existing infrastructure and the full spectrum of specialized capabilities provided by ultra-performance and ultra-capacity storage resources.”

    The technology defines a “data hypervisor” that hides all the storage hardware and software below a global file namespace. There are separate channels for sending and receiving data, and for controlling access to the stored data.

    Admins can set policy definitions for placing and moving information, with rules reflecting storage performance, price and protection needs.

    The data hypervisor allows clients to dip into the storage systems in a protocol-agnostic way. Primary Data’s software does the hard work underneath to provision capacity and keep the bytes in line.
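
    As a rough illustration of that policy idea (and emphatically not Primary Data’s actual software), the Python sketch below picks the cheapest storage tier that satisfies declared latency and protection objectives; the tiers, prices and policies are invented for the example.

    ```python
    # Sketch of policy-driven data placement as described above: admins declare
    # performance/price/protection objectives and software picks the backing tier
    # under one namespace. Illustrative only; not Primary Data's actual product.
    TIERS = [  # name, latency (ms), $/GB/month, replicas kept
        {"name": "nvme-flash", "latency_ms": 0.2, "cost": 0.50, "replicas": 2},
        {"name": "sas-array",  "latency_ms": 5.0, "cost": 0.10, "replicas": 2},
        {"name": "object-cold", "latency_ms": 50.0, "cost": 0.02, "replicas": 3},
    ]

    def place(policy):
        """Return the cheapest tier that satisfies the policy, or None."""
        candidates = [t for t in TIERS
                      if t["latency_ms"] <= policy["max_latency_ms"]
                      and t["replicas"] >= policy["min_replicas"]]
        return min(candidates, key=lambda t: t["cost"], default=None)

    oltp_policy = {"max_latency_ms": 1.0, "min_replicas": 2}
    archive_policy = {"max_latency_ms": 100.0, "min_replicas": 3}
    print(place(oltp_policy)["name"])      # nvme-flash
    print(place(archive_policy)["name"])   # object-cold
    ```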

  30. Tomi Engdahl says:

    30 – count ‘em – 30 orgs sign up for Cumulus on Dell networking kit
    Is this software-defined networking thing hot or not?
    http://www.theregister.co.uk/2015/04/15/30_count_em_30_orgs_sign_up_for_cumulus_on_dell_networking_kit/

    In January 2014, Dell announced it would make it possible to run Cumulus Networks’ operating system on its networking gear.

    15 months later, 30 organisations have done so. As in a three followed by a zero. Worldwide.

    So says John McCloskey, executive director for enterprise solutions at Dell Australia and New Zealand. McCloskey’s not worried: he reckons software-defined networking is a very early stage market.

    So is Dell in a software-defined networking hole?

    VMware has said it’s won 400 VSAN customers worldwide, hardly a sign that software-defined storage is knocking off the array business.

    Cisco, meanwhile, says it has over 1,700 customers for its SDN effort, ACI, and that the tally of adherents is growing fast.

  31. Tomi Engdahl says:

    Meet Evolving Business Demands with Software-Defined Storage
    https://webinar.informationweek.com/19710?keycode=IKWE02

    The data storage landscape is becoming increasingly complex. Firms are looking for effective and quick ways to upgrade their storage infrastructure so that they can respond with services that win and retain customers – yet the status quo of storage provisioning continues to be slow and clumsy. Businesses are beginning to discover the advantages of a software-defined storage approach – one that accelerates the delivery of storage resources in today’s complex and dynamic infrastructures.

    Why 55% of technology decision makers are expressing interest in or starting to implement software-defined storage

    How distributed systems technology transforms commodity server infrastructure into a scalable, resilient, self-service storage platform

  32. Tomi Engdahl says:

    Powering Converged Infrastructure
    http://powerquality.eaton.com/About-Us/Markets/Converged-Infrastructure/Default.asp
    http://lit.powerware.com/ll_download.asp?file=WP_PoweringConvergedInfrastructures.pdf

    Converged infrastructures utilize virtualization and automation to achieve high levels of availability in a cost-effective manner. In fact, converged infrastructures are so resilient that some IT managers believe they can be safely and reliably operated without the assistance of uninterruptible power systems (UPSs), power distribution units (PDUs) and other power protection technologies. In truth, however, such beliefs are dangerously mistaken.

    What is converged infrastructure?
    Simply put, converged infrastructures are pre-integrated hardware and software bundles designed to reduce the cost and complexity of deploying and maintaining virtualized solutions. Most converged infrastructure products include these four elements:
    1. Server hardware
    2. Storage hardware
    3. Networking hardware
    4. Software (including a hypervisor, operating system, automated management tools and sometimes email systems, collaboration tools or other applications)

    Why use converged infrastructure?
    According to analyst firm IDC, the worldwide market for converged infrastructure solutions will expand at a compound annual growth rate of 40 percent between 2012 and 2016, rising from $4.6 billion to $17.8 billion. Sales of non-converged server, storage and networking hardware, by contrast, will increase at a CAGR of just a little over two percent over the same period. Benefits like the following help explain why adoption of converged infrastructures is rising so sharply:
    Faster, simpler deployment. Converged infrastructures are pre-integrated and tested, so they take far less time to install and configure. According to a study from analyst firm IDC, in fact, Hewlett-Packard converged infrastructures typically enable businesses to cut application provisioning time by 75 percent.
    Lower costs. Converged infrastructure products usually sell for less than the combined cost of their individual components, enabling businesses to conserve capital when rolling out new solutions. Furthermore, the automated management software included with most converged infrastructure offerings decreases operating expenses by simplifying system administration. Indeed, the HP converged infrastructure users studied by IDC shifted over 50 percent of their IT resources from maintenance to innovation on average.
    Enhanced agility. Thanks to their ease of deployment, affordability and scalability, converged infrastructures enable companies to add new IT capabilities or augment existing ones more quickly and cost-effectively.

    Power protection equipment plays a key role in automatically triggering virtual machine migration processes during utility outages. Converged infrastructures execute automated failover routines only when informed that there’s a reason to do so. During utility failures, network connected UPSs can provide that information by notifying downstream devices that power is no longer available. At companies without UPSs, technicians must initiate the virtual machine transfer processes manually, which is far slower and less reliable.
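
    The event flow described above can be sketched in a few lines of Python: a network-connected UPS reports that it is on battery and how much runtime remains, and power-aware management software starts live migration while there is still time, or falls back to a graceful shutdown. The status source, VM names and thresholds are hypothetical.

    ```python
    # Sketch of the UPS-triggered failover flow described above: a network-connected
    # UPS reports it is on battery, and power-aware management software starts
    # migrating VMs while there is still runtime left. Names are hypothetical.

    def ups_status():
        """Stand-in for polling a network UPS card (e.g. via SNMP or a REST API)."""
        return {"on_battery": True, "runtime_left_s": 540}

    def migrate(vm, target_site):
        print(f"live-migrating {vm} to {target_site}")

    def on_power_event(vms, target_site, safety_margin_s=300):
        status = ups_status()
        if status["on_battery"] and status["runtime_left_s"] > safety_margin_s:
            for vm in vms:                      # evacuate while the UPS can still carry the load
                migrate(vm, target_site)
        elif status["on_battery"]:
            print("runtime too low for clean migration; shutting down gracefully")

    on_power_event(["erp-app-01", "erp-db-01"], target_site="dr-site-b")
    ```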

    A converged infrastructure’s failover features can’t function without electrical power.

    Converged infrastructures are vulnerable to power spikes and other electrical disturbances.

    The fifth element of converged infrastructures: Intelligent power protection
    Power distribution units suitable for use with converged infrastructures do more than simply distribute power.

    Management software
    Most converged infrastructure solutions come with built-in system management software that helps make them highly resilient. Adding VM-centric power management software increases resilience even further by enabling technicians to do the following:
    Manage all of their converged IT and power protection assets through a single console.

  33. Tomi Engdahl says:

    Software-defined storage
    http://en.wikipedia.org/wiki/Software-defined_storage

    Software-defined storage (SDS) is an evolving concept for computer data storage software to manage policy-based provisioning and management of data storage independent of hardware. Software-defined storage definitions typically include a form of storage virtualization to separate the storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment may also provide policy management for feature options such as deduplication, replication, thin provisioning, snapshots and backup. SDS definitions are sometimes compared with those of Software-based Storage.

    By consensus and early advocacy,[1] SDS software is separate from the hardware it is managing. That hardware may or may not have abstraction, pooling, or automation software embedded. This philosophical span has made software-defined storage difficult to categorize. When implemented as software only in conjunction with commodity servers with internal disks, it may suggest software such as a virtual or global file system. If it is software layered over sophisticated large storage arrays, it suggests software such as storage virtualization or storage resource management, categories of products that address separate and different problems.

    Based on similar concepts as software-defined networking (SDN),[4] interest in SDS rose after VMware acquired Nicira (known for “software-defined networking”) for over a billion dollars in 2012.[5][6]

    SDS – software-defined storage
    http://www.webopedia.com/TERM/S/software-defined_storage_sds.html

    Storage infrastructure that is managed and automated by intelligent software as opposed to by the storage hardware itself. In this way, the pooled storage infrastructure resources in a software-defined storage (SDS) environment can be automatically and efficiently allocated to match the application needs of an enterprise.

    Separating the Storage Hardware from the Software

    By separating the storage hardware from the software that manages the storage infrastructure, software-defined storage enables enterprises to purchase heterogeneous storage hardware without having to worry as much about issues such as interoperability, under- or over-utilization of specific storage resources, and manual oversight of storage resources.

    The software that enables a software-defined storage environment can provide functionality such as deduplication, replication, thin provisioning, snapshots and other backup and restore capabilities across a wide range of server hardware components. The key benefits of software-defined storage over traditional storage are increased flexibility, automated management and cost efficiency.
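
    A minimal, hypothetical sketch of that policy-based provisioning idea: request a volume by policy (size, replica count, thin provisioning, deduplication) and let software place it on pooled commodity capacity. The hosts, sizes and placement rule are invented for illustration and do not represent any particular SDS product.

    ```python
    # Minimal sketch of the policy-based provisioning idea behind SDS: request a
    # volume by policy and let software place it on pooled commodity capacity.
    POOL = [  # commodity servers contributing raw capacity (GB free)
        {"host": "node-a", "free_gb": 800},
        {"host": "node-b", "free_gb": 600},
        {"host": "node-c", "free_gb": 900},
    ]

    def provision(name, size_gb, replicas=2, thin=True, dedup=False):
        """Pick the emptiest hosts for each replica and record the volume's policy."""
        reserved_gb = 0 if thin else size_gb          # thin volumes reserve nothing up front
        targets = sorted(POOL, key=lambda n: n["free_gb"], reverse=True)[:replicas]
        for node in targets:
            node["free_gb"] -= reserved_gb
        return {"volume": name, "size_gb": size_gb, "replicas": [n["host"] for n in targets],
                "thin": thin, "dedup": dedup}

    print(provision("vm-images", 500, replicas=2, thin=True, dedup=True))
    print(provision("db-logs", 200, replicas=3, thin=False))
    ```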

  34. Tomi Engdahl says:

    ‘Composable infrastructure’: new servers give software more to define
    Cisco, Intel, IBM and HP are bridging virtualisation and the software-defined data centre
    http://www.theregister.co.uk/2015/10/26/composable_infrastructure_servers_that_give_software_more_to_define/

    “Composable infrastructure” is a term you’re about to start hearing a lot more, and the good news is that while it is marketing jargon, behind the shine are pleasing advances in server design that will advance server virtualisation and private clouds.

    The new term has its roots in server virtualisation, which is of course an eminently sensible idea that anyone sensible uses whenever possible. Intel and AMD both gave server virtualisation a mighty shunt forward with their respective virtualisation extensions that equipped their CPUs with the smarts to help multiple virtual machines to do their thing at once.

    Servers have changed shape and components in the years since server virtualisation boomed. But now they’re changing more profoundly.

    Exhibit A is the M-series of Cisco’s UCS servers, which offer shared storage, networking, cooling and power to “cartridges” that contain RAM and CPU. Cisco’s idea is that instead of having blade servers with dedicated resources, the M-series allows users to assemble components into servers with their preferred configurations, with less overhead than is required to operate virtual machines that span different boxes or touch a SAN for resources.

    In a composable infrastructure world, APIs make it possible for code to whip up the servers it wants. That’s important, because composable infrastructure is seen as a bridge between server virtualisation and the software-defined data centre. The thinking is that infrastructure that allows itself to be configured gives software more to define, which is probably a good thing.
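
    To make the “code whips up the servers it wants” point concrete, here is a hypothetical Python sketch of a composition API: a caller asks for a server of a given shape and the infrastructure reserves components from disaggregated pools. The API, pool sizes and resource names are assumptions, not Cisco’s or HP’s actual interfaces.

    ```python
    # Sketch of the "composable" idea: code asks an API for a server with a given
    # shape, and the infrastructure assembles it from disaggregated pools of CPU,
    # memory and NICs. The API and resource names here are hypothetical.
    POOLS = {"cpu_cores": 512, "ram_gb": 4096, "nics_10g": 64}

    def compose_server(name, cores, ram_gb, nics=2):
        """Reserve components from the shared pools and return a logical server."""
        request = {"cpu_cores": cores, "ram_gb": ram_gb, "nics_10g": nics}
        if any(POOLS[k] < v for k, v in request.items()):
            raise RuntimeError("insufficient free resources in the pools")
        for k, v in request.items():
            POOLS[k] -= v
        return {"name": name, **request}

    web = compose_server("web-tier-01", cores=8, ram_gb=64)
    db = compose_server("db-01", cores=32, ram_gb=512, nics=4)
    print(web, db, "remaining:", POOLS)
    ```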

    HP has announced it plans to get into the composable caper and like Cisco uses the “composable infrastructure” moniker.

  35. Tomi Engdahl says:

    Data center power can be software defined too
    http://www.datacenterdynamics.com/critical-environment/data-center-power-can-be-software-defined-too/84915.fullarticle

    More and more data is being collected, stored and transacted today thanks to the internet, social networking, smartphones and credit cards. All this activity takes place in real time, so application availability is more important than ever and reliability requirements are increasingly stringent. Much application downtime today is caused by power problems, either in a data center’s power delivery network or the utility distribution grid. This is likely to become even more so as reliability of the electrical grid continues to deteriorate.

    Part of the reason power is such a frequent cause of application downtime is the effort to abstract IT hardware from applications through virtualization and “software defined data center” technologies. While abstracting servers, storage and networking, the concept of software defined infrastructure has ignored power.

    It is a purely IT-centric view of the data center. Standing separately are facilities staff who operate building management systems and other infrastructure components. If you want an integrated management environment for this infrastructure, you get what is called data center infrastructure management (DCIM) software.

    Software defined data center technologies and DCIM software are valuable tools for their respective purposes, but neither addresses power-related downtime. This problem is generally addressed by setting up multiple geographically dispersed, often fully redundant data centers, configured for either hot or cold backup and failover. But automated failover and recovery is still very often plagued by problems.

    Application failover to another site requires manual intervention nearly 80% of the time. A study by Symantec found that 25% of disaster recovery failover tests fail completely even before getting to the manual part.

    Today, software defined data center and DCIM solutions do not address the relationship between applications and power. Power should be the next resource to become software defined. While you can use software to allocate IT resources, that is not possible with power. You cannot dynamically adjust the amount of power going to a rack or an outlet; you can, however, dynamically change the amount of power consumed by IT gear plugged into an outlet by shifting the workload. Software defined power involves adjusting server capacity to accommodate workloads and indirectly manage the power consumed.

    The approach could combine power capacity management with disaster recovery procedures and other functions, such as participation in utility demand response programs.

    Because load shifting does not occur until availability of the destination has been verified, the process is risk free, and when disaster does strike, the chances of smooth transition are dramatically improved.

    An implementation of software defined power brings together application monitoring, IT management, DCIM, power monitoring, enterprise-scale automation, analytics and energy market intelligence.
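
    The load-shifting idea can be sketched as follows (a toy Python illustration, with invented racks, budgets and per-workload estimates): when a rack approaches its power budget, workloads are moved to a rack with verified headroom, so the gear behind the constrained feed draws less.

    ```python
    # Sketch of the load-shifting idea behind "software-defined power": you cannot
    # dial down power at the outlet, but you can move workloads so the gear behind
    # a constrained feed draws less. Rack names and budgets are illustrative.
    RACKS = {
        "rack-1": {"budget_w": 8000, "draw_w": 7700, "workloads": ["batch-07", "web-03"]},
        "rack-2": {"budget_w": 8000, "draw_w": 4200, "workloads": ["web-01"]},
    }
    EST_DRAW_W = {"batch-07": 900, "web-03": 400, "web-01": 500}

    def shift_if_constrained(racks, headroom_w=500):
        for name, rack in racks.items():
            while rack["budget_w"] - rack["draw_w"] < headroom_w and rack["workloads"]:
                wl = rack["workloads"][0]
                need = EST_DRAW_W[wl]
                # verify the destination has room before moving anything (risk-free shift)
                dest = next((d for dn, d in racks.items()
                             if dn != name and d["budget_w"] - d["draw_w"] >= need + headroom_w), None)
                if dest is None:
                    break
                rack["workloads"].remove(wl); rack["draw_w"] -= need
                dest["workloads"].append(wl); dest["draw_w"] += need
                print(f"shifted {wl} out of {name}")

    shift_if_constrained(RACKS)
    print(RACKS)
    ```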

  36. Tomi Engdahl says:

    OPENCORES: Tools
    http://opencores.org/opencores,tools

    There are plenty of good EDA tools available as open source. The use of such tools makes it easier to collaborate at the OpenCores site. An IP that has readily available scripts for an open source HDL simulator makes it easier for another person to verify and possibly update that particular core. A test environment built for a commercial simulator that only a limited number of people have access to makes verification more complicated.

  38. Tomi Engdahl says:

    The Register guide to software-defined infrastructure
    Our very own Trevor Pott does his best to cut through the marketing fluff
    http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/

    Software-Defined Infrastructure (SDI) has, in a very short time, become a completely overused term.

    As the individual components of SDI have started to become automated the marketing usage of the term has approached “cloud” or “X as a Service” levels of abstracted pointlessness.

    Understanding what different groups mean when they use the term “software-defined” means cutting through a lot of fluff to find it. Ultimately, this is why I chose to eventually use the term Infrastructure Endgame Machine to describe what I see as the ultimate evolution of SDI: the marketing bullshit has run so far ahead of the technical realities that describing theoretical concepts can only be done using ridiculous absolutist terminology like “endgame machine”.

    I don’t think even tech marketers are willing to go there quite yet.

    SDI wars: WTF is software defined infrastructure?
    This time we play for ALL the marbles
    http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure

    In order to understand the problem with “software-defined” anything, let’s start the discussion with the most overused subterm of all: Software-Defined Storage (SDS).

    All storage is software-defined.

    SDS vendors want you to become locked into their software instead of being locked in to EMC’s combination of software and hardware. Pure and simple.

    Software-Defined Networking (SDN) is another often confused term. It comes in two flavours: virtual and physical, and is often lumped together with Network Functions Virtualisation (NFV) which also comes in two flavours: telco and everyone else.

    The two flavours of SDN are not mutually incompatible. Indeed, a hybrid between the two is starting to emerge as the most likely candidate, once everyone is done stabbing Cisco to death with the shiv of cutthroat margins.

    Software-defined really means developer-controlled

    But the thing to notice here is the bit about the “API-fiddling developer”. When you strip all of the blither, marketing speak, infighting, politics, lies, damned lies and the pestilent reek of desperation away what you have is Amazon envy. “Software-defined” means nothing more than “be as good as – or better than – Amazon at making the lives of developers easy”.

    That’s it, right there, ladies and gentlemen. The holy grail of modern tech CxO thinking. It’s been nearly 10 years since AWS launched and the movers and shakers in our industry still can’t come up with anything better. Software-defined X, the Docker/containerisation love affair, the “rise of the API”, keynotes about the irrelevance of open source and the replacement of it with “open standards” … all of it is nothing more than the perpetual, frenetic and frenzied attempt to be like Amazon.

    Developers are not engineers

    Where it all goes wrong – and it has – is that while many engineers are developers, not all developers are engineers. In the “bad old days”, we had a separation of powers. In a well-balanced IT department no one idiot could ruin everything for everyone else.

    A virtual admin with a burning idea would need to get the network, storage, OS, application and security guys to all sign off on it.

    The new way is to dispense with all of that and let the devs run the asylum. Hell, most software teams have almost entirely done away with testing and quality assurance. It’s common practice for even the mightiest software houses to throw beta software out as “release” and let the customers beat through the bugs in production.

    It’s a rare company that – like Netflix – invests in building a chaos monkey. Rarer still are those still building software using proper engineering principles.

    Software-defined change management

    With the exception of a handful of Israeli startups run by terrifying ex-Mossad InfoSec types, these are the sorts of questions and discussions that make software-defined X startups very, very angry. They really don’t want to talk about things like rate limiting change requests from a given authentication key, how one might implement mitigation via segmentation or automated incident response.

    There’s money to be made and any concerns about privacy, security or data sovereignty are to be viciously stamped out. The hell of it is … they’re not wrong.

    Change management is seen as a problematic impediment by pretty much anyone who isn’t a traditional infrastructure nerd or a security specialist. Developers, sales, marketing and most executives want what they want and they want it now. If IT can’t deliver, they’ll go do their thing in Amazon. Every time that happens that is money those startups – or even the staid old guard – aren’t getting.

    Eventually, the software-defined crew will realise that if they are going to be around for more than a single refresh cycle they need to put a truly unholy amount of time and effort into idiot-proofing their offerings. Those that don’t won’t be around long.

    When someone talks about “software-defined”, that’s what they’re trying to be. Or, at least, they’re trying to be some small piece of that puzzle. If they do talk about “software-defined”, however, take the time to ask them hard questions about security, privacy and data sovereignty. After all, in a “software-defined” world, those sorts of considerations are now automated. Welcome to the future.
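
    One of the mitigations mentioned above – rate limiting change requests from a given authentication key – is easy to illustrate with a token bucket. The Python sketch below throttles automated change requests per API key so a runaway script or stolen key cannot reconfigure an entire estate in seconds; the rates and key names are illustrative assumptions.

    ```python
    # Minimal sketch of one mitigation mentioned above: rate-limiting automated
    # change requests per authentication key with a token bucket. Parameters and
    # key names are illustrative.
    import time

    class TokenBucket:
        def __init__(self, rate_per_s, burst):
            self.rate, self.capacity = rate_per_s, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = {}
    def change_allowed(api_key, rate_per_s=0.2, burst=5):     # ~1 change per 5 s, burst of 5
        bucket = buckets.setdefault(api_key, TokenBucket(rate_per_s, burst))
        return bucket.allow()

    for i in range(8):
        print(i, change_allowed("dev-team-key-01"))   # first 5 pass, then throttled
    ```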

  39. Tomi Engdahl says:

    Security the key to software-defined datacentre takeup
    http://www.cloudpro.co.uk/saas/5997/security-the-key-to-software-defined-datacentre-takeup

    94 per cent of executives think security is more important than cost savings

    A report by HyTrust has revealed security is the key factor that will make more executives take up Software-Defined Data Centre (SDDC) services, ranking higher than cost savings, agility and performance enhancements.

    A total 94 per cent of the executives questioned said better security would help companies realise the benefits of the technology. Additionally, 93 per cent agreed that the benefits of migration to virtualisation and the cloud are undeniable and quantifiable, suggesting there will be a faster drive towards SDDC infrastructure in the future.

    A further 88 per cent of respondents think optimal SDDC strategies and deployment will drive up virtualisation ratios and server optimisation, while also improving finances in the organisation.

    “It’s always been hard to deny the potential benefits of SDDC infrastructure, but in the past the obvious advantages have sometimes been overshadowed by concerns over security and compliance,” said Eric Chiu, president of HyTrust.

    Almost all (94 per cent) think current security levels on SDDC platforms and strategies meet their organisation’s needs ‘very well’ or ‘somewhat well’, with only four per cent saying they don’t address the needs of the company.

    “What we’re seeing now is clear progress in this exciting arena, as technology solutions that balance high-quality workload security with effortless automation push back those fears,” Chiu added.

    “The focus is now exactly where it should be: ensuring that the virtualized or cloud infrastructure enables tremendous cost savings with unparalleled agility and flexibility.”

  40. Tomi Engdahl says:

    Could ‘software-defined power’ unlock hidden data center capacities?
    http://www.cablinginstall.com/articles/pt/2017/01/could-software-defined-power-unlock-hidden-data-center-capacities.html?cmpid=enl_cim_cimdatacenternewsletter_2017-01-31

    Booming demand for cloud computing and data services will only accelerate as the number of conventional computer-based users is rapidly dwarfed by the multitude of connected “things” that the Internet of Things (IoT) threatens to bring about.

    What’s needed here is a method to even out power-supply loads, perhaps by redistributing processing tasks to other servers or by pausing non-time-critical tasks or rescheduling them to quieter times of day. Other methods can address demand fluctuations by using battery-power storage to meet peak demands without impacting the load presented to the utility supply.

    What characterizes all these potential solutions is the need for greater intelligence, not just in the management of the data center’s processing operations, but in the way power is managed. One potential solution is Software Defined Power (SDP), which might unlock the underutilized power capacity available within existing systems.

