Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, shown in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

Interestingly, missing from the quadrant figure are big-name companies that have invested heavily in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers that had IaaS clouds in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. Gartner has also found one big minus: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view of the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    You’ve heard of the internet, right? Well this here might just be the INTERCLOUD
    Now you can have a server in every port
    By Jack Clark, 30 Apr 2014
    http://www.theregister.co.uk/2014/04/30/equinix_cloud_exchange/

    Equinix has removed one of the most intractable stumbling blocks in cloud computing’s evolution from a high-price, differentiated market into one of low-cost utilities – by making it trivial for customers to suck data out of one provider and pour it into another.

    The colocation provider announced its Cloud Exchange service on Wednesday. In so doing, it brought the cloud market one step closer to behaving more like a utility such as electricity by reducing the cost of switching data and compute between providers.

    Cloud Exchange uses Equinix’s globe-spanning fleet of data centers, combined with the huge presence of edge equipment of major cloud providers in its facilities, to gin up a system for giving customers the option of connecting up with clouds from Amazon and Microsoft through dedicated 1Gb or 10Gb connections.

    “Equinix provides a Cloud Exchange Portal and APIs that simplify the process of managing connections to multiple cloud services,” the company explained in a press release. “Customers can use the portal and APIs to allocate, monitor and modify virtual circuits in near real-time with the provisioning of those circuits automated end-to-end from the Cloud Exchange to the service provider.”
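    The press release describes an allocate/monitor/modify lifecycle for virtual circuits. As a rough illustration of that lifecycle (the class and method names below are hypothetical stand-ins, not the real Cloud Exchange API):

```python
from dataclasses import dataclass

@dataclass
class VirtualCircuit:
    """Toy model of a dedicated circuit between a customer port and a cloud provider."""
    provider: str        # e.g. "Amazon" or "Microsoft"
    bandwidth_gb: int    # dedicated 1 Gb or 10 Gb link
    status: str = "provisioning"

class ExchangePortal:
    """Hypothetical stand-in for the Cloud Exchange portal/API described above."""
    def __init__(self):
        self.circuits = {}
        self._next_id = 1

    def allocate(self, provider, bandwidth_gb):
        if bandwidth_gb not in (1, 10):
            raise ValueError("only dedicated 1 Gb or 10 Gb connections are offered")
        cid = self._next_id
        self._next_id += 1
        vc = VirtualCircuit(provider, bandwidth_gb)
        vc.status = "active"   # provisioning is automated end-to-end
        self.circuits[cid] = vc
        return cid

    def monitor(self, cid):
        return self.circuits[cid].status

    def modify(self, cid, bandwidth_gb):
        self.circuits[cid].bandwidth_gb = bandwidth_gb

portal = ExchangePortal()
cid = portal.allocate("Amazon", 1)
portal.modify(cid, 10)   # move the same circuit to the 10 Gb tier
print(portal.monitor(cid), portal.circuits[cid].bandwidth_gb)
```

    The point of the model is the workflow, not the wire format: circuits are first-class objects a customer can create, inspect and resize in near real-time, which is what makes switching between providers cheap.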

    Reply
  2. Tomi Engdahl says:

    Microsoft acquires New Zealand-based cloud computing company GreenButton
    http://thenextweb.com/microsoft/2014/05/01/microsoft-acquires-new-zealand-based-cloud-computing-company-greenbutton/

    Microsoft is stepping up its cloud computing game after acquiring GreenButton, a New Zealand-based startup that uses its Azure platform and has an existing relationship with the Redmond company.

    GreenButton provides ‘on-demand’ cloud solutions, based around a high-intensity cloud burst technology — for example, customers can use its products to run high-performance computing or big data analytics workloads in the cloud.

    Microsoft says that the GreenButton solution and the company’s SDK will become part of a new offering from Microsoft Azure.

    Reply
  3. Tomi Engdahl says:

    Licensed to BILL: How much should you cough for software licences?
    Nobody really knows – so we all get screwed
    http://www.theregister.co.uk/2014/05/02/licensed_to_bill_storagebod/

    “Yet another change to a licensing model. You can bet it’s not going to work out any cheaper for me,” was the first thought that flickered through my mind during a presentation about GPFS 4.1 at the GPFS UG meeting in London.

    This started up another train of thought: in this new world of software-defined storage, how should the software be licensed? And how should the value be reflected?

    And if I fully embrace a programmatic provisioning model that dynamically changes the storage configuration, does any model make any sense apart from some kind of flat-fee, all-you-can-eat model?

    Reply
  4. Tomi Engdahl says:

    Security guru: You can’t blame EDWARD SNOWDEN for making US clouds LOOK leaky
    And anyway, people AREN’T switching away
    http://www.theregister.co.uk/2014/04/30/mikko_hypponen_infosec_keynote_speech/

    Infosec 2014 Accusations that the revelations from rogue National Security Agency sysadmin whistleblower Edward Snowden have damaged the US technology industry are misplaced, according to influential security guru Mikko Hypponen.

    Hypponen, chief research officer at security firm F-Secure, said that the disclosure that US tech was either “booby-trapped or monitored” may have had a damaging effect on the US cloud industry. But blaming this on Snowden was misplaced and akin to “blaming Al Gore for global warming”.

    Reply
  5. Tomi Engdahl says:

    Managed services for legacy apps may be your cloud on-ramp
    Dimension Data bets on Microsoft-based hybrid cloud
    http://www.theregister.co.uk/2014/05/05/managed_services_for_legacy_apps_may_be_your_cloud_onramp/

    The battle between Amazon Web Services, Google and Microsoft for the public cloud gets a lot of attention, but global managed services provider Dimension Data has thrown its hat into an adjacent ring with a new hybrid Windows Server cloud.

    DiData’s service uses Microsoft’s Cloud Platform and Hyper-V everywhere, which means a homogeneous rig spanning on-premises kit, Azure and the DiData cloud, all manageable using the likes of System Center.

    The company’s plan is not to offer an “I need ten servers NOW – here’s my credit card” model. Instead, the aim is to give customers the chance to migrate a legacy Windows app into a cloudy environment where it can be given the managed services treatment, complete with service level agreement.

    “If everyone’s apps had been designed to run in the cloud natively, that would be great,” he told Vulture South. “The cost of re-platforming them is significant. We are getting traction to manage them and patch them.”

    Also perhaps surprising is that Dimension Data is not alone in this kind of play. The likes of Fujitsu and CSC also operate clouds on very large scales

    Reply
  6. Tomi Engdahl says:

    Dotcom Thanks RIAA and MPAA for Mega’s Massive Growth
    http://torrentfreak.com/dotcoms-mega-thanks-riaa-mpaa-140501/

    Mega.co.nz, the cloud storage company founded by Kim Dotcom, has seen the number of uploads triple in the past six months. Mega users now upload a total of half a billion files per month. According to Kim Dotcom, the MPAA and RIAA deserve some credit for the unprecedented growth.

    Acting on a lead from the entertainment industry, the U.S. Government shut down Megaupload early 2012.

    Exactly a year later Kim Dotcom made a comeback with a new file-storage venture. Together with several old colleagues and new investors, Mega was launched. The new service, which has a heavy focus on privacy and security, has expanded ever since.

    This morning Dotcom posted an image showing how user uploads have increased more than 300% over the past six months. The graph doesn’t specify the scale, but the New Zealand-based entrepreneur told TF that the service now processes over half a billion uploads per month.

    That’s more than 10,000 files per minute….

    “We are experiencing massive growth. We can’t add new servers and bandwidth fast enough,” Dotcom tells us.

    Reply
  7. Tomi Engdahl says:

    Review: GFI Cloud eliminates need to nursemaid Windows
    Help for the hard-pressed sysadmin
    http://www.theregister.co.uk/2014/05/06/gfi_cloud_review/

    The purpose of GFI Cloud is simply to manage and secure a Windows-based network of desktops and servers. With GFI Cloud, the sysadmin can ensure that they are properly maintained and that nobody has done anything outrageously stupid to them.

    There are two different ways to implement a cloud management service: single agent or per-system agent. Talk to various cloud management companies and you will get impassioned speeches about why one method is better than the other.

    The first requires you to run a single agent on a network and poke holes in the firewall of every other system so that they can be managed by this agent.

    In my opinion, the single-agent approach is a terrible plan for the kind of customer who would want the simplified management services provided by GFI Cloud.

    While it may seem onerous at first blush to install an agent on every system to be monitored, I believe this is significantly less problematic than managing firewall and security settings across one’s entire estate.

    GFI Cloud was designed to be simple and easy to use. It was targeted not to compete with the likes of Microsoft’s System Center but with “nothing at all”.

    You start off with the dashboard which gives you an overall view of all the devices on your network.

    The concept of having your Windows event logs displayed in a single pane of glass interface is quite novel

    Push the asset-tracking button, select hardware and suddenly you know everything there is to know about the bits that make up that system. Similarly, you can pull full software information for that device.

    Poking the monitoring button in Device view also gives you access to performance charts for CPU, memory, disk busy time and disk queue length.

    GFI Cloud’s web protection is a simple and well implemented corporate web filter. Put your computers in a group and allow or deny access to different categories of website.

    GFI Cloud is not just a simple management interface for the overworked sysadmin. Far more critically, it also functions as a checklist of the basic items that need to be covered to keep your network functioning.

    GFI cloud is the kind of management tool that offers that first step away from constantly fighting fires towards getting IT under control.

    Reply
  9. Tomi Engdahl says:

    HP’s Helion is a commitment to Openstack cloud
    http://www.theinquirer.net/inquirer/feature/2343786/hps-helion-is-a-commitment-to-openstack-cloud

    HP HAS ANNOUNCED that it will bring together the components of its cloud computing services under the Helion brand.

    As we reported earlier this week, the company has committed $1bn to cloud research and development based on Openstack, allowing its contributions of code to become part of the fabric of the Linux community.

    “Our view on cloud computing is that it will be hybrid. Our view has always been that to give customers the best consumption options, you must have an open platform and when we talk about “open platform” these days, really we’re talking about open software,”

    With its colours pinned, we asked, does HP believe there is still a market for proprietary solutions or is only open source practically and financially viable?

    Clifford replied, “I think that open-source computing has proved that it does destabilise proprietary offerings. I used to be a programmer and 30 years ago, proprietary solutions made for excellent machines, but when the enterprise version of Unix arrived, I felt the opportunities for me as a programmer expand.”

    The first version of Helion is already available. The community edition is aimed at proofs of concept and at learning the system ahead of a full enterprise rollout later in the year; however, Clifford is keen that its power not be underestimated. “You can take this and run with it now. At the moment, we see the limit as 30 nodes, not because you can’t run it any bigger but more because we see it as a proof of concept licence that will allow people to understand it better. It’s certainly sufficient to spin up a small cloud. It’s not some sort of lab test – it’s the real thing.”

    Reply
  10. Tomi Engdahl says:

    Box zings rivals with massive GE customer win
    GE plans to roll out Box’s cloud storage and file sharing service to its 300,000 employees
    http://www.cio.com/article/752613/Box_zings_rivals_with_massive_GE_customer_win?page=1&taxonomyId=600010

    Rocked recently by reports that it has delayed its IPO, Box has rebounded with a major customer win, snapping up General Electric, which plans to roll out the cloud storage and file sharing service to its 300,000 employees worldwide.

    The deal, announced Thursday, represents Box’s largest single customer deployment, and boosts its credibility as an enterprise IT provider as it fiercely battles rivals like Microsoft, Google, IBM, Dropbox, EMC, Citrix and Egnyte.

    “Moving to a cloud technology like Box allows us to centralize all of our content and provides more efficiency, speed and simplicity for our employees,” said Jamie Miller, GE’s CIO, in a press release.

    GE’s endorsement of Box is also further validation of the trend among businesses to store data in vendors’ cloud data centers, where employees can access files via the Internet and share them from a variety of devices, including smartphones and tablets.

    Based in Los Altos, California, Box has about 1,000 employees

    It has about 34,000 paying corporate customers, 40 percent of them Fortune 500 companies. However, only 7 percent of its 25 million end users pay for the service.

    Once Box is in via this bottom-up approach driven by end users, IT departments often embrace the service, which has an extensive set of IT management and security controls, because, unlike Dropbox, it has been designed from the start for workplace use.

    Reply
  11. Tomi Engdahl says:

    HP drops $1bn, two-year OpenStack cash bomb
    Floats own-branded open-source plus IP protection
    By Gavin Clarke, 7 May 2014
    http://www.theregister.co.uk/2014/05/07/hp_1_billion_openstack_investment/

    Hewlett-Packard has unveiled a $1bn, two-year campaign promoting its open-source cloud, now rebranded as Helion.

    The PC maker says it will be spending on R&D, the development of cloud products and hiring “hundreds” of experts in a new OpenStack professional services practice. Experts are being hired to cover planning advice, building and migration, and operations and management.

    Underpinning this will be a tried and tested HP-branded version of the OpenStack distro released in two packages – one free, the other commercial.

    Helion OpenStack Community edition is the free version but will feature relatively limited scale, for use in pilots and testing.

    The commercial edition of the HP-branded code is promised for next month

    HP is also offering Helion a financial umbrella should patent sharks come nibbling.

    HP will promise indemnification against IP infringement claims to direct customers and customers of service providers and resellers on Helion.

    The technology and legal push are to pave the way for a rollout of Helion OpenStack-based cloud services in 20 of HP’s 80 data centres in the next 18 months.

    Also, HP’s OpenStack will be “tightly integrated” with its server, storage and networking platforms including its 3Par, StorVirtual VSA and SDN Controller.

    Reply
  12. Tomi Engdahl says:

    App-happy cloud bod? You’d better keep eyes on your networks
    Network performance monitoring: getting more difficult… and MORE important
    http://www.theregister.co.uk/2014/03/28/you_need_a_thousand_eyes/

    Measuring how well your application is performing is not a straightforward task. Many enterprises have multiple locations, work with cloud-based applications and have end users stationed in different parts of the world.

    With the future of IT infrastructure trending toward more interconnectivity and external dependencies, it becomes both increasingly difficult and increasingly important to know how your application is performing.

    Performance management biz Thousand Eyes has created a network performance measurement tool that is ready for modern cloud application environments. It is able to collect data inside your enterprise environment and data from various probing points on the internet and tell you exactly how your application is performing

    Enterprise environments are changing so much, you need a nimble deployment that allows a lot of flexibility. We wanted to make people understand their network problems outside their own network environment, which is a hard thing to do.

    The reality is that applications are starting to move out to the cloud. When your Office 365 deployment is not working well, you do not really know whether it is your enterprise network slowing down, your internet connection having a bad day, or Office 365 itself actually having problems.

    The solution is a SaaS application; you do not have to set up servers on your side. There are two ways we collect data. The first is with “private agents”, which are probes that are deployed locally. If you are running multiple locations then you will need one private agent per site.

    Reply
  13. Tomi Engdahl says:

    Microsoft gins up admin-soothing Azure file vault
    Wanna link many VMs to one storage pool? Now you can. Monday’s looking up, eh? Eh?
    http://www.theregister.co.uk/2014/05/12/microsoft_azure_files_launch/

    Microsoft has made it easier for Windows admins to migrate applications up into the cloud via a sysadmin-friendly file-sharing service.

    The Microsoft Azure Files tech was announced at TechEd on Monday and gives admins a way to create a pool of storage and attach it to multiple virtual machines within the same data center.

    This, Redmond says, marks the first time an admin can do that, and is thanks to Microsoft creating a storage service that works via version 2.1 of the Server Message Block protocol.

    “The focus here is existing customers. On-premise, when they run workloads, they deal with VMs and disks and file shares,”

    “Today you can attach a given disk to a VM in the cloud but that disk can only be attached to one VM,” he explained.

    The maximum size of a shared storage pool is 5TB, he said, and it will initially launch with the ability to do around 1000 input-output operations per second. The storage service has strong consistency

    Though the VMs need to be located in the same Microsoft data center region as the storage pool, they do not need to share the same server rack, Calder explained. To achieve this Microsoft is using its NVGRE-based flat network to lash the services together.

    “What we wanted to enable was give customers something they are doing on-premise and do in the cloud,”

    Reply
  14. Tomi Engdahl says:

    Rackspace beats Q1 earnings targets with $421M in revenue
    http://www.zdnet.com/rackspace-beats-q1-earnings-targets-with-421m-in-revenue-7000029351/

    Summary: Rackspace shares shot up by roughly 10 percent in after-hours trading as soon as the report dropped.

    The IT hosting company reported a net income of $25 million, or 19 cents per share (statement).

    Non-GAAP earnings were 20 cents per share on a revenue of $421 million, up 3.2 percent sequentially and 16 percent annually.

    Wall Street was looking for earnings of 12 cents per share on a revenue of $419.53 million.
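    The reported growth rates imply the prior-period figures below — a back-of-the-envelope check on the numbers above, not data from the earnings statement itself:

```python
revenue = 421.0      # Q1 revenue, $M
seq_growth = 0.032   # up 3.2 percent sequentially
ann_growth = 0.16    # up 16 percent annually

prior_quarter = revenue / (1 + seq_growth)   # implied previous-quarter revenue
year_ago = revenue / (1 + ann_growth)        # implied year-ago-quarter revenue

# Net income of $25M at 19 cents per share implies the share count
shares_m = 25.0 / 0.19

print(f"prior quarter ~${prior_quarter:.0f}M, year ago ~${year_ago:.0f}M, "
      f"~{shares_m:.0f}M shares")
```

    That works out to roughly $408M in the prior quarter, $363M a year earlier, and about 132 million shares outstanding.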

    Reply
  15. Tomi Engdahl says:

    Virtually yours: Microsoft unveils Windows-as-a-service
    Are you Azure about this?
    http://www.theregister.co.uk/2014/05/13/microsoft_desktop_as_service/

    Microsoft has uncanned its desktop-as-a-service, giving mobile users a preview edition of Windows apps on the go.

    The software giant yesterday announced the pilot of Azure RemoteApp, previously codenamed Mohoro.

    The Azure RemoteApp service is due by September this year. It is currently just for Windows, but support is promised for a “range” of devices that Microsoft has said means Mac, Windows Phone and Windows RT.

    Apps delivered in the pilot are Office 2013 ProPlus and you get 50GB space on Windows Azure Storage.

    The desktop-as-a-service pilot is available in Microsoft’s Windows Azure US West, US East, Western Europe, North Europe, East Asia and Southeast Asia regions.

    The service runs on Microsoft’s Remote Desktop Service (RDS)

    This time, Microsoft is competing on remote access with Citrix and VMware as well as Amazon, which released its desktop-as-a-service WorkSpaces offering in March.
    Amazon’s offering serves up a Windows 7 desktop

    Reply
  16. Tomi Engdahl says:

    Windows Apps in the Cloud: Introducing Microsoft Azure RemoteApp
    http://blogs.msdn.com/b/rds/archive/2014/05/12/windows-apps-in-the-cloud-introducing-microsoft-azure-remoteapp.aspx

    Cloud deployment offers a standalone, turnkey way to host applications in the cloud. Provisioning is easy and fast: users can logon and use applications within minutes. The apps and the operating system are kept always up-to-date through regular updates, and Microsoft Anti-Malware endpoint protection provides continuous defense. Users use Microsoft Account or their corporate credentials to connect. As an IT administrator, you only need to think about which apps to offer and to whom. The rest is taken care of for you.

    Setting up RemoteApp in cloud-only mode is easy.

    You can use the “RemoteApp Programs” list to manage which applications are published. Several apps from the Office suite are published by default

    The cloud deployment model is an ideal way to provide access to a standard office productivity app suite. In contrast, the hybrid deployment model offers significantly more customization: Apps, OS, and settings are under your control.

    Reply
  17. Tomi Engdahl says:

    Graphics pros left hanging as Adobe Creative Cloud outage nears 24 hours
    Cloudy services left titsup in login lockout
    http://www.theregister.co.uk/2014/05/15/graphics_pros_left_hanging_as_adobe_creative_cloud_outage_nears_24_hours/

    Adobe is struggling to correct a global outage that has already locked customers out of its Creative Cloud online services for nearly 24 hours.

    The downtime is a blow for Adobe, which has based its new business model on strong-arming its customers into cloud-connected subscriptions. It nixed the boxed retail version of its Creative Suite graphics apps last May – including Photoshop, Illustrator, Dreamweaver, and InDesign, among others – and said that all future versions would be available for rental only.

    But although Adobe is adept at creating industry-leading creative software, it hasn’t demonstrated much acumen as a provider of online services. At one point, its cloud storage services failed to sync users’ files for two whole weeks, and a data breach in October is believed to have leaked the encrypted passwords for as many as 38 million customers.

    Reply
  18. Tomi Engdahl says:

    Cloud computing is FAIL and here’s why
    Stick that online service up your SaaS
    http://www.theregister.co.uk/2014/05/16/cloud_computing_is_fail_and_heres_why/

    Something for the Weekend, Sir? Adobe’s spectacular FAIL over the last 48 hours confirmed, rather than revealed, cloud computing to be so unreliable as to be positively dangerous. Cloud computing is shite. It takes over everything you’ve got, then farts in your face and runs away giggling.

    For those readers blissfully ignorant of what us media production types had to put up with between Wednesday evening and the early hours of Friday morning, Adobe’s login services were down during this brief period. Across multiple continents where Adobe software is used, customers were being beaten back by errors telling them their “Adobe ID” login credentials were incorrect.

    Yes, I know that doesn’t sound like much.

    Yet the knock-on effect of being unable to log in to software that increasingly demands you to keep logging in simply to persuade it to run is problematic, I hope you’ll agree.

    At this point Adobe began telling irate Creative Cloud subscribers via its Twitter feed that they could circumvent the entire CC authentication system by temporarily pulling the plug on their internet connections and relaunching the apps.

    Brilliant, that. Running unauthenticated software? Pull the plug out the back and it’s yours free for a month!

    Except… what if you’re just returning from a 30-day stint in the sticks beyond the evil influence of the maleficent interwebs? You’d have got back to your desk, fired up your Adobe software and seen a lovely big message reading “Trial period over” inviting you to subscribe to Adobe Creative Cloud even though you already do and then resolutely refusing to allow you to log in to tell it so.

    Worse, Adobe’s pre-CC apps were not offering 30 days of grace but a paltry seven.

    Back in the old days of print publishing, if your printing press broke down, you just put the job on another printing press. But here the cloud servers hadn’t broken down as far as I can gather, so it wasn’t a matter of switching to other servers. The problem was that we couldn’t get to them, upload anything to them or publish through them simply because our metaphorical car keys were, like Luca Brasi, sleeping with the fishes.

    The problem as I see it is that cloud computing is essentially unattainable.

    Reply
  19. Tomi Engdahl says:

    Adobe blames ‘maintenance failure’ for 27-hour outage
    Unidentified ‘root cause’ borks the system during work on database
    http://www.theregister.co.uk/2014/05/16/adobe_outage_database_maintenance/

    Reply
  20. Tomi Engdahl says:

    Rackspace is looking towards Openstack cloud exit
    Struggling to challenge Red Hat and HP
    http://www.theinquirer.net/inquirer/news/2345194/rackspace-is-looking-towards-openstack-cloud-exit

    HOSTED SERVER VENDOR Rackspace has revealed that it might be looking for an exit strategy to get out of the cloud market.

    With the giants of the internet services sector such as Amazon, Google and Microsoft muscling in with some serious firepower, along with direct competition from Openstack solutions from Red Hat, Canonical and most recently HP Helion, Rackspace has found it increasingly hard to maintain its niche.

    The company has confirmed that it is looking at strategic partnerships as part of its future direction. In a statement, the company said, “In recent months, Rackspace has been approached by multiple parties who have expressed interest in a strategic relationship with Rackspace, ranging from partnership to acquisition.”

    “Our board decided to hire Morgan Stanley to evaluate the inbound strategic proposals, and to explore any other alternatives which could advance Rackspace’s long-term strategy.”

    Reply
  21. Tomi Engdahl says:

    Cloud computing, or ‘The future is trying to KILL YOU’
    The brutal tech truth that links the problems of Rackspace, Dell, HP, IBM, Oracle, SAP, others
    http://www.theregister.co.uk/2014/05/17/cloud_computing_doom_analysis/

    What do all ailing enterprise IT companies have in common? Trouble in their core businesses due to the rise of cloud computing.

    Just how serious are the effects?

    Tech tectonics reshape the landscape

    The reason why this is all happening is that during the past ten years there have been a series of advances within the technology landscape that make cloud computing’s rise inevitable – even with the NSA revelations.

    Reply
  22. Tomi Engdahl says:

    Cloud is the favorite choice for some organizations

    Nearly 80 per cent of Finnish organizations make use of cloud services, research firm Market Vision says.

    According to the Market Vision report, for almost half of them the cloud is an equal option among others, and for some it has become the preferred option. The most common way to take advantage of the cloud is through SaaS services, but infrastructure as a service is also being purchased, the report reveals.

    Cloud services do not necessarily reduce the need for other IT services. The transition to the cloud may increase the need for integration and security services.

    A large share of the services is bought directly from cloud service providers, but according to Market Vision, buying through intermediaries, i.e. brokers, is becoming increasingly common.

    Source: http://www.tietoviikko.fi/kaikki_uutiset/osalle+pilvi+on+mieluisin+vaihtoehto/a988854

    Reply
  23. Tomi Engdahl says:

    OpenStack: the Open Source Cloud That Vendors Love and Users Are Ignoring
    http://news.slashdot.org/story/14/05/20/1920244/openstack-the-open-source-cloud-that-vendors-love-and-users-are-ignoring

    OpenStack has no shortage of corporate backers. Rackspace, Red Hat, IBM, Dell, HP, Cisco and many others have hopped on board. But many wonder, after four years, shouldn’t there be more end users by this point?

    Reply
  24. Tomi Engdahl says:

    European Grid Infrastructure project condenses shared cloud
    Academic iron across 19 nations now more accessible
    http://www.theregister.co.uk/2014/05/22/egi_launches_federated_cloud/

    After a three-year project, the European Grid Infrastructure project has pulled the big red switch on its federated cloud, which it says pools the resources of academic iron in 19 EU countries.

    Drawing on the “academic private clouds and virtualised resources” of its members, the EGI Federated Cloud hosts a variety of OS images for members that don’t want to create their own virtual appliances: Ubuntu, FreeBSD, Scientific Linux, and various application-specific VMs.

    Cloud stacks supported for access include OpenStack, OpenNebula, Synnefo (through the Catania Science Gateway Framework); StratusLab, OpenStack, Abiquo, CloudSigma and Amazon EC2 via SlipStream; or, under VMDIRAC, OpenNebula, OpenStack, CloudStack and Amazon EC2.

    Other supported standards include the OCCI (Open Cloud Computing Interface) and the CDMI (Cloud Data Management Interface).

    Reply
  25. Tomi Engdahl says:

    Microsoft: Pssst, small resellers, want to sling our cloud?
    Redmond opens up Azure with Open Licensing
    http://www.theregister.co.uk/2014/05/22/microsoft_azure_open_licensing/

    Microsoft is preparing to give its channel partners another way of selling Azure to their customers.

    The new “Open Licensing” option gives resellers and other channel partner types a way to sell a small amount of Microsoft’s Azure public cloud to a customer without having to enter into an Enterprise Agreement (EA) with Redmond or buy directly from the web portal.

    “‘Open License’ is a licensing program by Microsoft designed for a ‘corporate, academic, charitable, or government organization’ that wants to purchase a minimum of five software licenses through the agreement,” Microsoft says in an FAQ outlining the scheme.

    “When you resell Azure in Open Licensing, you purchase tokens from your preferred Distributor and apply the credit to the customer’s Azure Portal in increments of $100,”
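    Since credit is applied in $100 increments, a reseller has to round a customer’s requested credit up to the next whole token. A minimal sketch of that arithmetic (the function name is ours, not part of the Open Licensing scheme):

```python
import math

def tokens_needed(requested_credit, increment=100):
    """Number of $100 tokens needed to cover a requested Azure credit."""
    return math.ceil(requested_credit / increment)

# A customer wanting $250 of Azure credit needs three $100 tokens,
# so $300 of credit ends up applied to their Azure Portal.
print(tokens_needed(250))        # → 3
print(tokens_needed(250) * 100)  # → 300
```

    The rounding-up matters commercially: a customer can never buy less than one full increment, so small workloads are effectively billed in $100 steps.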

    Reply
  26. Tomi Engdahl says:

    Microsoft started herding IT pros to Azure at TechEd 2014
    The Reg’s man on the spot wraps Redmond’s tech talkfest
    http://www.theregister.co.uk/2014/05/22/teched_2014_wrapup/

    TechEd North America has wrapped up for 2014, and many IT pros have been left with the impression that Microsoft’s cloud solution Azure has gone from optional to mandatory. With that in mind, the word ‘Azure’ is going to appear many times in this article.

    Right from the keynote, corporate veep Brad Anderson hammered home to anyone listening the ‘mobile first, cloud first’ mantra that Microsoft has now adopted. It might as well just say ‘cloud first and second’ because cloud is how Microsoft makes the mobile experience better in their all-encompassing solutions.

    Azure was the main source of news at the event, thanks to new offerings like the following:

    Azure ExpressRoute, a partnership between Microsoft and certain telecommunication companies enabling private links between your company and Azure;
    Azure Files lets you use a Server Message Block (SMB) share from a file share created in the Azure cloud. Strangely this one is only using SMB version 2.1 and not the newer v3.x that comes with Windows Server 2012;
    Antimalware for Azure which lets you install an anti-malware agent inside Azure Virtual Machines as well as other cloud services. This one was built in conjunction with Symantec and Trend Micro, which may mean they have invested fairly heavily in this feature.
    Office 365 for business saw the addition of encryption and data loss prevention for SharePoint Online and OneDrive for Business. Both are more likely plugging the holes that have so far convinced many business users not to put their data ‘up there’, but is that enough?
    ASP.NET vNext was also announced, unsurprisingly with new features and optimisations for the cloud. Since I’m not a developer, I won’t pretend to know anything more, but those inclined can watch “The Future of .NET on the Server” here.

    Reply
  27. Tomi Engdahl says:

    The Trouble With IBM
    http://www.businessweek.com/articles/2014-05-22/ibms-eps-target-unhelpful-amid-cloud-computing-challenges

    In the summer of 2012, five American technology companies bid on a project for a demanding new client: the CIA. The spy agency was collecting so much information, its computers couldn’t keep up. To deal with the onslaught of data, the CIA wanted to build its own private cloud computing system—an internal version of the vast fleets of efficient, adaptable servers that run technically complex commercial services such as Netflix (NFLX). For the agency, the power of the cloud was tantalizing. “It is nearly within our grasp to compute on all human-generated information,”

    IBM (IBM) was one of two finalists. The company would have been a logical, even obvious, choice. Big Blue had a decades-long history of contracting with the federal government, and many of the breakthroughs in distributed computing can be traced back to its labs. The cloud was a priority and a point of pride.

    On Feb. 14, 2013, the CIA awarded the contract to Amazon.com (AMZN). The e-commerce company, a pioneer in offering cloud computing services to corporate customers from Nokia (NOK) to Pfizer (PFE), had persuaded the spymasters that its public cloud could be replicated within the CIA’s walls. Amazon had been bleeding IBM for years—its rent-a-server-with-your-credit-card model was a direct threat to IBM’s IT outsourcing business—but this was different. Amazon beat IBM for a plum contract on something like its home turf, and it hadn’t done so simply by undercutting IBM on price. IBM learned that its bid was more than a third cheaper than Amazon’s and officially protested the CIA decision.

    It would have been better to walk away. As the Government Accountability Office reviewed the award, documents showed the CIA’s opinion of IBM was tepid at best. The agency had “grave” concerns about the ability of IBM technology to scale up and down in response to usage spikes, and it rated the company’s technical demo as “marginal.” Overall, the CIA concluded, IBM was a high-risk choice.

    In a court filing, Amazon blasted the elder company as a “late entrant to the cloud computing market” with an “uncompetitive, materially deficient proposal.”

    “Let me start with this idea that we are going to lead the IT industry through this change,” Rometty said. “I’m very clear with my words in that this industry is going to reorder. It will not look the same 10 years from now. And we will be the leader in this industry.” Cloud sales delivered as a service, she said, were growing rapidly, on pace for $2.3 billion in 2014. IBM’s total revenue is $100 billion. “Look, this is not the first time we’ve transformed,” Rometty said. “This will not be the last time.” The $20-per-share target for 2015, she confirmed, is still the plan. No one asked her if there would be a Roadmap 2020.

    Reply
  28. Tomi Engdahl says:

    HP’s $1bn ‘Linux for the cloud’ dream: Will Helion float?
    Plus: Why it wants to be like IBM, not Sun
    http://www.theregister.co.uk/2014/05/23/hp_1_billion_open_stack_bet/

    Hewlett-Packard has committed $1bn to OpenStack, a Linux for the cloud, over the next two years.

    The cash is going on R&D, products, engineering and services, HP said.

    The company will iron out the kinks in the OpenStack code so that OpenStack clouds run on HP hardware out of the box.

    To grease up the pump on HP’s OpenStack business, CEO Meg Whitman is setting up an OpenStack professional services unit consisting of consultants and engineers.

    In the background to all of this is HP’s OpenStack service, with OpenStack services available from 20 of HP’s 80 data centres in the next 18 months.

    HP’s vice president in charge of building the company’s cloud products and services, Bill Hilf, told The Reg the $1bn is a “a bold statement” by HP.

    “So many vendors put these big numbers out there… as we were preparing for this, we wanted to be clear this is not some fictional thing,” Hilf said.

    HP appears to be modelling its bravado on the example of another big systems company: IBM, which also famously put its wallet where its Linux was.

    Twice, in 2000 and 2013, IBM committed to spending $1bn on Linux. $1bn was peanuts for IBM, as it is for HP, but in PR terms it got the world’s attention, which HP certainly seems to be grabbing now.

    Reply
  29. Tomi Engdahl says:

    Cloud computing aka ‘The future is trying to KILL YOU’
    The brutal tech truth that links the problems of Rackspace, Dell, HP, IBM, Oracle, SAP, others
    http://www.theregister.co.uk/2014/05/17/cloud_computing_doom_analysis/

    What do all ailing enterprise IT companies have in common? Trouble in their core businesses due to the rise of cloud computing.

    Just how serious are the effects?

    Rackspace was reported on Thursday to be in talks with Morgan Stanley to help it partner or sell
    SAP is reported to be carrying out some “unavoidable” layoffs
    IBM has agreed to sell its server division
    EMC has had to create a new strategic software package named “ViPR”
    HP has partnered with Asian manufacturing giant Foxconn
    Oracle’s proprietary hardware division has consistently failed
    Cisco is being hit by a slowdown in its traditional business

    a few companies benefiting immensely from this shift. For example:

    Cloudera received hundreds of millions of dollars from Intel
    Amazon’s Amazon Web Services division is on track to pull in almost $4bn in revenue this year
    MongoDB – the anti-Oracle database startup whose tech is deployed widely on clouds
    Startups – both frivolous and not-so-frivolous – are being given huge valuations

    Google and Facebook: two advertising giants raking in phenomenal amounts of cash entirely due to the strength of their technical expertise.

    This meant they created systems – in Google’s case, GFS and MapReduce, which are the basis of Hadoop – in which it would be trivial to add another server, or ten servers, into a system and see a gain in performance. This triggered a movement away from integrating hardware with software and towards making software not care about hardware in the slightest, other than as additional capacity.
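    The scaling trick behind GFS and MapReduce (and hence Hadoop) is that the computation is expressed as two pure functions, map and reduce, so the framework can shard the input across any number of machines and merge the results. A toy single-process word count sketches the programming model:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # map: emit (key, value) pairs; here one (word, 1) per occurrence
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # shuffle: group values by key, then reduce each group independently
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cloud", "the cloud the cloud"]
# each map_phase call could run on a different machine; adding servers
# just means sharding `docs` further — the code does not change
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
```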

    Google and, later, Facebook, were able to start designing their own data center systems out of cheap components

    The cycle of change spins ever faster

    HP, Cisco, IBM, Oracle, Dell, and other big incumbents should be extremely worried: every trend in technology points to a future that has no bias toward their current profit-generating businesses. The growth days are over and winter has arrived, forcing these companies to battle each other to maintain margins and shipments, and distracting them from the threats coming from below.

    Reply
  30. Tomi Engdahl says:

    Google brings futuristic Linux software CoreOS onto its cloud
    A container-based operating system on a virtualized cloud on a container-based distributed system = M.C. Escher’s cloud
    http://www.theregister.co.uk/2014/05/23/google_loads_coreos_onto_its_cloud/

    Fans of new Linux operating system “CoreOS” can now run the lightweight tech on Google’s main cloud service.

    This means developers who want a Linux OS that takes up just 168MB of RAM, runs all of its applications within containers, and is designed for marshaling mammoth clusters of computers, can now do so on top of Google’s cloud.

    “In the next few days, CoreOS will become available as a default image type in the GCE control panel, making running your first CoreOS cluster on GCE an easy, browser-based experience,” wrote CoreOS’s chief technology officer Brandon Philips in a blog post. “CoreOS is an ideal host for distributed systems and Google Compute Engine is a perfect base for CoreOS clusters.”

    Reply
  31. Tomi Engdahl says:

    Webcast: How to build an open cloud
    Saying no to silos
    http://www.theregister.co.uk/2014/05/27/build_an_open_cloud/

    It’s a hybrid world. But proprietary clouds are hard to integrate and less flexible. On the other hand, you’re already using them.

    To coordinate your internal and external clouds, migration, administration, data protection, compliance, service levels and support need to be coordinated too.

    Reply
  32. Tomi Engdahl says:

    Hang on, lads. I’ve got a great idea, says NetApp as it teeters on the edge
    Don’t worry about the falling sales, maybe Amazon will pour ONTAP into its mega-cloud
    http://www.theregister.co.uk/2014/05/22/netapp_fiscal_2014/

    NetApp and the Cloud

    NetApp wants to provide a seamless private-public cloud data management environment by deploying its operating system, Data ONTAP, on customer premises and in public clouds, even the largest ones run by the so-called hyperscalers.

    Georgens said: “We are also looking to embrace the hyperscalers into our data management framework … all of [these] components of the cloud, both the traditional service providers and the hyperscalers, are integral to our overall strategy.”

    He’s saying that hyperscalers could use ONTAP to seamlessly integrate their offerings with NetApp’s on-premises installations.

    He mentioned working with OpenStack, and 200-plus cloud service providers already providing ONTAP-based services. He said: “The end goal is how do we create a set of services that could be consumed by the enterprise and then afterward the seamless expansion of on-premise computing and the data management that goes with it is very, very, very important to realising the hybrid cloud vision.”

    Analysts with dreams of the storage market’s long-term transformation away from on-premises kit to the cloud will say: “See, that’s the impact of the changes we’re talking about.”

    NetApp might say: No, it’s not; it’s a tough market but we are on top of things, gaining share, and helping our customers embrace the cloud, not resisting it.

    Reply
  34. Tomi Engdahl says:

    Watch this! Introduction to Amazon Web Services
    Ungated training session
    http://www.theregister.co.uk/2014/05/27/introduction_to_amazon_web_services_webinar/

    In this presentation, QA’s Philip Stirpe introduces Amazon Web Services (AWS) and describes the infrastructure and services they provide.

    Reply
  35. Tomi Engdahl says:

    Amazon Wants To Run Your High-Performance Databases
    http://hardware.slashdot.org/story/14/05/30/0248231/amazon-wants-to-run-your-high-performance-databases

    “Amazon is pushing hard to be as ubiquitous in the world of cloud computing as it is in bookselling. The company’s latest pitch is that even your highest-performing databases will run more efficiently on Amazon Web Services cloud servers than on your own hardware.”

    Reply
  36. Tomi Engdahl says:

    Amazon wants to run your high-performance databases
    The company’s new R3 instances have up to 262GB of RAM
    http://www.itworld.com/software/420809/amazon-wants-run-your-high-performance-databases

    Amazon Web Services is making a pitch for enterprises’ high-performance databases to run on its infrastructure, launching new instances optimized for the task.

    The R3 instance family has been added to Amazon RDS (Relational Database Service), which takes care of the administrative grunt work for databases such as MySQL and SQL Server.

    The new instances are optimized for memory-intensive applications

    The five R3 instances have between 16GB and 262GB of RAM plus between two and 32 virtual CPUs. The highest performing instance has a network speed of up to 10Gbps.

    Right now, users can launch databases based on version 5.6 of MySQL, PostgreSQL, or SQL Server.

    On-demand pricing for MySQL R3 instances starts at US$0.240 per hour in the US West region. They are available from Amazon’s datacenters in Europe, the Asia Pacific region and the U.S.
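    For back-of-the-envelope budgeting, the quoted $0.240/hour on-demand rate works out as follows (a sketch only; a real bill adds storage, I/O and data-transfer charges):

```python
hourly_rate = 0.240          # USD, MySQL R3 on-demand, US West (quoted above)
hours_per_month = 24 * 30
monthly = hourly_rate * hours_per_month   # ~ $172.80/month
yearly = hourly_rate * 24 * 365           # ~ $2,102.40/year, always-on
```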

    Reply
  37. Tomi Engdahl says:

    Fat-fingered admin downs entire Joyent data center
    Cloud operator now home to most mortified sysadmin in the USA
    http://www.theregister.co.uk/2014/05/28/joyent_cloud_down/

    Cloud operator Joyent went through a major failure on Tuesday when a fat-fingered admin brought down an entire data center’s compute assets.

    The cloud provider began reporting “transient availability issues” for its US-East-1 data center at around six-thirty in the evening, East Coast time.

    “Due to an operator error, all compute nodes in us-east-1 were simultaneously rebooted,”

    The problems were mostly fixed an hour or so later.

    The cause of the outage was that an admin was using a tool to remotely update the software on some new servers in Joyent’s data center and, when trying to reboot them, accidentally rebooted all of the servers in the facility.

    “The command to reboot the select set of new systems that needed to be updated was mis-typed, and instead specified all servers in the datacenter,” Joyent wrote.
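    The failure mode is easy to reproduce in miniature: fleet tooling that treats an empty or wildcard selector as “all hosts” turns one typo into a datacenter-wide reboot. A hedged sketch (not Joyent’s actual tooling; the host names and the dangerous default are invented for illustration):

```python
import fnmatch

def select_hosts(pattern, inventory):
    # DANGEROUS default: an empty pattern silently widens to every host.
    # Safer tools refuse to act, or require an explicit --all flag.
    if not pattern:
        pattern = "*"
    return [h for h in inventory if fnmatch.fnmatch(h, pattern)]

inventory = ["web01", "web02", "db01", "new-node01", "new-node02"]
intended = select_hosts("new-node*", inventory)  # just the new servers
mistyped = select_hosts("", inventory)           # typo: the whole datacenter
```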

    Reply
  38. Tomi Engdahl says:

    As Mining Demand Grows, Data Center Firms Begin Accepting Bitcoin
    http://www.datacenterknowledge.com/archives/2014/05/30/mining-demand-grows-data-center-firms-begin-accepting-bitcoin/

    Cryptocurrency businesses can now use Bitcoin to purchase large chunks of data center space for their mining operations. Wholesale data center developer Server Farm Realty and colocation specialist C7 Data Centers each announced this week that they will accept customer payments in Bitcoin, the digital currency seeking to move from novelty status to payment platform.

    Reply
  39. Tomi Engdahl says:

    CDOT relatively crap for flash, hyperscalers crap for constant storage
    NetApp CTO lays it on the line
    http://www.theregister.co.uk/2014/04/08/cdot_relatively_bad_for_flash_hyperscalers_bad_for_constant_storage/

    Chief NetApp techie Jay Kidd had some strong words for the flash and cloud crowd at a Wells Fargo event for investors.

    Kidd thought that cloud storage economics didn’t stack up for mainstream enterprises except for transient workloads like test and development. He said recent price cuts by Google and Amazon “didn’t really change the dynamic at all”. Google and Amazon cost 2.5 cents/GB for archive class (S3) storage. Over three years that equated to 80 to 90 cents/GB: “You can buy and manage enterprise-class storage for well less than 90 cents/GB” for this use case.

    Cloud block storage like EBS is around 10 cents/GB, which equates to $4/GB over three to three-and-a-half years: “You can certainly buy enterprise storage for well less than $4/GB.”

    His view is that Amazon and Google – the hyperscalers – storage clouds are a good fit for transient and spiky requirements but not for constant or slowly growing storage needs. He said: “Amazon and Google reduce the cost of failure. They significantly increase the cost of success.”
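    Kidd’s arithmetic checks out: a per-GB-per-month rate accumulates quickly over a typical three-year depreciation window (36 months used here; his upper figure assumes three and a half years):

```python
def cost_per_gb(monthly_rate_usd, months=36):
    # total cloud spend per GB held for the whole period
    return monthly_rate_usd * months

archive = cost_per_gb(0.025)  # 2.5 cents/GB-month -> $0.90/GB over 3 years
block = cost_per_gb(0.10)     # 10 cents/GB-month  -> $3.60/GB over 3 years
```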

    The discussion then moved on to flash, which Kidd says is a nascent market (meaning NetApp isn’t behind, I think). The main customer move here is a transfer of latency-sensitive database apps, typically hosted on a Symmetrix-type large array, to all-flash arrays (AFAs) for faster speed.

    Reply
  40. Tomi Engdahl says:

    Digesting WWDC: cloudy
    http://ben-evans.com/benedictevans/2014/6/4/digesting-wwdc-cloudy

    First, Apple is continuing the steady process of removing restrictions on what developers can do – but doing so in a very specific way. Almost all of these restrictions are necessarily trade-offs – on a smartphone more flexibility is ipso facto less security and less battery life.

    The second theme, and a very interesting one, is cloud, the big Apple weakness. The whole of WWDC is full of cloud. A very large proportion of the new user-facing features touch the cloud in some way, as a conduit or as storage. And the ones that don’t use what you might call the personal cloud – the Bluetooth LE/Wifi mesh around you (such as HealthKit or HomeKit). So edit a photo and the edits are on all your devices, run out of room and your photos stay on the cloud but all but the previews are cleared off your phone, tap a phone number on a web page on your Mac and your phone dials it. But none of this says ‘CLOUD™’ and none of it is done in a web browser. Web browsers are for web pages, not for apps. Hence one could suggest that Apple loves the cloud, just not the web (or, not URLs).

    This is obviously a contest with Google, which has pretty much the opposite approach. For Google, devices are dumb glass and the intelligence is in the cloud, but for Apple the cloud is just dumb storage and the device is the place for intelligence.

    Reply
  41. Tomi Engdahl says:

    LIVE TODAY: Simplifying IT with cloud apps
    Breaking the customisation habit
    http://www.theregister.co.uk/2014/06/05/simplifying_it_with_cloud_apps/

    Why? Because as well as the obvious benefits of cloud, these apps allow you to concentrate customisation where it has a benefit. They mean you can ban “bespoke” apps and plug into a common architecture to take advantage of economies of scale, but customise based on policy and workflow to increase efficiency.

    Reply
  42. Tomi Engdahl says:

    IBM escapes SEC action on its cloud revenue reporting
    http://www.computerweekly.com/news/2240221849/IBM-escapes-SEC-action-on-its-cloud-revenue-reporting

    US regulatory watchdog Securities and Exchange Commission (SEC) has concluded that it will not take any enforcement action against IBM following an investigation into how Big Blue reports its cloud computing revenue.

    The Division of Enforcement of the SEC informed IBM on 30 May that it has concluded its probe into IBM’s cloud revenue reporting. “IBM was notified that based on the information to date, the Division of Enforcement does not intend to recommend any enforcement action by the Commission against IBM,” an SEC document read.

    The conclusion comes a year after the regulatory watchdog launched a probe into how IBM reports cloud revenues.

    IBM, a traditional mainframe supplier, started building a cloud computing portfolio from 2008 to suit the changing technology landscape.

    The company aims to achieve $7bn annually in cloud revenue by the end of 2015. In 2013, the cloud computing division yielded a revenue of $4.4bn.

    In the first quarter of 2014, its cloud revenues were up more than 50%, it said. For cloud delivered as a service, first-quarter annual run rate of $2.3bn doubled year on year, its financial earnings report stated.

    This year alone, IBM has ploughed billions of dollars into the cloud via the expansion of its cloud datacentre network for $1.2bn, the launch of its middleware PaaS offering Bluemix and its ongoing acquisition strategy.

    Reply
  43. Tomi Engdahl says:

    Google Embraces Docker, the Next Big Thing in Cloud Computing
    http://www.wired.com/2014/06/eric-brewer-google-docker/

    Google is putting its considerable weight behind an open source technology that’s already one of the hottest new ideas in the world of cloud computing.

    This technology is called Docker. You can think of it as a shipping container for things on the internet–a tool that lets online software makers neatly package their creations so they can rapidly move them from machine to machine to machine. On the modern internet–where software runs across hundreds or even thousands of machines–this is no small thing. Google sees Docker as something that can change the way we think about building software, making it easier for anyone to instantly tap massive amounts of computing power. In other words, Google sees Docker as something that can help everyone else do what it has been doing for years.

    ‘Google and Docker are a very natural fit. We both have the same vision of how applications should be built.’

    Reply
  44. Tomi Engdahl says:

    Microsoft fights U.S. search warrant for customer e-mails held in overseas server
    http://www.washingtonpost.com/world/national-security/microsoft-fights-us-search-warrant-for-customer-e-mails-held-in-overseas-server/2014/06/10/6b8416ae-f0a7-11e3-914c-1fbd0614e2d4_story.html

    Microsoft, one of the world’s largest e-mail providers, is resisting a government search warrant to compel the firm to turn over customer data held in a server located overseas.

    In what could be a landmark case, the Redmond, Wash., company is arguing that such a warrant is not justified by law or the Constitution. Microsoft and other tech firms also fear that if the government prevails and can reach across borders, foreign individuals and businesses will flee to their non-U.S. competitors.

    “If the government’s position prevails, it would have huge detrimental impacts on American cloud companies that do business abroad,’’

    Reply
  45. Tomi Engdahl says:

    Troubled Truecrypt the ONLY OPTION for S3, but Amazon stays silent
    No noise from web warehouse as hacking rumours fly.
    http://www.theregister.co.uk/2014/06/11/troubled_truecrypt_the_only_option_for_s3_but_amazon_stays_silent/

    Amazon Web Services (AWS) has kept mum on whether it will dump the troubled TrueCrypt platform used to encrypt data imported and exported to its Simple Storage Service (S3).

    Security bods midway through a comprehensive Truecrypt security audit vowed to continue and said they had not found any reason to dump the platform.

    Questions remained whether Amazon considered TrueCrypt safe enough to be the only option for encrypting S3 data.

    “TrueCrypt is the only device encryption supported by AWS import / export,”

    All data exported from S3 was encrypted with TrueCrypt using a supplied password, but Amazon Glacier and Elastic Block Store customers could use any encryption method they liked.

    Reply
  46. Tomi Engdahl says:

    Verizon: Only a CROWD of storage tech firms can hold up our hefty cloud
    Mega telco hooked up to HDS, Oracle, NetApp … you name it
    http://www.theregister.co.uk/2014/06/12/verizon_cloud_service_partners/

    While Verizon has adopted Amplidata’s Himalaya object technology for its Cloud Storage service, it told The Register that this does not mean it has given up on HDS’s Content Platform (HCP) – which is also object-based.

    HDS’ Content Platform will be a service offered by Verizon, whereas Amplidata will power Verizon’s own service.

    Reply
  48. Tomi Engdahl says:

    Look at this vision video from 2010 and check how it matches the current reality:

    Intel Cloud Computing 2015 Vision
    https://www.youtube.com/watch?v=gpzM6Mask80

    Reply
