Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, presented in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

It is interesting that missing from this quadrant are big-name companies that have invested heavily in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. Gartner has also found one big minus on Amazon Web Services: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view on the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    Can Marten Mickos make ‘Linux for the cloud’ work for HP?
    The ‘not-another-Unix play’ play
    http://www.theregister.co.uk/2014/09/23/marten_mickos_hp_convert/

    Hewlett-Packard didn’t just buy cloudy startup Eucalyptus Systems to build its fledgling OpenStack cloud biz, it also bought Marten Mickos, the firm’s Finnish CEO.

    HP isn’t the first to pay for Mickos’ expertise – that was Sun Microsystems, when it acquired his previous venture, MySQL AB, for $1bn in 2008

    Just who is this Mickos bloke and why do big systems companies like him and what he has to offer?

    Eucalyptus lets you build clouds using APIs compatible with those of Amazon Web Services – both for EC2 on compute and S3 on storage. OpenStack was spun up in 2010 to provide a set of open-source APIs for those who didn’t wish to use AWS.
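
    API compatibility means the very same client code can talk to either cloud by swapping only the endpoint URL. A minimal Python sketch of that idea – the AWS URLs are the real public ones, while the Eucalyptus host name is a made-up placeholder for a private installation:

```python
# Endpoint selection for an AWS-compatible client: identical EC2/S3 calls
# can target Amazon's public cloud or a private Eucalyptus cloud; only
# the URL differs. The "cloud.example.internal" host is hypothetical.

AWS_ENDPOINTS = {
    "ec2": "https://ec2.us-east-1.amazonaws.com",
    "s3": "https://s3.amazonaws.com",
}

EUCALYPTUS_ENDPOINTS = {
    "ec2": "https://cloud.example.internal:8773/services/compute",
    "s3": "https://cloud.example.internal:8773/services/objectstorage",
}

def endpoint_for(service: str, private: bool = False) -> str:
    """Return the URL an AWS-compatible client should target."""
    table = EUCALYPTUS_ENDPOINTS if private else AWS_ENDPOINTS
    return table[service]

print(endpoint_for("ec2"))               # Amazon's public EC2 endpoint
print(endpoint_for("s3", private=True))  # same client code, private cloud
```

    This is exactly the lever OpenStack did not pull: rather than reusing the AWS API surface, it defined its own set of open-source APIs.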

    HP has also given Mickos a seat at the top table: he’s been made general manager of HP’s Cloud organisation, tasked with building HP’s OpenStack-based Helion cloud. Helion is HP’s supported spin of OpenStack code. Mickos is reporting straight to the queen herself, HP CEO Meg Whitman.

    OpenStack is starting to look dangerously like Unix or CORBA – as Mickos noted in August.

    “It’s difficult to produce technically brilliant products when governance is shared among very large corporations, each one with their own agenda,” Mickos said. “In an all-embracing collective it is difficult to say no to new ideas, but ‘no’ is a vital component of designs that win.

    Reply
  2. Tomi Engdahl says:

    AWS comes to Germany as Amazon unveils second EU region, out of Frankfurt
    https://gigaom.com/2014/10/23/aws-comes-to-germany-as-amazon-unveils-second-eu-region-running-out-of-frankfurt/

    Amazon has launched its long-awaited German-based region – its second in Europe, after the one based in Ireland, and its eleventh worldwide.

    The move has big implications for latency and resilience, and of course data protection — a particular concern for German businesses.

    Reply
  3. Tomi Engdahl says:

    Amazon’s AWS opens data center in Germany – just as we said
    Scalability away from Uncle Sam, in theory
    http://www.theregister.co.uk/2014/10/23/aws_frankfurt_region/

    Amazon’s European mainland customers wary of US spies can now build scalable clouds on AWS and stay entirely on the Continent. The giant today announced the opening of a data centre in Frankfurt, Germany – just as we reported it would in July.

    The data centre – or “region” as Amazon calls them – is the company’s second in the EU; the first was built in Dublin, Ireland. The region is Amazon’s 11th worldwide.

    Opening the data centre, Amazon stressed data privacy, saying customers’ content can now fall entirely under the umbrella of European Union data protection laws and outside the reach of some United States regulations

    Reply
  4. Tomi Engdahl says:

    Microsoft to modernize Special Olympics, raise system to the cloud
    http://news.microsoft.com/features/microsoft-to-modernize-special-olympics-raise-system-to-the-cloud/

    Microsoft Monday announced a three-year, multimillion dollar partnership with the Special Olympics to modernize the nonprofit’s software and games management system, and elevate it to the cloud.

    “Our company is about reinventing productivity, to allow people to achieve more,” says Jeff Hansen, general manager of Microsoft Brand Studio. “If you think about the Special Olympics and their mission to celebrate the achievements of people with intellectual disabilities, you understand that we couldn’t be more aligned.”

    Reply
  5. Tomi Engdahl says:

    The Agility Edge for Pharmaceutical R&D
    https://www.cloudinsights.com/community/nicole/the-agility-edge-for-pharmaceu-680021937.html

    When it comes to critical research and development, companies across the vertical spectrum are challenged with freeing the vital internal resources required to push their business beyond the moment. In every major industry, the cloud is being seen as a secure, reliable, and robust option for keeping vital R&D efforts alive.

    This trend of conducting mission-critical research and development using Amazon Web Services (AWS) extends to the large, diverse life sciences segment. Some of the most important life science research and development is discovery of and testing new compounds and pharmaceuticals. This is essential to future business, since success hinges on making the most important discoveries and ensuring their viability and safety first.

    Pharmaceutical giants, including Pfizer, are among the many life sciences companies that have realized the possibilities for research and development given the limitless access to compute, storage, and robust application tools. The company’s internal HPC software and systems groups are tasked with supporting massive-scale research, analysis, and modeling efforts to push toward more effective, safer drugs. But they need a scalable, on-demand, secure way to push time-critical, data-intensive, and computationally challenging R&D projects into a greater well of storage and compute resources.

    Given regulatory, security, and performance concerns, the company has looked to the Amazon Virtual Private Cloud (Amazon VPC) to help handle peak times in their R&D cycle. According to Pfizer’s lead for HPC and R&D, Dr. Michael Miller, “Research can be unpredictable, especially as the ongoing science raises new questions.”

    Reply
  6. Tomi Engdahl says:

    Internet service users are often presented with terms-of-use agreements that must be accepted before the service can be used. Although in many cases these texts go unread, the Finnish Communications Regulatory Authority’s Kyberturvallisuuskeskus (National Cyber Security Centre) advises reading the terms and conditions. More and more of users’ privacy-related information moves through these services, so it is not all the same what you agree to.

    “Contracts can sometimes seem lengthy and difficult to understand.”

    A lot of privacy-related information is spread across many services, so attention should be paid to how that data is shared. As the use of Internet services has become an integral part of daily life, the risks associated with violations of privacy have increased.

    Privacy consists of an individual’s right to know about, and ability to influence, the processing of their own data. It is not necessary to give your home address or phone number to every service provider, even if they ask for it.

    In general, the processing of a user’s data by Internet services is based on consent, which may be given in standard contracts. In practice, contracts are formed so that a user registering for a service accepts the terms and conditions by clicking to accept them, or simply by using the service.

    It is essential that users are aware of how and for what purposes their data are used.

    What? Where? Why? When? The user should get answers to at least these questions before accepting an Internet service’s terms and conditions and starting to use the service. It is also worth finding out what privacy-related matters an Internet service contract can cover in the first place.

    Sources:
    http://www.tivi.fi/kaikki_uutiset/kyberturvallisuuskeskus+kaikkeen+ei+ole+pakko+suostua/a1023706
    https://www.viestintavirasto.fi/tietoturva/tietoturvanyt/2014/10/ttn201410280921.html

    Reply
  7. Tomi Engdahl says:

    Microsoft shows off shiny new Win 10 PCs, compute-tastic Azure
    Joe Belfiore presents auto-provisioning biz boxes
    http://www.theregister.co.uk/2014/10/28/microsoft_shows_off_autoenrolling_windows_10_pcs_hpc_from_azure_batch_processing/

    The idea is that provisioning a corporate PC could be as easy as getting a new iPad, though this can only work for apps that can be deployed from the Store.

    Microsoft also talked up new features in the Azure cloud platform. Azure Batch is a preview service for automating compute-intensive tasks across multiple Azure resources. It uses technology acquired with GreenButton, a specialist company in this area, back in May.

    The example shown at TechEd featured the open source Blender rendering engine, processing a large render across 37 VM (Virtual Machine) instances.

    Azure Operational Insights is another new tool, this one a dashboard for IT admins showing system health and alerts across cloud and on-premises environments.

    Azure VMs can now support multiple virtual network cards, so that you can implement your own load balancers and firewalls.
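
    Rolling your own load balancer on such a VM typically starts from something as simple as round-robin selection across back-end addresses. A toy sketch (the back-end addresses are made up):

```python
import itertools

# Toy round-robin balancer: each incoming request is handed the next
# back-end in a repeating cycle. Real balancers layer health checks,
# weighting and connection draining on top of this basic scheme.

class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.4:80", "10.0.0.5:80"])
print([lb.next_backend() for _ in range(4)])
# alternates between the two back-ends
```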

    Reply
  8. Tomi Engdahl says:

    Storage array giants will point their back ends at Azure
    Azure Site Recovery expands to save and serve SAN snapshots, scare backup vendors
    http://www.theregister.co.uk/2014/10/29/storage_array_giants_will_point_their_back_ends_at_azure/

    Azure has turned itself into a destination for storage of SAN snapshots captured on devices provided by EMC, NetApp, HP and Hitachi Data Systems, further enhancing the Microsoft Cloud’s disaster recovery prowess.

    Microsoft already offers share ‘n’ sync for virtual machines under the “Azure Site Recovery” (ASR) service.

    At TechEd in Barcelona this week, Microsoft revealed ASR will soon gain the ability to hook into arrays using the SMI-S spec, with the result that SANs capable of taking snapshots of themselves can do so and send them to Azure. Once the snapshots are in Microsoft’s conveniently world-spanning-and-ever-so-redundant (mostly) cloud, the snapshots are available for later retrieval when disaster strikes.

    You’ll need System Center Virtual Machine Manager (SCVMM) to make this new feature work, and it will also help if your SAN vendor points their SMI-S implementation at Azure. The good news is that EMC (VNX and VMAX Symmetrix) and NetApp (Clustered Data ONTAP 8.2) are on board, with HDS and HP (3Par) ready to join the party.

    Microsoft’s calling the rig above “end-to-end storage array-based replication and disaster recovery”, and is putting its money where its mouth is.

    Reply
  9. Tomi Engdahl says:

    Google may hook Kubernetes deep into own cloud
    ‘Highly differentiated experience’ promised, details likely at November gabfest
    http://www.theregister.co.uk/2014/10/29/google_may_hook_kubernetes_deep_into_own_cloud/

    Google’s Cloud Platform Live event in the USA next week may offer up some news on how The Chocolate Factory will allow developers to put Kubernetes to work in its own cloud.

    Kubernetes is a tool Google developed and used to make containerisation more useful by making it possible to manage containerised applications. As explained by Craig McLuckie, Google’s point man for all things cloud, Docker is very good at helping developers to create apps running in containers. Kubernetes tries to take things further by getting code in containers to work together to deliver an application, and to help manage those containers and their joint and interlinked operations once an app goes into production.

    Kubernetes can work alongside any Docker implementation, and therefore in any of the major clouds that can handle Docker. Which as of two weeks ago, when Microsoft became the latest cloud operator to embrace Docker, is just about everyone that matters.

    Google Cloud Platform also supports Kubernetes. But as Google developed Kubernetes out of code it needed for its own operations, it’s in a position to make the software work especially well on its own cloud.

    Might Google do it?

    Reply
  10. Tomi Engdahl says:

    SHOW ME THE MONEY! Ballmer on Amazon: ‘They’re not a real biz, they make NO cash’
    ‘Proud of the BEELLIONS of $$$ I made at Microsoft’
    http://www.theregister.co.uk/2014/10/25/steve_ballmer_amazon_they_make_no_money/

    Ex-Microsoft boss Steve Ballmer has attacked retail giant Amazon for failing to make a profit after more than two decades of trading its wares online.

    “They make no money, Charlie. In my world, you’re not a real business until you make some money. I have a hard time with businesses that don’t make money at some point,” Big Steve said in a TV interview with Charlie Rose.

    “I get it if you don’t make money for two or three years, but Amazon’s what – 21 years old – and not making money.”

    Amazon’s latest financial report saw the online retail giant’s net sales rise 20 per cent for the third quarter, year-over-year, bringing in a whopping $20.58bn.

    However, the Seattle-based company posted a net loss of $437m for the three months ended 30 September. Business as usual, then, as noted by a perplexed but typically animated Ballmer.

    “I think one capability a business is expected to have is the capability to make money. It requires a certain kind of discipline, a certain mindset,” he said.

    Reply
  11. Tomi Engdahl says:

    Technology Group Promises Scientists Their Own Clouds
    http://science.slashdot.org/story/14/10/29/228209/technology-group-promises-scientists-their-own-clouds

    On Tuesday, Internet2 announced that it will let researchers create and connect to their own private data clouds on the high-speed network (mainly used by colleges), within which they will be able to conduct research across disciplines and experiment on the nature of the Internet.

    Technology Group Promises Scientists Their Own Clouds (the Data Kind)
    http://chronicle.com/blogs/wiredcampus/technology-group-promises-scientists-their-own-clouds-the-data-kind/55055

    Scientists will soon have access to their very own clouds. Not the meteorological sort—although these clouds might help advance weather research as well as improve medical systems and power-grid management.

    The new clouds for scientists are the kind that store data on servers, as part of a trend known as cloud computing. Consumers use the commercial variety to store documents, photographs, and music. Researchers use those too, but they sometimes need more control over and information about cloud systems than host companies, such as Apple and Amazon, provide.

    Advances in network architecture aim to deal with the problem. On Tuesday the nonprofit organization Internet2 announced developments that will let researchers create and connect to virtual spaces, within which they will be able to conduct research across disciplines and to experiment on the nature of the web.

    Reply
  12. Tomi Engdahl says:

    Amazon’s hybrid cloud: EC2 wrangled by Microsoft’s control freak
    Plug-in for System Centre gives Windows Server control of Bezos’ bit barns
    http://www.theregister.co.uk/2014/10/30/amazons_hybrid_cloud_our_cloud_plus_microsofts_control_freak/

    Hybrid clouds are the new black: world+dog has decided that some workloads just won’t ever ascend into the elastosphere, but that running a private and public cloud from separate control freaks is a dumb idea.

    That’s why vSphere can span your on-premises bit barn and vCloud Air, and Azure Pack does the same trick but with Azure at the cloudy end.

    And Amazon Web Services? As of today, the cloud colossus’ hybrid story has improved markedly thanks to the release of the AWS System Manager for Microsoft System Center Virtual Machine Manager (SCVMM).

    The add-in’s role is simple: it lets you monitor and manage EC2 instances in Amazon’s cloud from the familiar on-premises console of SCVMM. AWS advises you can “launch new instances and you can also perform common maintenance tasks such as restarting, stopping, and removing instances” from within the Microsoft tool.

    If it works as advertised and really does make EC2 instances the equal of other virtual machines handled under SCVMM, this is kind of a big deal as it will make the AWS cloud as accessible as any other for Windows-centric cloud users.
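
    Underneath the new console, those maintenance tasks are just the ordinary EC2 instance lifecycle driven from a different front end. A simplified toy model of that lifecycle (the transition table is an illustration, not EC2’s full state machine):

```python
# Simplified EC2-style instance lifecycle: the actions the SCVMM add-in
# exposes (start, stop, restart, remove) are ordinary state transitions.

TRANSITIONS = {
    ("stopped", "start"): "running",
    ("running", "stop"): "stopped",
    ("running", "restart"): "running",
    ("running", "terminate"): "terminated",
    ("stopped", "terminate"): "terminated",
}

def apply_action(state: str, action: str) -> str:
    """Return the new instance state, or raise on an invalid action."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action!r} an instance that is {state!r}")

state = "stopped"
for action in ("start", "restart", "stop"):
    state = apply_action(state, action)
print(state)  # stopped
```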

    Reply
  13. Tomi Engdahl says:

    Microsoft, Dropbox execs go public with their Office hookup
    Getting into bed … with iOS and Android
    http://www.theregister.co.uk/2014/11/04/microsoft_dropbox_office_deal/

    Microsoft and Dropbox have inked a deal to integrate their cloud-hosted stuff: mobile users of Office 365 will be able to automatically save files to their Dropbox account from within Redmond’s software.

    And Dropbox users will also be able to edit Word, Excel, and PowerPoint files from within Dropbox and share them within the firm’s business groups system.

    “People need easier ways to create, share and collaborate regardless of their device or platform,” said Microsoft CEO Satya Nadella before narrowing down “regardless of their device or platform” to “Android and iOS.”

    Office 365 apps for the pair of mobile operating systems will be updated shortly, Microsoft said.

    And over the next couple of months, Dropbox will build a Windows tablet and Windows Phone app for its service that drills into Office 365; the ability to edit and share files within Dropbox groups should be available by the middle of next year.

    Reply
  14. Tomi Engdahl says:

    Microsoft brings the CLOUD that GOES ON FOREVER
    Sky’s the limit with unrestricted space in the cloud
    http://www.theregister.co.uk/2014/10/27/onedrive_storage_caps_lifted_for_office_365/

    With cloud storage rapidly becoming a commodity, Microsoft has taken the next logical step in the file-syncing bunfight, doing away with storage quotas altogether for paying customers of its subscription Office 365 offerings.

    Redmond began opening up the data-hosting sluices in June, when it gave every Office 365 subscriber 1TB of OneDrive storage each. Now even that limit is being removed, and customers will be able to use the service to store as much as they want.

    Unlimited cloud storage will be a feature of every paid Office 365 plan, Redmond says, even including the humble Office 365 Personal tier, which currently goes for $7 (£6) per month or $70 (£60) when billed annually.

    Users of the free Office Online version, on the other hand, will still have just 15GB of OneDrive available.

    Reply
  15. Tomi Engdahl says:

    Google Cloud Platform Live: Introducing Container Engine, Cloud Networking and much more
    http://googlecloudplatform.blogspot.fi/2014/11/google-cloud-platform-live-introducing-container-engine-cloud-networking-and-much-more.html

    Google Container Engine: run Docker containers in compute clusters, powered by Kubernetes
    Google Container Engine lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Create and wire together container-based services, and gain common capabilities like logging, monitoring and health management with no additional effort. Based on the open source Kubernetes project and running on Google Compute Engine VMs, Container Engine is an optimized and efficient way to build your container-based applications. Because it uses the open source project, it also offers a high level of workload mobility, making it easy to move applications between development machines, on-premise systems, and public cloud providers. Container-based applications can run anywhere, but the combination of fast booting, efficient VM hosts and seamless virtualized network integration make Google Cloud Platform the best place to run them.

    Managed VMs in App Engine: PaaS – Evolved
    App Engine was born of our vision to enable customers to focus on their applications rather than the plumbing. Earlier this year, we gave you a sneak peek at the next step in the evolution of App Engine — Managed VMs — which will give you all the benefits of App Engine in a flexible virtual machine environment. Today, Managed VMs goes beta and adds auto-scaling support, Cloud SDK integration and support for runtimes built on Docker containers. App Engine provisions and configures all of the ancillary services that are required to build production applications — network routing, load balancing, auto scaling, monitoring and logging — enabling you to focus on application code. Users can run any language or library and customize or replace the entire runtime stack (want to run Node.js on App Engine? Now you can). Furthermore, you have access to the broader array of machine types that Compute Engine offers.

    three new connectivity options:

    Direct peering gives you a fast network pipe directly to Google in any of over 70 points of presence in 33 countries around the world
    Carrier Interconnect enables you to connect to Google with our carrier partners including Equinix, IX Reach, Level 3, TATA Communications, Telx, Verizon, and Zayo
    Next month, we will introduce VPN-based connectivity

    Reply
  16. Tomi Engdahl says:

    Google’s cloud steals more business from Amazon — Airbnb, Netflix, Rovio announced as customers
    http://venturebeat.com/2014/11/04/google-cloud-airbnb/

    If you’re looking for proof that Google’s public cloud is taking off, well, Google’s got you covered.

    During the company’s Google Cloud Platform Live event, Google cited several examples of companies that had started to use the Google Cloud Platform in lieu of, or in addition to, the largest public cloud around, Amazon Web Services.

    Among the Cloud Platform customers Google teased today were Airbnb, Atomic Fiction, Citrix, Netflix, and Rovio, all of which have come forward as Amazon cloud customers in the past.

    And it’s not just the popular Google App Engine (GAE), one of the first platform-as-a-service clouds to hit the market, that companies are using. Home-sharing startup Airbnb, for instance, is going beyond usage of GAE and BigQuery service for storing and querying data.

    “They’re using some GCE (Google Compute Engine),” Shailesh Rao, head of Google’s cloud business unit, told reporters in a press conference today. “They just started using some GCE.”

    Those are fighting words. Amazon has long touted Airbnb as a major user of its cloud. Now Airbnb appears to be using the cloud market’s highly competitive nature as a way to get great deals with cloud providers.

    And Google isn’t just targeting Web companies.

    Reply
  17. Tomi Engdahl says:

    PaaS security considerations
    http://embeddedexperience.blogspot.fi/2014/10/paas-security-considerations.html

    Security is the first concern that arises when talking about cloud services. Let’s take a closer look.

    Cloud services are usually categorized as SaaS, PaaS, and IaaS. When it comes to security, I personally trust PaaS the most.

    Summary: IaaS – you’re on your own. PaaS – limited but protected. SaaS – you just have to trust.

    Let’s dig into some details of the security mechanisms of a PaaS service. I’m using IBM Bluemix as an example here.

    Control of external communication
    Only HTTP/S and WebSocket/S connections are allowed. All other connection attempts are discarded. All external connections go through an external appliance for improved security.

    API isolation
    Only a selected set of application programming interfaces is provided to the developer. Even if the application is behaving badly, it cannot do much harm.

    Data protection
    Data is proven to be available only to the given application. However, several instances may share the same data store if configured to do so.

    Platform instantiation
    Each application runs in its own container that has specific resource limits for processor, memory, and disk.
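
    Per-container limits like these boil down to an admission check against fixed quotas before an application (or a request for more resources) is accepted. A toy sketch – the quota numbers are made up for illustration, not Bluemix’s actual limits:

```python
# Toy per-application resource quota check, in the spirit of the
# per-container CPU/memory/disk limits described above.
# Quota values are illustrative only.

QUOTA = {"cpu_shares": 512, "memory_mb": 1024, "disk_mb": 2048}

def admit(requested: dict) -> bool:
    """True if every requested resource fits inside the fixed quota."""
    return all(requested.get(k, 0) <= limit for k, limit in QUOTA.items())

print(admit({"memory_mb": 512, "disk_mb": 1024}))  # True: within quota
print(admit({"memory_mb": 4096}))                  # False: over memory cap
```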

    Reply
  18. Tomi Engdahl says:

    Bluemix
    IBM is challenging the IoT market with its new cloud offering.
    Tuesday, May 27, 2014
    http://embeddedexperience.blogspot.fi/2014/05/bluemix.html

    Last summer IBM acquired the hosting company SoftLayer, and has invested in expanding its network to cover 40 data centers worldwide. Since the acquisition, IBM has also invested significantly in its cloud software portfolio.

    Bluemix is IBM’s Platform as a Service (PaaS) solution on top of SoftLayer’s Infrastructure as a Service (IaaS). SoftLayer hosting is compatible with the OpenStack specification. Bluemix provides a Cloud Foundry-compatible cloud computing environment with a simple web user interface for easy development and deployment.

    At the moment, Bluemix is in beta and available to developers free of charge. Commercial launch of the service is scheduled for the summer, and pricing has not yet been published. Bluemix represents a new software licensing model for IBM: instead of an initial investment plus an annual maintenance fee, customers pay per use (capacity), just as with popular hosting services.

    Reply
  19. Tomi Engdahl says:

    Microsoft releases free Antimalware for Azure
    http://www.zdnet.com/microsoft-releases-free-antimalware-for-azure-7000035467/

    Summary: The service, using the same engine and signatures as Microsoft’s other offerings, is now available to most Azure virtual machines. The software is free, but use of it may cost money.

    Reply
  20. Tomi Engdahl says:

    Google Wants to Store Your Genome
    For $25 a year, Google will keep a copy of any genome in the cloud.
    http://www.technologyreview.com/news/532266/google-wants-to-store-your-genome/

    Google is approaching hospitals and universities with a new pitch. Have genomes? Store them with us.

    The search giant’s first product for the DNA age is Google Genomics, a cloud computing service that it launched last March but went mostly unnoticed amid a barrage of high profile R&D announcements from Google, like one late last month about a far-fetched plan to battle cancer with nanoparticles

    Google Genomics could prove more significant than any of these moonshots. Connecting and comparing genomes by the thousands, and soon by the millions, is what’s going to propel medical discoveries for the next decade. The question of who will store the data is already a point of growing competition between Amazon, Google, IBM, and Microsoft.

    Google began work on Google Genomics 18 months ago, meeting with scientists and building an interface, or API, that lets them move DNA data into its server farms and do experiments there using the same database technology that indexes the Web and tracks billions of Internet users.

    “We saw biologists moving from studying one genome at a time to studying millions,”

    Reply
  21. Tomi Engdahl says:

    Words to put dread in a sysadmin’s heart: ‘We are moving our cloud from Windows to Linux’
    If you must pick Windows, pick early and stick
    http://www.theregister.co.uk/2014/11/10/start_ups_doing_it_right/

    The worldview of elastic compute, or mine at least, has historically had very little Microsoft involved in it. Recently however, I have attended several job interviews and one question that has invariably been asked is: “We are planning on moving from Windows to Linux. Have you done it before?”

    This situation usually arrives through a well-worn path brought about by a number of factors. Most startups don’t hire full-time (or even part-time) sysadmins in the beginning, to save on costs. Developers in startups, the ones whom I have spoken with, seem to know enough Windows to get a basic web stack configuration working, perhaps even a cluster.

    The issues of how to scale out, build resilience and the costs involved are the last things on their mind.

    Trying to manage your own physical and virtual infrastructure to scale with Microsoft products isn’t any more difficult than it is with Linux, given a good admin. The real problem (perceived or otherwise) is the cost of Windows deployment and trying to work out the licensing models.

    How do you keep your app or service on Microsoft and still run it yourself?

    It’s that question that leads us to the main issue of how licensing works for elastic compute in a Microsoft environment, because the key deliverable is also the main issue: how do you deal with a Microsoft virtual machine in a highly elastic environment where the server might only live for a couple of hours?

    Historically, Redmond has been complex. After coming across the question of licence-cost-induced migration several times, I decided to investigate and there are a number of options, all with different pros and cons.

    Microsoft’s virtualisation card is Hyper-V. If you are going all in, then you require the high-end Windows Datacenter Edition, which gives an unlimited number of virtual instances.

    The list price is $6,155, no discounts. The average price on a data centre, dual CPU, unlimited core licence is $4,809 for the operating system. In a discussion I had with Microsoft, the firm quoted just under $3,500.

    Often, server pricing does not include Client Access Licenses (CALs), a licence for every device accessing the server. CALs are extra and are not a very friendly licensing model for a web-facing firm that will have fluctuating numbers of customers. That’s because this pricing model grew up in the enterprise, where the number of clients was known.

    On the positive side, while the processor licensing sounds expensive, Datacenter edition does give you the ability to stop worrying about licensing and if you run several machines on each host, the cost comes right down. Of course you still have the hardware, cooling and lighting costs, too.

    This compares to a similarly specified virtual machine in Microsoft’s Azure cloud. Such a configuration, assuming 24/7 usage and excluding disk and network traffic, comes in at $2,640 for the year – according to Microsoft. I realise this is not a true like-for-like comparison, but it does give an indication of price.
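
    The quoted figures work out as follows – simple arithmetic on the numbers above, ignoring disk, network traffic and any discounts:

```python
# Back-of-envelope check on the figures quoted above: a $2,640/year
# Azure VM at 24/7 usage, versus the $4,809 average Datacenter licence.

azure_per_year = 2640        # USD, 24/7 VM, excl. disk and traffic
hours_per_year = 365 * 24    # 8760

hourly_rate = azure_per_year / hours_per_year
print(f"~${hourly_rate:.2f}/hour")  # roughly $0.30/hour

datacenter_licence = 4809    # USD, one-off OS licence; hardware, cooling
                             # and lighting are extra
print(f"licence alone costs {datacenter_licence / azure_per_year:.1f}x "
      f"one year of the Azure VM")
```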

    You get what you pay for…

    When it comes to comparing Microsoft and Linux, let’s be clear: there’s no such thing as a free lunch.

    Reply
  22. Tomi Engdahl says:

    How do Reg readers keep their vendors in line?
    CIOs talk sticks, carrots and account managers
    http://www.theregister.co.uk/2014/10/27/reg_roundtable_2_writeup/

    It’s difficult to speak openly about how to squeeze the best out of your suppliers. On the one hand, you always suspect there’s more you could be doing. On the other, you don’t want to give away your secrets.

    All the IT execs were facing multiple challenges in managing cloud vendors, not least that they may not even have an account manager.

    Data sovereignty also loomed through the smog as something that alarmed the execs in terms of the US government’s fight with Microsoft over accessing data in the EU. But since it hasn’t actually happened yet they do not feel they can make a case for changing how they select which cloud. In contrast the CTOs of startups loved the cloud, since their appetite for risk isn’t just higher, it has a different structure. They said they will deal with the fallout of international privacy issues if and when they happen. What they don’t want is to lock up capital in a data centre and in custom software development that they might never grow to support.

    There are the standard issues with cloud data: vendor fragility, government snooping, retrieving your data, and the “the contract is whatever we say it is this week” attitude of many cloud suppliers. But the execs were clear about what business unit managers fail to even notice: the sheer scale of the business logic, in terms of rules, procedures, reporting and governance, that you’ve built up in your older systems; that it is horribly hard to extract from whatever confection of VB, Excel, SQL, and even Cobol your firm has built up over the years; and that when they’ve looked at the cloud they see even bigger risks. At least legacy code can be understood and translated by a team of contractors who are prepared to do dull work for good money, but in a proprietary environment like Salesforce.com this is a lot harder than decoding ancient VB 6.

    Reply
  23. Tomi Engdahl says:

    HPC ace grills vendor over virtual desktop flavours: What about the POWER USERS?
    Cloud service
    http://www.theregister.co.uk/2014/11/10/virty_desktops/

    The idea that you can run a bunch of user desktop sessions on servers isn’t new. The benefits promised by desktop virtualisation are pretty profound, including large reductions in enterprise tech costs, better support for users, improved security and even increased user happiness.

    That last promise, the increased user happiness, has been difficult to keep – particularly when you’re dealing with “power users” who use a wide range of demanding applications. Data centres would also run into trouble when trying to scale virtual desktop infrastructure efficiently and economically.

    However, times change, and new technology has come along to make it possible, and much easier, to deliver solid quality of desktop service to any users – regardless of their demands.

    In the video, we talk about the firm’s “desktop in a cloud” DaaS service and its plans to roll it out to public clouds. It now has a new mechanism that supports NVIDIA GPUs to provide full 3D graphics to users, whether they’re in public or private cloud environments.

    Reply
  24. Tomi Engdahl says:

    Amazon Cloud Drive Gets Its Own API
    http://techcrunch.com/2014/11/11/amazon-cloud-drive-gets-its-own-api/

    Amazon is working to make its Cloud Drive service more competitive in the crowded online storage market. After recently bundling free, unlimited photo storage on Cloud Drive for Amazon Prime customers, today the company is going after developers with a new Cloud Drive API. The API will allow third-party developers to integrate Cloud Drive into their own applications, so they can focus more on their app’s feature set rather than the complexities involved with storage.

    Developers, of course, have been using Amazon Web Services for some time, but the Cloud Drive API is focused on building similar, more consumer-facing technology into applications. For instance, a photo-editing app could allow users to browse and edit photos they have stored on Amazon Cloud Drive.

    Amazon says developers who use the Cloud Drive API won’t have to worry about things like various screen resolutions, metadata management, indexing, search or sync functionality; it will be included after the API’s integration. Cloud Drive is also available within other developer tools, including Filepicker and Temboo, which makes it easier on those working with a range of cloud services.

    Reply
  25. Tomi Engdahl says:

    Demystifying Kubernetes: the tool to manage Google-scale workloads in the cloud
    http://www.computerweekly.com/feature/Demystifying-Kubernetes-the-tool-to-manage-Google-scale-workloads-in-the-cloud

    Once every five years, the IT industry witnesses a major technology shift. In the past two decades, we have seen the server paradigm evolve into web-based architecture, which matured into service orientation before finally moving to the cloud. Today it is containers.

    When launched in 2008, Amazon EC2 was nothing short of a revolution – a self-service portal that launched virtual servers at the click of a button fundamentally changed the lives of developers and IT administrators.

    Docker resurrecting container technology

    The concept of containers is not new – FreeBSD, Solaris, Linux and even Microsoft Windows had some sort of isolation to run self-contained applications. When an application runs within a container, it gets an illusion that it has exclusive access to the operating system. This reminds us of virtualisation, where the guest operating system (OS) lives in an illusion that it has exclusive access to the underlying hardware.

    Containers and virtual machines (VMs) share many similarities but are fundamentally different because of the architecture. Containers run as lightweight processes within a host OS, whereas VMs depend on a hypervisor to emulate the x86 architecture. Since there is no hypervisor involved, containers are faster, more efficient and easier to manage.

    One company that democratised the use of Linux containers is Docker. Though it did not create the container technology, it deserves the credit for building a set of tools and the application programming interface (API) that made containers more manageable.

    Though Docker hogs the limelight in the cloud world, there is another company that mastered the art of running scalable, production workloads in containers. And that is Google, which deals with more than two billion containers per week. That’s a lot of containers to manage. Popular Google services such as Gmail, Search, Apps and Maps run inside containers.

    With Google entering the cloud business through App Engine, Compute Engine and other services, it is opening up the container management technology to the developers.

    New era of containers with Kubernetes

    One of the first tools that Google decided to make open source is called Kubernetes, which means “pilot” or “helmsman” in Greek.

    Kubernetes works in conjunction with Docker. While Docker provides the lifecycle management of containers, Kubernetes takes it to the next level by providing orchestration and managing clusters of containers.

    Kubernetes
    http://kubernetes.io/
    Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops.
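    At its heart, the “manage clusters of containers” job described above is a reconciliation loop: compare the desired replica count against what is actually running, then start or stop containers to close the gap. A minimal sketch of that idea in plain Python (not the real Kubernetes API; the container names are made up for illustration):

```python
def reconcile(desired: int, running: list) -> list:
    """One pass of a Kubernetes-style control loop: converge the set of
    running containers toward the desired replica count."""
    running = list(running)
    while len(running) < desired:                 # scale up: start containers
        running.append(f"container-{len(running)}")
    while len(running) > desired:                 # scale down: stop containers
        running.pop()
    return running

state = []
state = reconcile(3, state)   # operator asks for 3 replicas
print(state)                  # ['container-0', 'container-1', 'container-2']
state = reconcile(1, state)   # desired count lowered
print(state)                  # ['container-0']
```

    The real system layers scheduling, health checking and networking on top, but the declare-a-desired-state-and-converge pattern is the core design choice that separates orchestration from one-off `docker run` commands.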

    Reply
  26. Tomi Engdahl says:

    An Introduction to Kubernetes
    https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

    Kubernetes is a powerful system, developed by Google, for managing containerized applications in a clustered environment. It aims to provide better ways of managing related, distributed components across varied infrastructure.

    In this guide, we’ll discuss some of Kubernetes’ basic concepts. We will talk about the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling.

    If you are not familiar with CoreOS, it may be helpful to review some basic information about the CoreOS system in order to understand the types of environments that Kubernetes is meant to be deployed on.

    Kubernetes, at its basic level, is a system for managing containerized applications across a cluster of nodes. In many ways, Kubernetes was designed to address the disconnect between the way that modern, clustered infrastructure is designed, and some of the assumptions that most applications and services have about their environments.

    An Introduction to CoreOS System Components
    https://www.digitalocean.com/community/tutorials/an-introduction-to-coreos-system-components

    CoreOS is a powerful Linux distribution built to make large, scalable deployments on varied infrastructure simple to manage. Based on a build of Chrome OS, CoreOS maintains a lightweight host system and uses Docker containers for all applications. This system provides process isolation and also allows applications to be moved throughout a cluster easily.

    To manage these clusters, CoreOS uses a globally distributed key-value store called etcd to pass configuration data between nodes. This component is also the platform for service discovery, allowing applications to be dynamically configured based on the information available through the shared resource.
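    The etcd pattern described here, a shared key-value store whose watchers are notified when configuration changes, can be sketched in a few lines. This is a toy in-process model, not the etcd API itself; the key path and address are invented for illustration:

```python
class MiniKV:
    """Toy etcd-style key-value store: setting a key notifies any
    watcher registered on that key with the new value."""
    def __init__(self):
        self.data = {}
        self.watchers = {}                 # key -> list of callbacks

    def watch(self, key, callback):
        self.watchers.setdefault(key, []).append(callback)

    def set(self, key, value):
        self.data[key] = value
        for cb in self.watchers.get(key, []):
            cb(key, value)                 # push the change to subscribers

seen = []
kv = MiniKV()
kv.watch("/services/web", lambda k, v: seen.append(v))
kv.set("/services/web", "10.0.0.5:80")     # a node registers itself
print(seen)                                # ['10.0.0.5:80']
```

    Service discovery falls out of the same mechanism: a new node writes its address under a well-known key, and every watcher reconfigures itself without polling.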

    Reply
  27. Tomi Engdahl says:

    Microsoft backs cloud rival Google’s open-source Kubernetes project
    http://www.computerweekly.com/news/2240224321/Microsoft-backs-cloud-rival-Googles-open-source-Kubernetes-project

    Cloud provider Microsoft has joined rival Google to bring support for the Kubernetes open-source project on its Azure platform. The project is aimed at allowing application and workload portability and letting users avoid supplier lock-in.

    Kubernetes, currently in pre-production beta, is an open-source implementation of container cluster management. It was introduced by Google in June, when it declared support for Docker – the open-source program that enables a Linux application and its dependencies to be packaged as a container. Docker is Linux OS-agnostic, which means even Mac and Windows users are able to run Docker by installing a small Linux kernel on their infrastructure.

    Reply
  28. Tomi Engdahl says:

    729 teraflops, 71,000-core Super cost just US$5,500 to build
    Cloud doubters, this isn’t going to be your best day
    http://www.theregister.co.uk/2014/11/12/aws_cloud_turns_super_again/

    Cycle Computing has helped hard drive giant Western Digital shove a month’s worth of simulations into eight hours on Amazon cores.

    The simulation workload was non-trivial: to check out new hard drive head designs, the company runs a million simulations, each of which involved a sweep of 22 head design parameters on three types of media.

    In that context, HGST’s in-house computing became a serious bottleneck, with each simulation run taking as much as 30 days to complete.

    Hence, in what it describes as the largest enterprise cloud run so far, Cycle Computing spun up nearly 71,000 AWS cores for an eight-hour run.

    Cycle Computing claims the cluster delivered 729 teraflops to run HGST’s MRM/MatLab app under the control of its CycleCloud cluster-creation software and the Chef automation system.

    The cloud outfit says it spun the app up from zero to 50,000 cores in 23 minutes, and calculates that the run, dubbed “Gojira”, completed nearly 620,000 compute-hours.

    While Gojira wouldn’t quite make it into the Top 50 supers in the world, it only cost a little over US$5,500.
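    Using the approximate figures quoted above, the unit economics of the run are easy to check:

```python
# Figures quoted above for the "Gojira" run (both approximate).
compute_hours = 620_000     # "nearly 620,000 compute-hours"
total_cost = 5_500          # "a little over US$5,500"

per_core_hour = total_cost / compute_hours
print(f"≈ ${per_core_hour:.4f} per core-hour")   # just under a cent per core-hour
```

    Less than a cent per core-hour, with no capital outlay, is the argument the cloud doubters have to answer.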

    Reply
  29. Tomi Engdahl says:

    AWS: With more than 1 million active customers, we’re your stack
    http://www.zdnet.com/aws-with-more-than-1-million-active-customers-were-your-stack-7000035737/

    Summary: Andy Jassy, senior vice president of Amazon Web Services, positioned the company’s cloud as a feature rich stack instead of a commodity infrastructure play.

    Amazon Web Services has more than 1 million active customers as defined by non-Amazon customers who use the cloud at least once a month. The takeaway: “The cloud is the new normal,” said Andy Jassy, senior vice president of Amazon Web Services.

    Jassy positioned that AWS is the fastest growing IT company in the world and touted customer wins and ecosystem growth. Jassy called out systems integrators and 1,900 products on AWS’ marketplace. “This area of our business has grown dramatically,” said Jassy.

    The themes from Jassy emerged at AWS’ re:Invent 2014 conference.

    Overall, AWS is positioning itself as the new normal for infrastructure with a platform that can do everything from analytics, to app services to management to mobile services, administration and security as well as the core compute, storage, networking and databases.

    With Jassy’s talk it became clear that AWS sees itself as more than a commodity infrastructure play. Jassy talked up features that have evolved over time for compute, storage and data warehousing.

    “The pace of innovation is accelerating,” said Jassy, who brought up a bevy of customers including MLB Advanced Media, which builds on AWS.

    The goal here is clear: Create a stack for enterprises and a nice daily annuity.

    Yes, daily. For instance, Antony Setiawan, cloud systems engineer at Adobe, is using Splunk to monitor and log its AWS instances. Adobe uses AWS infrastructure for its various services, including the Creative Cloud. Setiawan oversees about 3.5 TB of data a day and aims to spend $12,000 per TB a day on AWS. Multiply that Adobe use case by hundreds of enterprises and you get the picture for AWS.

    Reply
  30. Tomi Engdahl says:

    Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS
    http://aws.amazon.com/blogs/aws/highly-scalable-mysql-compat-rds-db-engine/

    We launched the Amazon Relational Database Service (RDS) service way back in 2009 to help you to set up, operate, and scale a MySQL database in the cloud. Since that time, we have added a multitude of options to RDS including extensive console support, three additional database engines ( Oracle, SQL Server, and PostgreSQL), high availability (multiple Availability Zones) and dozens of other features.

    We have come a long way in five years, but there’s always room to do better! The database engines that I listed above were designed to function in a constrained and somewhat simplistic hardware environment — a constrained network, a handful of processors, a spinning disk or two, and limited opportunities for parallel processing or a large number of concurrent I/O operations.

    Amazon Aurora – New MySQL-Compatible Database Engine
    Today we are launching Amazon Aurora, a fully-managed, MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.

    When you use Amazon RDS for Aurora, you’ll spend less time managing and tuning your database, leaving you with more time to focus on building your application and your business. As your business grows, Amazon Aurora will scale with you. You won’t need to take your application off line in order to add storage. Instead, Amazon Aurora will add storage in 10 GB increments on an as-needed basis, all the way up to 64 TB. Baseline storage performance is rapid, reliable and predictable—it scales linearly as you store more data, and allows you to burst to higher rates on occasion. You can scale the instance size in minutes and you can add replicas with a couple of clicks.

    Storage is automatically replicated across three AWS Availability Zones (AZs) for durability and high availability, with two copies of the data in each Availability Zone. This two-dimensional redundancy (within and across Availability Zones) allows Amazon Aurora to make use of quorum writes. Instead of waiting for all writes to finish before proceeding, Amazon Aurora can move ahead as soon as at least 4 of 6 writes are complete. Storage is allocated in 10 GB blocks distributed across a large array of SSD-powered storage.
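    The 4-of-6 quorum write described above can be sketched as follows: a write commits as soon as acknowledgements arrive from at least four of the six copies (two per Availability Zone), so one whole AZ can be slow or unreachable without stalling writes. A minimal illustration, not Aurora’s actual storage protocol:

```python
WRITE_QUORUM, TOTAL_COPIES = 4, 6   # the 4-of-6 scheme described above

def quorum_write(acks: list) -> bool:
    """Return True once at least 4 of the 6 storage copies have
    acknowledged the write; the engine need not wait for the rest."""
    return sum(acks) >= WRITE_QUORUM

# Two copies per AZ, three AZs: losing an entire AZ still commits.
acks = [True, True,    # AZ-a
        True, True,    # AZ-b
        False, False]  # AZ-c unreachable
print(quorum_write(acks))                                      # True
print(quorum_write([True, True, True, False, False, False]))   # False
```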

    Reply
  31. Tomi Engdahl says:

    Amazon cloud apps LIVE! Bezos’ boys spin up ‘lifecycle management’ services
    Integration, deployment tools to speed AWS development
    http://www.theregister.co.uk/2014/11/13/amazon_spins_up_ifecycle_management_services_for_cloudy_apps/

    Amazon thinks it knows a thing or two about deploying applications in the cloud, and to prove it, it’s rolling out a collection of new tools to help developers improve the build and deployment lifecycles of their cloudy apps.

    The first of the three tools unveiled on Wednesday at the AWS re:Invent conference in Las Vegas is CodeDeploy, which aims to make it fast and easy for developers to deploy and update applications on Amazon EC2.

    CodeDeploy is based on an internal Amazon cloud deployment tool called Apollo, which Amazon Web Services senior veep Andy Jassy said is consistently rated by Amazon’s own programmers as one of the best things about working at the company.

    “To give you an idea of how much use it’s getting,” Jassy said during his opening keynote, “in the last 12 months we’ve pushed 50 million deployments through Apollo. That’s roughly 95 per minute. So this is a service that’s acquired a lot of battle testing and usage.”

    The tool allows developers to roll out code updates to thousands of EC2 instances all at once, or they can deploy to smaller subgroups of instances, to make sure the new code doesn’t break anything. If it does, the deployment process can be halted and the affected instances can be easily rolled back to their prior state.
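    The halt-and-roll-back behaviour described above is the interesting part. A sketch of that pattern in plain Python (this is an illustration of the idea, not the CodeDeploy API; the instance records and health check are invented):

```python
def rolling_deploy(instances, new_version, batch_size, healthy):
    """Deploy to instances in batches; if any batch fails its health
    check, halt and roll every touched instance back to its old version."""
    touched = []
    for i in range(0, len(instances), batch_size):
        batch = instances[i:i + batch_size]
        for inst in batch:
            touched.append((inst, inst["version"]))   # remember prior state
            inst["version"] = new_version
        if not all(healthy(inst) for inst in batch):
            for inst, old in touched:                 # undo everything so far
                inst["version"] = old
            return "rolled-back"
    return "deployed"

fleet = [{"id": n, "version": "v1"} for n in range(4)]
result = rolling_deploy(fleet, "v2", batch_size=2, healthy=lambda i: True)
print(result, {i["version"] for i in fleet})   # deployed {'v2'}
```

    Deploying in small batches first is what lets a bad build be caught after touching two machines instead of two thousand.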

    What’s more, Amazon is offering the service to all of its AWS customers at no charge. That’s partly because it doesn’t cost Amazon much to operate, Jassy said, but also because it helps fulfil the cloud giant’s goal of bringing more users to its platform.

    CodeCommit to CodePipeline

    Next is CodePipeline, a continuous build, test, and integration service for the AWS cloud that was also based on internal Amazon tools. Jassy said CodePipeline is powerful and flexible enough that 80 per cent of Amazon’s development teams are now using the tech.

    “You can take code from any repository and spin up any kind of integration policies that you want or tasks that you want,” Jassy said. “It’s got workflow model visualization so that you can see what you’re integrating and what’s working and what’s not working. It integrates with all the existing tools that you use.”

    Completing the trio of tools will be CodeCommit, a managed source code control service that hosts private Git repositories on the AWS cloud.

    Amazon has not shared any information about pricing or release dates for CodePipeline or CodeCommit, other than to say we should expect them in “early 2015”.

    Reply
  32. Tomi Engdahl says:

    Nvidia launches Grid-powered on-demand cloud gaming service for Shield
    Will launch with titles like Batman: Arkham City and Borderlands 2
    http://www.theinquirer.net/inquirer/news/2381279/nvidia-launches-grid-powered-on-demand-cloud-gaming-service-for-shield

    NVIDIA has announced that it will roll out an on-demand cloud-based gaming service for its Shield tablet.

    The service is powered by the firm’s Grid virtual GPU technology, which takes advantage of Citrix’s XenDesktop 7.1 and Citrix XenServer 6.2 virtualisation software to allow users to stream graphically intensive applications remotely. It will be free for Shield Tablet users until 30 June 2015.

    “While Grid makes gaming gratification immediate, it took us a decade to invent the technology behind the service that streams GeForce GTX-quality graphics to Shield devices,” explained Nvidia, which touted the technology as a “gaming supercomputer in the cloud”.

    “The enabling technologies of Grid are super low latency from controller streaming to graphics to game streaming. And virtualisation so that many gamers can share the GeForce cloud gaming supercomputer,” the firm added.

    Nvidia explained that streaming games is difficult as it requires a powerful gaming computer in the cloud as well as ensuring that it can deliver games to players in milliseconds.

    Reply
  33. Tomi Engdahl says:

    Amazon Launches Lambda, An Event-Driven Compute Service
    http://techcrunch.com/2014/11/13/amazon-launches-lambda-an-event-driven-compute-service/

    Amazon Web Services announced a new service today called Lambda, a stateless event-driven compute service for dynamic applications that doesn’t require provisioning of any compute infrastructure.

    As AWS’s CTO Werner Vogels pointed out, this will enable programmers to reduce their overall development effort. You simply write the code and define the event triggers, and it will run for you automatically when the conditions are met. This automation should save time and money because instead of running the whole stack for something that may only run infrequently, you can now run it without any resources and it runs automatically.

    Lambda will take care of managing, scaling and monitoring for you. Milliseconds after an event is triggered, it’s processed through stateless cloud functions, and thousands of these events can run in parallel (and you aren’t limited in any way by resources).
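    The “write the code and define the event triggers” model boils down to registering handlers against event types and letting the platform invoke them when events arrive. A toy in-process sketch of that dispatch pattern (the event type and handler are invented examples, and this is not the Lambda API itself):

```python
# Toy event-driven dispatcher in the spirit of Lambda: register a
# function against an event type; when an event fires, every matching
# handler runs, with no pre-provisioned servers in the picture.
handlers = {}

def on(event_type):
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("s3:ObjectCreated")
def make_thumbnail(event):
    return f"thumbnail for {event['key']}"

def fire(event_type, event):
    return [fn(event) for fn in handlers.get(event_type, [])]

print(fire("s3:ObjectCreated", {"key": "cat.jpg"}))
# ['thumbnail for cat.jpg']
```

    The billing consequence follows directly from the model: if no events fire, no handler runs, and nothing idles waiting for them.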

    Reply
  34. Tomi Engdahl says:

    Amazon Announces EC2 Container Service For Managing Docker Containers On AWS
    http://techcrunch.com/2014/11/13/amazon-announces-ec2-container-service-for-managing-docker-containers-on-aws/

    At its re:invent developer conference in Las Vegas, Amazon today announced its first Docker-centric product: the EC2 Container Service for managing Docker containers on its cloud computing platform. The service is available in preview now and developers who want to use it can do so free of charge.

    As Amazon CTO Werner Vogels noted today, despite all of their advantages, it’s still often hard to schedule containers and manage them. “What if you could get all the benefits of containers without the overhead?” he asked. With this new service, developers can now run containers on EC2 across an automatically managed cluster of instances.

    With this, Amazon follows in the footsteps of other large cloud vendors. Google, for example, is making major investments in adding more Docker capabilities to its Cloud Platform, including its efforts around Kubernetes, a deep integration into App Engine and its recently launched Container Engine. Microsoft, too, is adding more support for Docker to its Azure platform and is even going as far as supporting the Google-sponsored Kubernetes project.

    As an Amazon executive told me yesterday — without mentioning today’s announcements – Amazon likes to offer the services that its customers are asking for. Clearly, the company has now heard its customers wishes.

    Reply
  35. Tomi Engdahl says:

    Amazon opens public API as cloud wars heat up
    Confirms 11 launch partners
    http://www.theinquirer.net/inquirer/news/2381306/amazon-opens-public-api-as-cloud-wars-heat-up

    AMAZON WEB SERVICES (AWS) has announced that it will open its API to third-party developers for the first time.

    The move means that developers will be able to integrate Amazon Cloud Services into their apps.

    Peter Heinrich, Amazon’s tech evangelist, said: “When you connect your users to their own Cloud Drive storage, you can preserve and protect their app data without having to build an online data management system of your own.

    “Cloud Drive is built to be robust and to scale transparently, so you never have to worry about availability or performance. The Cloud Drive API doesn’t impose any restrictions on file type, so your app can work with all kinds of content.”

    Reply
  36. Tomi Engdahl says:

    729 teraflops, 71,000-core Super cost just US$5,500 to build
    Cloud doubters, this isn’t going to be your best day
    http://www.theregister.co.uk/2014/11/12/aws_cloud_turns_super_again/

    Cycle Computing has helped hard drive giant Western Digital shove a month’s worth of simulations into eight hours on Amazon cores.

    Reply
  37. Tomi Engdahl says:

    You get 50GB of free storage by combining different services into one

    Cloud computing means a lot of choice: services compete for users with free storage tiers and special features. With OneBigDrive you can take advantage of several cloud services at once. Besides the larger combined storage space, an advantage is better security.

    The idea of OneBigDrive is to combine cloud storage services. The supported services are currently Box, Google Drive, Dropbox, OneDrive and the less well-known Yandex Disk. With a free account you can use up to 50GB of the aforementioned cloud services’ free storage, if all the services are linked.

    If you have purchased additional space, using it requires an annual paid OneBigDrive subscription; the options are unlimited capacity and 100 gigabytes. OneBigDrive does not provide storage space itself, but only makes use of the linked cloud storage services.

    The program is easy to use. You create a OneBigDrive account, after which it is linked to your existing cloud services. From then on, on OS X and Windows you no longer need to open each individual cloud service to save files: files saved to OneBigDrive are distributed to the linked services automatically.

    Source: http://www.tivi.fi/viikonsofta/saat+50+gigaa+ilmaista+tallennustilaa+yhdistamalla+eri+palvelut+yhdeksi/a1028664

    Reply
  38. Tomi Engdahl says:

    Court order stops Bitcasa from deleting your cloud data, for now
    http://www.engadget.com/2014/11/16/bitcasa-faces-lawsuit/?ncid=rss_truncated

    If you’re miffed that Bitcasa not only dropped its unlimited cloud storage option but made you migrate to a costlier limited tier just to keep your files, you’ll be glad to hear that you’re getting a reprieve. Angry customers have filed a tentative class action lawsuit against Bitcasa for allegedly breaching its contract through the sudden switch. In tandem with the suit, the court handling the case has granted a restraining order that forces Bitcasa to save those files until at least November 20th. That’s not exactly a long interval, but there’s a hearing on the 19th that could extend the grace period further.

    There’s no certainty that the lawsuit will succeed, but it might serve as a warning to other internet storage outlets that are thinking of scaling back their features.

    Reply
  39. Tomi Engdahl says:

    Google Brings Autoscaling To Compute Engine
    http://techcrunch.com/2014/11/17/google-brings-autoscaling-to-compute-engine/

    Google continues to build out its cloud computing platform and today the company announced that its autoscaling service for Compute Engine, its infrastructure-as-a-service platform, is now available in beta.

    Using this new feature, developers can now have Compute Engine automatically spin up new machines based on demand. If your CPU utilization goes above a certain value or your HTTP load balancer notices a spike in incoming traffic, for example, you can now have Google start a new machine to distribute that load. You can also connect the autoscaler to Google’s Cloud Monitoring API to select custom metrics that’s important for your application. This also means you don’t need to have machines on standby to take care of unexpected demand. Instead, they only spin up if needed, which could save you quite a bit of money.
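    The decision logic behind such an autoscaler is a simple threshold rule evaluated on each tick. A sketch of that rule (the thresholds and bounds here are invented defaults for illustration, not Google’s):

```python
def autoscale(instances: int, cpu_util: float,
              high=0.75, low=0.25, min_n=1, max_n=10) -> int:
    """One autoscaler tick: add a machine when average CPU crosses the
    high-water mark, retire one when it falls below the low mark."""
    if cpu_util > high and instances < max_n:
        return instances + 1
    if cpu_util < low and instances > min_n:
        return instances - 1
    return instances

print(autoscale(2, 0.90))   # 3 -> traffic spike, spin one up
print(autoscale(3, 0.10))   # 2 -> idle, scale back down
print(autoscale(1, 0.10))   # 1 -> never below the minimum
```

    The money-saving claim in the paragraph above is visible in the last line: idle capacity is retired instead of sitting on standby.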

    Reply
  40. Tomi Engdahl says:

    Amazon’s ability to invest in AWS: By the numbers
    http://www.zdnet.com/amazons-ability-to-invest-in-aws-by-the-numbers-7000035829/

    Summary: AWS chief Andy Jassy was asked whether Amazon had the funds to compete with the cash piles of Microsoft and Google in the cloud wars. Here’s a look at his answer and the numbers that back it up.

    “Does Amazon have the financial heft to compete with Google and Microsoft in an enduring cloud war?”

    That question (paraphrased) was posed to Amazon Web Services’ head Andy Jassy in a press conference last week at the cloud provider’s re:Invent conference. Jassy’s answer was predictable. He noted AWS’ torrid growth and even said at some point the cloud service may be larger than Amazon’s e-commerce business.

    “You invest in what you believe are the most important investments long term. We think of (these investments) as planting seeds for very large trees that will be fruitful over times. There aren’t many opportunities like this in your lifetime.”

    Amazon’s ability to invest

    In 2013, Amazon had free cash flow of more than $2 billion, a sum on par with 2011 and better than 2012. Free cash flow for Amazon is a fourth-quarter affair, since its business looks more like a retailer’s, so a year-to-date free cash flow figure for 2014 doesn’t make a lot of sense. That commerce focus is why Amazon’s fourth quarter outlook was worrisome.

    Amazon ended the third quarter with $6.88 billion in cash and short-term investments. That war chest is Amazon’s smallest since the third quarter of 2012.

    Microsoft’s ability to invest

    In fiscal 2014 — the year ended June 30 — Microsoft had free cash flow of $26.75 billion. Note that Microsoft is the only one of the big three cloud infrastructure as a service providers that pays a dividend. Microsoft’s annual cash flow has ranged from $22.1 billion to $29 billion over the last five years.

    Microsoft ended fiscal 2014 with $85.5 billion in cash and short term investments. As of Sept. 30, Microsoft had $88.7 billion in cash and short term investments.

    Google’s ability to invest

    For 2013, Google delivered free cash flow of $11.3 billion, down from $13.35 billion in 2012. The company’s free cash flow in the September quarter was $8.44 billion.

    The search giant ended the third quarter with $62.16 billion in cash and short term investments, up from $58.7 billion in 2013.

    Conclusion

    It’s clear that Amazon has the weakest financial hand of the big three cloud providers and e-commerce just doesn’t generate as much cash as search ads and software.

    However, Amazon grabbed a lead with AWS and could extend it. It’s unknown whether AWS’ cash flow can be completely returned to grow the business, but it’s safe to say that Jeff Bezos isn’t going to scrimp.

    So far, Amazon has been able to fund its infrastructure as well as generate returns via AWS and there’s no evidence that investment will be pared.

    Reply
  41. Tomi Engdahl says:

    That dreaded syncing feeling: Will Microsoft EVER fix OneDrive?
    Microsoft’s long history of broken Windows sync
    http://www.theregister.co.uk/2014/11/18/that_syncing_feeling_will_microsoft_ever_fix_onedrive/

    Microsoft and synchronisation go back a long way.

    Synchronisation over the internet began with FolderShare, acquired with Byte Taxi in 2005, which evolved into Windows Live Sync and then Windows Live Mesh

    Live Mesh was swept away by SkyDrive, Microsoft’s cloud storage with a desktop synchronisation client, and SkyDrive was rebranded as OneDrive in February 2014. It is a key part of Microsoft’s cloud strategy, and ties in with Office Web Apps as well as iOS and (soon) Android versions of Office that use OneDrive for storage.

    Then again there is SkyDrive/OneDrive for Business, which has an entirely different ancestry, based on Microsoft’s document management and collaboration product SharePoint.

    Users do not much enjoy opening documents from a web browser, so the SharePoint team also worked on synchronisation.

    With all this history, nobody could accuse Microsoft of lack of experience in synchronisation, and yet neither the consumer OneDrive client nor the OneDrive for Business client works as well as it should. Of the two, the consumer version is better, with its main foible (tamed somewhat in silent updates) a tendency to duplicate files if you access them from more than one device, appending the name of the device to each copy and making it difficult to work out which is more current.

    OneDrive for Business, with its reassuring professional name, should be better but is not.

    When Windows 8.1 was released, Microsoft introduced an innovation for consumer OneDrive. A feature called Smart Files was added to Windows, so that cloud files could appear in Windows Explorer without actually being downloaded, but are made available on demand. There is also an option to sync specific files or folders for offline use. It is a brilliant feature, especially for Ultrabooks or tablets like Surface which have relatively small SSDs.

    Now that OneDrive storage limits are being lifted, syncing everything is not realistic on such devices.
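    The Smart Files mechanism described above amounts to a placeholder that hydrates on first read: the file is visible immediately, but its bytes are fetched only when something actually opens it. A minimal sketch of the idea (not Microsoft’s implementation; the file name and fetch callback are invented):

```python
class SmartFile:
    """Sketch of the Smart Files idea: the file appears in a listing
    immediately, but its contents are only fetched on first read."""
    def __init__(self, name, fetch):
        self.name = name
        self._fetch = fetch      # callable that downloads the real bytes
        self._data = None        # None = placeholder only, nothing on disk

    @property
    def downloaded(self):
        return self._data is not None

    def read(self):
        if self._data is None:   # hydrate on demand
            self._data = self._fetch(self.name)
        return self._data

f = SmartFile("report.docx", fetch=lambda name: b"cloud bytes")
print(f.downloaded)   # False -- listed in Explorer, nothing fetched yet
print(f.read())       # b'cloud bytes'
print(f.downloaded)   # True
```

    That on-demand hydration is exactly why the feature suits small-SSD devices: the listing costs almost nothing, and storage is spent only on files actually used.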

    Reply
  42. Tomi Engdahl says:

    IBM Launching Web-Based Email Service
    Company to Offer IBM Verse Free to Individuals and Hopes to Sell Commercial Version to Businesses
    http://online.wsj.com/news/article_email/ibm-launching-web-based-email-service-1416322806-lMyQjAxMTA0ODE2ODUxNTg1Wj

    International Business Machines Corp. is launching a new offensive against Google Inc. and others in the email market, offering a Web-based service it plans to market directly to end users, a rare tactic for Big Blue.

    On Tuesday, the computing giant unveiled IBM Verse, an email service melded with collaboration and social-media tools. The company is offering the cloud-based software free to individuals and small businesses, and also hopes to sell a commercial version to businesses.

    Marketing a product directly to end users, who can open a Verse account online, is an unusual step for IBM since the company sold its personal-computer division in 2005.

    The “freemium” distribution method—giving away a basic version while selling a more elaborate version for businesses—is common among online software vendors, including Google, which offers both free and commercial versions of Gmail and other online applications.

    Reply
  43. Tomi Engdahl says:

    DRaaS-tic action: Trust the cloud to save your data from disaster
    Accidents happen…
    http://www.theregister.co.uk/2014/11/19/disaster_recovery/

    In modern computing, disaster recovery can be thought of in the same way as insurance: nobody really wants to pay for it, the options are complicated and seemingly designed to swindle you, but it is irrational (and often illegal) to operate without it.

    All the big IT players are getting into disaster recovery as a service (DRaaS), and many of the little ones are too.

    The core concept is simple: someone with a publicly accessible cloud stands up some compute, networking and storage and lets you send copies of your data and workloads into their server farm.

    If your building burns down or some other disaster hits your company, you can log into the DRaaS system, push a few buttons and all the IT for your entire business is up and running in moments. If only car insurance were that easy.

    But like car insurance, DRaaS comes in flavours. There are so many options from so many vendors that the mind boggles.

    Prices and capabilities vary wildly. Perhaps most importantly, the amount of effort required to make the thing work properly, and keep it working, can vary quite a bit too.

    Simply using software as a service offerings for critical functions and letting the rest burn is not particularly rational either. Public cloud services still need to be backed up.

    Vendors go under. Some putz could hack your account and delete everything. A plane could fall out of the sky and land directly on the storage array containing the only copy of your data.

    So you cannot avoid disaster recovery planning. You can, of course, set up your own disaster recovery solution. Go forth and build your own data centre, or even just toss a server in a colo.

    Both are excellent options, if the circumstances, requirements and budget of the company are right. For everyone else, there’s DRaaS.
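    The core concept (ship copies of your data into someone else’s storage and verify they arrived intact) can be sketched roughly like this, with a local directory standing in for the provider’s cloud. A real DRaaS client would ship changed blocks over the network and track recovery points; this only mirrors a tree and checks integrity:

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path):
        """Checksum used to verify each replicated file."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def replicate(source, target):
        """Copy every file under `source` to `target` and verify the copies."""
        source, target = Path(source), Path(target)
        verified = []
        for src in source.rglob("*"):
            if src.is_file():
                dst = target / src.relative_to(source)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                # A copy you cannot restore from is worse than no copy at all
                if sha256(src) != sha256(dst):
                    raise IOError(f"verification failed for {src}")
                verified.append(dst)
        return verified
    ```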

    Reply
  44. Tomi Engdahl says:

    Microsoft adds video offering to Office 365. Oh NOES, you’ll need Adobe Flash
    Lovely presentations… but not on your Flash-hating mobe
    http://www.theregister.co.uk/2014/11/19/microsoft_adds_video_offering_to_office_365/

    Microsoft has added a video portal to Office 365, enabling users to upload and share videos. The service will be in preview soon, and available to all customers with the right kind of subscription in early 2015.

    So what is the point, when YouTube does this so well? The idea is to manage internal videos with permissions based on Azure Active Directory, the directory for all Office 365 users. Videos are organised into channels, and each channel has edit and view permissions which admins assign to Office 365 users or groups. Typical uses would be for training, presentations, or announcements.
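    The channel permission model described here can be sketched as follows. All names are hypothetical, and a real deployment would resolve users and groups against Azure Active Directory rather than an in-memory table:

    ```python
    # Hypothetical directory data: groups and their members
    GROUPS = {"hr-team": {"alice", "bob"}, "trainers": {"carol"}}

    # Each channel carries separate edit and view permission sets,
    # granted to users or groups, as the article describes
    CHANNELS = {
        "onboarding": {"edit": {"trainers"}, "view": {"hr-team", "trainers"}},
    }

    def can(user, action, channel):
        """True if the user, or any group containing the user, holds the permission."""
        principals = {user} | {g for g, members in GROUPS.items() if user in members}
        return bool(principals & CHANNELS[channel][action])
    ```

    Granting to groups rather than individuals is what makes this manageable at company scale: an admin changes group membership once and every channel follows.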

    Reply
  45. Tomi Engdahl says:

    Microsoft’s Azure goes TITSUP PLANET-WIDE AGAIN in cloud FAIL
    Outlook: RAIN
    http://www.theregister.co.uk/2014/11/19/microsoft_azure_outage/

    Microsoft suffered a major outage on its cloud service Azure overnight.

    It went titsup just before 1am UK time, and services are only now slowly returning to normal.

    Microsoft confessed that multiple regions were affected by the “service interruption”.

    Customers around the world using Azure storage, virtual machines, a number of SQL products and Active Directory, among other services, were sucker-punched by the lengthy outage.

    Reply
  46. Tomi Engdahl says:

    Bittorrent wants to sink Dropbox with Sync 2.0
    From beta to alpha to pro
    http://www.theregister.co.uk/2014/11/20/bittorrent_wants_to_sink_dropbox_with_sync_20/

    Bittorrent is taking Sync out of beta with an alpha version of Sync 2.0.

    The premium version it’s announced – US$39.99 a year – claims unlimited-size file storage, and pitches the strong crypto and distributed storage as offering better security than competitors like Dropbox, Google Drive, or MS OneDrive.

    Bittorrent also says personal information isn’t stored in its cloud, so privacy should be better.

    The move from Sync to Sync 2.0 will formally happen in 2015, but users can apply to try out the alpha as soon as it’s available.

    The Sync 1.2 mobile-to-mobile feature is going to be turned into its own app, the Bittorrent post by Eric Pound states, for iOS, Android and Windows Phone operating systems.

    Reply
  47. Tomi Engdahl says:

    HP, Symantec PAIR UP to fight off disaster cloud rivals
    DRaaS set to appear late next year
    http://www.theregister.co.uk/2014/11/21/hp_symantec_ready_to_fight_disaster_clouds/

    HP and Symantec are partnering to develop a cloud-based Disaster Recovery as a Service (DRaaS) offering using Symantec software and HP’s Helion cloud.

    This DRaaS software will run on HP’s Helion OpenStack-based cloud environment with HP providing the end-to-end service based on underlying disaster recovery facilities, infrastructure, and operations team.

    The two say their DRaaS system will monitor the most widely used applications and databases in the market and support “replication, recovery and automated failover/failback of client IT whether it’s traditional IT on-premises, managed cloud, private cloud, or public cloud”.

    It will support “industry specific client standards for disaster recovery, such as PCI in the retail industry, HIPAA in the healthcare industry, or FedRAMP and FISMA in the US public sector”. There will be recovery SLAs for systems and applications.
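    The automated failover/failback the two vendors describe boils down to a small decision loop. A toy sketch, assuming health checks for each site are available as callables; real products replicate state continuously and honour recovery SLAs:

    ```python
    class FailoverController:
        """Decide which site is active based on health checks."""

        def __init__(self, primary_ok, standby_ok):
            self.primary_ok = primary_ok   # callable -> bool, health of primary site
            self.standby_ok = standby_ok   # callable -> bool, health of DR site
            self.active = "primary"

        def tick(self):
            if self.active == "primary" and not self.primary_ok():
                if self.standby_ok():
                    self.active = "standby"   # automated failover
            elif self.active == "standby" and self.primary_ok():
                self.active = "primary"       # failback once primary recovers
            return self.active
    ```

    The hard part in practice is not this logic but making sure the standby site actually has current data when the switch happens.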

    Reply
  48. Tomi Engdahl says:

    Amazon’s AppStream can now stream any Windows application
    Summary: Amazon expands support for its streaming service that targets games developers to any Windows application.
    http://www.zdnet.com/amazons-appstream-can-now-stream-any-windows-application-7000036050/

    Amazon Web Services (AWS) has updated AppStream to allow any Windows application to be accessed through a browser.

    Besides offloading graphics workloads to AppStream so that less powerful devices can access heavy duty applications, AWS developers can now use AppStream to deliver any Windows application to non-Windows devices such as FireOS, Android, Chrome, iOS, Mac OS X, as well as Windows devices.

    “You can now stream just about any existing Microsoft Windows application without having to make any code changes,” AWS evangelist Jeff Barr wrote.

    Reply
  49. Tomi Engdahl says:

    Cloud unicorns are extinct so DiData cloud outage was YOUR fault
    Applications need to be built to handle TITSUP incidents
    http://www.theregister.co.uk/2014/11/24/didata_cloud_outage_was_partly_your_fault/

    Last July, Dimension Data’s Australian cloud went down for over 24 hours. Now the company says its assessment of the incident found those who suffered the most had themselves to blame, to a degree.

    Speaking today at the launch of the company’s new government cloud, cloud general manager David Hanrahan said those impacted by the outage fell into two categories.

    Those who felt most pain, he said, “had not architected for availability” by replicating data and applications to either their own premises or to other clouds.

    Customers who had “taken an enterprise architecture approach and mapped their applications from top to bottom and planned accordingly” experienced less pain as a result of the outage.

    Hanrahan’s mostly right: it is possible to architect an application so that when a cloud provider fails, redundant systems kick in to protect customers’ applications and data. But the need to do so is not often mentioned in the rush to point out cloud’s low price, elasticity and speed of deployment.
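    Architecting for availability can start with something as simple as trying more than one provider before giving up. A minimal sketch, where the callables stand in for real provider endpoints:

    ```python
    def fetch_with_failover(fetchers):
        """Try each provider in order; fetchers is a list of zero-argument callables.

        Returns the first successful result and raises only if every provider fails.
        """
        errors = []
        for fetch in fetchers:
            try:
                return fetch()
            except Exception as exc:   # a real client would catch narrower errors
                errors.append(exc)
        raise RuntimeError(f"all providers failed: {errors}")
    ```

    Replicating the data to the second provider in the first place is the expensive part; this retry logic is only useful once that has been done.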

    Reply
