Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, presented in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

Interestingly, missing from the quadrant figure are big-name companies that have invested heavily in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers whose IaaS clouds were in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. Gartner has also found one big minus in Amazon Web Services: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view of the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    RBS promises ‘safe, secure, confidential’ info-sharing on Facebook at Work
    First bankers to use Zuck biz platform
    http://www.theregister.co.uk/2015/10/26/facebook_at_work_rbs/

    RBS has inked a deal with Facebook to allow its 100,000 bank employees to use the free content ad network’s Facebook at Work product.

    Financial details of the agreement were not disclosed by the companies.

    The bank’s surprise decision to opt for a service that is still in its infancy will no doubt raise eyebrows among some, who might question why such a conservative organisation would make such a bold move.

    Facebook, until now, hasn’t been known for competing in the enterprise space.

    Many companies block Facebook at work, which explains the logic behind the arrival of Facebook at Work: a site that is only used within the firewall of a given biz. The service is completely separate from personal Facebook accounts.

    The Mark Zuckerberg-run company only confirmed it was developing a biz-friendly platform in November last year.

    Reply
  2. Tomi Engdahl says:

    Sean Gallagher / Ars Technica:
    Microsoft’s Project Oxford APIs offer face tracking, emotion sensing, voice recognition, and spell checking.

    Microsoft’s Azure gets all emotional with machine learning
    Project Oxford AI services detect emotions, identify voices, and fix bad spelling.
    http://arstechnica.com/information-technology/2015/11/new-microsoft-azure-tools-can-tell-who-you-are-and-what-your-mood-is/

    Reply
  3. Tomi Engdahl says:

    Microsoft capitulates, announces German data centres
    Clouds in Europe won’t rain data onto US spies
    http://www.theregister.co.uk/2015/11/12/microsoft_capitulates_announces_german_data_centres/

    Call it “safe harbour” in action: Microsoft has announced it’s going to go along with Germany’s data privacy concerns and start hosting Azure, Office 365, and Dynamics CRM Online in that country.

    The decision comes hard on the heels of the company’s decision to spin up some rust in the UK, with the Ministry of Defence as an anchor customer.

    In its announcement, Redmond also says German customers will have data access controlled by Deutsche Telekom, acting as local data trustee.

    The German operator’s T-Systems subsidiary will handle the data trustee functions, managing all access to customer data in the Microsoft data centres. “Microsoft will not be able to access this data without the permission of customers or the data trustee, and if permission is granted by the data trustee, will only do so under its supervision”, the announcement states.

    Since Microsoft won’t be in charge of the data, even an unfavourable decision in its US court case (in which the Feds want access to e-mails stored in Ireland) won’t expose German customer data to American courts or law enforcement.

    Reply
  4. Tomi Engdahl says:

    Microsoft Agrees to Store Customer Data in Germany
    http://www.securityweek.com/microsoft-agrees-store-customer-data-germany

    US tech giant Microsoft has put new data centers in Germany under the control of Deutsche Telekom, the companies said Wednesday, in a move that will keep privacy-sensitive Germans’ customer data in the country.

    After scandals over US surveillance programs that spooked Europeans, Deutsche Telekom will serve as “custodian” for Microsoft’s cloud-based services in Germany.

    “All customer data will remain exclusively in Germany,” Deutsche Telekom said in a statement, adding that the service will also be available to European clients outside Germany.

    “With this partnership with T-Systems, Microsoft customers can choose a data protection level that complies with the requirements of German customers and many clients of the public sector,” added Anette Bronder, director of Digital Division at the Deutsche Telekom subsidiary T-Systems.

    The landmark verdict stemmed from a case lodged by Austrian law student Max Schrems, who challenged the 2000 “Safe Harbor” agreement between Washington and Brussels on the grounds it did not properly protect European data.

    Reply
  5. Tomi Engdahl says:

    Microsoft takes some cloud services to Germany because of privacy fears
    It can see clearly now the cloud has come
    http://www.theinquirer.net/inquirer/news/2434521/microsoft-takes-some-cloud-services-to-germany-because-privacy

    DATA PROTECTING PROBLEM STARTER Microsoft is reacting to an increased demand for customer information by taking it to Germany via the cloud.

    This makes some sense. Microsoft already has some offshore data that it keeps in Ireland. However, there are eyes on that and hands ready to grab it, and Microsoft goes to efforts to prevent that access.

    It will be hoping to do the same with its German data trove, presumably, which will let customers relax a little about their data and its location.

    Microsoft is handing over responsibility here, though. While it will offer services including Azure, Office 365 and Dynamics CRM Online through German data centres, it is Deutsche Telekom that will “control and oversee all access to customer data”.

    “Microsoft is pioneering a new, unique, solution for customers in Germany and Europe. Now, customers who want local control of their data combined with Microsoft’s cloud services have a new option, and I anticipate it will be rapidly adopted”, added Timotheus Höttges, Chief Executive Officer, Deutsche Telekom AG.

    Reply
  6. Tomi Engdahl says:

    Decoding Microsoft: Cloud, Azure and dodging the PC death spiral
    Too many celebs and robot cocktails clouding the message?
    http://www.theregister.co.uk/2015/11/13/decoding_future_decoded_microsoft_sets_out_its_stall/

    Microsoft’s Future Decoded event took place in London this week, with CEO Satya Nadella and Executive VP Cloud and Enterprise Scott Guthrie on stage to pitch the company’s “cloud and mobile” message.

    Nadella was in Paris on Monday and moved on to a shorter Future Decoded event in Rome later in the week, so this is something of a European tour for the company.

    The event is presented as something to do with “embracing organisational transformation”, presumably with the hope that celeb-bedazzled attendees will translate this to “buy our stuff.”

    There were a couple of announcements at the show, the biggest being Nadella’s statement about a UK region for the Azure cloud. Guthrie stated the next day that Microsoft has “more than 26 Azure regions around the world – twice as many as Amazon and Google combined.” – though note that more regions does not necessarily mean more capacity.

    The company is betting on cloud services to make up for declining PC sales and its failure in mobile. Will we buy though? Microsoft is a long way behind Amazon Web Services (AWS) in the IaaS (Infrastructure as a service) market but makes up for that to some extent by a huge SaaS (Software as a Service) presence with Office 365, though this is far from pure cloud since it hooks into Office applications running on the desktop or on mobile devices. This is why Microsoft has raced to get Office onto iOS and Android.

    A well-attended session on “Azure and its competitors” was given by analyst David Chappell. Chappell does not see Microsoft overtaking AWS in IaaS, but gave his pitch on why Microsoft will be a strong number two, and ahead in certain other cloud services. Unlike Amazon, Microsoft has enterprise strength, he said.

    Azure Active Directory has “very few serious competitors,” he said, since it offers single sign-on across on-premises and cloud services both from Microsoft and from third-parties. IBM will not get the cloud scale it needs, he said, VMware is not a full public cloud platform, and OpenStack has gaps which get filled with vendor-specific extensions that spoil its portability advantage. Google has potential, he said, but he questioned its long-term commitment to enterprise cloud services when most of its business is built around advertising.

    Another announcement at the show concerned Project Oxford, Microsoft’s artificial intelligence service. New services include emotion recognition, spell check, video enhancement, and speaker recognition. Marketers are all over this kind of stuff, since it can help with contextual advertising. It is not hard to envisage an outdoor display or even a TV that would pump out different ads according to how you are feeling, for example.
    There was a live demo of Project Oxford emotion recognition, but think about it for a moment.

    What about the mobile part of “cloud and mobile”? Two things tell you all you need to know. One was Nadella showing off what he called an iPhone Pro, an iPhone stuffed with Microsoft applications. The other was the Lumia stand in the exhibition, which proclaimed “The Phone that works like your PC.” This is a reference to the Continuum project, where you can plug the phone into an external display and have Universal Windows Platform Apps morph into something like desktop applications. An interesting feature, but Microsoft is no longer targeting the mass market for mobile phones.

    Despite the sad story of Lumia, Microsoft is keen to talk up the role of Windows on other small devices. There are two sides to the company’s IoT (Internet of Things) play. One is Azure as a back-end for IoT data, both for storage and analytics. The other is Windows 10 IoT Core, which runs on devices such as Raspberry Pi.

    Reply
  7. Tomi Engdahl says:

    Surprise! No wonder Oracle doesn’t ‘see’ IBM or SAP in the cloud
    Free gifts to customers don’t count as deals, Larry
    http://www.theregister.co.uk/2015/11/16/oracle_cloud_cloud_claims/

    At Oracle’s recent OpenWorld conference, Larry Ellison asserted: “We never, ever see IBM” or SAP in the cloud.

    Perhaps because, according to a senior Oracle insider, Oracle still isn’t doing much in the cloud.

    How much is “not much”? As InfoWorld reported in an interview with this unnamed source, in 90 per cent of Oracle’s large deals cloud is simply given away, often without the customer even knowing it was included.

    Which isn’t all that surprising, really. After all, this is the same Ellison who once ridiculed cloud computing as mere “water vapour.”

    But maybe he was just talking about Oracle’s cloud revenues. Giving credit where credit is due, cloud computing has posed such an existential threat to enterprise IT’s old guard that they’ve been doing complicated financial gymnastics to demonstrate just how cloudy they can be.

    Hence, each of the legacy vendors can point to billions in cloud revenue of some stripe, even though they miss earnings quarter after quarter and year after year. But this isn’t to exult in their difficulties: change is hard, especially the kind that requires a completely different kind of sales force, accounting practices, etc.

    The hardest cloud category to fake is Infrastructure-as-a-Service (in which Platform-as-a-Service is included), the area completely dominated by Amazon Web Services, with Microsoft Azure an increasingly strong second. Oracle, for its part, comes in third, according to analyst firm Forrester. The problem, however, is that it’s not at all clear we can trust that data. This isn’t because Forrester has done a bad job with the maths, but rather because Oracle seems to be selling smoke, not cloud, according to the report’s inside source.

    Reply
  8. Tomi Engdahl says:

    Automation enterprises invest in cloud technologies
    http://www.controleng.com/single-article/automation-enterprises-invest-in-cloud-technologies/1eeb4626e331ab287a6ceff170dadefa.html

    Facing new demands from manufacturers and the latest developments in the Internet of Things, enterprises in industrial automation are beginning to invest in cloud technology.

    With the popularization of cloud storage technology in commercial fields, consumers are accustomed to accessing information in the cloud on smart phones, tablet PCs, and laptops. The cloud provides flexible and convenient information transmission. It works like an invisible USB drive with unlimited capacity.

    Just like in commercial fields, the manufacturing industry’s recognition of the cloud is also quietly changing. The industry went from having doubts and concerns over the safety of using a cloud platform to realizing value in cloud-based asset management, historical data analysis, industrial business flow optimization, remote real-time access, better energy efficiency management, more cost cutting, and efficiency improvements.

    IDC, a market research organization, said global cloud-computing infrastructure spending grew by 25.1% year over year and reached $6.3 billion in first-quarter 2015. Private cloud and public cloud expenditure grew by 24.4% and 25.5% respectively.

    Relevant data from the China Ministry of Industry and Information Technology indicated that the fastest growing information technology service industry in China in the first half of 2015 was service business represented by cloud and big data, with a growth rate of 22.1%. It was undoubtedly a new blue ocean as far as the “new normal” of China’s economy was concerned.

    Facing the new demands from manufacturing users, leading enterprises in industrial automation are also giving increasing attention to cloud services. As a promoter of the Industrial Internet of Things (IIoT) concept, GE formally declared a plan to enter the cloud service market with Predix Cloud, an industrial Internet cloud platform developed exclusively for its Predix software. It is said to be the first cloud solution developed and designed exclusively for the collection and analysis of industrial data.

    “All technologies for construction of a cloud platform are basically mature in terms of content. The challenges are information security and depth of industrial application,”

    Today, industrial enterprises are starting to adopt cloud technologies for big-data-based intelligent manufacturing through data sharing, assets management, remote monitoring, and information analysis.

    Reply
  9. Tomi Engdahl says:

    Microsoft launches cloud-based blockchain platform with Brooklyn start-up
    http://www.reuters.com/article/2015/11/10/us-microsoft-tech-blockchain-idUSKCN0SZ2ER20151110

    Microsoft launched a cloud-based blockchain platform on Tuesday with Brooklyn-based start-up ConsenSys, which will allow financial institutions to experiment cheaply and easily with the technology underpinning bitcoin.

    The blockchain works as a huge, decentralized ledger of every bitcoin transaction, which is verified and shared by a global computer network and therefore is virtually tamper-proof.
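    The tamper resistance described above comes from hash-chaining: each block records the hash of its predecessor, so editing any old entry breaks every later link. A minimal single-process sketch in Python (illustration only; real blockchains add networking, consensus and proof-of-work):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

chain = []
add_block(chain, "alice pays bob 1 BTC")
add_block(chain, "bob pays carol 1 BTC")

# Editing an old entry breaks the link stored in the next block:
chain[0]["data"] = "alice pays mallory 1 BTC"
print(chain[1]["prev"] == block_hash(chain[0]))  # False
```

    In a real network every participant recomputes these links independently, which is why rewriting history would require out-computing the rest of the network.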

    The financial industry is increasingly investing in the technology, betting it will reduce costs and increase efficiency.

    Blockchain technology is not limited to bitcoin — it can be used to secure and validate the exchange of any data. And other companies are now building their own blockchains that provide additional features to the original bitcoin one.

    One such company is Ethereum, which has built a fully programmable blockchain. Technology giant Microsoft is using it for the blockchain platform launched on Tuesday.

    The platform will be available to banks and insurance companies that are already using Microsoft’s cloud-based Azure platform. Microsoft said four large global financial institutions had already signed up to the service.

    Reply
  10. Tomi Engdahl says:

    Microsoft to world: We’ve got open source machine learning too
    Help teach Cortana to say ‘Sorry, Dave’
    http://www.theregister.co.uk/2015/11/17/microsoft_to_world_weve_got_open_source_machine_learning_too/

    Microsoft’s decided that it, too, wants to open source some of its machine learning space, publishing its Distributed Machine Learning Toolkit (DMTK) on Github.

    Google released some of its code last week. Redmond’s (co-incidental?) response is pretty basic: there’s a framework, and two algorithms, but Microsoft Research promises it will get extended in the future.

    The DMTK Framework is front-and-centre, since that’s where both extensions will happen. It’s a two-piece critter, consisting of a parameter server and a client SDK.
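    The parameter-server pattern that the framework is built around can be sketched in a few lines. This is a toy, single-process illustration of the concept, not DMTK’s actual API: workers push gradient updates to a shared parameter table and pull fresh parameters back.

```python
# Conceptual sketch of a parameter server: a shared table of model
# parameters that distributed workers update and read.
class ParameterServer:
    def __init__(self, size):
        self.params = [0.0] * size

    def pull(self):
        """Workers fetch the current parameters."""
        return list(self.params)

    def push(self, grads, lr=0.1):
        """Workers send gradients; the server applies an SGD step."""
        for i, g in enumerate(grads):
            self.params[i] -= lr * g

ps = ParameterServer(3)
# Two "workers" contribute gradient updates in turn:
for worker_grads in ([1.0, 0.0, -1.0], [0.5, 0.5, 0.5]):
    ps.push(worker_grads)
print(ps.pull())
```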

    Reply
  11. Tomi Engdahl says:

    Fujitsu: We started the cloud party, honest
    Acquisition of UShareSoft to be ‘core’ of cloud stuff … get down
    http://www.theregister.co.uk/2015/11/17/fujitsu_forum_meta_arc_cloud/

    Fujitsu Forum 2015: Fujitsu has outlined a timetable for its hybrid IT digital platform after inking a deal to acquire France-based fluffy white services outfit UShareSoft.

    Some components of the MetaArc digital business platform are already available, including Internet of Things, big data, mobility and app dev offerings.

    Head of Fujitsu’s EMEIA product business Michael Keegan told us more is to come. App management is due in this quarter; cloud based development is set for release in the first half of the next calendar year; and the app marketplace is expected in the second half.

    “We are not buying [the concept] that everything will be hyper scale or digital with no infrastructure. We believe organisations that leave the past behind will find it tough because of their existing IT infrastructure,” he said.

    Dumping legacy IT is not an option, due to the intricate layers of security and compliance, among other considerations, so big business needs to make “incremental steps” to digitise their organisation.

    “They are not throwing out everything they’ve done before,” said Keegan.

    Fujitsu will incorporate UShareSoft’s primary software product into the Cloud Services K5 as the “core” of MetaArc. The ware is designed to make it more efficient to build, migrate and deliver apps for multiple clouds.

    Reply
  12. Tomi Engdahl says:

    Netflix and skill: Web vid giant open sources Spinnaker cloud tool
    Continuous delivery across AWS, Google Cloud and CloudFoundry
    http://www.theregister.co.uk/2015/11/17/netflix_offers_up_spinnaker_for_cloud_devs/

    Netflix has released Spinnaker, an open-source tool for testing and rolling out software updates in the cloud.

    The Apache 2.0-licensed code provides continuous delivery of applications, including managing and monitoring their deployment. Netflix said Spinnaker will replace its Asgard project.

    The streaming video giant said Spinnaker took a year to build, and is based on its internal toolkit that ushers new features from development and testing to production.

    “We looked at the ways various Netflix teams implemented continuous delivery to the cloud and generalized the building blocks of their delivery pipelines into configurable stages that are composable into pipelines,” Netflix delivery engineering manager Andy Glover explained.

    “Pipelines can be triggered by the completion of a Jenkins job, manually, via a cron expression, or even via other pipelines.”

    Currently, the project is compatible with Amazon’s AWS, Google Cloud, and Pivotal’s CloudFoundry, with support for Microsoft’s Azure planned. Accordingly, the project is backed by Google, Pivotal, and Microsoft in addition to Netflix.
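    The idea of composable stages with declarative triggers might look something like this as a pipeline definition (a hypothetical sketch loosely modelled on Spinnaker’s pipeline JSON; the exact field names may differ):

```json
{
  "name": "deploy-to-prod",
  "triggers": [
    { "type": "jenkins", "master": "ci", "job": "build-myapp", "enabled": true }
  ],
  "stages": [
    { "type": "bake",   "refId": "1" },
    { "type": "deploy", "refId": "2", "requisiteStageRefIds": ["1"] }
  ]
}
```

    Here a completed Jenkins build triggers the pipeline, a bake stage produces a machine image, and a deploy stage that depends on it rolls the image out.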

    Reply
  13. Tomi Engdahl says:

    Behold, the fantasy of infinite cloud compute elasticity
    Spinning up thousands of servers in minutes. Yeah, yeah, yeah
    http://www.theregister.co.uk/2015/11/18/the_fantasy_of_infinite_cloud_compute_elasticity/

    Cloud computing has elements of a fool’s paradise. We’re told it’s elastic, infinitely elastic even, with thousands of virtual servers spun up in minutes. But there’s just one problem … this is palpable nonsense: short-term excess capacity being mopped up by early entrants to a large new resource.

    Virtual servers are not magic. They don’t exist in a parallel universe, decoupled from hardware. You still need physical servers, sitting in a data centre somewhere.

    Let’s take a claim that 10,000 cloud compute instances can be spun up to run a rendering job in the cloud, and assume that every ten such instances need one physical server.

    One minute before the 10,000 compute instances were spun up in the cloud’s virtual reality they didn’t exist. One minute after they were spun down they didn’t exist. But the physical servers they needed, all 1,000 of them, existed before and existed afterwards as well.

    What were they doing, before the 10,000 compute instance VMs were spun up, these 1,000 physical servers, maybe 1U boxes, meaning 25 racks of them? Obviously they were idle; how else can you spin up thousands of VMs all at once?

    And what were they doing one minute after the 10,000 compute instances were spun down? Sitting there and idle again. That’s not very good resource management.
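    The arithmetic behind those numbers is easy to check (assuming, as above, ten instances per physical server, and taking 40 1U servers per rack as a typical figure):

```python
# Back-of-the-envelope check of the rendering-job example above.
instances = 10_000
vms_per_host = 10        # assumed consolidation ratio
servers_per_rack = 40    # 1U boxes per standard rack (assumption)

physical_servers = instances // vms_per_host
racks = physical_servers // servers_per_rack
print(physical_servers, racks)  # 1000 25
```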

    For corporations renting out resources, be they cars, hotel rooms or compute servers, their rented thing is an asset that costs money and needs to be earning money. Idleness doesn’t pay, it costs.

    So the suppliers get slightly more than enough assets to fulfil a regular and dependable demand, with some small amount left over for peaks.

    In the future, when the rush to the cloud has settled down and Amazon, Azure and Google are no longer building vast data centre server farms for tomorrow’s demand but operating them for today’s needs, the scaling up of 10,000 compute instances in minutes will simply not be possible unless ordered in advance.

    Anyone punting the idea of infinitely elastic cloud computing resource is either a gullible marketing droid or a remarkably credulous analyst. It’s a fantasy, folks. Get your feet back down on the (physical) ground.

    Reply
  14. Tomi Engdahl says:

    Sarah Perez / TechCrunch:

    Amazon Studios Launches Amazon Storywriter, Free Cloud Software For Screenwriters
    http://techcrunch.com/2015/11/19/amazon-studios-launches-amazon-storywriter-free-cloud-software-for-screenwriters/

    In an effort to expand its original video content, including movies and TV series, Amazon announced this morning the launch of a free, cloud-based screenwriting software program called Amazon Storywriter. In addition, the company says it’s expanding to include drama submissions, and will no longer take a free option on scripts submitted to the Amazon Studios website, allowing WGA members to upload directly to the site.

    Previously, Amazon accepted script submissions for feature films, primetime comedy series for adults, and series for children aged 2 to 14, but this is the first time that Amazon will now consider drama series submissions as well.

    Amazon Studios launched in 2010 to serve as a way to crowdsource the process of finding new scripts for films and series. It offers a way for writers to upload their content online and make their projects public in order to gain feedback from the larger community. However, its launch and a related “script contest” were immediately fraught with confusion and controversy as a number of writers warned of Amazon’s then-free 18-month option on scripts from the moment they were uploaded, as well as other issues with copyright and authorship.

    That submission program has evolved over the years, however. Amazon Studios’ prior policy, until today, stated it had the exclusive right to buy a movie script for $200,000 or TV script for $55,000 from the day it’s uploaded up to 45 days out.

    Reply
  15. Tomi Engdahl says:

    AWS launches EC2 Dedicated Hosts to let users run many instances on each server
    http://venturebeat.com/2015/11/23/aws-launches-ec2-dedicated-hosts-to-let-users-run-many-instances-on-each-server/

    Amazon Web Services (AWS), the biggest public cloud infrastructure provider, today announced the availability of a new service called EC2 Dedicated Hosts. The feature can help companies run the software they pay for with licenses on multiple Amazon virtual machines (VMs) on a single physical server. This is in contrast to the usual pattern of using AWS, which involves getting access to VMs without knowing exactly where they are running.

    The feature gives more power to admins when it comes to getting VMs going on Amazon’s servers.

    “You can exercise fine-grained control over the placement of EC2 instances on each of your Dedicated Hosts,” AWS chief evangelist Jeff Barr wrote in a blog post on the news.

    This is the sort of addition that can make Amazon more appealing to enterprises with extensive supplies of software licenses that are priced by the number of CPU cores or sockets. The launch should help Amazon further outdo competitors, especially Microsoft Azure, whose parent company sells plenty of license-based software. Google Cloud Platform and IBM SoftLayer are also growing in the cloud infrastructure business. Meanwhile Oracle, a major legacy enterprise database vendor, is just starting to get going in this market.

    There are certain restrictions here: Dedicated Hosts require the use of AWS’ Virtual Private Cloud (VPC) service
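    The fine-grained placement control is the key difference from ordinary EC2: the customer, rather than Amazon’s scheduler, decides which physical host each instance lands on, which matters when licences are counted per socket or core. An illustrative model of the idea (this is not the AWS API):

```python
# Toy model of dedicated-host placement: pin each new instance to a
# specific physical host with known capacity, e.g. so licence counts
# per socket/core stay within what has been paid for.
def place_instance(hosts, instance_id):
    """Pick the first host with spare capacity and pin the instance to it."""
    for host_id, info in hosts.items():
        if len(info["instances"]) < info["capacity"]:
            info["instances"].append(instance_id)
            return host_id
    raise RuntimeError("no dedicated host has free capacity")

hosts = {
    "h-1111": {"capacity": 2, "instances": []},
    "h-2222": {"capacity": 2, "instances": []},
}
print(place_instance(hosts, "i-aaa"))  # h-1111
```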

    Reply
  16. Tomi Engdahl says:

    Amazon’s AWS Is Growing Up; Should You Be Scared?
    http://techcrunch.com/2015/11/18/amazons-aws-is-growing-up-should-you-be-scared/?ncid=rss&cps=gravity_1730_-7365231804315431256

    Amazon Web Services (AWS) is the undisputed leader in infrastructure-as-a-service (IaaS). Historically, this meant focusing on low-level “building block” services such as object storage, scalable compute and caching.

    But recent years have seen them build and launch higher-level products such as databases and content delivery platforms. These services are cheap, scalable and performant, but still narrowly defined. Now Amazon seems to be getting even more aggressive, and more serious, about moving “up the stack.” No longer content with reselling fractionalized hardware, they’ve set to work building complex software products to serve more of their customers’ needs.

    Some cloud entrepreneurs are concerned, and rightly so. Among the new products launched this year and on the roadmap are many that bring them into direct competition with startups — startups that never expected to have to go head-to-head with AWS.

    For example, Amazon’s new Elasticsearch Service allows a customer to deploy an Elasticsearch cluster with the click of a button, and costs no more than the standard rates for the underlying EC2 nodes on which it will run. At Elastic, the company behind the open-source Elasticsearch project, they have built a substantial business supporting these clusters, many of which run on AWS. Amazon didn’t pull the plug on them, but they made it easier than ever for customers to deploy Elasticsearch without ever picking up the phone to do business with Elastic. Other software vendors, open source or otherwise, who see their offering on Amazon’s roadmap may feel equally threatened.

    Before we all give up and go home, I think it’s important to consider why Amazon is moving up the stack, and what it means for the industry as a whole. The dream of the public cloud has always been to achieve levels of scale and cost savings such that private data centers could never compete.

    Cheap, instantly-deployable and infinitely-scalable cloud infrastructure is equally good for big companies with enormous demands and evolving needs as it is for the startup ecosystem just trying to innovate and show value to customers as quickly as possible.

    Now that the foundation of infrastructure services is in place, Amazon is building the other features their offering needs to make it an attractive home for even the biggest and most demanding customers.

    Reply
  17. Tomi Engdahl says:

    AWS Lambda Makes Serverless Applications A Reality
    http://techcrunch.com/2015/11/24/aws-lamda-makes-serverless-applications-a-reality/?ncid=rss&cps=gravity_1462_2880799555608892#.ojwuxm:bZ6K

    Most companies today develop applications and deploy them on servers — whether on-premises or in the cloud. That means figuring out how much server, storage and database power they need ahead of time, and deploying all of the hardware and software it takes to run the application. Suppose you didn’t want to deal with all of that and were looking for a new model that handled all of the underlying infrastructure deployment for you?

    Amazon Web Services’ Lambda Service offers a way to do just that today. With Lambda, instead of deploying these massively large applications, you deploy an application with some single-action triggers and you only pay for the compute power you use, priced in 100 millisecond increments of usage. You can have as many triggers as you like running in tandem or separately. When the conditions are met, it triggers the programmed actions.

    Welcome to the world of the serverless app.

    Over the years, deployment times and how long these deployments live have gone down dramatically, and Lambda reduces that to milliseconds.

    We are in the midst of an evolutionary shift where Lambda encapsulates shifting developer priorities and requirements. As I wrote last year when Lambda launched at AWS re:invent:

    As AWS’s CTO Werner Vogels pointed out, this will enable programmers to reduce their overall development effort. You simply write the code and define the event triggers, and it will run for you automatically when the conditions are met.

    Triggers could be actions like a user uploading a file from a smart phone or clicking the Buy button on a website, or they could be machine-to-machine actions without humans involved. The idea is that they are flexible so just about anything can be a trigger. What’s more, developers can use familiar programming tools to create the triggers, and Amazon provides a list of prewritten common ones.

    Those conditions could be met every fraction of a second as with an Internet of Things scenario with sensors constantly feeding an application a stream of data or it could be weekly.
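    A Lambda function really is just the code plus its trigger. As a sketch, a Python handler for a file-upload trigger might look like this (the event shape shown follows the S3 notification format in outline; the details are assumptions for illustration):

```python
# Minimal sketch of a Lambda-style handler for an "object uploaded"
# trigger. The platform invokes handler(event, context) when the
# trigger fires; there is no server for the author to manage.
def handler(event, context):
    record = event["Records"][0]                 # one notification record
    bucket = record["s3"]["bucket"]["name"]      # which bucket fired
    key = record["s3"]["object"]["key"]          # which object was uploaded
    # ... resize the image, index the document, etc. ...
    return {"status": "processed", "object": f"{bucket}/{key}"}

# Local smoke test with a hand-built event:
fake_event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                                  "object": {"key": "photo.jpg"}}}]}
print(handler(fake_event, None))
```

    Billing then follows invocations (in 100 millisecond increments, per the article) rather than server uptime.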

    Brave New World

    Technically, no application can be serverless, of course. There has to be some sort of hardware underpinning the application, but what Amazon has done with Lambda is automate deployment to the point that AWS takes care of all of the complexity related to servers, storage and the database, explained Matt Wood, general manager of product strategy at AWS.

    “Most people were baffled by Lambda, but lots of people [have been] thinking about serverless architecture. You’re not scaling machines up and down. It lets the machines be invisible and is a very cost-effective architecture,” he said.

    When Lambda Comes Into Play

    Lambda works best in two types of scenarios, AWS’s Wood says. On one end of the spectrum, you might have a situation where actions happen rather infrequently and it makes little sense to pay for servers you’re not using most of the time such as the weekly drone photos scenario.

    On the other end, you might be building something big and complex that needs to scale quickly, where deploying the infrastructure yourself would be challenging. Suppose you have a network of weather sensors feeding you information, and once that information is collected a number of things have to happen. You could trigger an event each time a sensor sends data, and program the series of required actions, keeping in mind that this will likely happen quite often, measured in fractions of a second.

    One client using this technology is Major League Baseball. The triggers are the actions of the pitch being thrown, the ball being hit, the runner taking off and so forth. MLB can then track this data in real time while Lambda deals with all of the infrastructure, providing as much firepower as needed at the time to capture the information and process the data. And for the six months of the year when there isn’t any baseball, MLB isn’t paying for infrastructure it doesn’t need.

    While this approach to programming isn’t a magic bullet by any means, it’s a new tool for developers who might not need a more traditional server setup, and it gives them options when they are designing the program and deciding how to deploy it.

    “Lambda lets [developers] focus on developing applications without worrying about the heavy lifting of all the behind the scenes stuff of building the application,” Wood said. And it’s ushered in a world of serverless app deployment.

    Reply
  18. Tomi Engdahl says:

    Julie Bort / Business Insider:
    Hewlett Packard Enterprise partners with Microsoft to sell Azure as its preferred cloud alternative

    Meg Whitman: We just signed a big deal to help Microsoft sell its Amazon-killer cloud
    http://uk.businessinsider.com/meg-whitman-hpe-will-sell-microsofts-cloud-2015-11?op=1?r=US&IR=T

    We now know how Hewlett Packard Enterprise plans to keep itself in the cloud computing game now that it decided to shutter its public cloud computing business and not compete head on with Amazon, Microsoft, Google and IBM.

    HPE is going to partner with Microsoft to sell Microsoft’s cloud, Azure, HPE CEO Meg Whitman told analysts on the quarterly conference call on Tuesday.

    She said that HP “reached an agreement with Microsoft” in which HP will sell Microsoft Azure as its “preferred cloud alternative.” In exchange, HP will become a “preferred” cloud services provider when Microsoft customers are looking for consulting or other help, she said.

    Reply
  19. Tomi Engdahl says:

    Amazon Is Giving Away Unlimited Cloud Storage For $5.00
    http://techcrunch.com/2015/11/27/amazon-is-giving-away-unlimited-cloud-storage-for-5-00/?ncid=rss&cps=gravity_1730_-32147943388999863

    Amid a slew of deep discounts appearing on the web today as part of the shopping holiday Black Friday, Amazon has introduced one deal that’s something of a no-brainer. The company is giving away unlimited online storage on its cloud servers for just five dollars. The normal price for this is $60 per year, so this (92% off) represents significant savings.
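    The 92 per cent figure is easy to verify; a throwaway check (purely illustrative arithmetic, not anything of Amazon's):

```python
# Promo price vs. the normal $60/year for unlimited Cloud Drive storage.
normal_per_year = 60.00
promo_price = 5.00
discount_pct = (1 - promo_price / normal_per_year) * 100
print(f"{discount_pct:.0f}% off")  # 91.67 rounds to 92
```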

    The deal is aimed at promoting Amazon’s Cloud Drive service – an online storage site that competes with similar services like Dropbox, Google Drive, Microsoft’s OneDrive, and more. Cloud Drive allows you to store documents, music, photos, videos and other files in the cloud, which you can access from any web-connected device, including smartphones and tablets by way of Amazon’s Cloud Drive mobile applications.

    But be aware that the $5.00 is not a one-time fee. Cloud Drive is sold as an annual subscription, which means when the service comes up for renewal, you may be again paying full price (unless the company runs a similar promotion, of course.)

    For Amazon, giving away storage space, basically a commodity at this point, is not much of a burden. The company introduced the option for unlimited cloud storage earlier this year, with simplified pricing to make it competitive with tier-based services that charge more than Amazon’s $5 per month for comparable storage.

    But at the end of the day, consumers may not choose their storage service based solely on price – user experience, including ease-of-use and feature set, will also come into play. While Amazon’s software has gotten better over the past months, it’s still not as developed or as polished a service as those from rivals like Google, Microsoft, Dropbox, Box and others.

    https://www.amazon.com/gp/drive/landing/everything/buy?tag=bisafetynet-20

    Reply
  20. Tomi Engdahl says:

    Microsoft cuddles cloudy content with CDN deal
    Akamai, Redmond link arms in bit-farms
    http://www.theregister.co.uk/2015/10/21/microsoft_cuddles_more_content_with_cdn_deal/

    Microsoft has inked a deal with Akamai to integrate its content delivery network (CDN) into the Azure cloud platform.

    The idea is to give Azure customers easy sign-on for Akamai services, either as a check-box on their Azure portal or under a Microsoft enterprise agreement.

    The Azure CDN, Redmond says in this missive, provides “content delivery both using the cloud and tailored for cloud platform customers”.

    That post, by Sudheer Sirivara (partner director, Azure Media and Azure CDN services), adds that Microsoft reckons the tie-up will be particularly attractive in Latin America and Asia.

    There’s also a sop to government customers, with Sirivara noting that “both platforms have achieved the FedRAMP, JAB’s [the Joint Authorisation Board's - Ed] highest certification”.

    Reply
  21. Tomi Engdahl says:

    Yes, HPE and Azure are dating, but it’s OK if we see other people – Hilf
    Cloud belongs to no one vendor, so yes, we’ll work with AWS, VMware …
    http://www.theregister.co.uk/2015/12/02/hpe_azure/

    HPE Discover Hewlett Packard Enterprise is placing big bets on Microsoft Azure: it plans to train thousands of techies to operate Centers of Excellence for Azure, aka its “preferred provider” of public clouds.

    CEO of HPE Meg Whitman claimed last week that her organization, which will not provide Helion Public Cloud from January, will go big with Microsoft’s offering. HPE will continue developing its own private managed clouds.

    The IT duo put some more meat on the bones of their arrangement at the HPE Discover conference in London this week, starting with the launch of the HP Converged Systems 250 – a cloud-in-a-box-type sell – that has been in the workshop for the past year.

    Garth Fort, GM of the cloud enterprise division at Microsoft, who described his unit as the “proud plumbers” of the company, claimed the pair are “doing the hard work upstream” to integrate the products and prevent “downstream complexity.”

    “This is the world’s first hyper-converged infrastructure that is truly integrated with Azure from the get go … it comes pre-configured to do backup and disaster recovery right out of the box,” said Fort.

    Reply
  22. Tomi Engdahl says:

    Microsoft whips out PowerApps – now your Pointy Haired Boss can write software, too!
    Yipeeeee!?
    http://www.theregister.co.uk/2015/11/30/microsoft_introduces_powerapps/

    Microsoft has announced PowerApps, a new way to create and host applications for its Azure cloud service.

    PowerApps is an “enterprise service for innovators everywhere to connect, create and share business apps,” says Application Platform VP Bill Staples.

    Sure, but what is this really? Microsoft, it turns out, is still hunting for that mythical thing, a tool for non-programmers to build useful applications. “There simply aren’t enough skilled developers to keep up with demand for business app scenarios,” says Staples.

    In this latest effort, the company is combining the existing Azure App Service with a new tool that supposedly enables anyone to create an application with a few clicks.

    Azure App Service, which was introduced in March 2015, is the evolution of Azure Web Apps, a managed platform for applications. App Service supports .NET, Node.js, Java, PHP or Python. The underlying virtual machines are only partially abstracted, but Microsoft takes care of patching the operating system and you can auto-scale the infrastructure based on load.

    PowerApps is a new visual development tool for building clients for App Service applications. It borrows from previous efforts to attract non-programmers, such as Project Siena (still in beta), an app for building Windows 8 apps, and App Studio (still in beta), a browser-based tool for building apps for Windows 8, Windows Phone and Windows 10.

    The PowerApps development tool runs as a Windows application, but “the full authoring tool also works in a browser running on any platform,” according to a Microsoft spokesperson.

    PowerApps also supports formulas, for which Microsoft has drawn on its Excel spreadsheet application for inspiration: “PowerApps is modeled after Excel. Many of the same formulas that you use in Excel also work in PowerApps.”

    Offline applications are supported, thanks to a local data store.

    Once your app is complete, you can deploy it. Microsoft is aiming for business users, and you can control permissions based on groups in Azure Active Directory, giving users either view or edit rights. The company makes much of the idea that PowerApps apps offer enterprise manageability as well as being easy to build.

    An app package is built with HTML and JavaScript, and published to the PowerApps cloud service. Native clients are available for iOS, Android and Windows, or the application can be run directly on the PowerApps site.

    Microsoft has three different plans for PowerApps, Free, Standard and Enterprise, with pricing helpfully announced as $0, $, and $$.

    Introducing Microsoft PowerApps
    http://blogs.microsoft.com/blog/2015/11/30/introducing-microsoft-powerapps/

    Reply
  23. Tomi Engdahl says:

    Part of the world’s IT brought down by Azure Active Directory issue
    Microsoft speaks. As do office workers now … to each other!
    http://www.theregister.co.uk/2015/12/03/azure_active_directory_issue_brings_it_world_to_a_halt/

    Alas, poor Redmond has acknowledged that Azure Active Directory is “having issues”, alongside the disappearance of its Office 365 service in the UK and Europe.

    Microsoft’s Office 365 service went down earlier this morning, and Microsoft has now copped to an issue affecting Azure Active Directory.

    Azure’s status page reveals:

    Starting at approximately 09:00 on 3rd Dec, 2015, customers began experiencing intermittent issues accessing Azure services that use, or have dependencies on, Azure Active Directory.

    Under the heading “Services Experiencing Downstream Impact from Azure Active Directory” Azure’s status page additionally clarified:

    Engineers are engaged on an underlying Azure Active Directory issue impacting several Azure properties that rely on the service.

    Reply
  24. Tomi Engdahl says:

    Microsoft Office 365, Azure portals offline for many users in Europe
    Status page says all systems go, users cannot log in
    http://www.theregister.co.uk/2015/12/03/office_365_goes_offline/

    Microsoft’s Office 365 service has gone offline for many users in the UK and Europe, though the cause and extent of the outage is not yet known.

    Neither the Office 365 portal, nor the Azure management portal is available at the time of writing, though Microsoft’s status page says everything is fine.

    Some users say email is still being received, but the issue may be with Azure Active Directory. If this service is unable to authenticate users, multiple Microsoft cloud services become inaccessible.

    One Microsoft Gold partner told us, “the irony is you can’t even log a support ticket.”

    But he quipped that the length of the outage – service seems to have been interrupted for a couple of hours – was “nothing”.

    Another Microsoft partner has contacted us, and said, “most” of the online services for Office 365 are down

    “Microsoft are having a bad day,”

    According to Microsoft’s Azure Status Page, the reason for this Azure Active Directory outage was an “Azure Active Directory configuration error” followed by a failover failure.

    Reply
  25. Tomi Engdahl says:

    Surprise! No wonder Oracle doesn’t ‘see’ IBM or SAP in the cloud
    Free gifts to customers don’t count as deals, Larry
    http://www.theregister.co.uk/2015/11/16/oracle_cloud_cloud_claims/

    At Oracle’s recent OpenWorld conference, Larry Ellison asserted: “We never, ever see IBM” or SAP in the cloud.

    Perhaps because, according to a senior Oracle insider, Oracle still isn’t doing much in the cloud.

    How much is “not much?” As InfoWorld reported in an interview with this unnamed source, in 90 per cent of Oracle’s large deals cloud is simply given away, often without the customer even knowing it was included.

    Which isn’t all that surprising, really. After all, this is the same Ellison who once ridiculed cloud computing as mere “water vapour.”

    Smoke, mirrors, and cloud

    How could this happen in our highly regulated world of Generally Accepted Accounting Principles (GAAP)? Easily, according to InfoWorld’s source – a person that editor Eric Knorr reckoned is “not low-ranking” and is “well placed” to know about Oracle’s cloud sales:

    What the account teams are doing is seeding cloud deals within larger deals. So if a customer does a $10 million deal, we are throwing in $500,000 in PaaS. The rep then gets his accelerator of 5X or 3X on that $500,000. I have seen this on 90 per cent of our large deals. … It is still booked as a sale and goes through the appropriate approvals with account reps getting credit. But in reality it is being given away. Somewhat amusingly, the customer’s IT staff are not even aware of the inclusion.

    The industry publication is the latest to detail the extent to which Oracle is going to prime its cloud pump, further confirming The Register’s findings from earlier this year.

    Oracle (not to mention IBM, which has been guilty of its own financial engineering to appear cloudy) can’t continue forever on this vapourware cloud strategy.

    He’s right, but that should be cold comfort for Oracle. After all, that eensy-weensy $6bn competitor (Amazon Web Services) is growing at a torrid 80 per cent pace, and has set its sights on Oracle’s heart and soul: the database.

    AWS keeps winning because it’s delivering real cloud that companies like GE can and do use, with GE decommissioning 30 of its 34 data centres to move thousands of workloads to AWS. That’s cloud that can’t be faked. Ellison should take note.

    Reply
  26. Tomi Engdahl says:

    Ron Miller / TechCrunch:
    IBM Snags Clearleap Video Service As It Continues To Pick Off Strategic Cloud Properties
    http://techcrunch.com/2015/12/08/ibm-snags-clearleap-cloud-video-services-as-it-continues-to-pick-off-strategic-cloud-properties/#.b5imzi:VeJL

    This morning IBM announced it was acquiring Clearleap, a company that gives it enterprise-grade video content management services in the cloud, and which adds another layer to its burgeoning cloud strategy.

    IBM would not disclose the purchase price.

    Clearleap, whose customers include HBO, The History Channel, Time Warner Cable and Verizon (the parent company of TechCrunch) provides customers with multi-screen video processing and video asset management — and perhaps most importantly the ability to expose these features and functions as open APIs on IBM’s Bluemix Platform as a Service.

    IBM has been thinking about video for some time and sees it as an increasingly important content type, says Jim Comfort, general manager of cloud services at IBM. “Video will represent 65 percent of Internet traffic going forward,” he said. “[It] is becoming a first class citizen as a data type. [Customers] need ways to acquire, store, distribute, enhance and manage it,” he explained.

    With this purchase, IBM is getting the manage and enhance pieces, and at least part of the distribute piece he said.

    It was the second cloud purchase in two months, following the purchase of Cleversafe in October, and the two purchases are not coincidental. Cleversafe handles the storage piece for those fat video files in the cloud. It’s all linked and IBM has a plan to make this all fit together into a coherent strategy to increase revenue.

    “Clearleap’s customers are big media companies, after all. IBM’s challenge is to translate the [storage] capacity these customers need into IBM cloud revenue,” John Rymer, an analyst with Forrester Research told TechCrunch by email.

    Reply
  27. Tomi Engdahl says:

    Cisco’s Spark Collaboration App Now Does Something Slack Doesn’t: Voice and Video Chat
    http://recode.net/2015/12/08/ciscos-spark-collaboration-app-now-does-something-slack-doesnt-voice-and-video-chat/

    Networking giant Cisco Systems says it will add voice calling and video conferencing features to Spark, its cloud-based workplace collaboration app.

    Launched last year as Project Squared, Spark is similar to Slack, the ubiquitous office messaging app.

    Cisco would very much like to compete with products like Slack, which has about a million users; voice and video calling are among the features most widely requested by Slack users. Cisco has the hardware, and the history of supplying infrastructure, to link a Slack-like service with the phones and other equipment many companies already have.

    Cisco’s plan, which takes effect today, calls for adding voice telephony and video conferencing into a single unified experience. What starts as a phone call can, at a click, be converted into a video meeting. Video calls that begin on a meeting room system can, at the swipe of a screen, move to a mobile device.

    Spark will also be enabled for existing Cisco-made desktop telephones and video conferencing systems by way of a Cisco cloud service

    Collaboration was a $4 billion business in Cisco’s most recently completed fiscal year, making it the company’s third-largest segment by revenue after switching and routing equipment. About a billion dollars of that is WebEx, the Web-based meeting and presentation service.

    Reply
  28. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google launches 2nd generation of its Cloud SQL service, with up to 10TB per instance
    http://venturebeat.com/2015/12/10/google-launches-2nd-generation-of-its-cloud-sql-service-with-up-to-10tb-per-instance/

    Google is today announcing the beta availability of the second generation of the Google Cloud SQL service, a hosted version of the MySQL database. The second generation offers greater performance and storage than the original version, which first launched in 2011.

    The new version of Cloud SQL can scale up to 10 terabytes, 15,000 input/output operations per second (IOPS), and 104GB of RAM, Google Cloud Platform product manager Brett Hesterberg wrote in a blog post on the news.

    The next generation of managed MySQL offerings on Cloud SQL
    http://googlecloudplatform.blogspot.fi/2015/12/the-next-generation-of-managed-MySQL-offerings-on-Cloud-SQL.html

    Reply
  29. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Backblaze launches public beta for B2 low-cost cloud storage service after getting 15K private beta requests

    Backblaze launches public beta for B2 cloud storage service after getting 15K requests
    http://venturebeat.com/2015/12/15/backblaze-launches-public-beta-for-b2-cloud-storage-service-after-getting-15k-requests/

    Cloud data backup company Backblaze today announced the beginning of a public beta for its B2 cloud storage service that developers can use inside of their applications. The service has been in a private invitation-only beta since it was announced in September.

    Backblaze received 15,000 requests to use the service during the private beta. Now the service is available at prices that are meant to undercut the largest cloud infrastructure providers — Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

    Backblaze started in 2007 and is based in San Mateo, California. In 2012, the company took on a $5 million funding round.

    The amount of data currently in the B2 service — which depends exclusively on Backblaze’s single data center in the Sacramento, California, area — is only in the terabytes. But now that B2 is immediately available to everyone, the pile of data it stores could grow considerably, primarily due to its low price point.

    Reply
  30. Tomi Engdahl says:

    Cloud Foundry interop scheme leaves PaaS players certifiable
    PaaS portability promise is signed, sealed, delivered
    http://www.theregister.co.uk/2015/12/17/cloud_foundry_interop_scheme_leaves_paas_players_certifiable/

    The Cloud Foundry Foundation has created what amounts to a good cloudkeeping seal of approval.

    The new Cloud Foundry Certification is “designed to establish reliable portability across PaaS products in a multi-vendor, multi-cloud environment.” Or in other words, if a platform-as-a-service provider can wave around the Foundation’s certificate, it will prove that its implementation of Cloud Foundry is wonderfully vanilla and that if you choose to go to another certificate-holder, your code will come with you and work just fine when it lands.

    CenturyLink’s AppFog, HPE Helion Cloud Foundry, Huawei FusionStage, IBM Bluemix, Pivotal Cloud Foundry, SAP HANA Cloud Platform and Swisscom Application Cloud have all scored the certification on day one.

    Lock-in is a real concern for cloud users, because PaaS players have the ancient imperative to find ways to tie customers to their platforms and aren’t afraid to use them.

    PaaS providers therefore have little to lose by signing up with the Foundation and securing the certification

    Reply
  31. Tomi Engdahl says:

    Bungled storage upgrade led to Google cloud brownout
    Credentials copied to new storage, but software looked for the old storage
    http://www.theregister.co.uk/2015/12/17/bungled_storage_upgrade_led_to_google_cloud_brownout/

    Google’s ‘fessed up to another bungle that browned-out its cloud.

    The incident in question meant Google App Engine applications “received errors when issuing authenticated calls to Google APIs over a period of 17 hours and 3 minutes.”

    The cause, Google says, was that its “… engineers have recently carried out a migration of the Google Accounts system to a new storage backend, which included copying API authentication service credentials data and redirecting API calls to the new backend.”

    “To complete this migration, credentials were scheduled to be deleted from the previous storage backend.”

    You can guess what came next, namely “a software bug” that meant “the API authentication service continued to look up some credentials, including those used by Google App Engine service accounts, in the old storage backend. As these credentials were progressively deleted, their corresponding service accounts could no longer be authenticated.”
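    The failure pattern Google describes, reads still pinned to the old backend while its rows are deleted, is easy to reproduce in miniature. The sketch below is a toy model with made-up names, not Google's actual code:

```python
# Two storage backends; the service account's credential starts in the old one.
old_backend = {"app-engine-svc": "credential-A"}
new_backend = {}

def migrate(account):
    # Migration step: copy the credential to the new storage backend.
    new_backend[account] = old_backend[account]

def authenticate_buggy(account):
    # The bug: the auth service keeps looking in the OLD backend.
    return old_backend.get(account)

def authenticate_fixed(account):
    # The intended behaviour: look-ups go to the new backend.
    return new_backend.get(account)

migrate("app-engine-svc")
del old_backend["app-engine-svc"]   # scheduled clean-up of the old copy

print(authenticate_buggy("app-engine-svc"))  # None: authentication fails
print(authenticate_fixed("app-engine-svc"))  # credential-A
```

    Until every reader is repointed at the new backend, deleting the old copies progressively breaks authentication, which is exactly the outage described.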

    Reply
  32. Tomi Engdahl says:

    Mobile developer report shows growing back-end challenge, weak Windows support
    Never mind the app, it’s integrating with data that counts
    http://www.theregister.co.uk/2015/12/17/mobile_developer_report_shows_growing_backend_challenge_weak_windows_support/

    IDC and Appcelerator have published a survey of 5,778 mobile developers which highlights integrating with back-end data as the biggest challenge in app development.

    Appcelerator’s product is a cross-platform mobile development tool, so note that the survey may not be representative of all mobile developers.

    Among this crowd though, 33.9 per cent spent more than half their development effort on back-end integration. This effort includes creating and debugging APIs, finding documentation for existing APIs, and orchestrating data from multiple sources.

    2015 Appcelerator / IDC Mobile Trends Report: Leaders, Laggards and the Data Problem
    http://www.appcelerator.com/blog/2015/12/2015-mobile-trends-report/

    Reply
  33. Tomi Engdahl says:

    Carbonite acquires Seagate’s EVault backup cloud for US$14m
    On this of all days, the Universe just gave the storage industry a Star Wars angle
    http://www.theregister.co.uk/2015/12/17/carbonite_acquires_seagates_evault_backup_cloud_for_us14m/

    Cloud backup outfit Carbonite has acquired Seagate’s EVault cloud backup service.

    Carbonite offers personal backup, plus backup and disaster-recovery-as-a-service for small businesses. The latter market can choose to acquire a Carbonite appliance to create a hybrid backup rig. Bare-metal backup of servers is another option.

    Seagate’s EVault does pretty much the same thing. And so did Wuala, another cloud backup service operated by Seagate subsidiary LaCie. We’re using the past tense to describe Wuala because Seagate offloaded it back in August.

    Seagate appears to have lost interest in cloud backup services, making the EVault evacuation consistent.

    US$14m is heading Seagate’s way to make the deal happen, a trifling sum that may indicate why the disk-maker thinks it is better off without EVault.

    Reply
  34. Tomi Engdahl says:

    Assembly of tech giants convene to define future of computing
    ‘Cloud natives’ include two-year-old Docker, 104-year-old IBM
    http://www.theregister.co.uk/2015/12/18/cloud_native_computer_cloud_native/

    A flurry of the tech world’s great and good signed up to the Cloud Native Computing Foundation yesterday, and kicked off a technical board to review submissions – which will be tested and fattened up on a vast Intel-based “computer farm”.

    Vendors declared their intent to form the Cloud Native Computing Foundation (CNCF) earlier this year, under the auspices of the Linux Foundation. Just to avoid confusion, the (cloud native) foundation reckons “Cloud native applications are container-packaged, dynamically scheduled and microservices-oriented”.

    Hence the foundation said it “seeks to improve the overall developer experience, paving the way for faster code reuse, improved machine efficiency, reduced costs and increases in the overall agility and maintainability of applications”.

    Reply
  35. Tomi Engdahl says:

    Flickr gives you a full terabyte of storage, but only if you upload JPEGs, GIFs, and PNGs. That doesn’t prevent you from using Flickr as your own cloud storage.

    Source: http://hackaday.com/2015/12/20/hackaday-links-december-20-2015/

    More: https://sites.google.com/site/backuptoflickr/
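    Tools like the one linked above work by smuggling arbitrary bytes into files that are still valid images. One common trick is appending the payload after the PNG IEND chunk, since decoders stop reading there. The sketch below is a hypothetical illustration, not the linked tool's actual method; note also that a photo service may recompress uploads, which would strip trailing data:

```python
import struct
import zlib

def png_chunk(ctype, data):
    """One PNG chunk: 4-byte big-endian length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def wrap(payload):
    """Build a valid 1x1 grayscale PNG and append `payload` after IEND."""
    signature = b"\x89PNG\r\n\x1a\n"
    # IHDR: width=1, height=1, bit depth 8, grayscale, default methods.
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    # IDAT: one scanline = filter byte 0x00 + one black pixel, zlib-compressed.
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
    iend = png_chunk(b"IEND", b"")
    return signature + ihdr + idat + iend + payload

def unwrap(blob):
    """Recover the payload: everything after the IEND chunk's type + CRC."""
    return blob[blob.index(b"IEND") + 8:]
```

    The result opens as an ordinary one-pixel image in any viewer, while `unwrap` gets your original bytes back.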

    Reply
  36. Tomi Engdahl says:

    ZFS Replication To the Cloud Is Finally Here and It’s Fast
    http://slashdot.org/story/15/12/22/026209/zfs-replication-to-the-cloud-is-finally-here-and-its-fast

    Jim Salter at arstechnica provides a detailed, technical rundown of ZFS send and receive and compares it to traditional remote syncing and backup tools such as rsync. He writes: ‘In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net.’

    rsync.net: ZFS Replication to the cloud is finally here—and it’s fast
    Even an rsync-lifer admits ZFS replication and rsync.net are making data transfers better.
    http://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/

    In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net. Who cares, right? As the service itself states, “If you’re not sure what this means, our product is Not For You.”

    Of course, this product is for someone—and to those would-be users, this really will matter. Fully appreciating the new rsync.net (spoiler alert: it’s pretty impressive!) means first having a grasp on basic data transfer technologies. And while ZFS replication techniques are burgeoning today, you must actually begin by examining the technology that ZFS is slowly supplanting.

    Revisiting a first love of any kind makes for a romantic trip down memory lane, and that’s what revisiting rsync—as in “rsync.net”—feels like for me.

    Rsync is a tool for synchronizing folders and/or files from one location to another. Adhering to true Unix design philosophy, it’s a simple tool to use. There is no GUI, no wizard, and you can use it for the most basic of tasks without being hindered by its interface. But somewhat rare for any tool, in my experience, rsync is also very elegant. It makes a task which is humanly intuitive seem simple despite being objectively complex.

    You can go further and further down this rabbit hole of “what can rsync do.” Inline compression to save even more bandwidth? Check. A daemon on the server end to expose only certain directories or files, require authentication, only allow certain IPs access, or allow read-only access to one group but write access to another? You got it. Running “rsync” without any arguments gets you a “cheat sheet” of valid command line arguments several pages long.

    If rsync’s so great, why is ZFS replication even a thing?

    This really is the million dollar question. I hate to admit it, but I’d been using ZFS myself for something like four years before I realized the answer. In order to demonstrate how effective each technology is, let’s go to the numbers. I’m using rsync.net’s new ZFS replication service on the target end and a Linode VM on the source end. I’m also going to be using my own open source orchestration tool syncoid to greatly simplify the otherwise-tedious process of ZFS replication.

    Time-wise, there’s really not much to look at. Either way, we transfer 1GB of data in two minutes, 36 seconds and change. It is a little interesting to note that rsync ate up 26 seconds of CPU time while ZFS replication used less than three seconds, but still, this race is kind of a snoozefest.

    what happens if we change it just enough to force a re-synchronization?

    Now things start to get real. Rsync needed 13 seconds to get the job done, while ZFS needed less than two. This problem scales, too. For a touched 8GB file, rsync will take 111.9 seconds to re-synchronize, while ZFS still needs only 1.7.
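    The scaling difference has a simple explanation: an rsync-style sync must checksum every block on each run to discover what changed, so its cost grows with the file size, while ZFS replication consults the blocks the filesystem already recorded as written since the last snapshot, so its cost grows only with the change. A toy model in Python (illustrative only; neither tool's real algorithm, and real rsync uses rolling checksums):

```python
import hashlib

BLOCK = 4  # tiny block size, just for the demo

def blocks(data):
    """Split data into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def rsync_style(old, new):
    """Find changed blocks by checksumming EVERY block: cost ~ file size."""
    old_sums = [hashlib.md5(b).digest() for b in blocks(old)]  # full scan
    new_sums = [hashlib.md5(b).digest() for b in blocks(new)]  # full scan
    return [i for i, (a, b) in enumerate(zip(old_sums, new_sums)) if a != b]

def zfs_style(dirty_blocks):
    """Replication sends the blocks already marked dirty since the last
    snapshot: cost ~ changed data, with no scan at all."""
    return sorted(dirty_blocks)

old = b"AAAABBBBCCCCDDDD"
new = b"AAAABxBBCCCCDDDD"     # one byte touched, in block 1
print(rsync_style(old, new))  # scans every block on both sides to learn: [1]
print(zfs_style({1}))         # the filesystem already knows: [1]
```

    That is why touching one byte of an 8GB file costs rsync nearly two minutes of re-reading but costs ZFS replication almost nothing.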

    Reply
  37. Tomi Engdahl says:

    CIOs, what does your nightmare before Christmas look like?
    Graveyards are full of IT pros once thought irreplaceable
    http://www.theregister.co.uk/2015/12/22/cios_worst_real_life_disasters/

    No single point of failure… and other jokes

    We met in the shadow of Telecity going titsup, taking out VOIP, hosting and Amazon Web Services to a large bunch of customers. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare was to die, you still stop.

    That’s not a single point of failure in the classical sense of a standalone database server but it’s really hard to manage unless you go for full cloud agnosticism, which pushes up costs of development and delivery times. This is hard to justify when their failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail.

    Oh, and no one seems to test their mission-critical data centre properly, because it’s mission critical. Our IT execs shared a good laugh about the idea that any CTO would really “see what happens if this fails” if he had any doubt that the power/aircon/network might actually failover, since crashing it would be a career-changing event. So they just over-specify where they can and cross their fingers.

    This means that you pay twice for some things and get half the coverage for other vulnerabilities.

  38. Tomi Engdahl says:

    Rachel King / ZDNet:
    Oracle acquires StackEngine, plots new cloud campus in Austin — A few days ago, it was quietly confirmed that Oracle acquired StackEngine, a startup with a container operations platform for DevOps based in Austin, Texas. — Oracle has announced a few moves in the last week that line …

    Oracle acquires StackEngine, plots new cloud campus in Austin
    http://www.zdnet.com/article/oracle-acquires-stackengine-plots-new-cloud-campus-in-austin/

    A few days ago, it was quietly confirmed that Oracle acquired StackEngine, a startup with a container operations platform for DevOps based in Austin, Texas.

    Oracle has announced a few moves in the last week that line up to bolster the tech giant’s cloud agenda next year.

    Both Oracle and StackEngine affirmed the merger on their respective websites and that all StackEngine employees will be joining Oracle’s public cloud team.

    Beyond that, no details — including financial terms — of the deal have been revealed.

    On Tuesday, the Silicon Valley titan unveiled plans to construct a new campus in Austin dedicated to expanding Oracle’s cloud portfolio.

  39. Tomi Engdahl says:

    2015 wasn’t about AWS. It was about everybody getting ready to try to beat AWS
    Will the end of cheap money cast a cloud over 2016?
    http://www.theregister.co.uk/2015/12/28/2015_year_review_enterprsie_cloud/

    One star eclipsed all others in the enterprise in 2015: Amazon. Or, rather, its cloud division, AWS.

    It somehow became a given that AWS leads in Infrastructure as a Service (IaaS), with the one-time ebook flogger now an IT infrastructure provider for the likes of Ocado, Norwich Union, News UK, British Gas and the Co-Op Bank in the UK.

    AWS’s services have gone far beyond humble compute and storage, and it currently has fingers in the pies of data warehousing, disaster recovery, content delivery networks and more.

    Amazon claimed a 95 per cent year-on-year increase in virtual machine (EC2) instances and 120 per cent growth in data transfers from its storage service (S3).

    Things have gotten so big, AWS announced its first UK region: a substantial commitment that’ll require several physical data centres.

    How did that stack up to the competition – the giant software providers lumbering into cloud after dominating the enterprise for decades?

    From a fiscal perspective, comparisons are almost impossible. Everybody claimed they were a leader – that they were the industry’s fastest-growing cloud company.

    The competition was left racing to catch up – rolling out their own, new capabilities or, in IBM’s case, acquiring companies to add features to their cloud.

    But, feature-wise, Gartner analyst Lydia Leong said, Microsoft’s Azure was no AWS. Rather, it was “good enough” – often lacking the completeness and polish of Amazon’s service. In terms of growth, she reckoned Microsoft was able to leverage its existing, on-prem relationship with customers to get a leg up into the cloud.

    First, HP said it would kill its Helion public cloud service in January 2016 – five years after unveiling ambitious plans for an all-things-to-all-men hosted service.

    Next, HP backtracked on its commitment to the OpenStack open source cloud – the basis of Helion – by becoming a premier partner of Microsoft’s proprietary Azure.

    Back in 2008, the internet’s number-one ad slinger was close behind Amazon’s AWS with Google App Engine, which allows you to host apps on Google’s platform. Seven years on, Google is nowhere in terms of being a provider of public cloud services for enterprises.

    So much for public cloud, what of private? That was dominated by VMware, courtesy of its hypervisor. VMware retained its crown but, according to Gartner, Microsoft had closed the functionality gap with Hyper-V and was challenging VMware in SMBs.

    When it comes to the playbook of how not to become an over-priced owner of a legacy technology empire ripe for disruption, AWS made the right moves.
    Can anything stop AWS in 2016?

    Cheap money fuelled so much of tech’s success stories in 2015 in the startup scene – firms like Uber.

    But cheap money is helping others, too.

    That figure is manageable if interest rates remain low, enabling cheap borrowing.

    But a rise will inevitably mean increased costs for borrowers – eating into margins.

  40. Tomi Engdahl says:

    Cloudy With a Chance of Lock-In
    http://jacquesmattheij.com/cloudy-with-a-chance-of-lock-in

    lots of products that came to market in the recent past and that will come to market in the near future that use some kind of cloud hosted component. In many cases these products rightly use some kind of off-device service in order to provide you with features that would otherwise not be possible. Sometimes these features are so much part of the core product that the whole idea would be dead in the water without it.

    But there are also many products for which it makes very little or even no sense at all to have a cloud based component. In many of these cases if you look a bit more closely at what is being sold you’ll realize that these are just instances of a business-model that was grafted on as an afterthought onto something that would have worked really well stand-alone but where the creators weren’t happy with a one-time fee from potential buyers.

    The last couple of years have seen ever more blatant abuses of this kind of trick, to the point where even the closest study of the applications has not been able to reveal a reason why the ‘cloud’ should even be a factor in the design of the product. Some examples: internet-of-things applications that come with a mandatory subscription to get your own data back, televisions that require you to sign up with an online service in order to be able to use the TV’s built-in browser, navigation devices or apps that contain all the bits and pieces required to work except that they somehow also require you to sign up with a service before the device will function. The list is absolutely endless.

    I hate these clouds-grafted-on devices and applications with a passion. There are only a few things more certain than death and taxes and one of those is that the device I own will outlive the required service component so sooner or later (and plenty of times sooner)

    Software as a service to many people is the way to convert what used to be licensed software into a repeat revenue stream and in principle there is nothing wrong with that if done properly (Adobe almost gets it right). But if the internet connection is down and your software no longer works, if the data you painstakingly built up over years goes missing because a service dies or because your account gets terminated for no apparent reason and without any recourse you might come to the same conclusion that I came to: if it requires an online service and is not actually an online product I can do just fine without it.

  41. Tomi Engdahl says:

    Same time, same server, next Tuesday? AWS can do that now
    Who wins with the new Scheduled Reserved Instances: you or Amazon?
    http://www.theregister.co.uk/2016/01/14/same_time_same_server_next_tuesday_aws_can_do_that_now/

    Amazon Web Services has just done something rather interesting, in the form of making it possible to reserve servers in advance for short bursts of computing.

    The new “Scheduled Reserved Instances” caper does what it says on the tin: you can decide what kind of server you want to run, how long to run it for and what you’re prepared to pay. You can even add a schedule to get the same instance type each week. Or each day. Or whatever, provided you commit for a year.

    AWS is talking up the new plan as ideal for those who predictably run certain jobs at certain times.

    Of course there’s also an upside for AWS in this new feature. Running a big public cloud means spending astounding amounts of cash on kit and operations. The more AWS and its ilk can predict usage rates, the easier it gets to plan future purchases, cashflow and capacity.
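    For the curious, booking one of these slots is an API exercise: you describe the recurrence you want, ask EC2 what matching capacity is on offer, then purchase a slot with the returned token. A minimal sketch with boto3 — the region-agnostic helper, the Friday-evening schedule and the four-hour slot are illustrative assumptions, not anything from the article, and the live calls (which need AWS credentials and commit you for a year) are left commented out:

```python
# Sketch of searching for and booking a weekly Scheduled Reserved Instance
# slot. Schedule values below are invented for illustration.

def weekly_recurrence(day: int, hours_per_slot: int) -> dict:
    """Build the recurrence/slot parameters for one run per week."""
    return {
        "Recurrence": {
            "Frequency": "Weekly",
            "Interval": 1,
            "OccurrenceDays": [day],        # 1 = Sunday ... 7 = Saturday
            "OccurrenceRelativeToEnd": False,
        },
        "MinSlotDurationInHours": hours_per_slot,
    }

params = weekly_recurrence(day=6, hours_per_slot=4)   # Fridays, 4-hour slot

# The live calls would look roughly like this:
# from datetime import datetime, timedelta
# import boto3
# ec2 = boto3.client("ec2")
# offers = ec2.describe_scheduled_instance_availability(
#     FirstSlotStartTimeRange={"EarliestTime": datetime.utcnow(),
#                              "LatestTime": datetime.utcnow() + timedelta(days=7)},
#     **params)
# token = offers["ScheduledInstanceAvailabilitySet"][0]["PurchaseToken"]
# ec2.purchase_scheduled_instances(
#     PurchaseRequests=[{"PurchaseToken": token, "InstanceCount": 1}])
```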

  42. Tomi Engdahl says:

    Service Provider Builds National Network of Unmanned Data Centers
    http://hardware.slashdot.org/story/16/01/14/2146215/service-provider-builds-national-network-of-unmanned-data-centers

    Colocation and content delivery specialist EdgeConneX is operating unmanned “lights out” data centers in 20 markets across the United States, marking the most ambitious use to date of automation to streamline data center operations. While some companies have operated prototypes of “lights out” unmanned facilities (including AOL) or deployed unmanned containers with server gear, EdgeConneX built its broader deployment strategy around a lean operations model.

    Scaling Up the Lights Out Data Center
    http://datacenterfrontier.com/lights-out-data-center-edgeconnex/

    The “lights out” server farm has been living large in the imaginations of data center futurists. It’s been 10 years since HP first made headlines with its vision of unmanned data centers, filled with computers that monitor and manage themselves. Even Dilbert has had sport with the notion.

    But the list of those who have successfully implemented lights out data centers is much shorter. HP still has humans staffing its consolidated data centers, although it has used automation to expand their reach (each HP admin now manages 200 servers, up from an initial ratio of one admin per 15 servers). In 2011, AOL announced that it had implemented a small unmanned data center, but that doesn’t appear to have progressed beyond a pilot project.

    EdgeConneX is changing that. The company has pursued a lights out operations model in building out its network of 24 data centers across the United States and Europe. EdgeConneX, which specializes in content distribution in second-tier markets, designs its facilities to operate without full-time staff on site, using sophisticated monitoring and remote hands when on-site service is needed.

    The EdgeConneX design is perhaps the most ambitious example yet of the use of automation to streamline data center operations, and using design as a tool to alter the economics of a business model.

    The Deployment Template as Secret Sauce

    The key to this approach is an advanced design and operations template that allows EdgeConneX to rapidly retrofit existing buildings into data centers with Tier III redundancy that can support high-density workloads of more than 20kW per cabinet. This allowed the company to deploy 18 new data centers in 2014.

    A lean operations model was baked into the equation from the beginning

    “Our primary build is a 2 to 4 megawatt data center and about 10,000 square feet,” said Lawson-Shanks. “We always build with a view that we’ll have to expand. We always have an anchor tenant before we go to market.”

    That anchor is usually a cable multi-system operator (MSO) like Comcast or Liberty Global,

    Solving the Netflix Dilemma

    “We’re helping the cable companies solve a problem: to get Netflix and YouTube off their backbones,” said Lawson-Shanks. “The network is being overwhelmed with content, especially rich media. The edge is growing faster than you can possibly imagine.”

    Data center site selection is extremely important in the EdgeConneX model. In each new market, the company does extensive research of local network and telecom infrastructure, seeking to identify an existing building that can support its deployment template.

    “This is a patented operations management system and pricing model that makes every Edge Data Center a consistent experience for our customers nationwide,”

    Managing Infrastructure from Afar

    The lynchpin of the lights out approach is data center infrastructure management (DCIM) software. EdgeConneX uses a patented data center operating system called EdgeOS to monitor and manage its facilities. The company has the ability to remotely control the generators and UPS systems at each data center.

    EdgeConneX facilities are managed from a central network operations center in Santa Clara, with backup provided by INOC

    Currently 20 of the 24 EdgeConneX data centers are unmanned. Each facility has a multi-stage security system that uses biometrics, PIN and keycard access, with secured corridors (“mantraps”) and video surveillance.

    EdgeConneX expects to be building data centers for some time to come. Demand for edge-based content caching is growing fast

    “The user experiences and devices are changing,” he said. “But fundamentally, it’s latency, latency, latency.”

    Much of this technology wasn’t in the mix in 2005 when the first visions emerged of an unmanned data center. But as we see edge data centers proliferate, the EdgeConneX model has demonstrated the possibility of using automation to approach these facilities differently. This approach won’t be appropriate for many types of workloads, as most data centers in second-tier and third-tier markets will serve local businesses with compliance mandates that require high-touch service from trained staff.

    But one thing is certain: The unmanned “lights out” data center is no longer a science project or flight of fancy. In 20 cities across America, it’s delivering Netflix and YouTube videos to your devices.

  43. Tomi Engdahl says:

    AT&T chooses Ubuntu Linux instead of Microsoft Windows
    http://betanews.com/2016/01/13/att-chooses-ubuntu-linux-instead-of-microsoft-windows/

    While Linux’s share of the desktop pie is still virtually nonexistent, it owns two arguably more important markets — servers and smartphones. As PC sales decline dramatically, Android phones remain the runaway market-share leader. In other words, fewer people are buying Windows computers — and likely spending less time using them — while everyone and their mother are glued to their phones. And those phones are most likely powered by the Linux kernel.

    AT&T has partnered with Canonical to utilize Ubuntu for cloud, network, and enterprise applications. That’s right, AT&T did not choose Microsoft’s Windows when exploring options. Canonical will provide continued engineering support too.

    “By tapping into the latest technologies and open principles, AT&T’s network of the future will deliver what our customers want, when they want it. We’re reinventing how we scale by becoming simpler and modular, similar to how applications have evolved in cloud data centers. Open source and OpenStack innovations represent a unique opportunity to meet these requirements and Canonical’s cloud and open source expertise make them a good choice for AT&T”, says Toby Ford, Assistant Vice President of Cloud Technology, Strategy and Planning at AT&T.

    John Zannos, Vice President of Cloud Alliances and Business Development at Canonical explains, “this is important for Canonical. AT&T’s scalable and open future network utilizes the best of Canonical innovation. AT&T selecting us to support its effort in cloud, enterprise applications and the network provides the opportunity to innovate with AT&T around the next generation of the software-centric network and cloud solutions. Ubuntu is the Operating System of the Cloud and this relationship allows us to bring our engineering expertise around Ubuntu, cloud and open source to AT&T”.

    AT&T selects Ubuntu for cloud and enterprise applications
    http://insights.ubuntu.com/?p=31292

    AT&T has selected Canonical to be part of its effort to drive innovation in the network and cloud. Canonical will provide the Ubuntu OS and engineering support for AT&T’s cloud, network and enterprise applications. AT&T chose Ubuntu based on its demonstrated innovation, and performance as the leading platform for scale-out workloads and cloud.

    “By tapping into the latest technologies and open principles, AT&T’s network of the future will deliver what our customers want, when they want it,” said Toby Ford, Assistant Vice President of Cloud Technology, Strategy and Planning at AT&T. “We’re reinventing how we scale by becoming simpler and modular, similar to how applications have evolved in cloud data centers. Open source and OpenStack innovations represent a unique opportunity to meet these requirements and Canonical’s cloud and open source expertise make them a good choice for AT&T.”

    About Canonical:

    Canonical is the company behind Ubuntu, the leading OS for cloud, scale-out and ARM-based hyperscale computing featuring the fastest, most secure hypervisors, as well as the latest in container technology with LXC and Docker. Ubuntu is also the world’s most popular operating system for OpenStack. Over 80% of the large-scale OpenStack deployments today are on Ubuntu.

    About AT&T:

    AT&T Inc. (NYSE:T) helps millions around the globe connect with leading entertainment, mobile, high speed Internet and voice services. We’re the world’s largest provider of pay TV. We have TV customers in the U.S. and 11 Latin American countries.

  44. Tomi Engdahl says:

    Dina Bass / Bloomberg Business:
    Microsoft to donate $1B in cloud services to 70K nonprofits globally over the next three years — Microsoft to Donate Cloud Services Worth $1 Billion Over 3 Years — Program aims to advance public good, solve biggest problems — Company will also invest in Internet in developing nations

    Microsoft to Donate Cloud Services Worth $1 Billion Over 3 Years
    http://www.bloomberg.com/news/articles/2016-01-19/microsoft-to-donate-cloud-services-worth-1-billion-over-3-years

    Program aims to advance public good, solve biggest problems
    Company will also invest in Internet in developing nations

    Microsoft Corp. will donate cloud services worth more than $1 billion to nonprofit groups over the next three years in a bid to “advance the public good” and help solve some of the world’s toughest problems, President and Chief Legal Officer Brad Smith said.

    The largest part of the funds will provide free or discounted cloud services, such as Azure computing power and data storage, Office 365 Internet-based corporate programs and other products to nonprofit groups worldwide. Other donations will include expanding access to free Azure for universities, and a program that will invest in organizations providing Internet connectivity in the developing world. Microsoft Chief Executive Officer Satya Nadella, who is attending the World Economic Forum in Davos this week, will announce the donation in a column Wednesday in the Financial Times.

    Nadella is trying to boost the usage of Microsoft’s products and expand its cloud services businesses amid rising competition

  45. Tomi Engdahl says:

    South China Morning Post:
    AliCloud, an Alibaba subsidiary, partners with Nvidia to develop GPU-based computing cloud platform in China — Alicloud launches Big Data Platform as Alibaba subsidiary aims to make technology accessible across China — E-commerce powerhouse Alibaba Group is gearing up to make “big data” …

    Alicloud launches Big Data Platform as Alibaba subsidiary aims to make technology accessible across China
    http://www.scmp.com/tech/innovation/article/1903491/alicloud-launches-big-data-platform-alibaba-subsidiary-aims-make-new

  46. Tomi Engdahl says:

    IBM introduces fleecing-you-as-a-service for retailers
    And the price is … whatever the cloud says you’ll pay. And maybe more
    http://www.theregister.co.uk/2016/01/18/ibm_introduces_fleecingyouasaservice_for_retailers/

    IBM has introduced a new cloud service it calls “dynamic pricing” that says a lot about where online retailing, IBM and its relationship with partners is going.

    Dynamic pricing is conceptually simple: if you run a web store, IBM will now scour rivals price lists for you and offer recommendations about what you should charge. If a competitor drops prices, you won’t be left as the most expensive option out there. And if users are bailing out before they buy, leaving a full virtual shopping cart at the checkout, you’ll know that too and be offered ways to stop it from happening.

    Among the things IBM reckons retailers need to be able to consider are “Market demand, social sentiment, competitor prices, inventory availability, time of day, conversion rates, financial goals, and even the weather”.

    Big Blue’s guff (PDF) about the new service says it’s needed because customers now expect discounts when shopping online. But the company also says “Products with high demand but limited availability could have their prices marginally increased and vice versa. And while we’re not talking about 50 percent price hikes but rather a few cents on the dollar, systematically and continually identifying these opportunities adds up.”

    Think about that for a moment: a bot that can figure out when rivals have low inventory and boost your prices accordingly. Nice!
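    The rule the article describes is simple enough to sketch. A toy version, assuming invented thresholds and an invented 2 per cent cap (IBM discloses none of its actual logic): nudge the price up "a few cents on the dollar" when demand is high and a rival's stock looks thin, and match — within limits — when a rival undercuts you.

```python
# Toy dynamic-pricing rule. Thresholds and the 2% bounds are illustrative
# assumptions, not IBM's actual algorithm.
def recommend_price(our_price: float, competitor_price: float,
                    competitor_in_stock: bool, demand_score: float) -> float:
    price = our_price
    if not competitor_in_stock and demand_score > 0.7:
        price = our_price * 1.02            # high demand, thin rival stock: small hike
    elif competitor_price < our_price:
        price = max(competitor_price, our_price * 0.98)  # match, but cap the cut at 2%
    return round(price, 2)

assert recommend_price(10.00, 10.50, False, 0.9) == 10.20  # hike
assert recommend_price(10.00, 9.95, True, 0.5) == 9.95     # match rival
assert recommend_price(10.00, 9.00, True, 0.5) == 9.80     # capped cut
```

    Run continuously across a whole catalogue, even adjustments this small add up — which is exactly the pitch in Big Blue's brochure.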

    Let’s also consider what this does to IBM’s business model, which in the not-too-distant past emphasised working with specialist independent software vendors to cook up vertical products.

  47. Tomi Engdahl says:

    Chris Welch / The Verge:
    Backblaze, the cloud backup service, will now loan you a hard drive full of your data — Backblaze, my preferred cloud backup service for a few years now, is today making it a bit easier and cheaper to restore all of your data if your computer should ever crash or get lost / stolen.

    Backblaze, the cloud backup service, will now loan you a hard drive full of your data
    Pay for the drive, restore your files, and send it back for a full refund
    http://www.theverge.com/2016/1/26/10832728/backblaze-restore-refund-program-announced

    Backblaze, my preferred cloud backup service for a few years now, is today making it a bit easier and cheaper to restore all of your data if your computer should ever crash or get lost / stolen. The company has always let subscribers ($5 per month) pay $189 to receive an external hard drive with a full copy of their backup. But maybe you don’t need yet another external drive that’ll just end up sitting around collecting dust. So now Backblaze is giving customers another option: send it back within 30 days for a full refund.

    You’ve got to cover shipping costs for the drive’s return trip, unfortunately, but this still makes for a pretty convenient way of getting your stuff back in a jam.

    Of course, this being a cloud service, Backblaze always gives you the ability to download your files directly from the company’s website at no charge, but that can be a bit slow and frustrating depending on your broadband connection.

    Backblaze claims this service is unique among its competition, with CrashPlan recently having phased out the option for “Home” subscribers to restore via mailed hard drive.

  48. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft announces first technical preview of Azure Stack, which puts Azure on-premises, to launch January 29 with full release slated for Q4 — Microsoft Announces The First Technical Preview Of Azure Stack — With Azure Stack, Microsoft wants to bring its Azure cloud computing services into its customers’ data centers.

    Microsoft Announces The First Technical Preview Of Azure Stack
    http://techcrunch.com/2016/01/26/microsoft-launches-the-first-technical-preview-of-azure-stack/

    With Azure Stack, Microsoft wants to bring its Azure cloud computing services into its customers’ data centers. Today, the company announced that it will launch the first technical preview of Azure Stack later this week on Friday, January 29.

    For now, this is a pretty limited version of Microsoft’s overall vision for Azure Stack. It’ll only support a single machine, for example, which is obviously a far cry from the enterprise-scale data center environment Microsoft envisions for the platform. As Microsoft’s Ryan O’Hara told me, though, the plan is to get a full release of Azure Stack into customers’ hands “in the Q4 timeframe.”

    In many ways, Azure Stack is the logical next step in Microsoft’s overall hybrid cloud strategy. If you’re expecting to regularly move some workloads between your own data center and Azure (or maybe add some capacity in the cloud as needed), having a single platform and only one set of APIs across your own data center and the cloud to work with greatly simplifies the process. This, O’Hara believes, means Microsoft will be “well-positioned against Google and AWS” because it can more easily connect its data centers to its customers’ data centers than its competitors.

    Microsoft describes Azure Stack as a “high-fidelity” version of Azure. For now, though, the plan isn’t to make all the Azure services available on premises. Instead, these earlier versions of Azure Stack will mostly focus on the core components: compute, storage and networks (which isn’t unlike earlier versions of Azure Stack competitor OpenStack, for example).

  49. Tomi Engdahl says:

    Microsoft struggles against self-inflicted Office 365 IMAP outage
    Seven days is a long time in cloud
    http://www.theregister.co.uk/2016/01/25/office_365_imap_outage/

    Microsoft engineers are struggling to fix a seven-day-old, self-inflicted Office 365 IMAP outage.

    IMAP access to Office 365 tanked on January 18, meaning customers could not access emails using Exchange Online via IMAP or connect third-party mail clients via IMAP.

    Microsoft told disgruntled Office 365 customers that the problem affected a limited number of licensees – but that those customers hit had a “large number of users.”

    The culprit was found to be a botched Microsoft update that stopped the IMAP protocol automatically loading data from Exchange Online databases.

    Following preliminary analysis of the cause, Microsoft told disgruntled users on January 22:

    As part of our efforts to improve service performance, an update was deployed to a subset of code which is responsible for obtaining the subscribed folder list. However, the update caused a code issue that prevented the list from being automatically loaded.

    Microsoft promised to fix the problem by January 23 – five days after the outage. Those plans then had to be pushed back. Restoration time for the service was estimated for today (January 25), however, Microsoft is still nowhere near a fix.

