Gartner has published a new Magic Quadrant for infrastructure-as-a-service (IaaS) – the results should not be surprising to anybody. Consider this posting an update to my previous cloud market posting from a few years back. Here is reporting on the newest cloud market trends from two sources:
Gartner puts AWS, Microsoft Azure top of its Magic Quadrant for IaaS | ZDNet
Amazon Web Services (AWS) and Microsoft Azure dominate the infrastructure-as-a-service field, according to Gartner, which released its IaaS Magic Quadrant.
However, Google Cloud is emerging as a key challenger.
Gartner confirms what we all know: AWS and Microsoft are the cloud leaders, by a fair way
Paranormal parallelogram for IaaS has Google on the same lap, IBM and Oracle trailing
Tomi Engdahl says:
Ron Miller / TechCrunch:
Dropbox unveils plan for its global network with custom infrastructure, following last year’s announcement it would leave AWS
Dropbox announces massive network expansion
https://techcrunch.com/2017/06/19/dropbox-announces-massive-network-expansion/
“The edge proxy is a stack of servers that act as the first gateway for TLS & TCP handshake for users and is deployed in PoPs (points of presence) to improve the performance for a user accessing Dropbox from any part of the globe,” Bhargava wrote.
This type of service is typically offered by Content Delivery Network (CDN) providers like Akamai, but like many companies working at the scale of Dropbox, it ultimately decided it needed to build a custom solution to meet its unique requirements and to give it the ability to control all aspects of the stack.
The company is deploying the custom proxy stack across its US data centers starting today.
Ultimately, this expansion is designed to do two things. One is to improve the user experience for users wherever they live. This was particularly important to Dropbox because it found that about 75 percent of its users are outside the US.
The second is that by building its own hardware and software, the company can control costs much more easily; it claims the new approach cuts networking costs in half, which adds up to significant savings for the company.
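As a rough illustration of what an edge proxy in a PoP does, here is a minimal sketch in Python of a TLS-terminating relay: the expensive TLS handshake happens close to the user, and bytes are then shuttled to a distant backend over a separate connection. The hostnames, port and certificate paths are placeholders, and a production proxy (Dropbox’s included) does far more than this.

```python
# Minimal sketch of a TLS-terminating edge proxy (placeholder hosts/certs).
# The TLS handshake happens here, near the user; bytes are then relayed
# to the distant backend over a separate connection.
import socket
import ssl
import threading

BACKEND = ("backend.example.com", 443)  # origin data centre (placeholder)
CERT, KEY = "edge.crt", "edge.key"      # PoP-local certificate (placeholder)

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    finally:
        dst.close()

def handle(client_tls):
    # One upstream connection per client; shuttle bytes in both directions.
    upstream = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(client_tls, upstream), daemon=True).start()
    pump(upstream, client_tls)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(CERT, KEY)

with socket.create_server(("0.0.0.0", 8443)) as srv:   # 443 needs privileges
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        while True:
            conn, _addr = tls_srv.accept()             # TLS handshake here
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
```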
Tomi Engdahl says:
Larry Dignan / ZDNet:
Nutanix announces strategic partnership with Google Cloud and unveils new tools for hybrid cloud management
Google Cloud Platform, Nutanix forge hybrid cloud strategic pact
http://www.zdnet.com/article/google-cloud-platform-nutanix-forge-hybrid-cloud-strategic-pact/
Google Cloud and Nutanix joint customers will be able to manage on-premises and public cloud infrastructure as one unified service.
Tomi Engdahl says:
TechCrunch:
Microsoft buys Tel Aviv-based Cloudyn to incorporate the startup’s cloud management products into its portfolio; sources say the price was $50M-$70M — Back in April, we began hearing that Microsoft was in the process of buying Israeli cloud startup Cloudyn, a company that helps customers manage …
Microsoft confirms Cloudyn acquisition, sources say price is between $50M and $70M
https://techcrunch.com/2017/06/29/microsoft-finally-pulls-trigger-on-cloudyn-deal/
Back in April, we began hearing that Microsoft was in the process of buying Israeli cloud startup Cloudyn, a company that helps customers manage their cloud billing across multiple clouds. It’s taken a while to work through the terms, but today Microsoft finally made it official.
Sources tell TechCrunch the price was between $50 million and $70 million.
In a company blog post today, Microsoft’s Jeremy Winter wrote, “I am pleased to announce that Microsoft has signed a definitive agreement to acquire Cloudyn, an innovative company that helps enterprises and managed service providers optimize their investments in cloud services.”
As companies continue to pursue a multi-cloud strategy, this gives Microsoft a cloud billing and management solution that provides it with an advantage over competitors, particularly AWS and Google Cloud Platform.
Tomi Engdahl says:
Jacob Kastrenakes / The Verge:
Google says it plans to launch its full desktop backup tool, called Backup and Sync, for Google Drive on June 28, available as an app — Google is turning Drive into a much more robust backup tool. Soon, instead of files having to live inside of the Drive folder, Google will be able …
Google Drive will soon back up your entire computer
https://www.theverge.com/2017/6/14/15802200/google-backup-and-sync-app-announced-drive-feature
Google is turning Drive into a much more robust backup tool. Soon, instead of files having to live inside of the Drive folder, Google will be able to monitor and backup files inside of any folder you point it to. That can include your desktop, your entire documents folder, or other more specific locations.
The backup feature will come out later this month, on June 28th, in the form of a new app called Backup and Sync. It sounds like the Backup and Sync app will replace both the standard Google Drive app and the Google Photos Backup app, at least in some cases. Google is recommending that regular consumers download the new app once it’s out, but it says that business users should stick with the existing Drive app for now.
Backup and Sync from Google available soon
https://gsuiteupdates.googleblog.com/2017/06/backup-and-sync-from-google-available.html
On June 28th, 2017, we will launch Backup and Sync from Google, a tool intended to help everyday users back up files and photos from their computers, so they’re safe and accessible from anywhere. Backup and Sync is the latest version of Google Drive for Mac/PC, which is now integrated with the Google Photos desktop uploader. As such, it will respect any current Drive for Mac/PC settings in the Admin console.
Tomi Engdahl says:
Tom Krazit / GeekWire:
Google announces release of Spinnaker 1.0, an open-source multi-cloud continuous delivery platform
Spinnaker, an open-source project for continuous delivery, hits the 1.0 milestone
https://www.geekwire.com/2017/spinnaker-open-source-project-continuous-delivery-hits-1-0-milestone/
Spinnaker, an open-source project that lets companies improve the speed and stability of their application deployment processes, reached the 1.0 release milestone Tuesday.
Google announced the 1.0 release of Spinnaker, which was originally developed inside Netflix and enhanced by Google and a few other companies. The software is used by companies like Target and Cloudera to enable continuous delivery, a modern software development concept that holds that application updates should be delivered when they are ready, instead of on a fixed schedule.
Spinnaker is just another one of the open-source projects that are at the heart of modern cloud computing
Spinnaker is probably still best for early adopters, but continuous delivery in general is one of the many advances in software development enabled by cloud computing that will likely be an industry best practice in a few years.
In an interesting move, Google took pains to highlight the cross-platform nature of Spinnaker, noting that it will run across several different cloud providers and application development environments. Google is chasing cloud workloads that tend to go to Amazon Web Services or Microsoft Azure, and noted “whether you’re releasing to multiple clouds or preventing vendor lock-in, Spinnaker helps you deploy your application based on what’s best for your business.”
Tomi Engdahl says:
Earlier cloud info postings from a few years back
http://www.epanorama.net/newepa/2012/12/08/whos-who-of-cloud-market/
Tomi Engdahl says:
Cloud students, pay attention! Exam plans promise fresh skills
Going native
https://www.theregister.co.uk/2017/04/21/mad_native_cloud_skill/
Anyone looking to deploy cloud in any meaningful way will struggle to find skilled practitioners. It’s a situation faced by so many enterprises, no matter what cloud system is being deployed.
Microsoft recently bewailed the lack of cloud skills and it’s been an issue long highlighted by the Cloud Industry Forum, which is why it’s keen to develop its own training courses.
But this paucity of knowledge is being particularly felt when it comes to cloud-native deployments. Cloud native is one of those vague buzzwords that can mean what any vendor wants it to mean but the underlying principle is that it’s about building cloud-specific applications and not moving existing applications to cloud service providers. It’s an approach that’s fundamental to digital transformation (to quote another popular buzzphrase) of businesses.
It’s one thing wanting to go down this transformation path, it’s another thing to have the means to do it. According to a report last year from the Cloud Foundry Foundation, the lack of cloud skills was having an impact on businesses’ ability to adapt to cloud in a “truly transformative way”. The report found that companies were struggling to hire new developers, meaning that key projects were being neglected.
Tomi Engdahl says:
Microsoft’s Azure cloud feels the pinch in price war with Amazon’s AWS
Ah, the old ‘Windows upsell’ one-two
https://www.theregister.co.uk/2017/04/28/cuts_hit_microsoft_cloud_profits/
Azure, Dynamics 365 and Office 365 commercial saw the biggest revenue growth, according to Microsoft – 93 per cent, 81 per cent and 45 per cent respectively.
Tomi Engdahl says:
Cloud eye for the sysadmin guy: Get tooled up proper, like
How to weather the storm
https://www.theregister.co.uk/2017/06/06/cloud_for_sysadmins/
Like it or not, the cloud in all forms is approaching at great speed, irrespective of your employer’s size. All sysadmins need to get onboard or be left behind. Me? After 17 years working in a range of environments, I did at one point believe I had ages before the cloud arrived at large-scale enterprises like the one where I’m employed. Plenty of time to skill up. Or so I thought.
One day that plan got turned upside down. The proclamation came down from on high: everything was going to the cloud. Quickly.
The move wasn’t expected by the rank and file. Why would it be? We are – or were – comfortably running several thousand servers, ranging from legacy Windows NT4 and RHEL 5 all the way through to Windows Server 2016. But our customers were demanding cloud, and cloud is seen as a good way to cut physical footprint and help control costs.
All of which left me (and the other sysadmins) in a bit of a quandary as to what was next in terms of staying relevant and, more importantly, employed. Sure, the on-premises infrastructure wasn’t going anywhere today, tomorrow or next year but it was now on notice.
So how do sysadmins, especially the ones that have spent several years or even decades dealing with on-premises get on the all-singing, all-dancing, cloud on-ramp?
There are a couple of ways to look at the situation. You could not bother to retool and take the upcoming redundancy package, or seize an opportunity to understand that we as administrators are on the very precipice of a paradigm shift in how IT gets done. Master the hot skills and become exceptional at what you do and people will pay handsomely for it.
Taking the second route provides a way to potentially reinvigorate your livelihood and perhaps make you more in demand than ever. Do I have your attention now?
But you are going to have to do the legwork. Companies, both small and large, typically tend to cheap out on training.
Invest in yourself
Getting the right training is key – anyone who wants to effectively learn has to be both interested in and motivated by the subject. Most companies won’t spring for a set of real, instructor-led courses for every administrator. It would cost a small fortune. Therefore, it is partly understandable why we got this courseware. It just happened in our case to be bad courseware.
These courses I sat were, to be frank, absolute dross. They were part of a course catalogue from a large “coverall” type of courseware vendor that gets sold to large businesses.
Tomi Engdahl says:
Migrating to Microsoft’s cloud: What they won’t tell you, what you need to know
Of devils and details
https://www.theregister.co.uk/2017/06/19/hidden_hurdles_of_microsoft_cloud_migration/
“Move it all to Microsoft’s cloud,” they said. “It’ll be fine,” they said. You’ve done your research and the monthly operational cost has been approved. There’s a glimmer of hope that you’ll be able to hit the power button to turn some ageing servers off, permanently. All that stands in the way is a migration project. And that is where the fun starts.
Consultants will admit that their first cloud migration was scary. If they don’t, they’re lying. This is production data we’re talking about, with a limited time window to have your systems down. Do a few migrations and you learn a few tricks. Work in the SMB market and you learn all the tricks, as they don’t always have their IT environments up to scratch to start with. Some of these traps are more applicable to a SaaS migration, particularly to Office 365. Some will trip you up no matter what cloud flavour you’ve chosen.
How much data?
The worst thing you can do is take your entire collection of mailboxes and everything from your file servers and suck it all up to the cloud. Even in small organisations that can be over 250GB of data. If your cloud of choice doesn’t have an option to seed your data via disk, that all has to go up via your internet connection. At best, we’re talking days.
Your two best options (pick one or both) are a pre-cloud migration archiving project and/or a migration tool that will perform a delta sync between the cloud and your original data source. Get ruthless with the business about what will be available in the cloud and what will stay in long-term storage on-prem. You seriously don’t want to suck up the last 15 years of data in this migration project.
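To make the delta-sync idea concrete, here is a minimal sketch in Python: hash the local source, compare against a manifest of what already went up, and transfer only what changed. The upload() function and the source path are stand-ins for whatever migration tool or SDK you actually use.

```python
# Sketch: sync only the delta between local files and what's already
# migrated, tracked in a JSON manifest of content hashes.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("cloud_manifest.json")  # hashes of files already migrated

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def upload(path: Path, key: str) -> None:
    print(f"would upload {path} -> {key}")  # stand-in for the real transfer

def delta_sync(source: Path) -> None:
    seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    for path in source.rglob("*"):
        if not path.is_file():
            continue
        key = str(path.relative_to(source))
        digest = sha256(path)
        if seen.get(key) != digest:  # new or changed since the last pass
            upload(path, key)
            seen[key] = digest
    MANIFEST.write_text(json.dumps(seen, indent=2))

delta_sync(Path("/srv/fileshare"))  # placeholder source path
```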
Piece of string internet connection
Don’t even start a cloud project until you’re happy with your internet speeds. And don’t ignore your lesser upload speed either. That’s the one doing all the hard work to get your data to the cloud in the first place, and on an ongoing basis if you are syncing all the things, all the time. Another tip: don’t sync all the things everywhere all the time. If you’re going to use the cloud, use the cloud, not a local version of it. Contrary to popular belief, working locally does not reduce the impact on your internet connection; it amplifies it, with all the devices syncing your changes.
Outlook item limits
Office 365 has inherited some Microsoft Exchange and Outlook quirks that you might hope are magically fixed by the cloud. Most noticeable is performance issues with a large number of items or folders in a mailbox. This includes shared mailboxes you might be opening in addition to your own mailfile.
DNS updates and TTL
When you are ready to flip your MX records to your new cloud email system, it’s going to take time for the updated entry to propagate worldwide as cached DNS records expire. Usually things will settle down after 24 hours, which is fine if your organisation doesn’t work weekends but challenging if you are a 24×7 operation.
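One practical pre-flight step is to check what the world currently sees and how long caches may hold it; lowering the TTL a few days before cutover shrinks the awkward window. A quick sketch using the dnspython library (version 2.x), with example.com standing in for your domain:

```python
# Pre-cutover check with dnspython 2.x (pip install dnspython): what MX
# records does the world resolve, and how long may caches hold them?
import dns.resolver

answer = dns.resolver.resolve("example.com", "MX")  # your domain here
print(f"TTL: {answer.rrset.ttl} seconds")           # worst-case cache time
for rec in sorted(answer, key=lambda r: r.preference):
    print(f"  pref {rec.preference:3}  {rec.exchange}")
```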
Missing Outlook Stuff
Lurking in the shadows of a Microsoft Outlook user profile are those little personal touches that are not migrated when a mailfile is sent to the cloud. These are the things you’ll get the helpdesk calls about. The suggested list of email addresses (Autocomplete), any text block templates (Quick Parts) and even email signatures all need to be present when accessing the user’s new email account.
One admin to rule them all
If I had a dollar for every time someone locked themselves out of their admin account and the password recovery steps didn’t work, I wouldn’t need to be writing this. Often your cloud provider can help, once you’ve run the gauntlet of their helpdesk.
Syncing ALL the accounts
Even if your local on-prem directory is squeaky clean (with no users who actually left in 2012), it will contain a number of service accounts. The worst thing you can do is sync all the directory objects to your cloud directory service, which then becomes a crowded mess.
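The fix is to scope the sync rather than mirror everything. A toy sketch of that filtering idea in Python follows; the OU names and the svc-/test- naming convention are illustrative assumptions, not anyone’s real tenant layout.

```python
# Sketch: decide per object whether it belongs in the cloud directory.
# OU names and the prefix convention are illustrative assumptions.
SYNC_OUS = ("OU=Staff,DC=corp,DC=example", "OU=Contractors,DC=corp,DC=example")
EXCLUDE_PREFIXES = ("svc-", "test-", "tmp-")

def should_sync(dn: str, account_name: str) -> bool:
    in_scope = any(dn.endswith(ou) for ou in SYNC_OUS)
    is_service = account_name.lower().startswith(EXCLUDE_PREFIXES)
    return in_scope and not is_service

# A service account parked in a synced OU is still kept out of the cloud:
print(should_sync("CN=svc-backup,OU=Staff,DC=corp,DC=example", "svc-backup"))  # False
print(should_sync("CN=Jane Doe,OU=Staff,DC=corp,DC=example", "jdoe"))          # True
```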
Compatibility with existing tech
Older apps don’t support the TLS encryption that Office 365 requires for sending email. This can affect both software and hardware, such as scanners or multifunction devices.
Ancient systems
You thought the migration went smoothly, but now someone’s office scanner won’t email scans or a line of business application won’t send its things via email. Chances are those ancient systems don’t support TLS encryption. Now things are going to get a little complicated. There are direct send and relay methods, but it might be easier to buy a new scanner.
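What a relay does on the device’s behalf is speak modern SMTP with STARTTLS to the cloud. The outbound leg looks roughly like this sketch using Python’s standard smtplib; the hostnames and credentials are placeholders.

```python
# Sketch of the outbound leg a mail relay performs for a TLS-incapable
# device: plain SMTP in from the scanner, STARTTLS out to Office 365.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "scanner@example.com"
msg["To"] = "inbox@example.com"
msg["Subject"] = "Scanned document"
msg.set_content("See attachment.")  # attachment handling omitted for brevity

with smtplib.SMTP("smtp.office365.com", 587) as smtp:
    smtp.starttls()                                  # the step the old device can't do
    smtp.login("relay@example.com", "app-password")  # placeholder credentials
    smtp.send_message(msg)
```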
Metadata preservation
This one’s for the SharePoint fans. True data nerds love the value in metadata – all the information about a document’s creation, modification history, versions and so on. A simple file copy to the cloud is not guaranteed to preserve that additional data or import it into the right places in your cloud system.
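The local-file analogue makes the point: even a plain copy quietly drops metadata unless the tool is told to keep it. A tiny self-contained demonstration with Python’s shutil:

```python
# shutil.copy() silently resets timestamps; copy2() at least preserves them.
# Document metadata in the cloud needs the same deliberate care.
import os
import pathlib
import shutil

src = pathlib.Path("report.docx")
src.write_text("placeholder contents")  # make the demo self-contained
os.utime(src, (0, 0))                   # backdate mtime to the epoch

shutil.copy(src, "copy_plain.docx")     # mtime becomes "now"
shutil.copy2(src, "copy_meta.docx")     # mtime carried over from source

for name in ("copy_plain.docx", "copy_meta.docx"):
    print(name, os.path.getmtime(name))
```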
Long file names
Once upon a time we had an 8.3 character short file name and we lived with it. Granted, we created far fewer files back then. With the arrival of long file name support we were allowed a glorious 260 characters in a full file path (the Win32 MAX_PATH limit), and we use it as much as we can today. Why? Because search sucks and a structure with detailed file names is our only hope of ever finding things again on-prem. Long file names (including long-named and deeply nested folders) will cause you grief with most cloud data migrations.
If you don’t run into migration issues with this, just wait until you start syncing. We’ve seen it both with OneDrive and Google Drive and on Macs too. Re-educate your users and come up with a new, shorter naming standard. And watch out for Microsoft lifting the 260-character limitation in Windows 10 version 1607. Fortunately, it’s opt-in.
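Before migrating or syncing, it is worth inventorying the paths that will break. A minimal pre-flight scan for the classic 260-character limit might look like this sketch; the UNC share path is a placeholder.

```python
# Pre-flight scan: list files whose full path meets or exceeds MAX_PATH.
import os

LIMIT = 260  # classic Windows MAX_PATH; adjust for your target's ceiling

def long_paths(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            if len(full) >= LIMIT:
                yield len(full), full

for length, path in sorted(long_paths(r"\\fileserver\share"), reverse=True):
    print(f"{length:4}  {path}")
```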
Tomi Engdahl says:
Cloud may be the future, but it ain’t all sunshine and rainbows
Learning lessons the hard way so you don’t have to
https://www.theregister.co.uk/2017/06/21/cloud_hidden_lessons/
Yes, cloud might be the future but what truths lie hidden beneath this rock of certainty? You’ve heard the hype, pros and cons, but there’s plenty the average cloud user may not have considered in the clamour to get up there. Our company recently heeded the cloud’s call, and this is what we discovered.
Where cloud excels is auto-scaling infrastructure, but the fly in the ointment is that most small-to-medium environments are still built as services grouped around a set of virtual machines rather than being focused on the services they provide.
Re-engineering such a platform to fit the cloud paradigm isn’t something a lot of vendors discuss. Rather, it’s something frequently kicked down the road using phrases like “tri-modal”.
For those admins and engineers who want to try re-engineering, most applications can’t be moved into the cloud without a significant amount of work. The real problem is working out the dependencies and how they would map into any cloud environment.
Tomi Engdahl says:
What does an enterprise cloud look like?
What’s here already, what’s missing
https://www.theregister.co.uk/2017/07/03/enterprise_cloud_characteristic_features/
Sysadmin blog In late 2014 I wrote about Software Defined Infrastructure (SDI). I revisited this early last year. This year I expect the first mainstream SDI blocks to emerge, likely under the moniker “Enterprise Cloud”. So what does the enterprise cloud of 2017 look like?
A number of players are entering, or have already entered, the turnkey cloud market. “Push button, receive bacon” on-premises clouds are a real thing that everyday companies can buy this year. No largesse required.
At the core of most of 2017’s enterprise clouds is Hyperconverged Infrastructure (HCI). The ability to reliably manage virtualization and storage while integrating various services, all on commodity hardware, has been a huge enabler. This has driven costs down to a reasonable level and also simplified support.
Commodity x86 supply logistics and support apparatuses are well known. There is a huge channel of integrators and service providers hungry for a reason to still exist in the face of an increasingly easy-to-use public cloud. Vendors up and down the supply chain want to make the enterprise cloud happen, and that lack of friction means that those with good software are getting chances that, five years ago, they wouldn’t have had.
The public cloud itself is a huge boost to the enterprise cloud. It raised expectations of what on-premises IT should be delivering, driving demand for self-service solutions. The public cloud has also helped drive a stake through the heart of the antiquated notion of dedicated nodes or clusters for single workload types.
You simply provision what you need, and go. If it meets your needs, you’re good. You don’t freak out about what’s underneath. Mixed workloads are the new normal – something HCI vendors have been banging on about for years.
Ultimately, the enterprise cloud is about choice. Cloudy setups let you grow as you need instead of massively overprovisioning. The magic is in the management software, not the hardware.
What’s included
Today, you can buy SDI blocks with a number of key features integrated into your turnkey solution. Compute resources (CPU, RAM, GPU, etc.), distributed shared storage, and storage services (such as compression, deduplication, thin provisioning and so forth) are all part of the basic package.
The better enterprise clouds incorporate or obviate the need for WAN optimisation technology, offer fully integrated hybrid cloud computing, a bare metal hypervisor or microvisor, and have some form of workload migration/maximization software. (Sadly, none of the load-maximization bits are licensing aware at the moment, but that was probably a pipe dream on my part.)
Most enterprise clouds offer orchestration to spin up groups of applications as services, incorporate an app store or marketplace and integrate with hybrid identity services allowing for Role Based Access Controls (RBAC) that work on a combination of on-premises, service provider and public clouds.
Software Defined Networking (SDN) has found its way into today’s enterprise clouds as well, though the argument over just which approach to SDN will win out is still very much in the air.
What is only partially available
Perhaps the most distressing partial feature of today’s enterprise clouds is my dream of fully automated and integrated backups. I wanted application aware, auto-configuring, auto-testing backups. Ones that would be able to back up to whatever on-premises storage you happened to have, to secondary sites, to service providers or the public cloud.
Sadly, the closest we get in most instances is the SDI vendor automating snapshots and offering you a way to buy more of their gear to send your snapshots to. As an extension of this, fully automated and integrated disaster recovery is still narrow and limited in today’s enterprise clouds; it only works if you do it exactly as the vendor envisioned, which conveniently means buying the maximum possible amount of their gear. Otherwise, you’re stuck hunting third-party software as we’ve done for the past umpteen years.
There’s a lot of work still to be done here.
What’s not included
Notably lacking from any of today’s enterprise clouds is integrated chaos creation. If you want a Chaos Monkey, you have to integrate your own. That saddens me mostly because it means that this level of automated testing isn’t yet accepted in the mainstream of IT, which means we collectively aren’t as good at our jobs as we like to think.
Enterprise clouds also lack integrated next generation security. Specifically, nobody has added automated Incident Response, relying at best on SDN packages incorporating primitive NFV.
Nearly there
Enterprise clouds are a lot closer to my SDI block dream today than they were six months ago. A year from now I expect we’ll have not one, but at least four different vendors offering these things with 90 per cent of the asks in place. It’s been a long haul, but the end is in sight.
Tomi Engdahl says:
SDI wars: WTF is software defined infrastructure?
This time we play for ALL the marbles
http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure
Tomi Engdahl says:
Whitepapers
Critical security and compliance considerations for hybrid cloud deployments
A report from Custom Research commissioned by Hewlett Packard Enterprise
https://whitepapers.theregister.co.uk/paper/view/4943/critical-security-and-compliance-considerations-for-hybrid-cloud-deployments
The evolution of cloud infrastructures toward hybrid cloud models is inexorable, driven both by the requirement of greater IT agility and financial pressures. But a major study by 451 Research reveals that organisations are struggling with the twin challenges of security and compliance in the hybrid cloud space. Organisations want to be able to replicate existing security, governance and compliance audit practices in hybrid cloud environments, where at least some of the cloud infrastructure belongs to third parties. Organisations are struggling with practical considerations in this regard, such as ensuring that workloads are moved securely from one environment to another, without having the data maliciously or inadvertently exposed.
Tomi Engdahl says:
AWS Quickstart for Kubernetes
http://www.linuxjournal.com/content/aws-quickstart-kubernetes?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29
Kubernetes is an open-source cluster manager that makes it easy to run Docker and other containers in production environments of all types (on-premises or in the public cloud). What is now an open community project came from development and operations patterns pioneered at Google to manage complex systems at internet scale.
AWS Quick Starts are a simple and convenient way to deploy popular open-source software solutions on Amazon’s infrastructure. While the current Quick Start is appropriate for development workflows and small team use, we are committed to continuing our work with the Amazon solutions architects to ensure that it captures operations and architectural best practices. It should be easy to get started now, and achieve long term operational sustainability as the Quick Start grows.
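Once the Quick Start has a cluster running, one quick way to confirm it works is the official Kubernetes Python client (pip install kubernetes), assuming your kubeconfig already points at the new cluster:

```python
# Smoke test against the freshly deployed cluster using the official
# Python client; kubeconfig is assumed to be in place.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config written by the deploy
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```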
Tomi Engdahl says:
Connectivity’s value is almost erased by the costs it can impose
The internet made information flow on the cheap, but making it anti-fragile will cost plenty
https://www.theregister.co.uk/2017/06/13/mark_pesce_column/
The great advantage of a browser-based programming environment is that nothing gets lost – it’s all saved to the cloud as you type it in. But what happens when the link dies, or the cloud chokes?
Thankfully, my code reappeared within a few minutes. But my faith was shaken, and I’ve since taken to saving my Glitch programs into a text file on my local machine – once burned, twice shy.
Which got me thinking about the increasingly fragile nature of our connected culture.
Twenty-five years ago almost nothing was connected to the Internet. Today, many things are – at least some of the time – and it’s only when connected that they realise their full capacity. A smartphone shorn of network access cannot be an object of fascination. The network activates, piping intelligence into our toys, making them irresistible.
That intelligence comes with some costs; the most obvious is our increasing dependency on that connection. People get lost on hikes as they fall out of mobile range and lose the mapping apps that keep them oriented. We’ve come to expect intelligence with us all the time. Losing connectivity is coming to feel like losing a bit of our mind.
Another cost – and the bigger worry – is that this connected intelligence isn’t entirely benevolent. Every connection is a way into a device that may have something of value – credit card numbers, or passwords, or Bitcoins. The same intelligence that activates can also try to harvest that information, or even poison those devices, turning them against their owners.
We’ve reached a very delicate point, where the value of connected intelligence is almost entirely countered by the costs it can impose. If things become just a little more hostile out there (with four billion people using the Internet, that’s pretty much assured) the scales could tip in favour of disconnection, isolation, and a descent into a kind of stupidity we haven’t seen in many years.
There are no easy answers for any of this. It’s unreasonable to expect that businesses will turn the clock back on the productivity gains made from connectivity, but it’s equally unreasonable to assume any of those businesses are prepared for an onslaught of connected hostility.
In this sort of high-pressure environment, where the wrong decision quickly becomes a fatal one, we have no choice but to evolve our responses, rapidly. It feels as though we got the benefits of connected intelligence for free; it’s only just now that we can see the bill being presented – and it’s a whopper.
Tomi Engdahl says:
Aptare: 8 exabyte-juggler pimps its ‘data centre MRI’ product
Eight? Really?
https://www.theregister.co.uk/2017/07/06/aptare/
Data centre infrastructure management console seller Aptare claims to have 8 or 9 exabytes of storage under management – a larger total than any storage company on the planet.
It also claims to make the only DC infra management console you’ll likely need, which is quite a claim.
For example, CitiGroup is a customer and has 60 data centres world-wide, each with an Aptare collector pumping data to a central site.
We learnt that roughly one-third of its business comes through HDS, with whom it has an OEM deal; HDS’ Hitachi Storage Reporter product uses Aptare’s StorageConsole 6.5 platform.
Aptare’s core perception is that you can’t manage a data centre infrastructure unless you have an end-to-end view of how its components interact and can present that. It does this with the Aptare console, an agentless data collector from storage repositories, file systems, volume managers, operating systems, SAN switches, backup products, public clouds, and virtual environments.
Tomi Engdahl says:
Tom Krazit / GeekWire:
Ex-Twitter engineers raise $10.5M Series A for microservices management startup Buoyant — Buoyant, a 13-person startup led by former Twitter engineers and now backed by a former member of Twitter’s board of directors, has raised a $10.5 million Series A round to apply lessons learned …
Former Twitter engineers land $10.5M for startup Buoyant, leveraging lessons from the ‘fail whale’
https://www.geekwire.com/2017/two-engineers-helped-kill-twitters-fail-whale-land-10-5m-buoyant-thinks-missing-link-microservices/
Buoyant, a 13-person startup led by former Twitter engineers and now backed by a former member of Twitter’s board of directors, has raised a $10.5 million Series A round to apply lessons learned from revamping Twitter’s infrastructure to simplify the emerging world of microservices.
Microservices are an evolution of software development strategies that has gained converts over the last several years. Developers used to build “monolithic” applications with one huge code base and three main components: the user-facing experience, a server-side application server that does all the heavy lifting, and a database. This is a fairly simple approach, but there are a few big problems with monolithic applications: they scale poorly and are difficult to maintain over time because every time you change one thing, you have to update everything.
So microservices evolved inside of webscale companies like Google, Facebook, and Twitter as an alternative. When you break down a monolithic application into many smaller parts called services, which are wrapped up in containers like Docker, you only have to throw extra resources at the services that need help and you can make changes to part of the application without having to monkey with the entire code base.
The price for this flexibility, however, is complexity.
“That’s the biggest lesson we learned at Twitter,” said Morgan, the startup’s CEO. “It’s not enough to deploy stuff and package it up and run it in an orchestrator (like Kubernetes) … you’ve introduced something new, which is this significant amount of service-to-service communication” that needs to be tracked and understood to make sure the app works as designed, he said.
Buoyant’s solution is what the company calls a “service mesh,” or a networked way for developers to monitor and control the traffic flowing between services as a program executes.
Linkerd is the manifestation of its approach: “we’re only going to be successful as a company if we get Linkerd adoption,” Morgan said.
This approach might sound familiar. In May, Google, IBM, and Lyft released Istio, a different open-source project aimed at accomplishing many of these same goals by improving the visibility and control of service-to-service communications.
In a blog post scheduled to go live Tuesday, Buoyant plans to announce that it supports Istio with the latest release of Linkerd, and while the projects appear to be somewhat competitive, the company bent over backwards to emphasize that it sees Istio as a complementary part of a microservices architecture.
https://linkerd.io/
Resilient service mesh for cloud native apps
linker∙d is a transparent proxy that adds service discovery, routing, failure handling, and visibility to modern software applications
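To see what that buys you, consider the boilerplate a mesh removes. Without one, every service call tends to accrete code like the following sketch (timeouts, retries with backoff, latency bookkeeping); Linkerd moves these concerns into a transparent proxy so the application just makes a plain request. The service URL is a placeholder.

```python
# The per-call plumbing a service mesh absorbs: timeouts, retries with
# exponential backoff, latency bookkeeping.
import time
import requests

def call_service(url, retries=3, timeout=2.0):
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            print(f"ok in {time.monotonic() - start:.3f}s (attempt {attempt})")
            return resp
        except requests.RequestException as exc:
            print(f"attempt {attempt} failed: {exc}")
            time.sleep(0.1 * 2 ** attempt)  # exponential backoff
    raise RuntimeError(f"{url} unreachable after {retries} attempts")

# call_service("http://users.internal:8080/profile/42")  # placeholder URL
```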
Tomi Engdahl says:
John Mannes / TechCrunch:
Jefferies report says IBM’s Watson investment will struggle to return value to shareholders, points to IBM’s failed partnership with MD Anderson to show why — IBM’s Watson unit is receiving heat today in the form of a scathing equity research report from Jefferies’ James Kisner.
Jefferies gives IBM Watson a Wall Street reality check
https://techcrunch.com/2017/07/13/jefferies-gives-ibm-watson-a-wall-street-reality-check/
IBM’s Watson unit is receiving heat today in the form of a scathing equity research report from Jefferies’ James Kisner. The group believes that IBM’s investment into Watson will struggle to return value to shareholders. In recent years, IBM has increasingly leaned on Watson as one of its core growth units — a unit that sits as a proxy for projecting IBM’s future value.
In the early days, IBM’s competitive advantage was its longstanding relationships with Fortune 500 companies.
The failed MD Anderson partnership is a case study for IBM’s broader problems scaling Watson.
The MD Anderson nightmare doesn’t stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM.
https://javatar.bluematrix.com/pdf/fO5xcWjc
Tomi Engdahl says:
Mark Bergen / Bloomberg:
Sources: Google’s offering some science labs and AI experts early cloud access to its quantum computers, following IBM’s similar effort to spur app development — Company offers early access to its machines over the internet — IBM began quantum computing cloud service earlier this year
Google’s Quantum Computing Push Opens New Front in Cloud Battle
https://www.bloomberg.com/news/articles/2017-07-17/google-s-quantum-computing-push-opens-new-front-in-cloud-battle
Company offers early access to its machines over the internet
IBM began quantum computing cloud service earlier this year
For years, Google has poured time and money into one of the most ambitious dreams of modern technology: building a working quantum computer. Now the company is thinking of ways to turn the project into a business.
Alphabet Inc.’s Google has offered science labs and artificial intelligence researchers early access to its quantum machines over the internet in recent months. The goal is to spur development of tools and applications for the technology, and ultimately turn it into a faster, more powerful cloud-computing service, according to people pitched on the plan.
“They’re pretty open that they’re building quantum hardware and they would, at some point in the future, make it a cloud service,” said Peter McMahon, a quantum computing researcher at Stanford University.
Providing early and free access to specialized hardware to ignite interest fits with Google’s long-term strategy to expand its cloud business. In May, the company introduced a chip, called Cloud TPU, that it will rent out to cloud customers as a paid service. In addition, a select number of academic researchers are getting access to the chips at no cost.
While traditional computers process bits of information as 1s or 0s, quantum machines rely on “qubits” that can be a 1, a 0, or a state somewhere in between at any moment. It’s still unclear whether this works better than existing supercomputers. And the technology doesn’t support commercial activity yet.
Still, Google and a growing number of other companies think it will transform computing by processing some important tasks millions of times faster.
In 2014, Google unveiled an effort to develop its own quantum computers. Earlier this year, it said the system would prove its “supremacy” — a theoretical test to perform on par, or better than, existing supercomputers — by the end of 2017.
Quantum computers are bulky beasts that require special care, such as deep refrigeration, so they’re more likely to be rented over the internet than bought and put in companies’ own data centers.
Earlier this year, IBM’s cloud business began offering access to quantum computers. In May, it added a 17 qubit prototype quantum processor to the still-experimental service. Google has said it is producing a machine with 49 qubits, although it’s unclear whether this is the computer being offered over the internet to outside users.
The hope in the field is that functioning quantum computers, if they arrive, will have a variety of uses such as improving solar panels, drug discovery or even fertilizer development. Right now, the only algorithms that run on them are good for chemistry simulations
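The “somewhere in between” state can be made concrete with a few lines of numpy: a qubit is a unit vector of two complex amplitudes, and measurement probabilities are the squared magnitudes (the Born rule). This sketch is not tied to any vendor’s quantum API.

```python
# A qubit as a unit vector of two complex amplitudes; measurement
# probabilities are the squared magnitudes (the Born rule).
import numpy as np

zero = np.array([1, 0], dtype=complex)  # |0>, the classical 0
one = np.array([0, 1], dtype=complex)   # |1>, the classical 1
plus = (zero + one) / np.sqrt(2)        # equal superposition of both

probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]: measuring yields 0 or 1 with equal probability
```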
Tomi Engdahl says:
Frederic Lardinois / TechCrunch:
Google announces Transfer Appliance, a new hardware appliance and service for moving up to 480TB from corporate data centers to Google Cloud Platform via FedEx
Google’s Transfer Appliance helps businesses ship their data to the Google Cloud by FedEx
https://techcrunch.com/2017/07/18/googles-transfer-appliance-helps-businesses-ship-their-data-to-the-google-cloud-by-fedex/
Google today announced the launch of its Transfer Appliance, a new hardware appliance and service for moving large amounts of data from corporate data centers to its cloud via FedEx. While Google already offered its users the ability to ship physical media like storage arrays, hard disks, tapes and USB flash drives to its data centers through third-party partners, the Transfer Appliance is quite a bit more sophisticated. The company also built its own hardware for this service, which can be used to ship anything from 100TB to 480TB to the cloud (and more if the data is easily compressible).
If this sounds familiar, that’s probably because you’ve heard about AWS Snowball, which is essentially Amazon’s version of this. Snowball comes in 50TB and 80TB versions. If you need more space (up to 100PB), Amazon will pull up to your data center with a truck — the AWS Snowmobile.
The concept here is pretty straightforward: Google will ship the appliance to your data center, you install it in one of your racks, connect it to the local network, fill it with your data using Google’s tools and then send it back to Google.
Tomi Engdahl says:
Trouble in the ‘cloud’ for Amazon? Deutsche Bank cuts price forecast in rare bearish analyst move
http://www.cnbc.com/2017/07/18/deutsche-bank-cuts-amazon-price-forecast-in-rare-bearish-analyst-move.html?recirc=taboolainternal
Deutsche Bank’s recent conversations with Amazon Web Services partners revealed cloud computing migrations are “a little bit slower than expected.”
The firm lowered its price target for the company to $1,135 from $1,150, representing 12 percent upside from Monday’s close.
Trouble may be brewing for Amazon’s cloud computing business.
Deutsche Bank told investors the market may get surprised by weaker than expected Amazon Web Services (AWS) growth in the second quarter.
Analysts are almost universally bullish on Amazon with 86 percent of Wall Street firms rating the internet company at buy or overweight, according to FactSet. The internet firm’s shares are up 35 percent this year through Monday compared to the S&P 500’s 10 percent return.
“We trim our AWS estimates and see near-term risk to investor sentiment around AMZN given slightly more cautious checks around cloud migration,” analyst Lloyd Walmsley wrote in a note to clients Monday. “We have not heard much in the way of investor concern around AWS revenue deceleration this quarter (unlike last quarter).”
Tomi Engdahl says:
Microsoft:
Microsoft Q4: revenue of $23.3B, up 13% YoY, net income of $6.5B, up 109% YoY; Intelligent Cloud revenue of $7.4B, up 11% YoY; productivity revenue was $8.4B — REDMOND, Wash. — July 20, 2017 — Microsoft Corp. today announced the following results for the quarter ended June 30, 2017
Earnings Release FY17 Q4
Microsoft Cloud Strength Highlights Fourth Quarter Results
Commercial cloud annualized revenue run rate exceeds $18.9 billion
https://www.microsoft.com/en-us/Investor/earnings/FY-2017-Q4/press-release-webcast
Tomi Engdahl says:
Dina Bass / Bloomberg:
Microsoft beats with strong cloud performance; Azure sales almost double, and Office 365 app revenue rises 43%
Microsoft Regains Turnaround Momentum on Strong Cloud Growth
https://www.bloomberg.com/news/articles/2017-07-20/microsoft-sales-profit-top-estimates-as-cloud-growth-marches-on
Microsoft Corp.’s turnaround plan got back on track in the latest quarter, buoyed by rising sales of internet-based software and services.
Profit in the fiscal fourth quarter exceeded analysts’ estimates and adjusted sales rose 9 percent as demand almost doubled for Azure cloud services, which let companies store and run their applications in Microsoft data centers. A tax-rate benefit added 23 cents a share to earnings, Microsoft said.
Shareholders are watching closely to gauge whether Satya Nadella is making progress toward reshaping 42-year-old Microsoft as a cloud-computing powerhouse with new services related to Azure and the Office 365 online productivity apps — a shift that led to a massive sales-force restructuring earlier this month.
Cloud Revenue
Commercial cloud revenue was $18.9 billion on an annualized basis, moving closer to the $20 billion target the company set for the fiscal year that started July 1. Even as cloud sales rise, the company has been able to meet a pledge to trim costs, with commercial cloud gross margin widening to 52 percent.
“In commercial cloud gross margin, we committed a year ago to material improvement and this is 10 points higher than where we were last year,” Chief Financial Officer Amy Hood said in an interview.
Azure sales rose 97 percent in the period, while commercial Office 365 — cloud-based versions of Word, Excel and other productivity software — increased 43 percent. Microsoft’s Azure cloud-computing service still lags behind market leader Amazon.com Inc., but more customers are starting to go with Microsoft, according to research from Credit Suisse Group AG. Both corporate and consumer users are switching from older Office programs to the cloud subscriptions, providing more stable and recurring revenue.
“The underlying trends — the shift to the cloud and also what it means for the legacy, on-premise stuff — are likely to be in motion for a very long period of time,”
Tomi Engdahl says:
Microsoft Regains Turnaround Momentum on Strong Cloud Growth
https://www.bloomberg.com/news/articles/2017-07-20/microsoft-sales-profit-top-estimates-as-cloud-growth-marches-on
“They are a company that seems to be ahead of some of these old-line technology companies that are making transitions to the cloud,” said Dan Morgan, a senior portfolio manager at Synovus Trust, which owns Microsoft shares. “The story is still intact but they still have a ways to go.”
Tomi Engdahl says:
Ingrid Lunden / TechCrunch:
Memo: GoDaddy to shut down its AWS-style Cloud Servers service on December 31, after launching it in 2016
GoDaddy sells PlusServer for $456M, kills off its AWS-style cloud services
https://techcrunch.com/2017/07/20/godaddy-sells-plusserver-for-456m-kills-off-its-aws-style-cloud-services/
As web hosting and domain registration business GoDaddy prepares to report its quarterly results in a couple of weeks, the company is making some moves to reorganise its business.
Earlier this week it confirmed that it would sell its European PlusServer business to London-based private equity firm BC Partners for $456 million (€397 million).
And separately, we have learned that GoDaddy is shutting down Cloud Servers, a business it launched only last year as an AWS-style service for building, testing and scaling cloud solutions on GoDaddy’s infrastructure.
The Cloud Servers memo notes that the service will stop being supported on December 31, 2017. Apps and development environments provided by Bitnami — a YC startup that partnered with GoDaddy to provide a library of some 140 apps that they could host with GoDaddy — will stop being supported on November 15.
“In the coming months, we will be informing you of some exciting opportunities to move your services to other GoDaddy products,” the company notes.
Tomi Engdahl says:
Ina Fried / Axios:
Microsoft CFO Amy Hood: for the first time, Microsoft got more revenue from Office 365 subscriptions than from traditional Office software licensing
Microsoft shares hit record high after upbeat earnings report
https://www.axios.com/microsoft-shares-hit-record-high-after-upbeat-earnings-report-2462704685.html
Shares of Microsoft hit record territory in after-hours trading on Thursday, topping $75 a share, after the software giant’s better-than-expected financial results.
As has been the case for the last several quarters, strength in Microsoft’s cloud business, including Office 365 and Windows Azure, was the key to the company’s growth. Of note, Microsoft CFO Amy Hood told analysts that, for the first time, Microsoft got more revenue from Office 365 subscriptions than from traditional Office software licensing.
Why it matters: Microsoft has shown an ability to grow its business even as the PC market has stalled, reflecting moves the company made in the cloud both since Satya Nadella took over as CEO as well as some that were in place before he took over the top spot.
Tomi Engdahl says:
How to avoid vendor lock-in when migrating to the cloud
http://www.cloudpro.co.uk/cloud-essentials/public-cloud/6914/how-to-avoid-vendor-lock-in-when-migrating-to-the-cloud
Migrating to the cloud is a complicated business
Companies can often trip over common mistakes because they’re ill-prepared to deal with the intricacies of such a transition. One of the most common issues is vendor lock-in: when a company or individual finds it impossible to transfer their business from one cloud provider to another without substantial switching costs and effort.
This happens for a number of reasons, including the fact that vendors value customer loyalty above all else, much like you do with your own clients. Once you’ve signed up with one cloud storage vendor, they’ll do everything in their power to keep you.
Trying to avoid vendor lock-in by never taking full advantage of a platform comes with its own set of problems, so you need to know how to sidestep lock-in without limiting yourself before going in.
Consider a hybrid strategy
Putting all of your eggs in one basket is the problematic part of migrating to the cloud, so one of the smartest things businesses can do is implement a hybrid strategy. This would involve having some of your workloads remain on-premises and some – such as data analysis – moved to the cloud.
It’s likely you’ll be told again and again that you need to dive in at the deep end right from the start, but this simply isn’t true. Companies are often better off dealing with a mixture of public cloud, private cloud and local infrastructure, to ensure they don’t find themselves tied to one way of doing things even after it no longer suits the organisation.
Diversify your investment
At the beginning of the migration process the main concern for companies tends to be the cost of the move in both money and time, but making sure you deal with more than one provider can be the key to avoiding problems later on.
Have an exit strategy BEFORE going in
No matter which solution you choose, there’s always the chance that the company will need to either move back or switch to another vendor or vendors. It is therefore absolutely essential for you to have a strategy in place before spending the time and effort migrating business workloads, ensuring that things can go smoothly at any point in the journey.
Test everything multiple times before making any hard and fast decisions, and make sure you are able to repeat these tests easily in the future without risking business downtime. Set up a resiliency solution to alert you of outages, and factor exit costs into your budget.
Most importantly, make sure the business is as flexible as possible should priorities or requirements change – adaptability is key, and can’t be faked at the last minute.
Tomi Engdahl says:
French hosting giant OVH opens huge data center in Limburg, Germany
http://www.cablinginstall.com/articles/pt/2017/07/french-hosting-giant-ovh-opens-huge-data-center-in-limburg-germany.html?cmpid=enl_cim_cimdatacenternewsletter_2017-07-25
Datacenter Dynamics is reporting that French hosting giant OVH has opened its first data center in Germany, offering space for nearly 45,000 servers. The facility in Limburg an der Lahn, codenamed LIM1, is set inside a former industrial building with an on-site electrical substation.
Tomi Engdahl says:
Chasing cloud, IBM opens 4 new data centers globally
http://www.cablinginstall.com/articles/pt/2017/07/chasing-cloud-ibm-opens-4-new-data-centers-globally.html?cmpid=enl_cim_cimdatacenternewsletter_2017-07-25
IBM is set to open four new data centers in California, London, and Sydney, as it tries to stay competitive in the cloud business amid steadily declining overall revenue. The company plans to announce the new data centers – two in London, one in San Jose, and one in Australia
IBM opens four new data centers around the world as it chases cloud leaders
https://www.geekwire.com/2017/ibm-opens-four-new-data-centers-around-world-chases-cloud-leaders/
The company plans to announce the new data centers — two in London, one in San Jose, and one in Australia — one day after releasing quarterly earnings results that upped its streak of consecutive quarters with year-over-year declining revenue to 21, which is a lot. IBM now has 59 data centers supporting its cloud computing efforts, and the new data centers join several other IBM facilities in their respective areas, increasing reliability for customers in those regions.
IBM is fighting from behind when it comes to public cloud computing. Various market share surveys put it in fourth place, more or less, well behind Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Gartner’s cloud infrastructure Magic Quadrant, which is really only useful for judging cloud also-rans, ranked IBM behind Alibaba and Virtustream in terms of ability to execute.
But as both Microsoft’s Scott Guthrie and Google’s Greg DeMichillie pointed out at our Cloud Tech Summit last month, there are only a handful of companies that can afford to make the investments in hardware needed to compete in the market for public cloud services. IBM is definitely one of those companies, with revenue of $19 billion and net income of $2.3 billion in the past quarter, despite heading in the wrong direction.
Tomi Engdahl says:
Google switches on Sydney cloud region, with a subset of services
App Engine and Datastore coming real soon now, no word on when other services will land
https://www.theregister.co.uk/2017/06/20/google_switches_on_sydney_cloud_region/
Tomi Engdahl says:
Tom Krazit / GeekWire:
Microsoft debuts Azure Container Instances, which make it easier to deploy and bill containers on Azure, and joins foundation that oversees Kubernetes
Microsoft unveils Azure Container Instances, joins Cloud Native group, isolating AWS on Kubernetes
https://www.geekwire.com/2017/microsoft-launches-new-container-service-joins-cloud-native-group-isolating-aws-kubernetes/
Microsoft’s cloud business is making two notable moves involving containers Wednesday — unveiling a new service that aims to make it much easier to get up and running with containers, and joining a key industry foundation that oversees the open-source Kubernetes container orchestration project.
The moves, embracing an orchestration technology that originated inside Google, bring Microsoft’s container strategy into sharper focus and present some interesting decisions for public cloud juggernaut Amazon Web Services.
Microsoft’s new Azure Container Instances service, available as a public preview for Linux containers, allows developers to start containers on Azure and have them billed by the second. Containers are already attractive to developers because they spin up much faster than virtual machines, but Microsoft said ACI is “the fastest and easiest way to run a container in the cloud,” in a post Wednesday morning.
Tomi Engdahl says:
IBM killing off its first go at cloud object storage – 20 months after launch
Move your data by August 24th or lose it, then ask if this would happen on-prem
https://www.theregister.co.uk/2017/07/26/bluemix_object_storage_v1_deprecated/
We all know cloud is evolving fast, but IBM’s just given us the downside of that speed: a service it switched on in December 2015 will be switched off in August 2017.
That service is Bluemix’s Object Store v1 service, launched in December 2015 and updated not once but twice since its debut.
IBM’s now on version 3 of the service and that’s where it wants users to migrate. “We will now be deleting all existing instances after 30 days i.e. on August 24, 2017,” IBM says. “We recommend users to unprovision the Object Storage v1 service and switching to v3, before August 24, 2017,” the company advises.
Tomi Engdahl says:
Amazon.com:
Amazon reports Q2 revenue of $38B, up 25% YoY, as net income drops from $857M to $197M YoY; AWS revenue rose 42% YoY to $4.1B; headcount reaches 382K, up 31K
Amazon.com Announces Second Quarter Sales up 25% to $38.0 Billion
http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=2289567
Tomi Engdahl says:
Saas 1000: Top SaaS Companies
http://saas1000.com/
The SaaS 1000 is a list of the top SaaS companies, ranked by employee growth.
The list includes both the largest SaaS companies and smaller startups. For now, a company must have at least 40 employees to be ranked.
Tomi Engdahl says:
Amazon reportedly acquired GameSparks for $10M to build out its gaming muscle
https://techcrunch.com/2017/07/28/amazon-reportedly-acquired-gamesparks-for-10m-to-build-out-its-gaming-muscle/
Amazon and its enterprise cloud division AWS have been making a number of moves to expand the company as a platform to build and host games. One of the latest developments has been an acquisition: Amazon in the last quarter reportedly quietly acquired a company called GameSparks, a “backend as a service” for game developers to build various features like leaderboards into games, and then manage them, all in the cloud. According to documents from deal analytics firm PitchBook, the acquisition price was $10 million.
Tomi Engdahl says:
Google triples enterprise cloud deals in latest quarter
http://www.cloudpro.co.uk/leadership/6936/google-triples-enterprise-cloud-deals-in-latest-quarter
CEO Sundar Pichai heralds growing momentum with big customers
Google’s cloud appears to be making inroads with enterprises, with the tech giant having signed three times as many $500,000-plus deals in its latest quarter as in the same period a year ago.
While Google doesn’t break out results for Google Cloud Platform, CEO Sundar Pichai did drop some hints of how the cloud is doing on a press call on the back of Alphabet’s second quarter 2017 financial results.
“GCP continues to experience impressive growth across products, sectors and geographies and increasingly with large enterprise customers in regulated sectors,” he said, as transcribed by Seeking Alpha.
Pichai also pointed to partnerships with Nutanix, to help on-premise organisations bridge into the cloud, and a deal with SAP to host the German firm’s business applications, while SAP will hawk Google’s productivity portfolio, G Suite, to its own customers.
Tomi Engdahl says:
IBM opens two London data centres despite slower cloud growth
http://www.cloudpro.co.uk/leadership/6926/ibm-opens-two-london-data-centres-despite-slower-cloud-growth
IBM cloud growth slows amid wider declines
IBM today opened two new data centres in London, expanding its cloud footprint despite its latest quarterly results showing growth in this division is slowing.
Two other facilities opened in San Francisco and Sydney, bringing its global data centre count to nearly 60 spanning 19 countries.
Big Blue is targeting its cloud at businesses looking to take advantage of its cognitive services as they approach the challenge of making sense of vast amounts of data. It now has five facilities in the UK.
The latest two are part of an investment IBM announced last year to triple the number of data centres it has in the UK, pitching its cloud as a reliable way to securely store data in the face of incoming data protection regulations.
Tomi Engdahl says:
Microsoft Further Pledges Linux Loyalty, Joins Cloud Native Computing Foundation
https://linux.slashdot.org/story/17/07/29/0327209/microsoft-further-pledges-linux-loyalty-joins-cloud-native-computing-foundation
Today, Microsoft further pledges its loyalty to Linux and open source by becoming a platinum member of the Cloud Native Computing Foundation. If you aren’t familiar, the CNCF is a part of the well-respected Linux Foundation (of which Microsoft is also a member). With the Windows-maker increasingly focusing its efforts on the cloud — and profiting from it — this seems like a match made in heaven. In fact, Dan Kohn, Executive Director of the foundation says, “We are honored to have Microsoft, widely recognized as one of the most important enterprise technology and cloud providers in the world, join CNCF as a platinum member.”
“CNCF is a part of the Linux Foundation, which helps govern for a wide range of cloud-oriented open source projects, such as Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, containerd, Helm, gRPC, and many others,”
Microsoft further pledges Linux loyalty by joining Cloud Native Computing Foundation
https://betanews.com/2017/07/26/microsoft-linux-cncf/
Linux is the future, and even closed-source champion Microsoft has gotten onboard. The Windows-maker is not only contributing to many open source projects, but developing software for the Linux desktop, with programs such as Skype. You can even install Linux distributions from the Windows Store nowadays. Hell, the company has even created a version of Microsoft Office that runs on Linux by way of Android! Yes, Google’s mobile operating system is Linux. Android is also what effectively killed the much maligned Windows Phone, so Microsoft clearly has no problem with joining forces with prior “enemies.”
Microsoft Azure architect John Gossman also says, “Open source is a way to scale software development beyond what any single organization can do. It allows vendors, customers, researchers and others to collaborate and share knowledge about problems and solutions, like no other form of development. And I strongly believe the power of open source derives from strong, diverse communities and that we have an obligation to support these communities by participating as code contributors and in the associated foundations and committees. With all that in mind, I look forward to us working with the other CNCF members (most of whom we already know very well) to help make these projects awesome for everyone.”
Tomi Engdahl says:
IBM CIO leaves for AWS – and Big Blue flings sueball to stop him
Filing reveals IBM knows its current cloud is weak, intends to match AWS on price next year
https://www.theregister.co.uk/2017/08/09/ibm_sues_former_cio_who_wants_to_work_for_aws/
IBM has flung a sueball at Jeff Smith, its former chief information officer, because he’s trying to go to work for Amazon Web Services.
Big Blue filed a complaint [PDF] in a US district court in New York last week that says Smith “threatens to violate his one-year non-competition agreement by going into direct competition with IBM as a senior executive of Amazon Web Services, one of IBM’s main competitors in cloud computing.”
The complaint also alleges Smith has already revealed some information to AWS CEO Andrew Jassy, violated directives not to retain presentations about IBM’s new cloud, and then wiped his company-issued phone and tablet before leaving the IT giant, “making it impossible for IBM to detect other communications with Jassy or determine if he transferred any other IBM information.”
The complaint says Smith is one of “only a dozen” executives involved in top-level decision-making about IBM’s next-generation cloud platform, has insider knowledge of IBM’s security posture and was involved at the very highest level of internal discussions on IBM’s transformation plan.
Tomi Engdahl says:
It’s 2017 and Hyper-V can be pwned by a guest app, Windows by a search query, Office by…
Update IE, Edge, Windows, SQL Server, Office and – of course – Flash
https://www.theregister.co.uk/2017/08/08/august_patch_tuesday/
Microsoft has released the August edition of its Patch Tuesday update to address security holes in multiple products. Folks are urged to install the fixes as soon as possible before they are exploited.
Among the flaws are remote code execution holes in Windows, Internet Explorer/Edge and Flash Player, plus a guest escape in Hyper-V. Of the 48 patches issued by Redmond, 25 are rated as critical security risks.
Tomi Engdahl says:
How data center trends are shifting staff workloads
http://www.cablinginstall.com/articles/pt/2017/07/how-data-center-trends-are-shifting-staff-workloads.html?cmpid=enl_cim_cim_data_center_newsletter_2017-08-08
Reporting for Data Center Knowledge, Gail Dutton notes how “data centers are becoming lean, efficient strategic assets as they adopt cloud computing, XaaS, self-provisioning models, colocation, and other still-emerging technologies. Achieving the promise of these technologies, however, requires changing work assignments and updating skill sets.”
Data Center Trends Shift Staff Workloads
http://www.datacenterknowledge.com/archives/2017/07/31/data-center-trends-shift-staff-workloads/
“These trends are redefining the data center work environment by reducing the number of physical devices that need human intervention,” says Colin Lacey, vice president of Data Center Transformation Services & Solutions at Unisys. “This elevates the required skill sets from ‘racking and stacking’ to administering tools and automation.” While some hands-on work will always be required, it’s much less in highly automated or outsourced data centers.
“Take cloud computing as an example,” he continues. “When you move to a cloud, you immediately remove some administrative details. Infrastructure is prepositioned, and automation, monitoring and reporting capabilities already are in place. That eliminates some of the physical aspects of operating a data center, but it also brings a new set of responsibilities for the client.”
For example, while moving to a cloud has the potential to improve disaster recovery, that feature isn’t automatic.
“We have a disaster recovery plan to guide recovery of our services, but it doesn’t extend to individual customers.”
Clients migrating to a cloud, therefore, must redesign their disaster recovery plans for that specific environment, either purchasing disaster recovery as an added service or designing a different strategy. The point is that data center managers can’t simply migrate to a cloud and think everything is done.
As Mametz says, “If you see migrating to a cloud as a 1:1 move, you’re not taking advantage of the cloud’s benefits.” Achieving those benefits is most likely when one person is in charge of implementing the new solution to ensure it works and that its full value is realized.
Changes in headcount depend on the compute solution and its role in the company. “One of the biggest misconceptions in moving to a cloud is that you’ll need fewer employees,” Mametz says. While staffers may no longer be needed to handle the physical equipment, they are needed to maintain the operating system. “They work remotely. It’s a different value proposition.”
Tomi Engdahl says:
AWS joins Cloud Native Computing Foundation to promote open source projects
https://venturebeat.com/2017/08/09/aws-joins-cloud-native-computing-foundation-to-promote-open-source-projects/
Amazon Web Services has joined the Cloud Native Computing Foundation as a platinum member, the two organizations announced today, in a major move to support open source projects that help developers build modern applications. Adrian Cockcroft, AWS’ vice president of cloud architecture, will join the CNCF governing board.
The move shows Amazon’s commitment to some of the key open source technologies that are used to help run applications on AWS and other cloud platforms. AWS was the last of the major cloud providers to join the CNCF as a platinum member — Microsoft, Google, and IBM were already on board. (The cloud provider was already a member of the Linux Foundation, the CNCF’s parent organization.)
This also may signal a potential product shift from AWS toward increasing support of Kubernetes, container orchestration software that originated with Google and has become the CNCF’s marquee project.
Amazon already runs its own container orchestrator in the form of the company’s EC2 Container Service, but a recent report from The Information said that the company is working on developing its own service based on Kubernetes.
Cockcroft called out the container orchestrator in his blog post announcing the move, saying that AWS is planning more code contributions to the project. A recent survey of CloudNativeCon attendees showed that 63 percent of them are running Kubernetes on AWS’ Elastic Compute Cloud (EC2).
The CNCF isn’t just hosting Kubernetes, however. It also guides other projects, including the containerd runtime for software containers and the Container Networking Interface (CNI).
Amazon Web Services Joins Cloud Native Computing Foundation as Platinum Member
https://www.cncf.io/announcement/2017/08/09/amazon-web-services-joins-cloud-native-computing-foundation-platinum-member/
Tomi Engdahl says:
How this Canadian university developed its own private cloud, shared it with others
https://siliconangle.com/blog/2017/08/09/canadian-university-developed-private-cloud-shared-others-veeamon/
The data explosion hit the University of British Columbia hard a few years ago, because provincial law required that personal information in the custody of a public institution had to be stored and accessed only in Canada. So the school’s information technology department developed its own private cloud to handle the load. While that may sound typical, what’s unusual is that UBC also provides cloud services for 26 other universities across the Canadian province.
“We didn’t have any of the large service providers,” said Mario Angers (pictured), senior manager of systems at the University of British Columbia. “If we wanted to provide or consume cloud, we had to basically build it.”
A self-service cloud model
After building a large cloud infrastructure inside the province a few years ago, UBC was approached by BCNET, a service provider for other schools in Canada, to provide the same support for the rest of higher education inside British Columbia. UBC provisions a virtual data center for the end user, basically offering infrastructure as a service and data recovery to the other universities.
“It’s self-service from end-to-end,” Angers said.
His university has been a Veeam Software Inc. customer for nearly six years, and he only has one person on staff to manage backup. “We can provide peace of mind now, knowing that if we lose something we can bring it back very quickly as it’s actually being restored to the production environment,” Angers stated.
Tomi Engdahl says:
AWS, Microsoft and Google take different paths to the cloud
http://www.cio.com/article/3175616/leadership-management/aws-microsoft-and-google-take-different-paths-to-the-cloud.html
CIOs concerned about betting too heavily on a single vendor for cloud services hear pitches from the industry’s leading vendors.
An outage at Amazon Web Services Tuesday rekindled the debate about whether it is wise to rely too heavily on one cloud service provider. Such snafus are rare for AWS, so CIOs worry more about the potential for vendors to turn off their service without notice.
But CIOs who bet on multiple providers often invite challenges, including committing resources to work with each vendor, said Adrian Cockcroft, vice president of cloud architecture strategy for Amazon Web Services, at this week’s WSJ CIO Network conference, which also included appearances from executives running Microsoft’s and Google’s cloud businesses.
When an audience member lamented the fact that AWS and others reserve the right to suddenly terminate services, Cockcroft said an enterprise agreement, rather than a simple click-through license, is the best option for CIOs seeking to avoid business disruption. “For any enterprise we should set up an EA [enterprise agreement], which has whatever you need in it … it’s not something where we can just turn it off or you can just turn it off,” Cockcroft said.
Cockcroft said that splitting cloud services between providers slows deployment because companies must familiarize themselves with different vendors’ technologies. Splitting cloud capacity between two vendors, for example, also cuts volume discounts in half. A far more common scenario is that CIOs choose AWS to deploy most applications and select another vendor to run test or ancillary services, Cockcroft said.
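The volume-discount point is simple arithmetic: tiered discounts trigger at spend thresholds, so the same total spend split across two providers can land in a lower tier with both. A toy illustration, with discount tiers invented purely for the example:

# Toy illustration; the discount tiers are invented, not vendor pricing.
def discount(annual_spend):
    if annual_spend >= 10_000_000:
        return 0.20          # big-commit tier
    if annual_spend >= 5_000_000:
        return 0.10          # mid tier
    return 0.0

spend = 10_000_000
one_vendor = spend * (1 - discount(spend))           # $8.0M after 20% off
split = 2 * (spend / 2) * (1 - discount(spend / 2))  # $9.0M after 10% off
print(f"one vendor: ${one_vendor:,.0f}  split across two: ${split:,.0f}")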
Microsoft partners for the cloud
Where AWS is capturing the lion’s share of enterprise deals, Microsoft has taken a different tack. Leveraging its vertical expertise, Microsoft is partnering with companies on strategic cloud deals, said Judson Althoff, the company’s executive vice president of worldwide commercial business. Working closely with Boeing CIO Ted Colbert, Microsoft’s Azure team is building, selling and running aviation applications on its cloud.
“Rather than Microsoft be the supplier of technology we are part of the cogs for the solution,” Althoff said.
Microsoft is also collaborating with Land O’Lakes to parse satellite imagery in Azure Machine Learning to increase crop yields for its precision agriculture business.
Althoff said such projects, which also include working with automakers such as BMW and Renault-Nissan on cloud capabilities that support semi-autonomous cars, are examples of digital transformations.
Google’s secret sauce
Google’s cloud business lags AWS and Microsoft, but the search company owns the best technology to win more market share in a young but growing sector, said Diane Greene, senior vice president of Google Cloud. She said Google’s data centers and data management and machine learning technologies are among the best in the business.
“Ninety-five percent of the data is not in the public cloud yet but it is happening quickly and our customer engagements are ramping very quickly,” Greene said. “Thank goodness we’re not behind on the technology front.”
Why your cloud strategy should include multiple vendors
http://www.cio.com/article/3183504/cloud-computing/why-your-cloud-strategy-should-include-multiple-vendors.html
For decades, enterprise computing environments have been composed of servers, storage and networking equipment developed by different vendors. Those choices often hinged on the best products to power applications and data — as well as the enticing volume discounts tossed into enterprise agreements. A similar scenario is playing out in cloud computing infrastructure, where CIOs are grappling with how to best architect systems for multi-vendor, hybrid cloud strategies.
A telling exchange on cloud vendors occurred during the Wall Street Journal’s CIO Network event last month when an audience member shared his perspective on the challenges of choosing between different cloud vendors with Adrian Cockcroft, vice president of cloud architecture strategy for Amazon Web Services (AWS), who was speaking on stage.
Cloud computing has become a staple of most enterprise computing environments, but CIOs are still sweating over whether to use one or more infrastructure-as-a-service provider.
Tomi Engdahl says:
Battle of the clouds: Amazon Web Services vs. Microsoft Azure vs. Google Cloud Platform
Which flavor of IaaS public cloud has what you need?
http://www.cio.com/article/3173251/cloud-computing/battle-of-the-clouds-amazon-web-services-vs-microsoft-azure-vs-google-cloud-platform.html
Tomi Engdahl says:
Frederic Lardinois / TechCrunch:
AWS unveils Macie security service, which uses machine learning to classify sensitive info stored on S3 and then monitors access to it
Amazon Macie helps businesses protect their sensitive data in the cloud
https://techcrunch.com/2017/08/14/amazon-macie-helps-businesses-protect-their-sensitive-data-in-the-cloud/
Amazon’s AWS cloud computing service hosted its annual NY Summit today and used the event to launch a new service: Amazon Macie. The idea behind Macie is to use machine learning to help businesses protect their sensitive data in the cloud. For now, you can use Macie to protect personally identifiable information and intellectual property in the Amazon S3 storage service, with support for other AWS data stores coming later this year (likely at the re:Invent conference in November).
The company says the fully managed service uses machine learning to monitor how data is accessed and to look for any anomalies. The service then alerts users of any activity that looks suspicious so they can find the root cause of any data leaks (whether those are malicious or not). To do all of this, the service continuously monitors new data that comes into S3. It then uses machine learning to understand regular access patterns and the data in the storage bucket.
As with all AWS services, pricing is complicated, but it is mostly based on the number of events and the amount of data the service processes every month. Because a lot of the cost is bound to the initial classification of the data, the first month of usage is also likely the most expensive.
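Macie itself is a managed black box, but the pattern it describes — learn a baseline of normal access, alert on departures from it — is easy to sketch. The toy example below is not the Macie API, and the access counts are invented; it simply flags a daily count that falls far outside the historical distribution:

# Toy baseline-and-alert sketch of the idea behind Macie's access
# monitoring. Not the Macie API; the access counts are invented.
from statistics import mean, stdev

history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 15, 10, 12, 9, 13, 11]

def is_anomalous(todays_count, history, threshold=3.0):
    """Flag counts more than `threshold` standard deviations off the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(todays_count - mu) > threshold * sigma

print(is_anomalous(11, history))    # False: an ordinary day
print(is_anomalous(450, history))   # True: looks like bulk exfiltration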
Tomi Engdahl says:
Microsoft does something unusual in Australia: Names the bit barn hosting Azure
Plan to conquer federal IT sees Redmond team with super-secure Canberra Data Centres
https://www.theregister.co.uk/2017/08/14/azure_coming_to_canberra/
Microsoft’s cooking up a government-grade cloud in Canberra, Australia’s capital city.
The two planned “Australia Central” Azure data centres will come online some time in 2018. True to form, Microsoft’s not saying anything about capacity or the instance types it will offer at launch. Nor will it confirm what services it will offer at launch, beyond saying users can expect Azure’s core IaaS experience, SQL Server, plus networking services.
But Microsoft has revealed one important detail about the new region: it will be hosted by a company called Canberra Data Centres.
Gartner vice president Michael Warrilow pointed out to The Register that Microsoft doesn’t usually reveal that kind of detail. Indeed, the company has never confirmed the identity of the data centres in which its other Australian regions reside, even though they are well known throughout local industry.
Why the exception? Because the partner is an outfit called Canberra Data Centres (CDC) that has gone to the trouble of implementing security the company believes is fit to handle information classified Top Secret, even though it doesn’t offer that level of security as a product.
Tomi Engdahl says:
Mary Jo Foley / ZDNet:
Microsoft acquires Cycle Computing, which makes software for orchestrating workloads in Azure, AWS, and Google clouds, will make future versions “Azure focused”
Microsoft acquires cloud-computing orchestration vendor Cycle Computing
http://www.zdnet.com/article/microsoft-acquires-cloud-computing-orchestration-vendor-cycle-computing/
Microsoft is buying Cycle Computing, which develops software for orchestrating workloads in the Azure, Amazon, and Google clouds, for an undisclosed amount.
Tomi Engdahl says:
Rackspace rolls out managed data protection service
http://www.zdnet.com/article/rackspace-rolls-out-managed-data-protection-service/
Rackspace is the latest firm to offer new cybersecurity tools ahead of the GDPR implementation.
Rackspace is bolstering cybersecurity offerings, rolling out a new service to help companies identify and protect sensitive data in accordance with various compliance requirements.
Utilizing the Vormetric Transparent Data Encryption platform to protect data, the new service enables firms to restrict access to approved company personnel and processes. It also generates detailed information about unauthorized access by users, applications, and systems.
The Privacy and Data Protection (PDP) service also offers detailed compliance reporting that gives customers a monthly, comprehensive view of their data usage. That should help them comply with Europe’s General Data Protection Regulation (GDPR), as well as other compliance standards like the Payment Card Industry Data Security Standard (PCI DSS).
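The mechanics behind this sort of service generally pair encryption at rest with a policy check on every read, logging denials for the compliance reports. Here is a conceptual sketch of that pattern — not Vormetric’s or Rackspace’s actual interface; the principals, policy, and data are invented for illustration — using Python’s cryptography package:

# Conceptual sketch of transparent encryption plus an access policy and
# audit log. Not Vormetric's or Rackspace's interface; the principals,
# policy, and data below are invented for illustration.
import logging
from cryptography.fernet import Fernet

logging.basicConfig(level=logging.INFO)
cipher = Fernet(Fernet.generate_key())  # in practice a key manager holds this
APPROVED = {"billing-service", "dba-team"}
store = {}                              # stands in for the data store

def write(name, plaintext):
    store[name] = cipher.encrypt(plaintext)   # always encrypted at rest

def read(name, principal):
    if principal not in APPROVED:
        # Denied attempts feed the monthly compliance report.
        logging.warning("denied: %s tried to read %s", principal, name)
        raise PermissionError(principal)
    return cipher.decrypt(store[name])

write("cardholder-data", b"4111 1111 1111 1111")
print(read("cardholder-data", "billing-service"))   # authorized, decrypted
try:
    read("cardholder-data", "batch-job-42")         # unauthorized, logged
except PermissionError:
    pass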
Rackspace is one of several vendors lining up new cybersecurity tools and services ahead of the GDPR’s implementation in May 2018. The new regulations will require organizations to protect data belonging to EU citizens and to know where the data is flowing at all times.
As compliance requirements evolve, so do the threats: Rackspace highlighted a recent Forrester Research report which showed that 49 percent of global network security decision-makers have experienced at least one breach in the past year.