Who's who of the cloud market

Seemingly every tech vendor has a cloud strategy, with new products and services dubbed “cloud” coming out every week. But who are the real market leaders in this business? Research firm Gartner’s answer lies in its Magic Quadrant report for the infrastructure as a service (IaaS) market, presented in the Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article.

Interestingly, missing from this quadrant figure are big-name companies that have invested a lot in the cloud, including Microsoft, HP, IBM and Google. The reason is that the report only includes providers that had IaaS clouds in general availability as of June 2012 (Microsoft, HP and Google had clouds in beta at the time).

Gartner reinforces what many in the cloud industry believe: Amazon Web Services is the 800-pound gorilla. Gartner has also found one big minus for Amazon Web Services: AWS has a “weak, narrowly defined” service-level agreement (SLA), which requires customers to spread workloads across multiple availability zones. AWS was not the only provider whose SLA details drew criticism.
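
In practice, meeting the multiple-availability-zone condition is easy to script. Below is a minimal illustrative sketch (using the boto3 Python SDK, which postdates this article; the AMI ID and zone names are placeholders, not anything from the Gartner report) of spreading instances across two zones:

# Sketch: launch one EC2 instance in each of two availability zones,
# so a single-zone outage does not take down the whole workload.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for zone in ["us-east-1a", "us-east-1b"]:   # placeholder zone names
    ec2.run_instances(
        ImageId="ami-12345678",             # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )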

Read the whole Gartner’s IaaS Magic Quadrant: a who’s who of cloud market article to see Gartner’s view of the cloud market today.

1,065 Comments

  1. Tomi Engdahl says:

    The cloud clearance sale has begun

    Microsoft is the latest IT giant to cut the price of its cloud services. Google and Amazon have also lowered the prices of their cloud services recently.

    The price competition began in early November, when Amazon started selling virtual machines at 18 percent lower cost. Later, Google dropped its virtual machine prices by 5 percent.

    In addition, the price of Amazon cloud storage fell by a quarter. Google responded the following day with a storage price cut of its own.

    Now even Microsoft has joined the price competition. The company offers its Azure cloud at exactly the same prices as the corresponding Amazon products.

    Not everyone believes price is the most important criterion for winning customers. Rackspace sent out a press release last week in which the company says customers should also take into account other factors, such as the total cost of acquiring a cloud solution.

    Source: http://www.tietoviikko.fi/cio/nyt+alkoi+pilven+alennusmyynti/a862872?s=r&wtm=tietoviikko/-10122012&

    Reply
  2. Tomi Engdahl says:

    Customers of cloud service providers should find out what kind of terms they are agreeing to when they sign the contract.

    The terms, conditions and promises of Oracle’s cloud services are laid out in a document that is publicly readable on Oracle’s web site – despite the fact that the document was earlier marked as confidential.

    “Oracle’s promise of 99.5 percent availability sounds good until it becomes clear how the availability benchmark is defined,” says analyst Frank Scavo of the IT consulting company Strativa. According to Scavo, Oracle allows itself an excessive amount of potential service interruptions.

    In Oracle’s view, if an outage is caused by a virus or a denial-of-service attack, it does not count against the fulfillment of the service level agreement. According to Scavo, denial-of-service attacks can be fought, so they should not be used to justify backing away from the promises made.

    Another minus is Oracle’s reluctance to allow customers to monitor the servers themselves.

    In addition, Oracle reserves the right to make “major changes” to its cloud infrastructure twice a year. These large changes can cause a service outage of up to a day.

    Source: http://www.tietoviikko.fi/cio/pilven+ostajalle+tarjotaan+karuja+ehtoja+jopa+vuorokauden+katko+mahdollinen/a865890?s=r&wtm=tietoviikko/-20122012&

    Reply
  3. Tomi Engdahl says:

    Analyst Report Advocates for Cloud, Saas Buyers’ ‘bill of Rights’
    http://www.cio.com/article/721353/Analyst_Report_Advocates_for_Cloud_Saas_Buyers_39_39_bill_of_Rights_39_

    The report from Constellation Research lays out what the ground rules for SaaS vendors should be

    With SaaS (software as a service) having become a preferred deployment model for new software purchases, customers should be entitled to a clear-cut set of rights and expectations from vendors, a new report from analyst firm Constellation Research argues.

    Despite a perception of SaaS being easy to acquire, cloud contracts require all the rigor and due diligence of on-premise licensed software, analyst Ray Wang [cq] wrote in the report.

    “CIO’s, CMO’s, [line-of-business] execs, procurement managers, and other organizational leads should ensure that the mistakes they made in on-premises licensed software aren’t blindly carried over,” Wang wrote.

    Current conditions make it all too easy for that to happen, with some 81 percent of new enterprise software license sales offering customers a cloud deployment option, the report states.

    And while customers keep control of their data, it’s expensive and difficult to switch cloud providers due to differences in architecture, metadata models and other factors, according to the report.

    Vendor lock-in, always a specter of the on-premises licensing world, is just as scary and maybe more so with SaaS, according to the report.

    Third, “vendors currently eager for business may grow fat and lazy,” moving away from today’s “customer-friendly policies,” it adds.

    Reply
  4. Tomi Engdahl says:

    Will we still love the data centre seven years from now?
    http://www.theregister.co.uk/2012/12/20/datacentre_survey/

    It seems there has never been a clearer understanding of how rapidly business is changing and IT technologies are evolving.

    With this in mind, we recently ran an online survey to ask readers of The Register how they thought data centres would develop between now and 2020. This is long enough for significant things to happen, but not so long as to take us into the realm of science fiction.

    Cloud solutions, be they private, public or hybrid, have received considerable coverage over the past few years. Some pundits and advisors have even talked about a wholesale move to cloud computing of one form or another, and the consequent disappearance of traditional systems. Most respondents don’t accept that a huge shift is on the cards, however, particularly in relation to public cloud

    Reply
  5. EllisGL says:

    Want to say some of these providers use a 3rd party.

    Reply
  6. Tomi Engdahl says:

    Hey, cloudy tech vendors on Amazon: AWS can fluff you up
    And not in a good way
    http://www.theregister.co.uk/2012/12/29/open_and_shut/

    While these “cloud economics” may not be attractive to established IT vendors, they absolutely are attractive to IT buyers, which is why this Forrester survey of IT executives should give vendors pause

    In fact, every IT vendor except Microsoft should be concerned by this chart, because they’re not even a consideration in this new cloud world. At least, not in this survey, which tried to capture the major Infrastructure as a Service providers. IBM and Oracle show up as a rounding error. Only HP, at 5 per cent, shows up when asked which cloud environments survey respondents expect to try in the next 12 months.

    Reply
  7. Tomi Engdahl says:

    Home Server vs. VPS – a quick Cost and Performance analysis
    http://tidbitsfortechs.blogspot.fi/2013/02/home-server-vs-vps-quick-cost-and.html

    Which is cheaper: Running a server from home, or renting a VPS (Virtual Private Server)?

    I did some research, and came up with the following:
    1) A system such as this one takes roughly 150 W of power to run, at the most.
    2) My local utility charges 6.6 cents per kilowatt-hour.

    So, plug in the numbers. There are 730 hours in the average month. Take 730 times 150 watts, divide by 1,000 and you get 109.5 kilowatt-hours used, and at 6.6 cents per kWh that’s 722.7 cents, or $7.23/mo.

    VPS: $27.15/mo or $325/yr
    Home: $7.23/mo or $86.76/yr

    Extrapolate that into a year, and that’s $238/yr saved! For that money, I can afford to replace the power supply or hard drive in the home server if it dies. It’s a LOT cheaper.

    Overall it is VERY cost effective for us to run the home server.
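
    For anyone who wants to rerun the numbers with their own wattage and electricity rate, here is a minimal Python sketch of the same arithmetic (the defaults are the figures quoted above):

    # Reproduce the home-server vs. VPS cost calculation above.
    HOURS_PER_MONTH = 730      # hours in an average month
    watts = 150                # worst-case draw of the home server
    cents_per_kwh = 6.6        # local utility rate

    kwh = HOURS_PER_MONTH * watts / 1000      # 109.5 kWh/month
    home_cost = kwh * cents_per_kwh / 100     # ~$7.23/month
    vps_cost = 27.15                          # quoted VPS price

    print(f"Home: ${home_cost:.2f}/mo, VPS: ${vps_cost:.2f}/mo")
    print(f"Saved per year: ${(vps_cost - home_cost) * 12:.2f}")
    # about $239/yr; the $238 above comes from rounding the monthly costs first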

    Reply
  8. Tomi Engdahl says:

    Rackspace revenue misses as web hosting growth slows
    http://www.reuters.com/article/2013/02/12/us-rackspacehosting-results-idUSBRE91B1H720130212

    Web hosting company Rackspace Hosting Inc reported a 25 percent rise in quarterly revenue that narrowly missed analysts’ estimates

    “Clearly, growth is slowing. That’s probably the primary driver as to why the stock is off so much,” Stephens Inc analyst Barry McCarver said.

    Rackspace Chief Financial Officer Karl Pichler cited the transition to the company’s next-generation cloud as the main reason for the slowdown in growth.

    Reply
  9. Tomi Engdahl says:

    IDC: Outsourcing sector needs rescue fund for cloudy customers
    2e2 collapse to hit channel, customers, investors
    http://www.channelregister.co.uk/2013/02/15/outsourcing_rescue_fund/

    The outsourcing industry should develop a voluntary crisis fund to give protection to customers should their services provider hit the wall.

    This was proposed by IDC in light of 2e2’s recent high profile collapse that left some customers scrambling for alternative suppliers, and highlighted the pitfalls of outsourcing.

    One solution is to create a “voluntary shared rescue fund” along the lines of the Association of British Travel Agents bond, said IDC associate veep Douglas Hayward.

    Hayward said hosting and cloudy firms could hold a pot of cash in escrow to be used so that hosted data can be transitioned to new providers should the need arise.

    “This could be marketed either as an industry-wide service, or as an optional value-added service to be bought by clients when signing a hosting/outsourcing contract,” he said.

    Another option is for hosting firms to guarantee regular data backups are made to third party DR providers who are obliged to hand over the data to the customer in the event the hosting entity goes pop.

    “That option, however, would be costly and arguably wasteful, not to mention bad for the environment, by generating huge volumes of duplicated data in independent data centres.”

    Reply
  10. Tomi Engdahl says:

    Microsoft’s Azure beat Amazon Web Services in the cloud service speed test that storage vendor Nasuni carries out annually.

    Last year the Amazon cloud was the fastest.

    The OpenStack-based clouds provided by HP and Rackspace again performed poorly on large loads.

    Nasuni’s comparison shows at least that cloud services are developing rapidly. The results differed sharply from last year’s.

    Source: http://www.tietoviikko.fi/cio/nopein+pilvi+selvisi++kehitys+ollut+huomattavaa/a881437?s=r&wtm=tietoviikko/-22022013&

    Reply
  11. Tomi Engdahl says:

    You too? Most people are wrong in thinking they are using the cloud

    The border between virtualization and private cloud is blurry, a report from research firm Forrester Research makes clear.

    According to Forrester, private cloud is a concept that even IT administrators find difficult to pin down. Of them, 70 percent labeled as private clouds a number of setups that are not.

    “This is a big problem. This is cloud washing,” explains Forrester cloud expert James Staten.

    The most important thing is not necessarily whether some particular definition is met.

    “Who cares what you call it. What really matters is that your customers and corporate users have access to the resources they need,” says Andi Mann, a director at the IT company CA Technologies.

    Source: http://www.tietoviikko.fi/cio/sinakin+suurin+osa+erehtyy+luulee+kayttavansa+pilvea/a882588?s=r&wtm=tietoviikko/-27022013&

    Reply
  12. Tomi Engdahl says:

    Finally, “The Cloud” Means Something
    http://www.linuxjournal.com/content/finally-cloud-means-something

    Few jargonistic terms have annoyed me as much as, “The Cloud.” When the term was first coined, its meaning was ambiguous at best. For some companies, it meant shared web hosting (but with a cooler sounding name). For others it was simply, “let us host your servers in our datacenter, which we now refer to as a cloud.”

    Then, finally, the concept started to solidify into offering specific services or entire software applications as a commodity removed from the server infrastructure.

    Software as a service (SaaS) is arguably the largest implementation of the “cloud” ideology.

    Instead of buying a software package as a service (SaaS), PaaS allows me to deploy whatever Java applications I want onto a fully installed, maintained, and updated Java application server.

    Oh, and for the record? Shared web hosting was cloud computing long before it was cool, just saying.

    Reply
  13. Tomi Engdahl says:

    File sharing and storage services such as Apple’s iCloud and Evernote have been hit by cases of data theft and leakage.

    Can cloud security still be trusted? We asked Ari-Matti Husa, a security expert at FICORA’s CERT-FI unit.

    A drawback of cloud services is that availability and security are not in your own hands. Assessing that risk requires either trust or your own (or someone else’s) security expertise.

    Is the security of cloud computing data centers good enough?

    Most likely it is better than what many home users and businesses have.

    Can you reduce the security risks yourself?

    The security level can be improved by sending only encrypted files to the cloud. That reduces the risk of information leaks.
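
    As an illustration of the encrypt-before-upload advice, here is a minimal Python sketch (using the widely available cryptography package; the file names are placeholders):

    # Encrypt a file locally before sending it to any cloud storage,
    # so the provider only ever sees ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store this key safely, NOT in the cloud
    fernet = Fernet(key)

    with open("report.pdf", "rb") as f:        # placeholder input file
        ciphertext = fernet.encrypt(f.read())

    with open("report.pdf.enc", "wb") as f:    # upload only this file
        f.write(ciphertext)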

    The cases should be put in proportion: local environments suffer even more, and more serious, data leaks. They simply do not become public in the same way.

    Data leaks can never be completely prevented.

    Consumer services do not have the same level of security as services designed for corporate users.

    Source: http://www.3t.fi/artikkeli/uutiset/teknologia/voiko_pilvipalveluiden_tietoturvaan_luottaa

    Reply
  14. Tomi Engdahl says:

    By the numbers: How Google Compute Engine stacks up to Amazon EC2
    http://gigaom.com/2013/03/15/by-the-numbers-how-google-compute-engine-stacks-up-to-amazon-ec2/

    When Google launched its EC2 rival, Google Compute Engine, last June, it set some high expectations. Sebastian Stadil’s team at Scalr put the cloud infrastructure service through its paces — and were pleasantly surprised at what they found.

    So should you switch?

    AWS offers an extremely comprehensive cloud service, with everything from DNS to database. Google does not. This makes building applications on AWS easier, since you have bigger building blocks. So if you don’t mind locking yourself into a vendor, you’ll be more productive on AWS.

    But that said, with Google Compute Engine, AWS has a formidable new competitor in the public cloud space, and we’ll likely be moving some of Scalr’s production workloads from our hybrid aws-rackspace-softlayer setup to it when it leaves beta. There’s a strong technical case for migrating heavy workloads to GCE, and I’ll be grabbing popcorn to eagerly watch as the battle unfolds between the giants.

    Reply
  15. Tomi Engdahl says:

    Sources: Amazon and CIA ink cloud deal
    http://fcw.com/Articles/2013/03/18/amazon-cia-cloud.aspx?Page=1

    In a move sure to send ripples through the federal IT community, FCW has learned that the CIA has agreed to a cloud computing contract with electronic commerce giant Amazon, worth up to $600 million over 10 years.

    Amazon Web Services will help the intelligence agency build a private cloud infrastructure that helps the agency keep up with emerging technologies like big data in a cost-effective manner not possible under the CIA’s previous cloud efforts, sources told FCW.

    “As a general rule, the CIA does not publicly disclose details of our contracts, the identities of our contractors, the contract values, or the scope of work,” a CIA spokesperson told FCW.

    Reply
  16. tomi says:

    Whatever happened to self-service computing?
    http://www.theregister.co.uk/2013/04/08/cloud_self_service/

    According to Gartner’s Emerging Technologies Hype Cycle for 2012, cloud computing has passed the peak of inflated expectations and is heading for the trough of disillusionment at full speed. Cloud computing didn’t live up to the overblown hype.

    We have to get over the disappointment before we start to rationally accept the benefits that cloud can bring. Among the under-fulfilled promises, self-service stands out as one of cloud computing’s least adopted features.

    This is a shame: self-service has helped drive down costs in many other industries.

    There is hardly a glut of virtualisation administrators with the requisite 10 years’ experience in a five-year-old technology stack, so a lot of companies are “waiting for the technology to mature”.

    The far more important reason for the slow adoption of self-service is that for years the marketing of cloudy self-service has been overhyped and poorly targeted.

    Self-service was sold as the technological hammer that would break departmental dependence on IT.

    That didn’t quite work out. You will find few end-users playing with virtual infrastructure. Those making use of cloud computing are systems administrators who already have a solid grounding in the theory behind what happens when they push a given button.

    Cloud computing was to be so simple it would abstract the difficulty of IT away from end-users. Today we are seeing an increase in companies that offer to abstract away the difficulty of managing cloud computing. Where did this all go sideways?

    In the truest sense of the concept, this is cloud computing: software as a service (SaaS) running on a distributed, virtualised infrastructure abstracted from the end-user.

    The problem is that only a handful of these services are anywhere close to consumer-ready.

    SaaS may be the end-goal of cloud computing

    Infrastructure as a service (IaaS) and platform as a service (PaaS) are the nuts-and-bolts self-service elements of a cloud infrastructure that underlie SaaS applications.

    Despite the availability of the technology, we don’t seem to take advantage of it much. PaaS is around in numerous forms and yet most developers I know would still prefer provisioning, configuration and maintenance to be taken care of by IT.

    Even if most of that is automated and provisioning has been reduced to filling out a form, there is a psychological barrier there that is hard to overcome.

    The same is true of attempting to push IaaS out to the world.

    Our tools and technology have provided automation and standardisation of our working environments.

    In the end, however, we still have the desire to understand what we are doing when we hit that button.

    Reply
  17. Tomi Engdahl says:

    Rackspace attacks Amazon with new cloudy clones
    AWS spies clone army advancing on service provider, telco fronts
    http://www.theregister.co.uk/2013/04/15/rackspace_sells_cloud_clones_to_sps/

    Look out, Amazon Web Services. Rackspace is cloning its own cloudy service – and to quote Jimi Hendrix’s Foxy Lady, it’s “comin’ to getcha.”

    Way back when, Rackspace Hosting teamed up with NASA to create the OpenStack community precisely to leverage the smarts and excitement of the open source community to take on the closed and controlled AWS cloud. Now Rackspace will take OpenStack and leverage its own experience in building custom infrastructure to house OpenStack clouds, and deliver it as a service to telecommunication and service provider customers.

    Basically, it’ll now sell and operate clones of its own Rackspace Cloud, turning its enemies into allies in its battle against AWS.

    But as Engates has explained before, Rackspace doesn’t think infrastructure is necessarily a huge differentiator; customer service and operations are. And enlisting the help of many different cloud providers all around the world – building a mercenary army – does give OpenStack a better shot in taking on AWS, Google, and Microsoft in the public and private cloud spaces, companies that have a lot more resources than Rackspace can ever bring to bear.

    If this idea is crazy, it is crazy like a fox – and pretty much the only move that Rackspace could make to help OpenStack catch up to and perhaps even pass AWS in terms of revenue and usage.

    “One of our goals is to advance cloud computing – and OpenStack in particular – around the globe,”

    Reply
  18. Tomi Engdahl says:

    Linux Foundation takes over Xen, enlists Amazon in war to rule the cloud
    Xen virtualization gains support from Amazon, Cisco, Google, Intel, and more.
    http://arstechnica.com/information-technology/2013/04/linux-foundation-takes-over-xen-enlists-amazon-in-war-to-rule-the-cloud/

    The Linux Foundation has taken control of the open source Xen virtualization platform and enlisted a dozen industry giants in a quest to be the leading software for building cloud networks.

    The 10-year-old Xen hypervisor was formerly a community project sponsored by Citrix, much as the Fedora operating system is a community project sponsored by Red Hat.

    Amazon is perhaps the most significant name on that list in regard to Xen. The Amazon Elastic Compute Cloud is likely the most widely used public infrastructure-as-a-service (IaaS) cloud, and it is built on Xen virtualization. Rackspace’s public cloud also uses Xen.

    Xen is thus a threat to VMware in its quest to evolve from a virtualization vendor into a cloud vendor. Xen is even complementary to OpenStack, the popular open source cloud infrastructure software that can be used by either private businesses or service providers to build IaaS clouds. Xen is one of several hypervisors that can be used with OpenStack.

    Reply
  19. Tomi Engdahl says:

    Amazon: S3 cloud contains two trillion objects
    ‘We’ve doubled our big number in a year’
    http://www.theregister.co.uk/2013/04/18/amazon_2_trillion_s3/

    Amazon Web Services now has over two trillion objects within its S3 storage cloud, just one year after Bezos & Co. smashed through the one-trillion ceiling.

    Each Amazon object, they say, can “range from zero to 5 TB in size,” but Amazon does not disclose the size distribution of stored objects. An object consists of a key, a Version ID, a value, metadata, subresources, and access control information.
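
    As a rough illustration of that anatomy (a boto3 Python sketch; boto3 postdates this article, and the bucket and key names are placeholders), storing an object with metadata on a version-enabled bucket returns its version ID:

    # Sketch: an S3 object is addressed by a key, carries user metadata,
    # and gets a VersionId when the bucket has versioning enabled.
    import boto3

    s3 = boto3.client("s3")

    response = s3.put_object(
        Bucket="example-bucket",              # placeholder bucket name
        Key="logs/2013-04-18.txt",            # the object's key
        Body=b"the object's value",           # zero bytes up to 5 TB
        Metadata={"source": "app-server-1"},  # user-defined metadata
    )
    print(response.get("VersionId"))  # present if versioning is enabled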

    “It took us six years to grow to one trillion stored objects, and less than a year to double that number,”

    S3 is now regularly peaking at 1.1 million requests per second

    These figures may not reflect the actual size of Amazon’s cloud, as they do not factor in Elastic Block Storage – a service used by a very large proportion of EC2 instances.

    Microsoft, meanwhile, stated in July 2012 that its Azure cloud stores 4.03 trillion objects, and that the peak request rate was 880,000 per second (versus Amazon’s 1.1m across two trillion objects).

    Reply
  20. Tomi Engdahl says:

    Datacentre recovery times are on the rise, as outage costs hit $1.6m
    http://www.zdnet.com/datacentre-recovery-times-are-on-the-rise-as-outage-costs-hit-1-6m-7000012323/

    Summary: The amount of time it takes to recover a datacentre has increased and CIOs are concerned that their backup and recovery tools won’t be able to cope with increasing volumes of data.

    Rising datacentre recovery times are a concern to businesses which stand to lose big money with every hour of downtime.

    backup and recovery tools will become less effective as the amount of data and servers in the organisation rises.

    The study also found that recovering virtual servers is faster than recovering physical servers

    This slight increase in downtime can have a significant impact on a business. The survey claims that every hour of datacentre downtime costs an enterprise $324,793 (£215,594), which means the average cost to an organisation for each incident is $1.6m (£1.06m).
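
    Taken together, those two survey figures also imply how long a typical incident lasts; a quick check in Python:

    # Implied average outage duration from the survey's own numbers.
    cost_per_hour = 324_793        # dollars per hour of downtime
    cost_per_incident = 1.6e6      # average dollars per incident

    print(f"{cost_per_incident / cost_per_hour:.1f} hours per incident")
    # -> about 4.9 hours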

    Reply
  21. Tomi Engdahl says:

    Linux on Azure—a Strange Place to Find a Penguin
    http://www.linuxjournal.com/content/linux-azure—-strange-place-find-penguin

    Linux enthusiasts might think the idea of running a Linux virtual machine on Microsoft’s Azure service is like finding a penguin sun tanning in the Sahara. Linux in the heart of the Microsoft cloud? Isn’t that just wrong on so many levels?

    Why would anyone want to run Linux on Microsoft servers?

    For the hobbyist, I suppose for the same reason people climb Mount Everest: because it’s there. For the business user, the prospect of spinning up Linux VMs in Microsoft’s fabric offers new options for collocating open-source technologies with existing Microsoft Azure services.

    For the cloud market in general, more competition is good news for consumers.

    The Cloud Marketplace

    Virtual machines in the form of virtual private servers (VPSes) have been offered for nearly a decade from a galaxy of providers, using virtualization technologies such as Xen, Virtuozzo/OpenVZ and KVM. These providers subdivide a physical server into multiple small virtual servers. Users typically subscribe on a monthly basis, with an allotment of memory, disk and network bandwidth.

    Later vendors, such as Amazon, Rackspace and now Microsoft, offer the same service with a finer-grained commitment. Users can spin up a VM (or a hundred) by the hour, pay for bandwidth by the gigabyte and utilize more advanced features, such as private networks, SAN-like storage features, offloaded database engines and so on.

    Amazon enjoyed early success with its Elastic Compute Cloud and other vendors, such as Rackspace, soon followed suit.

    Microsoft originally opted for a different, more complex cloud strategy. Azure was built as a “platform as a service” offering (see the Cloud Flavors sidebar) in which developers could write applications that ran in various roles and talked to Azure APIs.

    In practice, developers were forced to write Azure-centric applications and adoption was slow. Many enterprises with mixed Windows/Linux environments found that hosting their own self-managed servers on Amazon and other cloud environments was more attractive than spending time porting and debugging their applications.

    In 2012, Microsoft added “infrastructure as a service” (virtual machines) offerings to its lineup, allowing users to run and administer Windows and Linux virtual machines they directly control.

    Azure virtual machines are still in “Community Preview”, which is Microsoft lingo for “Beta”. Support is limited to forums

    Reply
  22. Tomi Engdahl says:

    Rackspace fluffs .NET cloud support
    ‘We were here first, Microsoft!’
    http://www.theregister.co.uk/2013/05/02/rackspace_net_support/

    Rackspace is making overtures to Microsoft users by broadening .NET support for its cloud and managed hosting, though these devs may be increasingly swayed by Azure.

    The company announced a Cloud SDK for Microsoft.NET, and a PowerShell-based API client “PowerClient” for Rackspace public cloud services on Wednesday. The two tools see the company try to gain developers from Microsoft – but with Azure targeted very carefully at .NET organizations, what can Rackspace offer that Microsoft can’t?

    PowerClient is an alternative to the Linux-based NovaClient. The software works with Rackspace’s OpenStack-based servers and is intended to eventually become a full API for all OpenStack deployments.

    “Why do you think [Microsoft is] investing all that in building their cloud infrastructure? Guess who is already there – we’re there.”

    Along with Amazon, Google, Joyent, Linode, SoftLayer, and so on, we might add.

    Reply
  23. Tomi Engdahl says:

    Platform clouds can make enterprises all teeth and no tail
    Red Hat and VMware want to be your private parts
    http://www.theregister.co.uk/2013/04/22/paas_clouds/

    The cloud is at the same point in its history that proprietary minicomputers were at four decades ago.

    Back then, everybody was trying to figure out how to use this new technology, which offered substantial economic and ease-of-programming benefits compared to the big iron systems they replaced.

    At the time it was not obvious where the upstart platforms would go or how enterprises would adopt them or reject them.

    A slew of Unix-based system makers followed suit in the open systems war of the late 1980s and early 1990s, making minicomputers more compatible

    “IT departments hate this rogue compute that is actually very traditional,” he explains. “The problem that they have is that it will never go away. We have had rogue computing forever. I recall when I was at Oracle in the early days when the IT departments wouldn’t take meetings with my sales team because they thought relational databases were stupid. And that was fine, because we sold it departmentally on DEC VAXes. It is all kind of humorous now, but if you go back and use history as an example, you’ll find that this is how most technology comes in. We had the IBM PC come in that way, and I was at Salesforce.com and that came in departmentally – the IT department didn’t want that stuff.”

    “Big corporations that have large sunk costs in IT and compliance issues are going to dabble,” says Dillon. “If they are a little progressive, instead of fighting the rogue compute guys, they may give them rules – use these programming languages and this stack under these restrictions.”

    Hybrid vigor

    Red Hat and VMware, which will be rivals alongside Microsoft for private platform clouds, are positioning their various cloudy wares to run both inside and outside the corporate firewall – and across it if necessary.

    “Hybrid is where we really think the industry is going,” says Joe Fernandez, senior product manager for OpenShift Enterprise at Red Hat.

    “It is interesting to note that Google with App Engine, Microsoft with Azure, and Amazon with the Elastic BeanStalk are completely committed to a public PaaS cloud,” Fernandez observes. “Google and Amazon are not interested in packaging up commercial software and selling services or licenses, and Red Hat is. Microsoft might go with Azure where we went with OpenShift at some point.”

    Reply
  24. Tomi Engdahl says:

    ‘Inconsistent’ watchdogs throw cloud biz barons into a tizzy
    Inexperienced officials wreck everything, says expert
    http://www.theregister.co.uk/2013/05/03/data_privacy_cloud_market_asia/

    A lack of consistency over the way Asian regulators approach data privacy issues has led to a slow take-up of cloud services by businesses in the region, an expert has said.

    Hong Kong-based outsourcing contracts expert Peter Bullock of Pinsent Masons, the law firm behind Out-Law.com, said that cloud providers are not offering business customers services that account for the fragmented regulatory approach to cross-border transfers of personal data.

    Bullock said EU firms benefit from “a level of consistency” on how data privacy issues are dealt with by regulators in the trading bloc. “This is not the case across Asia Pacific,” he said.

    Reply
  25. Tomi Engdahl says:

    Dell dumps its public cloud offerings
    It will offer public cloud services through partners rather than its own public cloud
    http://www.itworld.com/cloud-computing/357227/dell-dumps-its-public-cloud-offerings

    May 20, 2013, 12:16 PM — Dell has become one of the first high profile companies to dump its public cloud ambitions, announcing today that it will no longer invest in its OpenStack and VMware-based cloud services.

    Network World’s Brandon Butler just last week suggested Dell might discontinue its OpenStack cloud and now Dell has essentially confirmed it.

    Instead of offering its own public cloud, Dell will sell through partners. Initial partners include Joyent, ScaleMatrix and ZeroLag.

    Curiously, it appears that none of those platforms is built on OpenStack. ZeroLag is based on VMware’s technology and Joyent’s cloud is proprietary. ScaleMatrix, whom I hadn’t heard of, mentions OpenStack on its web site but doesn’t appear to have built a cloud service on the technology. I’ve asked for more details though in case I’m wrong about ScaleMatrix. The Dell spokeswoman said the company planned to continue offering OpenStack public cloud services through its partners.

    Reply
  26. Tomi Engdahl says:

    FedRAMP seal of approval clears Amazon for more government work
    http://gigaom.com/2013/05/20/fedramp-seal-of-approval-clears-amazon-for-a-lot-more-government-work/

    Amazon Web Services can now claim a rare blessing among cloud providers: it has earned the FedRAMP accreditation that certifies that it has met a variety of security standards. That certification, which covers AWS GovCloud as well as Amazon’s other U.S. regions, should make it easier for state, local and government agencies to put workloads on Amazon’s public cloud infrastructure without having to jump through so many hoops.

    FedRAMP, which stands for the Federal Risk and Authorization Management Program, “is a U.S. government-wide standardized approach to security assessment, authorization and monitoring,”

    AWS now has both a FISMA (Federal Information Security Management Act) Moderate and a FedRAMP Moderate ranking. The latter designation means that “sensitive data” can be stored and managed on AWS infrastructure.

    “This is a journey, a sliding scale. Sensitive data is a term of art used in government. Even more top secret categories of data require additional certifications,” Selipsky said.

    To date, exactly one cloud provider — Autonomic Resources, a small North Carolina company — had earned the FedRAMP seal of approval from the General Services Administration. Now AWS is in the mix

    Up to 15 providers are expected to clear FedRAMP hurdles this year with double that number expected to do so in 2014 when FedRAMP certification becomes mandatory

    Reply
  27. Tomi Engdahl says:

    Analyst: Most data centers use virtualization for select applications only
    http://www.cablinginstall.com/articles/2013/05/infonetics-data-center-operators.html

    “Server virtualization has been the focus of the data center industry for several years now, and the largest data center owners and Internet content providers like Google are ubiquitously exploiting virtual machines,” says Michael Howard, principal analyst for carrier networks and co-founder of Infonetics Research.

    Howard adds, “Yet the reality is the bulk of data center owners are more pedestrian in their deployments, finding it more operationally convenient to leave many areas of their data centers alone, using server virtualization for only select applications.”

    Reply
  28. Tomi Engdahl says:

    VMware’s Web Services Challenge Cloud Rivals Amazon to Microsoft
    http://www.businessweek.com/news/2013-05-21/vmware-s-web-services-challenge-cloud-rivals-amazon-to-microsoft

    VMware Inc. (VMW) is debuting a service that lets customers use the Web to access information and programs stored in its data centers, an effort to challenge Amazon.com Inc. (AMZN) and Microsoft Corp. (MSFT) in cloud computing.

    Early testers of the vCloud Hybrid Service include News Corp. (NWSA)’s Fox Broadcasting and the state of Michigan, VMware Chief Executive Officer Pat Gelsinger said in an interview. The product will be more widely available in the third quarter, he said.

    VMware, the biggest provider of software that lets computers run multiple operating systems, is expanding in cloud-computing to bolster sales as U.S. customers trim technology spending.

    “Customers are asking for it — clearly the whole public cloud service area has been growing and maturing,”

    Gelsinger said VMware’s new offering can command a premium because it lets users easily move applications between so-called private clouds, where software and services are run on customers’ own machines, and public ones on VMware’s servers.

    Cloud Partners

    Microsoft rolled out an infrastructure-as-a-service product last month along with a pledge to match Amazon’s prices on certain offerings. While 71 percent of public-cloud customers said they use Amazon in a Forrester Research Inc. (FORR) survey, about 20 percent said they used Microsoft.

    VMware will announce partnerships with Tibco Software Inc. (TIBX) as well as Pivotal — a spinoff of VMware and its parent company EMC Corp. (EMC), according to Gelsinger. Those agreements will let customers run the applications they purchase from those companies in VMware’s data centers, he said.

    Reply
  29. Tomi Engdahl says:

    The truth about Cloud security
    http://www.edn.com/electronics-blogs/practical-chip-design/4414580/The-truth-about-Cloud-security

    There are times when it seems as if a rumor gets started and then it grows on itself until we just accept it as a truth. Nobody seems to really question it even though many, including those that quote it, know of its shaky derivation. One that comes to mind is that 70% of development is spent in verification. Nobody even knows if this means number of people, elapsed time, cost or some other measure, but we do know that 70% of something is taken up by this task.

    Over the past few months another such myth has started to emerge and that is what I want to discuss today. The myth is that nobody will trust their design to the Cloud and this is why EDA in the Cloud has not worked.

    It was from a small, independent IP provider and basically said – if someone steals my design, then I take it as a sign that it has value and that I did a good job. I would rather get paid for it, and most of the companies that are reputable will do so, because it is not worth them stealing something and getting caught.

    Next I spoke to Mohamed Kassem, an entrepreneur who is constructing a yet to be announced cloud-based semiconductor company. He told me that there is a big difference based on the size of company that you talk to. He said that large semiconductor companies talk a lot about the security of their data and yet they don’t really walk the talk.

    There is also an issue regarding small versus large cloud providers. People are more concerned with knowing where their data is located and how it is protected. Some people may have an issue with using Amazon cloud services and may prefer a small dedicated service provider that makes it very clear how the system is organized and maintained. Others would rather trust a company that is putting its reputation on the line.

    But this comes back to the point Mohamed was making. He said that most companies’ IT infrastructure is so badly put together and maintained that it provides less protection than they think.

    If the data is stored in the cloud and used in the cloud, it actually provides a much easier upgrade, maintenance and control environment compared to the situation today, where they have to ship stuff to many customer sites and then have no visibility into how it is used. They can see who accesses it and uses it, and they can track usage.

    So, in the past it has been the large EDA companies that have tried, and failed, to offer cloud-based services. It may be that their top few customers are concerned about the Cloud and not yet ready to take the leap.

    Reply
  31. Tomi Engdahl says:

    Top 10 countries in which to locate a data center
    http://www.zdnet.com/top-10-countries-in-which-to-locate-a-data-center-7000015971/

    Summary: The safest and cheapest place to open a data center? The United States.

    Cushman & Wakefield just issued its annual “Data Center Risk Index,” which evaluates the favorable and not-so-favorable factors to weigh in locating data centers in various countries around the globe.

    The United States comes out on top, the report’s authors have determined. As they describe it, the U.S. “still has the highest internet bandwidth capacity of all the countries included in the index, the average cost of electricity has remained relatively low whilst most other countries have seen prices increase.” However, they add, “natural disasters remain the most significant risk to data centers, as we saw last year with hurricane Sandy in New York.”

    Top 10 countries:
    US
    UK
    Sweden
    Germany
    Canada
    Hong Kong
    Iceland
    Norway
    Finland
    Qatar

    Reply
  32. Tomi Engdahl says:

    IBM to buy cloud specialist SoftLayer
    http://news.cnet.com/8301-1001_3-57587551-92/ibm-to-buy-cloud-specialist-softlayer/

    The computing giant continues to push its cloud computing effort in a deal rumored in the region of $2 billion, according to The Wall Street Journal.

    IBM said Tuesday it is buying SoftLayer Technologies, as the computing giant aims to bolster its cloud computing efforts.

    While financial terms of the deal were not disclosed, the Wall Street Journal said the acquisition is worth around $2 billion, citing a person familiar with the deal.

    IBM will create a new cloud services division within its Global Services unit that will house SoftLayer as a standalone company. It will act as a junction box between other cloud services the company owns.

    Reply
  33. Tomi Engdahl says:

    Red Hat parachutes into crowded PaaS market
    OpenShift jostles with Azure, GAE, Heroku, Elastic Beanstalk, for developers
    http://www.theregister.co.uk/2013/06/10/red_hat_openshift_ga/

    After an extended beta, Red Hat’s OpenShift PaaS is ready for general consumption, pitting the Linux company’s platform cloud against similar products from Amazon, Google, Microsoft, and others.

    OpenShift was launched two years ago, and only now is it being commercialized. Prices for OpenShift Online start at $20 per month, which gets developers three small application containers (“gears” in Red Hat parlance), the company announced on Monday.

    Along with this they get access to Red Hat technical support and 6GB of storage per gear. Additional app containers are charged at $0.04 per hour for a small gear

    This compares with a base price of $0.05 per hour for 512MB app containers in PaaS leader Heroku, or $0.10 for 1024MB ones.
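
    Hourly rates hide the monthly picture; here is a quick Python sketch comparing the quoted prices for one always-on container over a 730-hour month (the rates are those from the excerpt above):

    # Monthly cost of one always-on app container at the quoted rates.
    HOURS_PER_MONTH = 730

    offers = {
        "OpenShift small gear": 0.04,    # $/hour
        "Heroku 512MB container": 0.05,
        "Heroku 1024MB container": 0.10,
    }

    for name, rate in offers.items():
        print(f"{name}: ${rate * HOURS_PER_MONTH:.2f}/month")
    # OpenShift: $29.20, Heroku 512MB: $36.50, Heroku 1024MB: $73.00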

    When we asked Red Hat about the hype that PaaS has received versus its adoption, Badani said: “It’s hard for me to say exactly where we are on the hype cycle because I’m in the tornado. [PaaS] sounds fantastic… it seems like a natural thing to do, but that being said you’ve got to say, that’s interesting, but I need to stand up my apps and see if it’s actually for me.”

    Reply
  34. Tomi says:

    CIA spooks picked Amazon’s “superior” cloud over IBM
    Procurement report reveals tech gap in cloud cold war
    http://www.theregister.co.uk/2013/06/15/cia_amazon/

    The CIA picked Amazon over IBM for a lucrative government contract not because of price, but because of the company’s “superior technical solution” – a view that contrasts with IBM’s vision of itself as the go-to tech organization for governments.

    The revelation came to light on Friday when the US Government Accountability Office released a partially-redacted report that outlined the reasons why the spooks plumped for Amazon over IBM for a $600m private cloud contract, and why IBM protested this decision.

    Although Amazon came in with an evaluated price of $148 million per year versus IBM’s $94 million, the CIA chose Amazon for its technical sophistication, the report says.

    “While IBM’s proposal offered an evaluated [deleted] price advantage over 5 years, the [source selection authority] concluded that this advantage was offset by Amazon’s superior technical solution,” the report says.

    In defense of Amazon’s reliability, the report quotes a glowing endorsement of the AWS cloud by the Jet Propulsion Laboratory, which said that it was not affected by outages in specific Amazon data centers because “we implement failover and elastic load balancing but it’s simple and inexpensive and very much worth it”.

    What the report highlights is that though IBM came in with a lower price than Amazon, its technology was seen to be simply lacking by the procurement bods over at the CIA.

    A specific area where IBM fell down was in its ability to provide auto-scaling within a platform-as-a-service environment consisting of thousands of nodes that need to process MapReduce jobs over raw datasets of 100TB in size at a time. In other words – Amazon’s Hadoop tech beat IBM’s.

    Given IBM’s recent anointment of OpenStack as the preferred platform for its SmartCloud technology, the fact that OpenStack is several years behind Amazon Web Services in capability, the lackluster take-up of SmartCloud so far, and even the recently acquired SoftLayer bare-metal cloud, we reckon that Amazon’s technical dominance over IBM is likely to hold true for some time to come.

    Reply
  35. Tomi Engdahl says:

    Amazon’s Invasion of the CIA Is a Seismic Shift in Cloud Computing
    http://www.wired.com/wiredenterprise/2013/06/amazon-cia/

    The rumors are true. Amazon is providing cloud services to the CIA. But what’s most intriguing about the multi-million-dollar deal is not what Amazon is doing, but how the company is doing it — and what that means for the future of that thing called cloud computing.

    The deal was big news across the web. Amazon, pundits said, had stepped up its effort to challenge old-school giants like IBM in an area the old guard had long dominated: federal contracting.

    For years, cloud computing has been defined by sharp contrast in philosophy. New-age companies like Amazon and Google said computing power should be offered over the internet, much like electricity is offered over the grid. This, they said, was cloud computing. But old-school companies like IBM and HP — companies threatened by this new way of doing things — urged businesses to duplicate cloud computing services like Amazon EC2 and Google Compute Engine inside private data centers, arguing that this provided greater security and privacy. You could still have cloud computing, the old guard said, without the public internet.

    Amazon, in particular, scoffed at this notion of the “private cloud.” Behind such voices as Andy Jassy, the head of the company’s Amazon Web Services business, and AWS chief technology officer Werner Vogels, the web giant made a point of telling the world a private cloud was not a cloud — that a cloud, by definition, was delivered to everyone, across the public internet.

    Yes, some of this was just semantics, an effort to grab hold of a marketing term — cloud — that has become vitally important in the computing world.

    Until now.

    Amazon declined to discuss its contract with the CIA. But in typical fashion, it did provide a canned statement, arguing that the CIA contract does not represent that big of a change for the company. Amazon already offers cloud services, known as GovCloud and FinQloud, designed specifically for government agencies and financial institutions. “We can tell you that GovCloud and FinQloud are examples of ‘community clouds’ where we are delivering members-only implementations of AWS to groups of organizations who share specific requirements,” the statement reads.

    But GovCloud and FinQloud reside inside Amazon data centers. The CIA deal is something different. The GAO’s report makes it clear that building cloud services inside CIA data centers is part of the pact, and a source familiar with Amazon’s thinking confirms this represents a significant change in strategy for the web giant.

    Amazon has been hugely successful offering its public cloud services to developers and startups — by one estimate, AWS now runs as much as one percent of the internet — but it’s now looking for ways to expand its cloud business into much larger operations, the so-called “enterprise” and government agencies such as the CIA.

    Reply
  36. Tomi Engdahl says:

    99.999 Is Not Enough: An OpenCloud Approach to Delivering Application Uptime and Performance
    http://www.rackspace.com/knowledge_center/whitepaper/99999-is-not-enough-an-opencloud-approach-to-delivering-application-uptime-and?cm_mmc=SMB12Display-_-Techmeme-_-AppDev-_-whitepaper

    The pressure to keep vital applications online and performing well is extreme. The stakes are high; application downtime means loss of revenue, and application slowdown means loss of customers.

    At the same time, it is hard to achieve end-to-end visibility of production environments because they span data centers, vendors, and even internal IT teams. Sometimes the only group that can help troubleshoot problems for an application is the development team; this diverts the time of important resources.

    As a result, IT departments remain mired in the present, tied to keeping the application up and running, and expected to avoid problems from the past. Looking strategically toward the future is a luxury many can’t afford, despite the constant demands on IT for the newest and latest.

    In this white paper CITO Research examines how Rackspace® Critical Application Services can help clients achieve end-to-end visibility of their application environments, maintain high performance, and help prevent applications from crashing, all at a reasonable monthly cost.

    Reply
  37. Tomi Engdahl says:

    Points to Consider When Choosing a Hosting Company
    http://smallbiztrends.com/2009/10/choosing-hosting-company.html

    Here’s how to get in front of the curve, anticipate issues, and determine whether a hosting company is a good fit for your needs and will be there when you need them most:

    1. Contact current customers of the hosting provider. See how satisfied they really are, whether they have encountered any problems, and how the hosting company responded.

    2. Pick up the phone and call the support line. Ask a few questions and see how they respond.

    3. “Make sure you understand the different packages and services the company provides,” says security consultant Mitnick. “Read the website; ask questions.” There are a number of factors to consider. How much storage space will you get? What about bandwidth and data transfer – how much is covered? Will you be charged for over-usage and if so, how much? How frequently will site backups be made? What’s the hosting company’s uptime / downtime experience? What level and type of customer support will you be entitled to with the package you choose – email-only support, telephone customer support during business hours, or telephone support 24/7? What level of security monitoring and intrusion prevention/detection is available?

    4. Look for a secure provider. In today’s world, where intrusion attacks have increased dramatically, security is a much bigger issue than in the past for small businesses.

    Bottom line: Next time you are in the market for website hosting, take the time to make an informed decision. Do not rush into it without doing due diligence. You may regret a snap decision later on when you find out just how momentous your decision was for your company.

    Reply
  38. Tomi Engdahl says:

    Oracle and Microsoft To Announce Cloud Partnership Monday
    http://developers.slashdot.org/story/13/06/23/0255243/oracle-and-microsoft-to-announce-cloud-partnership-monday

    “On Monday Microsoft and Oracle are expected to announce a ‘cloud’ partnership.”

    Comments:
    Microsoft has built an impressive new entrant to the Infrastructure-as-a-Service market, and Ubuntu is there for customers who want to run workloads on Azure that are best suited to Linux. Windows Azure was built for the enterprise market, an audience which is increasingly comfortable with Ubuntu as a workhorse for scale-out workloads; in short, it’s a good fit for both of us, and it’s been interesting to do the work to bring Ubuntu to the platform.

    Reply
  39. Tomi Engdahl says:

    Microsoft Pumps $700 Million Into Iowa Data Center
    http://allthingsd.com/20130623/microsoft-pumps-700-million-into-iowa-data-center/

    The mysterious company behind the $700 million “Project Mountain” data center in Des Moines, Iowa, finally has a name. And it’s not Apple or Amazon.

    It’s Microsoft.

    Microsoft is the latest tech behemoth to choose Iowa as the site of its data-center ambitions. Earlier this year, Facebook announced plans to build a $300 million server farm in the state. And last year, Google said it would plow another $400 million into its data center in Council Bluffs.

    Reply
  40. Tomi Engdahl says:

    Oracle and Microsoft have signed a cooperation agreement that will bring Oracle products, such as its databases and the Java programming language, to Microsoft’s Azure cloud computing platform.

    In addition, under the agreement Oracle will support Microsoft’s Windows Server Hyper-V virtualization platform. The IT giants announced the deal on Monday.

    The companies explain that the cooperation stems from customers’ desire and need for more adaptable IT systems.

    “In the cloud era, cooperation behind the scenes is not enough. We wanted more, and from Oracle we are getting more,” said Microsoft CEO Steve Ballmer.

    Source: http://www.tietoviikko.fi/kaikki_uutiset/jatit+lyovat+hynttyyt+yhteen++oracle+ja+microsoft+tekevat+tuotteistaan+yhteensopivia/a911177?s=r&wtm=tietoviikko/-25062013&

    Reply
  41. Tomi Engdahl says:

    Microsoft loads rival into its platform cloud
    Engine Yard revs up inside Azure as Redmond beckons to developers
    http://www.theregister.co.uk/2013/06/26/azure_engine_yard_cloud/

    Microsoft is adding a rival technology into its Windows Azure cloud as the company strives to gain relevance among developers.

    The addition of the Engine Yard platform-as-a-service into the Windows Azure marketplace was announced by the two companies on Wednesday. It will give developers the option of using Engine Yard over Azure’s own platform features, which were much hyped by Microsoft at launch but have since received less attention than its low end infrastructure-as-a-service tech.

    Engine Yard is a tool that automates infrastructure, middleware, and management, giving developers a platform for running apps built in Ruby, Node.js, and PHP. It competes with offerings from monolithic cloud providers such as Google App Engine, Microsoft Azure, Amazon Elastic Beanstalk, and Salesforce’s Heroku, as well as open source projects like Pivotal’s Cloud Foundry and Red Hat’s OpenShift.

    “We want to be fully multi-cloud,”

    Reply
  42. Tomi Engdahl says:

    Test equipment gets boost from the cloud
    http://www.edn.com/electronics-blogs/test-times/4419233/Test-equipment-gets-boost-from-the-cloud

    The rapid increase in the adoption of cloud-based services among enterprises, financial institutions, healthcare, and government sectors will drive the demand for test solutions among service providers and network equipment manufacturers.

    According to Frost & Sullivan’s recent research, the global cloud infrastructure testing market generated revenue of $95.2 million in 2012.

    To mitigate these challenges and ensure customer satisfaction, cloud infrastructure must offer high reliability and quality. Cloud service providers typically take individual approaches to infrastructure testing, and there are no widely accepted methods in the industry. As businesses continue to move their applications and data to the cloud, the need for service providers and Network Equipment Manufacturers (NEMs) to invest in test solutions is expected to increase rapidly. Other key factors behind the growth of this market are the increasing importance of security, service assurance, and the adoption of IPv6.

    The cloud infrastructure testing market is expected to reach revenue of $366.2 million in 2020 by growing at a CAGR of 18.3 percent from 2012 to 2020.
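
    The two revenue figures are consistent with the stated growth rate: compounding $95.2 million at 18.3 percent a year over the eight years from 2012 to 2020 lands at roughly $365 million, close to the reported $366.2 million. A minimal Python sketch to check the arithmetic:

        # Verify the Frost & Sullivan projection: $95.2M in 2012
        # compounding at 18.3% per year should reach ~$366M by 2020.
        base_revenue_musd = 95.2   # 2012 revenue, millions of USD
        cagr = 0.183               # compound annual growth rate
        years = 2020 - 2012        # eight compounding periods

        projected = base_revenue_musd * (1 + cagr) ** years
        print(f"Projected 2020 revenue: ${projected:.1f}M")  # ~$365.2M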

    Cloud Security is a MUST!

    Importance of SLAs in the Cloud

    Enterprise Adoption of Cloud Computing

    Impact of Network Downtime on Business

    Reply
  43. Tomi Engdahl says:

    Lockheed Martin clinches $1bn US Government cloud deal
    http://www.cloudpro.co.uk/saas/5885/lockheed-martin-clinches-1bn-us-government-cloud-deal

    Lockheed Martin has won a $1 billion (£651.46 million) contract to help the US Department of the Interior (DoI) move from on premise systems to the cloud.

    The DoI’s data is currently housed in over 400 datacentres, rooms and closets. The organisation has said it has chosen to transition to a cloud system in order to increase efficiency and meet the US Federal Data Center Consolidation initiative.

    The contract was awarded on an indefinite delivery/indefinite quantity basis, with options to extend it through to the end of 2023.

    “We expect to provide greater variety of services, security, and support for application owners and employees.”

    Reply
  44. Tomi Engdahl says:

    NSA spying may cost cloud companies $35 billion
    http://blog.sfgate.com/techchron/2013/08/08/nsa-spying-may-cost-cloud-companies-35-billion/

    The National Security Agency surveillance programs aren’t just costing the United States credibility on the world stage — they’re costing domestic tech companies big money.

    The recent revelations that the NSA is closely tracking the electronic footprints of foreign citizens could cut as much as $35 billion off the top lines of U.S. cloud computing companies over the next three years. It might also put the nation’s leadership position in the fast growing sector at stake.

    That’s according to a new study by the Information Technology and Innovation Foundation, which tried to assess the financial toll of the clandestine PRISM program uncovered by The Guardian and Washington Post in early June. Leaks from defense contractor Edward Snowden showed that the NSA is routinely analyzing emails, photographs, online searches and other digital files that cross the servers of tech giants like Apple, Facebook, Google, Microsoft and Yahoo.

    “the severity of the threat depends on whether it will come to light that other governments also have Prism-like programs.”

    “It remains to be seen how big a hit,”

    The ITIF based its conclusions, which it acknowledged were a rough guess, on a recent survey of 500 respondents by the Cloud Security Alliance. The industry group found that “56 percent of non-US residents were less likely to use US-based cloud providers, in light of recent revelations about government access to customer information.”

    The Cloud Security Alliance survey suggests overseas citizens and businesses have begun to wonder if they can trust their information with major U.S. companies.

    Reply
  45. Tomi Engdahl says:

    Microsoft Is Working On a Cloud Operating System For the US Government
    http://yro.slashdot.org/story/13/08/11/2058235/microsoft-is-working-on-a-cloud-operating-system-for-the-us-government

    “It seems that Microsoft is relying even more on the opportunities provided by the cloud technology. The Redmond behemoth is preparing to come up with a cloud operating system that is specially meant for government purposes.”

    “Government agencies already use two of Microsoft’s basic cloud products: Windows Azure and Windows Server.”

    “a somewhat new Cloud OS that could bear the name ‘Fairfax’”

    “enhanced security, relying on physical servers on site at government locations.”

    Reply
  46. Tomi Engdahl says:

    How Much Will PRISM Cost the U.S. Cloud Computing Industry?
    http://www2.itif.org/2013-cloud-computing-costs.pdf

    BY DANIEL CASTRO
    AUGUST 2013
    The recent revelations about the extent to which the National Security Agency (NSA) and other U.S. law enforcement and national security agencies have used provisions in the Foreign Intelligence Surveillance Act (FISA) and USA PATRIOT Act to obtain electronic data from third parties will likely have an immediate and lasting impact on the competitiveness of the U.S. cloud computing industry if foreign customers decide the risks of storing data with a U.S. company outweigh the benefits.

    The U.S. cloud computing industry stands to lose $22 to $35 billion over the next three years as a result of the recent revelations about the NSA’s electronic surveillance programs.

    What is the basis for these assumptions? The data are still thin (clearly this is a developing story and perceptions will likely evolve), but in June and July of 2013 the Cloud Security Alliance surveyed its members, who are industry practitioners, companies, and other cloud computing stakeholders, about their reactions to the NSA leaks. For non-U.S. residents, 10 percent of respondents indicated that they had cancelled a project with a U.S.-based cloud computing provider; 56 percent said that they would be less likely to use a U.S.-based cloud computing service. For U.S. residents, slightly more than a third (36 percent) indicated that the NSA leaks made it more difficult for them to do business outside of the United States.

    Thus we might reasonably conclude that, given current conditions, U.S. cloud service providers stand to lose somewhere between 10 and 20 percent of the foreign market in the next few years.
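
    The $22 to $35 billion headline range comes from applying those 10 and 20 percent loss shares to ITIF’s projections of U.S. providers’ foreign cloud revenue over three years. The sketch below only illustrates the shape of that calculation; the yearly revenue figures are placeholder assumptions of mine, not ITIF’s published inputs, so the output brackets rather than reproduces the report’s range:

        # Shape of the ITIF loss estimate: cumulative foreign revenue
        # times an assumed loss share. The yearly figures below are
        # illustrative placeholders, NOT ITIF's actual inputs.
        foreign_revenue_usd = {2014: 60e9, 2015: 65e9, 2016: 75e9}  # assumed

        for loss_share in (0.10, 0.20):  # lose 10-20% of the foreign market
            lost = sum(r * loss_share for r in foreign_revenue_usd.values())
            print(f"{loss_share:.0%} loss -> ${lost / 1e9:.0f}B over three years")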

    Reply
  47. Tomi Engdahl says:

    Tier 3 debuts DIY cloud networks
    Network administration with a robot touch
    http://www.theregister.co.uk/2013/08/14/tier_3_cloud_updates/

    Cloud operator Tier 3 has added reconfigurable networking to its cloud technology in an attempt to make it easier for resellers to massage the tech to suit their needs.

    The upgrades, announced on Wednesday, see Tier 3 implement self-service networking capabilities for its ESX-based cloud that will let customers and resellers create and manage load balancers, virtual LANs, site-to-site VPNs, and custom IP ports for edge firewalls – all without having to contact the company’s network operations centre.

    This upgrade is designed to give Tier 3 customers greater reconfigurability for their networks and follows the introduction of a global object store based on the well-thought-of Riak CS technology.

    “What we’ve spent a lot of time doing is trying to raise the bar on self service,” Jared Ruckel, Tier 3’s director of product marketing, says.

    Though the company is built on top of VMware, it does not use VMware’s software-defined networking “Nicira” technology to deliver this service.

    “NSX (VMware Nicira) is something that we are currently looking into.”

    Though Tier 3’s press materials bill it as a “leading public cloud,” it is rather slight: the company has some 30TB of RAM capacity spread across an undisclosed number of Dell servers in nine colocation facilities in North America and Western Europe.
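
    The self-service angle here means that provisioning steps which previously required a ticket to the network operations centre are exposed programmatically. Tier 3’s actual API is not shown in the article, so the endpoint and payload in the following sketch are entirely hypothetical; it only illustrates what creating a load balancer through such an interface typically looks like:

        # Hypothetical self-service provisioning call. The base URL,
        # endpoint, payload fields, and token are invented for
        # illustration; this is NOT Tier 3's documented API.
        import requests

        API = "https://api.example-cloud.test/v1"          # placeholder
        HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder

        # Put a load balancer in front of two web VMs without
        # filing a ticket with the network operations centre.
        resp = requests.post(
            f"{API}/load-balancers",
            json={
                "name": "web-tier-lb",
                "protocol": "HTTPS",
                "port": 443,
                "members": ["10.0.0.11", "10.0.0.12"],
            },
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        print("Provisioned:", resp.json())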

    Reply
  48. Tomi Engdahl says:

    Forrester: NSA Spying Could Cost Cloud $180B, But Probably Won’t
    http://yro.slashdot.org/story/13/08/15/2259206/forrester-nsa-spying-could-cost-cloud-180b-but-probably-wont

    “Forrester’s James Staten argues in a blog post that the U.S. cloud computing industry stands to lose as much as $180 billion, using the reasoning put forth by a well-circulated report from The Information Technology and Innovation Foundation that pegged potential losses closer to $35 billion”

    Reply
  49. Tomi Engdahl says:

    The Cost of PRISM Will Be Larger Than ITIF Projects
    http://blogs.forrester.com/james_staten/13-08-14-the_cost_of_prism_will_be_larger_than_itif_projects

    Earlier this month The Information Technology & Innovation Foundation (ITIF) published a prediction that the U.S. cloud computing industry stands to lose up to $35 billion by 2016 thanks to the National Security Agency (NSA) PRISM project, leaked to the media in June. We think this estimate is too low and could be as high as $180 billion or a 25% hit to overall IT service provider revenues in that same timeframe. That is, if you believe the assumption that government spying is more a concern than the business benefits of going cloud.

    The high-end figure assumes US-based cloud computing providers would lose 20% of the potential revenues available from the foreign market. However, we believe there are two additional impacts that would be felt from this revelation:

    1. US customers would also bypass US cloud providers for their international and overseas business – costing these cloud providers up to 20% of this business as well.

    2. Non-US cloud providers will lose as much as 20% of their available overseas and domestic opportunities due to other governments taking similar actions.

    If, as ITIF estimates, half the cloud market will be fulfilled by non-US providers, and assuming this factor has just as much impact on them as the PRISM leak will have on US providers, then non-US cloud providers would take a hit of another $35 billion by 2016.

    Add it all up and you have a net loss for the service provider space of about $180 billion by 2016 which would be roughly a 25% decline in the overall IT services market by that final year, using Forrester market estimates. All from the unveiling of a single kangaroo-court action called PRISM.

    Scary picture but probably unrealistic.
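
    Staten’s $180 billion is essentially the ITIF figure with two further 20-percent haircuts stacked on top. A back-of-the-envelope sketch of how the components could add up (the first and third figures are stated in the post; the second is my inferred remainder, not Forrester data):

        # Back-of-the-envelope reconstruction of Forrester's ~$180B figure.
        # The first and third components are stated in the blog post; the
        # second is inferred as the remainder and is an assumption.
        itif_foreign_loss     = 35e9   # US providers' lost foreign business (ITIF)
        non_us_provider_loss  = 35e9   # non-US providers' similar hit (stated)
        us_intl_business_loss = 110e9  # inferred remainder (assumption)

        total = itif_foreign_loss + non_us_provider_loss + us_intl_business_loss
        print(f"Projected hit by 2016: ${total / 1e9:.0f}B")  # ~$180B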

    Reply
  50. Tomi Engdahl says:

    Why Some Startups Say the Cloud Is a Waste of Money
    http://www.wired.com/wiredenterprise/2013/08/memsql-and-amazon/

    Eric Frenkiel is through with convention and conformity. It was just too expensive.

    In Silicon Valley, tech startups typically build their businesses with help from cloud computing services — services that provide instant access to computing power via the internet — and Frenkiel’s startup, a San Francisco outfit called MemSQL, was no exception. It rented computing power from the granddaddy of cloud computing, Amazon.com.

    But in May, about two years after MemSQL was founded, Frenkiel and company came down from the Amazon cloud, moving most of their operation onto a fleet of good old fashioned computers they could actually put their hands on. They had reached the point where physical machines were cheaper — much, much cheaper — than the virtual machines available from Amazon. “I’m not a big believer in the public cloud,” Frenkiel says. “It’s just not effective in the long run.”

    Frenkiel’s story shows that while cloud computing is suited to many tasks — including getting your startup off the ground or running a modest website — it doesn’t make sense for others. When Zynga’s online gaming empire expanded to epic sizes in 2012, the company made headlines in shifting much of its operation off the Amazon cloud and into its own data centers, but smaller operations are making the move too.

    “I don’t know how much this is written about,” says Kit Colbert, an engineer at VMware, whose software is used by cloud services as well as in private data centers. “Within IT departments, public clouds do tend to get more expensive over time, especially when you reach a certain scale.”

    This past April, MemSQL spent more than $27,000 on Amazon virtual servers. That’s $324,000 a year. But for just $120,000, the company could buy all the physical servers it needed for the job — and those servers would last for a good three years. The company will add more machines over that time, as testing needs continue to grow, but its server costs won’t come anywhere close to the fees it was paying Amazon.

    Frenkiel estimates that, had the company stuck with Amazon, it would have spent about $900,000 over the next three years. But with physical servers, the cost will be closer to $200,000. “The hardware will pay for itself in about four months,” he says.

    “The public cloud is phenomenal if you really need its elasticity,” Frenkiel says. “But if you don’t — if you do a consistent amount of workload — it’s far, far better to go in-house.”
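
    Frenkiel’s break-even arithmetic checks out: at $27,000 a month on Amazon versus a one-off $120,000 for hardware, the servers pay for themselves in about four and a half months, and a flat three-year extrapolation of the AWS bill lands near his $900,000 estimate. A quick sketch (power, colocation, and staff costs are not itemized in the article, so they are ignored here):

        # Check the MemSQL cloud-vs-hardware arithmetic from the article.
        # Power, colocation, and staffing are ignored (not itemized there).
        aws_monthly_usd   = 27_000    # April bill for Amazon virtual servers
        hardware_cost_usd = 120_000   # one-off purchase, ~3-year useful life
        months            = 36        # three-year horizon

        breakeven_months = hardware_cost_usd / aws_monthly_usd
        aws_three_years  = aws_monthly_usd * months

        print(f"Hardware pays for itself in ~{breakeven_months:.1f} months")  # ~4.4
        print(f"Three flat years on AWS: ${aws_three_years:,}")  # $972,000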

    Geolocation outfit Geoloqi moved off of Amazon in 2011 — but then moved back a year later. “We reached a point where we needed to be able to scale faster than would have been practical with physical servers,”

    Reply
