Docker and other Linux containers

Virtual machines are mainstream in cloud computing. The newest development in this arena is fast and lightweight process virtualization. Linux-based container infrastructure is an emerging cloud technology that provides its users with an environment as close as possible to a standard Linux distribution.

The article Linux Containers and the Future Cloud explains that, as opposed to para-virtualization solutions (Xen) and hardware virtualization solutions (KVM), which provide virtual machines (VMs), containers do not create other instances of the operating system kernel. The advantage this gives containers over VMs is that starting and shutting down a container is much faster than starting and shutting down a VM. The idea of process-level virtualization in itself is not new (remember Solaris Zones and BSD jails).

All containers on a host run under the same kernel. Basically, a container is a Linux process (or several processes) that has special features and that runs in an isolated environment, configured on the host. Containerization is a way of packaging up applications so that they share the same underlying OS but are otherwise fully isolated from one another, with their own CPU, memory, disk and network allocations to work within – going a few steps further than the usual process separation in Unix-y OSes, but not completely down the per-app virtual machine route. The underlying infrastructure of modern Linux-based containers consists mainly of two kernel features: namespaces and cgroups. Well-known Linux container technologies are Docker, OpenVZ, Google containers, Linux-VServer and LXC (LinuX Containers).
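
To get a feel for these two building blocks without Docker, you can poke at them directly from a shell. A minimal sketch, assuming root, a cgroups v1 layout (typical for distributions of this era) and unshare from util-linux 2.23 or newer:

sudo unshare --pid --fork --mount-proc /bin/bash  # new PID namespace: inside, ps shows this shell as PID 1
sudo mkdir /sys/fs/cgroup/cpu/demo                # create a cpu cgroup named "demo"
echo 50000 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us  # cap the group at ~50% of one core (default period is 100000 us)
echo $$ | sudo tee /sys/fs/cgroup/cpu/demo/tasks  # move the current shell (and its children) into the cgroup

Namespaces control what a process can see, cgroups control what it can use; container runtimes like Docker simply wire the two together for you.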

Docker is an open-source project that automates the creation and deployment of containers. It is an open platform for developers and sysadmins to build, ship, and run distributed applications, consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows.
Docker started as an internal project at a Platform-as-a-Service (PaaS) company called dotCloud at the time, now Docker Inc. Docker is currently available only for Linux (kernel 3.8 or above) and utilizes the LXC toolkit. It runs on distributions like Ubuntu 12.04 and 13.04, Fedora 19 and 20, and RHEL 6.5 and above, and on cloud platforms like Amazon EC2, Google Compute Engine and Rackspace.

Linux containers are becoming a way of packaging up applications and related software for movement over the network or Internet. You can create images by running commands manually and committing the resulting container, but you can also describe them with a Dockerfile. Docker images can be stored on a public repository, and Docker can snapshot a container's state as a new image at any point. Docker, the company that sponsors the open source Docker project, is gaining allies in making its commercially supported Linux container format a de facto standard. Red Hat has woken up to the growth of Linux containers and has begun certifying applications running in the sandboxing tech.
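
Both image-building workflows are short enough to sketch here (image and package names are only illustrative):

sudo docker run -i -t ubuntu /bin/bash             # start a container interactively
# ... inside the container: apt-get install -y nginx, then exit ...
sudo docker commit <container-id> myuser/mynginx   # snapshot the modified container as a new image

Or, the declarative route with a Dockerfile:

cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF
sudo docker build -t myuser/mynginx .              # build the image from the Dockerfile
sudo docker push myuser/mynginx                    # share it via a public repository such as Docker Hub (after docker login)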

Docker was all over IT news last week because Docker 1.0 was released. Here are links to several articles on Docker:

Docker opens online port for packaging and shipping Linux containers

Docker, Open Source Application Container Platform, Has 1.0 Coming Out Party At Dockercon14

Google Embraces Docker, the Next Big Thing in Cloud Computing

Docker blasts into 1.0, throwing dust onto traditional hypervisors

Automated Testing of Hardware Appliances with Docker

Continuous Integration Using Docker, Maven and Jenkins

Getting Started with Docker

The best way to understand Docker is to try it!
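
On a recent Linux box, trying it takes only a couple of commands (Docker 1.0-era syntax; the daemon runs as root, hence the sudo):

sudo docker run hello-world             # pulls a tiny test image and prints a greeting
sudo docker run -i -t ubuntu /bin/bash  # drops you into a disposable Ubuntu shell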

This Docker thing looks interesting. Maybe I should spend some time testing it.

341 Comments

  1. Tomi Engdahl says:

    Docker’s Solution to Slimmer Containers
    http://www.linuxjournal.com/content/dockers-solution-slimmer-containers?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    Recently, I wrote about how Docker is investing in Unikernels to reduce the size of its containers, but there is more than one way to skin a cat. Unikernels are a hot new technology, but many developers prefer stability and maturity over “new and shiny”. And, that’s where Alpine Linux comes in.

    Docker containers are an amazing boon to developers and operations, and they’re essential tools to the emerging field of DevOps. By bundling an application with its runtime environment, you sidestep a potential world of pain.
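
    The size difference is easy to check yourself; a quick sketch (exact sizes vary by tag):

    sudo docker pull ubuntu && sudo docker pull alpine
    sudo docker images    # the ubuntu base image weighs in around 180 MB, alpine around 5 MB

    Often, changing a Dockerfile's first line from FROM ubuntu to FROM alpine is most of the work of slimming an image.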

    Reply
  2. Tomi Engdahl says:

    Cisco CTO: Containers will ride to private cloud’s rescue. Oh yes!
    Translation: We’re touting services but please don’t forget to buy our on-prem kit
    http://www.theregister.co.uk/2016/03/03/cisco_cto_says_cloud_no_mo/

    Cisco Partner Summit: The emergence of containers will spark a renaissance for on-premises data centers, thus luring many businesses away from public cloud services, Cisco CTO Zorawar Biri Singh reckons.

    Speaking at the Cisco Partner Summit in San Diego, Singh said he believes as much as 30 per cent of public cloud workloads will be going offline in the next five years as customers opt instead for local data centers based on container stacks.

    Singh predicted that, as companies become more comfortable developing and deploying data centers with containers, larger deployments with public clouds will make less sense financially for many.

    “It is very expensive at that scale, as IT practitioners see simpler container-based infrastructure come out, they will build more smaller container-based data centers,” he said.

    Singh notes that Cisco would, well, obviously stand to profit from such a trend, though he argues that, with its focus on networking and UCS, Switchzilla has less to lose from public cloud growth than other server vendors.

    “There is a misperception that we are super exposed,” he said.

    “Overall port count decreases over time, but it is not as hard hit as compute and storage.”

    “We know exactly where our revenue base is, we are investing more in software because it is a natural balance,” he said. “There is nothing here that is a crazy leap.”

    Reply
  3. Tomi Engdahl says:

    Cloud Native Computing Foundation adopts Kubernetes
    Google-derived container code transferred to Foundation as its first project
    http://www.theregister.co.uk/2016/03/11/cloud_native_computing_foundation_adopts_kubernetes/

    The Cloud Native Computing Foundation (CNCF) formed last December has fired its first shot in anger, by deciding that Kubernetes is worthy of its community-coddling and standard-creating assistance.

    The CNCF formed under the auspices of the Linux Foundation and aims to cook up container-oriented and microservices-centric efforts.

    Kubernetes is a fine place for the Foundation to start, because it is a container orchestration tool

    Reply
  4. Tomi Engdahl says:

    Docker app for Windows 10 now in limited beta
    http://www.zdnet.com/article/docker-app-for-windows-10-now-in-limited-beta/

    Docker is launching a limited beta program for its new Docker for Windows and Docker for Mac apps.

    “Docker for Mac and Docker for Windows are at different stages of development, although they do share a significant code base,” blogged Patrick Chanezon, a member of the Docker technical staff and formerly a Director of Enterprise Evangelism at Microsoft. “Docker for Windows will initially be rolled out to users at a slower pace but will eventually offer all the same functionality as Docker for Mac. Docker for Windows currently only ships on Windows 10 editions that support Hyper-V.”

    Reply
  5. Tomi Engdahl says:

    CoreOS delivers container security with Clair
    http://www.zdnet.com/article/coreos-delivers-container-security-with-clair/

    Want to be certain that your containers’ software is safe and stable? Then CoreOS’s Container Image Analyzer 1.0 is for you.

    Reply
  6. Tomi Engdahl says:

    Docker for Mac and Windows Beta: the simplest way to use Docker on your laptop
    https://blog.docker.com/2016/03/docker-for-mac-windows-beta/

    To celebrate Docker’s third birthday, today we start a limited availability beta program for Docker for Mac and Docker for Windows, an integrated, easy-to-deploy environment for building, assembling, and shipping applications from Mac or Windows. Docker for Mac and Windows contain many improvements over Docker Toolbox.

    Faster and more reliable: no more VirtualBox! The Docker engine is running in an Alpine Linux distribution on top of an xhyve Virtual Machine on Mac OS X or on a Hyper-V VM on Windows, and that VM is managed by the Docker application. You don’t need docker-machine to run Docker for Mac and Windows.

    https://beta.docker.com/

    Reply
  7. Tomi Engdahl says:

    Linux is so grown up, it’s ready for marriage with containers
    Beats dating virtualisation, but – oh – the rules
    http://www.theregister.co.uk/2016/04/07/containers_and_linux/

    Linux is all grown up. It has nothing left to prove. There’s never been a year of the Linux desktop and there probably never will be, but it runs on the majority of the world’s servers. It never took over the desktop, it did an end-run around it: there are more Linux-based client devices accessing those servers than there are Windows boxes.

    Linux Foundation boss Jim Zemlin puts it this way: “It’s in literally billions of devices. Linux is the native development platform for every SOC. Freescale, Qualcomm, Intel, MIPS: Linux is the immediate choice. It’s the de facto platform. It’s the client of the Internet.”

    Linux is big business, supported by pretty much everyone – even Microsoft. Open source has won, but it did it by finding the niches that fit it best – and the biggest of these is on the millions of servers that power the Web. Linux is what runs the cloud, and the cloud is big business now.

    But VMs are expensive. Not in terms of money – although they can be – but in resources and complexity. Whole-system virtualisation is a special kind of emulator: under one host OS, you start another, guest one. Everything is duplicated – the whole OS, and the copy that does the work is running on virtual – in other words: pretend, emulated – hardware, with the performance overhead that implies. Plus, of course, the guest OS has to boot up like a normal one, so starting VMs takes time

    Which is what has led one wag to comment that: “Hypervisors are the living proof of operating system’s incompetence.”

    Fighting words! What do they mean, incompetence? Well, here are a few examples.

    The kernel of your operating system of choice doesn’t scale well onto tens of cores or terabytes of NUMA RAM? No problem: partition the machine, run multiple copies in optimally sized VMs.

    Your operating system isn’t very reliable? Or you need multiple versions, or specific app versions on the operating system? No problem. VMs give you full remote management, because the hardware is virtual. You can run lots of copies in a failover cluster – and that applies to the host hardware, too. VMs on a failed host can be auto-migrated to another.

    Make no mistake, virtualisation is a fantastic tool that has enabled a revolution in IT. There are tons of excellent reasons for using it, which in particular fit extremely well in the world of long-lived VMs holding elaborately configured OSs which someone needs to maintain. It enables great features, like migrating a live running VM from one host to another. It facilitates software-defined networking, simplifying network design. If you have stateful servers, full of data and config, VMs are just what you need.

    And in that world, proprietary code rules: Windows Server and VMware, and increasingly, Hyper-V.

    But it’s less ideal if you’re an internet-centric business, and your main concern is quick, scalable farms of small, mostly-stateless servers holding microservices built out of FOSS tools and technologies. No licences to worry about – it’s all free anyway. Spin up new instances as needed and destroy them when they’re not.

    Each instance is automatically configured with Puppet or Ansible, and they all run the same Linux distro – whatever your techies prefer, which probably means Ubuntu for most, Debian for the hardcore and CentOS for those committed to the RPM side of the fence.

    In this world, KVM and Xen are the big players, with stands and talks at events such as LinuxCon devoted to them. Free hypervisors for free operating systems – but the same drawbacks apply

    And the reason that everyone is talking about containers is they solve most of these issues. If your kernel scales well and all your workloads are on the same kernel anyway, then containers offer the isolation and scalability features of VMs without most of the overheads.

    We talked about how they work in 2011, but back then, Linux containers were still fairly new and crude.

    Since then, though, one product has galvanised the development of Linux containers: Docker.

    None of this means the end of “traditional” virtualisation. Containers are great for microservices, but at least in their current incarnations, they’re less ideal for existing complex server workloads.

    Reply
  8. Tomi Engdahl says:

    A big step forward in container standardization
    http://www.zdnet.com/article/a-big-step-forward-in-container-standardization/

    The Open Container Initiative has agreed to work on a common open container Image Format Specification.

    Server and cloud admins all agree that containers are great. What we don’t agree on is which containers are the best. Rather than let this spark into a standards fire-fight, the Open Container Initiative (OCI) has sought to create common container standards. The newest of these is the open container Image Format Spec project.

    The OCI’s first project, OCI Runtime Spec, set the rule on how to run containers. The new project, OCI Image Format Spec, provides an open container image specification. This is the build artifact that contains everything needed to run a piece of software. This is a major step toward the promise of “package once, run anywhere” containers.

    What does that mean – besides reminding you of Java’s slogan, write once, run anywhere? CoreOS CEO Alex Polvi’s answer: “It’s like Firefox and Chrome. Those are like Docker and rkt [two popular containers]; it’s like both of them sharing HTML5. It’s the thing developers develop against, and the web should just work the same in the browser.”

    The new specification will be based on Docker v2.2. It will also draw from CoreOS’s appc spec. This is a work in progress, but the goal is in sight. As Boulle explained, developers will be able “to package and sign application containers, then run them in a variety of container engines.”

    For users, “With a container image specification that anyone is free to influence and to implement, containers will run without modification in a variety of runtimes, from rkt and Docker, to Kubernetes and Amazon ECS.”

    Reply
  9. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft’s Azure Container Service is now generally available
    http://techcrunch.com/2016/04/19/microsofts-azure-container-service-is-now-generally-available/

    Azure Container Service, Microsoft’s container scheduling and orchestration service for its Azure cloud computing service, is now generally available.

    The service, which allows its users to choose either Mesosphere’s Data Center Operating System (DC/OS) or Docker’s Swarm and Compose to deploy and orchestrate their containers, was first announced in September 2015 and hit public preview this February.

    As Microsoft’s CTO for Azure (and occasional novelist) Mark Russinovich told me, he believes this ability to use both Docker Swarm/Compose and the open-source components of DC/OS — both of which are based on open-source projects — makes the Azure Container Service stand out from some of its competitors.

    Microsoft also believes that using these open-source solutions means its users can easily take their workloads and move them on-premise when they want (or move their existing on-premise solutions to Azure, too, of course).

    Azure Container Service
    Deploy and manage containers using the tools you choose
    https://azure.microsoft.com/en-us/services/container-service/

    Reply
  10. Tomi Engdahl says:

    Google brings robust cluster scheduling to its cloud
    http://www.computerworld.com/article/2491299/cloud-computing/google-brings-robust-cluster-scheduling-to-its-cloud.html

    Google Cloud users can now run Docker jobs alongside their Hadoop workloads in the same cluster

    Google is drawing from the work of the open-source community to offer its cloud customers a service to better manage their clusters of virtual servers.

    On Monday, the Google Cloud Platform started offering the commercial version of the open-source Mesos cluster management software, offered by Mesosphere.

    With the Mesosphere software, “You can create a truly multitenant cluster, and that drives up utilization and simplifies operations,” said Florian Leibert, co-founder and CEO of Mesosphere. Leibert was also the engineering lead at Twitter who introduced Mesos to the social media company.

    Reply
  11. Tomi Engdahl says:

    New Container Image Standard Promises More Portable Apps
    http://www.linuxjournal.com/content/new-container-image-standard-promises-more-portable-apps?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    After all, if you use Docker for development, testing and deployment, why should you care about supporting Kubernetes or rkt? The problem becomes more apparent when you want to move from one cloud host to another, or if another container engine releases a feature that you realize you can’t live without.

    As things stand now, your only solution would be to rebuild the application images using the new container engine, but that introduces variables of its own, so it’s no longer possible to guarantee the code will run identically on both platforms.

    The ideal solution would be to build the app once and run it anywhere—and that means on any container runtime.

    That’s the goal of the Open Container Initiative, an organization that was formed by the Linux Foundation. Its goal is to close the gap between different container engines, so applications can move freely from one runtime to another.

    The new standard also will be good news for customers of off-the-shelf solutions that ship in container form. They no longer will be tied down to the developer’s choice of container runtime. Instead, they can choose the stack that suits their requirements and circumstances.

    Reply
  12. Tomi Engdahl says:

    New Image Specification Project for Container Images
    https://www.opencontainers.org/news/blogs/2016/04/new-image-specification-project-container-images

    The OCI recently formed the open container Image Format spec project. This project is tasked with creating a software shipping container image format spec with security and federated naming as key components.

    This represents an expansion of the OCI’s first project, OCI Runtime Spec, that focuses on how to run containers. Industry leaders are collaborating to enable users to package and sign their application, then run it in any container runtime environment of their choice – such as Docker or rkt. With the development of the new OCI Image Specification for container images, both vendors and users can benefit from a common standard that is widely deployable across any supporting environment of the user’s choice.

    “The OCI was formed in a vendor-neutral setting with industry leaders to come together on container standards,” said Chris Aniszczyk for the Open Container Initiative at The Linux Foundation. “With the formation of the OCI Image Format project we celebrate an important milestone that is fulfilling what the group intended – to develop a standard image format that vendors and users can all widely use and benefit from.”

    Reply
  13. Tomi Engdahl says:

    Serdar Yegulalp / InfoWorld:
    CoreOS Stackanetes uses Kubernetes to deploy OpenStack as a set of containerized apps, simplifying management of OpenStack components

    CoreOS Stackanetes puts OpenStack in containers for easy management
    http://www.infoworld.com/article/3061676/openstack/coreos-stackanetes-puts-openstack-in-containers-for-easy-management.html

    The ongoing effort to make OpenStack easier to deploy and maintain has received an assist from an unexpected source: CoreOS and its new Stackanetes project, announced today at the OpenStack Summit in Austin.

    Containers are generally seen as a lighter-weight solution to many of the problems addressed by OpenStack. But CoreOS sees Stackanetes as a vehicle to deliver OpenStack’s benefits — an open source IaaS with access to raw VMs — via Kubernetes and its management methodology.

    OpenStack in Kubernetes

    Kubernetes, originally created by Google, manages containerized applications across a cluster. Its emphasis is on keeping apps healthy and responsive with a minimal amount of management. Stackanetes uses Kubernetes to deploy OpenStack as a set of containerized applications, one container for each service.

    The single biggest benefit, according to CoreOS, is “radically simplified management of OpenStack components,” a common goal of most recent OpenStack initiatives.

    But Stackanetes is also a “single platform for consistently managing both IaaS and container workloads.” OpenStack has its own container management service, Magnum, used mainly as an interface to run Docker and, yes, Kubernetes instances within OpenStack.

    Stackanetes is mostly concerned with making sure individual services within OpenStack remain running — what CoreOS describes as the self-healing capacity. It’s less concerned with under-the-hood configurations of individual OpenStack components — which OpenStack has been trying to make less painful.

    With Stackanetes, CoreOS is betting more people would rather use Kubernetes as a deployment and management mechanism for containers than OpenStack.

    Reply
  14. Tomi Engdahl says:

    Sean Michael Kerner / eWeek:
    Docker’s security scanning product for repositories becomes generally available

    Docker Rolls Out Tool to Scan Containers for Vulnerabilities
    http://www.eweek.com/security/docker-rolls-out-tool-to-scan-containers-for-vulnerabilities.html

    The Project Nautilus effort, first announced in 2015 and since renamed Docker Security Scanning, is now generally available as container security ramps up.
    Among the big pieces of news that Docker Inc. announced at its DockerCon EU conference in November 2015 was its Project Nautilus effort to scan Docker repositories for security vulnerabilities. Now six months later, the company is making Nautilus generally available under the product name Docker Security Scanning. And Docker is complementing the new security product with an update to Docker Bench, a container best practices security tool, further improving the overall security tooling for Docker.

    Reply
  15. Tomi Engdahl says:

    Google reveals the Chromium OS it uses to run its own containers
    Dumps Debian as preferred OS for running Docker and Kubernetes
    http://www.theregister.co.uk/2016/05/16/google_releases_the_chromium_os_it_uses_to_run_its_own_containers/

    Google’s decided the Chromium OS is its preferred operating system for running containers in its own cloud. And why wouldn’t it – the company says it uses it for its own services.

    The Alphabet subsidiary offers a thing called “Container-VM” that it is at pains to point out is not a garden variety operating system you’d ever contemplate downloading and using in your own bit barn. Container-VM is instead dedicated to running Docker and Kubernetes inside Google’s cloud.

    The Debian-based version of Container-VM has been around for a while, billed as a “container-optimised OS”.

    Now Google has announced a new version of Container-VM “based on the open source Chromium OS project, allowing us greater control over the build management, security compliance, and customizations for GCP.”

    The new Container-VM was built “primarily for running Google services on GCP”.

    Reply
  16. Tomi Engdahl says:

    Ansible adds .1 to Ansible 2.0, de-betas networking
    Also covers MS and Docker bases
    http://www.theregister.co.uk/2016/05/27/ansible_revs_core_platform/

    Ansible has pushed out version 2.1 of its eponymous automation platform, with a large part of the update consisting of peeling off beta stickers on features it announced earlier this year.

    The vendor unveiled a foray into networking back in February at its London AnsibleFest. That technical preview has now been formalised as a “first order feature set”, director of Ansible Core Jason McKerr wrote in a blog post today, with support for Cisco, HP Enterprise, Juniper, Arista and Cumulus.

    “Ansible’s agentless model works particularly well in the network management space,” McKerr wrote, “and with a lot of help and support from the vendors, we are very pleased to have our first major release with support for these features.”

    That London event also saw the platform jack up its Windows support, and according to McKerr, “We significantly upped our game for both Windows and Azure Cloud. We’re happy to take the beta tag off of our Windows support, and make it a fully supported part of the Ansible automation platform.”

    Reply
  17. Tomi Engdahl says:

    Citrix to unleash containerised NetScaler this month
    Microservices make a lot of traffic that needs taming, which can get expensive and in-locky
    http://www.theregister.co.uk/2016/06/07/citrix_to_unleash_containerised_netscaler_this_month/

    Citrix is mere weeks away from releasing the containerised version of the NetScaler application delivery controller it revealed last December.

    “NetScaler CPX” was shown off at the company’s Synergy conference last month, but NetScaler veep and general manager Ash Chowdappa today told The Register the software has snuck into a “we’ll sell it if you really must have it now” version and will be generally available by month’s end.

    Chowdappa said the swift release is attributable to strong demand: apparently folks footling with containers find they make a lot of East/West traffic as containers spawn wherever they can. NetScaler’s traffic-grooming features come in handy to stop LANs melting down as container counts climb.

    NetScaler CPX is a Docker container. Chowdappa explained that, ideally, when containers are created and/or orchestrated with the likes of Kubernetes it makes sense to fire up a CPX Container too, so that the whole collection of containers in a microservice can start to enjoy its light traffic-tickling touch.

    Reply
  18. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Chef launches Habitat, an open source project to make applications infrastructure-independent

    Chef’s new Habitat project wants to make applications infrastructure-independent
    http://techcrunch.com/2016/06/14/chefs-new-habitat-project-wants-to-make-applications-infrastructure-independent/

    Chef today launched Habitat, a new open source project that allows developers to package their applications and run them on a wide variety of infrastructures.

    Habitat essentially wraps applications into their own lightweight runtime environments and then allows you to run them in any environment, ranging from bare metal servers to virtual machines, Docker containers (and their respective container management services), and PaaS systems like Cloud Foundry.

    “We must free the application from its dependency on infrastructure to truly achieve the promise of DevOps,” Chef co-founder and CTO Adam Jacob said in a statement today. “There is so much open source software to be written in the world and we’re very excited to release Habitat into the wild. We believe application-centric automation can give modern development teams what they really want — to build new apps, not muck around in the plumbing.”

    The Chef team argues that today’s solutions are often too narrowly focused on the enterprise, where “the deep silo-ing of responsibility present in most enterprises drives us to design software specifically for one silo or another.”

    Habitat, then, tries to solve the question of how best to build, deploy and manage applications from the application perspective. Instead of defining the infrastructure, you define what the application needs to run and take it from there. The Habitat “supervisor” will handle deployment, upgrades and security policies for the environment you want to deploy in.

    https://www.habitat.sh/about/why-habitat/

    Reply
  19. Tomi Engdahl says:

    Ingrid Lunden / TechCrunch:
    Ubuntu’s container-style Snap app packages now work on other Linux distributions
    http://techcrunch.com/2016/06/14/ubuntus-container-style-snap-app-packages-now-work-on-other-linux-distributions/

    Docker’s container-style approach to distributing and running apps on any platform has been a big boost to helping patch up some of the fragmentation in the world of Linux. Now, a new package format hopes to have the same effect for smaller apps that need to speak to each other, or simply get updated. Snaps — a packaging format Canonical introduced earlier this year to help install apps in Ubuntu — is now available for multiple Linux distributions to work across desktops, servers, clouds and devices.

    Snap! Ubuntu 16.04 Just Made Installing New Apps MUCH Easier
    http://www.omgubuntu.co.uk/2016/04/ubuntu-16-04-lts-snap-packages

    One of the (few) sucky things about sticking with an Ubuntu LTS release is when newer versions of apps you love are released and you can’t install them.

    Well, prepare to bid that pang of disappointment goodbye.

    Ubuntu 16.04 LTS introduces support for Canonical’s (relatively new) ‘Snap’ packaging format.

    Snap packages are the aspirin to the headache of dependency-addled app upgrades.

    For the desktop user there are 2 key benefits of snaps:

    Developers can give you the latest version of their app
    App isolation and confinement improves the security and reliability of the app

    Snap packages can (though don’t have to) contain both application binary and any dependencies required for it to run. Yup, even if those libraries are already installed on the host system.

    Safer Apps, Always

    Applications packaged in the format are isolated from the rest of the system. You can install a snappy package without worrying about what it might do to other software you have installed.

    This enhanced security smashes the traditional bottleneck in app review and approval. Updates can be pushed out almost instantly through automated review and easily rolled back should something go wrong.
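
    The snap workflow described above boils down to a handful of commands; a sketch using the hello-world demo snap:

    sudo snap install hello-world   # install from the store, confined by default
    snap list                       # show installed snaps and their revisions
    sudo snap refresh hello-world   # update to the latest published revision
    sudo snap revert hello-world    # roll back to the previously installed revision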

    Reply
  20. Tomi Engdahl says:

    Containerizing your code pays off – “Don’t move binaries, move whole entities”

    Docker is only a three-year-old technology, but it has already had time to become a basic tool in the toolbox of many devops teams.

    Docker is similar in spirit to virtualization. Different services run in their own containers on the same server, with only the Linux kernel shared underneath. A container packages everything the service needs. The end result is loaded onto a server, and once operational it can be moved from place to place.

    Docker is widely used, for example, by Internet giants Google and Amazon in their services.

    Eficode is one of the Finnish companies using Docker.
    The technology was taken into use after trials, and once the first stable version of Docker was published in 2014, customer interest awakened as well.
    Nowadays, more and more well-informed customers propose the use of Docker themselves.

    Docker is usually associated with architectures based on microservices, but Klemetti prefers to avoid that kind of talk.

    “It is currently a trendy word that people attach all kinds of emotions to, just as with devops in its time.”

    In practice, however, Klemetti thinks all software production should now be approached from a microservices point of view – building smaller independent entities instead of a monolith.

    On the other hand, existing applications can also be moved onto Docker.
    “The conceptual change is that you no longer move code changes or binaries, you move whole entities.”

    “When you do something on top of Docker, the system is the same from testing to production and everything in between. You don’t get the ‘works on my machine’ problem.”

    When a service is in a container, any development team can download it from the server onto their own computers, or it can be moved to another server. The right Java or Node.js version travels with the container.

    The second benefit, according to Klemetti, is the increased speed of upgrades and rollbacks.

    The third advantage is scalability.
    “If I want to run five or fifty instances of a service instead of one, I can bring them up or down in a few seconds.”

    Klemetti strongly believes in the future of containers, though Docker is not necessarily the ultimate solution.

    Source: http://www.tivi.fi/Kaikki_uutiset/koodin-kontitus-kannattaa-ei-liikutella-binaareita-vaan-kokonaisuuksia-6559362

    Reply
  21. Tomi Engdahl says:

    Snap could unite all Linux versions

    Ubuntu developer Canonical announced to the Linux world today that Snap packaging, developed for the latest Ubuntu and for mobile applications, will be supported by practically all Linux distributions.

    Until now, each Linux distribution has used its own package management tool for installing applications, with no uniform practice between them. Moving to another distribution has therefore meant learning a new package manager on top of a slightly different user interface.

    Snap is Canonical’s new solution, which allows new applications to be introduced faster in Ubuntu. Previously, an application’s dependencies had to be resolved at the level of the whole system.

    Snap is now natively available on Arch, Debian and Fedora, and on Ubuntu-based distributions such as Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Unity and Xubuntu. Support is also on the schedule for Linux Mint and SUSE Linux.

    From an application developer’s point of view this is a big change. Previously, the developer practically had to decide which distribution versions the application would be compiled for. Now there is no longer any need to worry about different package formats.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4594:snap-saattaa-yhdistaa-kaikki-linuxit&catid=13&Itemid=101

    Ubuntu Snappy-Based Package Format Aims to Bridge Linux Divide
    https://www.linux.com/news/ubuntu-snappy-based-package-format-aims-bridge-linux-divide

    Could the transactional mechanism that drives Canonical’s IoT-focused Snappy Ubuntu Core help unify Linux and save it from fragmentation? Today, Canonical announced that the lightweight Snappy’s “snap” mechanism, which two months ago was extended to all Ubuntu users in Ubuntu 16.04, can also work with other Linux distributions. Snap could emerge as a universal Linux package format, enabling a single binary package “to work perfectly and securely on any Linux desktop, server, cloud or device,” says Canonical.

    Snap works natively on Arch, Debian, and Fedora, in addition to Ubuntu-based distros like Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Unity, and Xubuntu. It is now being validated on CentOS, Elementary, Gentoo, Mint, openSUSE, RHEL, and OpenWrt.

    The containerized snap technology offers better security than is available with typical package formats such as .deb, says Canonical. Snaps are isolated from one another to ensure security, and they can be updated or rolled back automatically. Each snap is confined using a range of tailored kernel isolation and security mechanisms and receives only the permissions it needs to operate.

    Snaps sit alongside a Linux distro’s native packages and do not infringe on its own update mechanisms for those packages, says Canonical.

    Snapcraft Overview
    https://developer.ubuntu.com/en/snappy/build-apps/

    Snapcraft is a build and packaging tool which helps you package your software as a snap. It makes it easy to incorporate components from different sources and build technologies or solutions.

    A .snap package for the Ubuntu Core system contains all its dependencies. This has a couple of advantages over traditional deb or rpm based dependency handling, the most important being that a developer can always be assured that there are no regressions triggered by changes to the system underneath their app.

    Snapcraft makes bundling these dependencies easy by allowing you to specify them as “parts” in the snapcraft.yaml file.

    A central aspect of a snapcraft recipe is a “part”. A part is a piece of software or data that the snap package requires to work or to build other parts. Each part is managed by a snapcraft plugin and parts are usually independent of each other.

    After the build of each part the parts are combined into a single directory tree that is called the “staging area”.
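
    A minimal recipe with a single part might look like the following snapcraft.yaml. This is only a sketch based on the classic GNU hello example, assuming a 2016-era snapcraft:

    name: hello
    version: '2.10'
    summary: GNU hello, the classic greeter
    description: A minimal example snap built from one autotools part.
    parts:
      gnu-hello:
        plugin: autotools
        source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz

    Running snapcraft in the same directory then builds the part, stages it, and produces a hello_2.10_*.snap package.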

    Reply
  22. Tomi Engdahl says:

    Red Hat Launches Ansible-Native Container Workflow Project
    https://news.slashdot.org/story/16/06/20/210235/red-hat-launches-ansible-native-container-workflow-project

    Red Hat launched Ansible Container under the Ansible project, which provides a simple, powerful, and agent-less open source IT automation framework. Available now as a technology preview, Ansible Container allows for the complete creation of Docker-formatted Linux containers within Ansible Playbooks, eliminating the need to use external tools like Dockerfile or docker-compose. Ansible’s modular code base, combined with ease of contribution, and a community of contributors in GitHub, enables the powerful IT automation platform to manage today’s infrastructure, but also adapt to new IT needs and DevOps workflows.

    Red Hat launches Ansible-native container workflow project
    https://www.helpnetsecurity.com/2016/06/20/ansible-native-container-workflow-project/

    ansible/ansible-container
    https://github.com/ansible/ansible-container
    Ansible Container is a tool to build Docker images and orchestrate containers using only Ansible playbooks.
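
    In the technology preview the basic loop looks roughly like this (hedged: subcommand names are taken from the project’s early documentation and may change):

    ansible-container init    # scaffold container.yml and an Ansible playbook skeleton
    ansible-container build   # run the playbook to produce Docker-formatted images
    ansible-container run     # launch the built containers for local testing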

    Reply
  23. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Docker announces private beta of the Docker Store, a new marketplace for containerized software

    Docker launches a new marketplace for containerized software
    https://techcrunch.com/2016/06/21/docker-launches-a-new-marketplace-for-containerized-software/

    At its developer conference in Seattle, Docker today announced the private beta of the Docker Store, a new marketplace for trusted and validated dockerized software.

    The idea behind the store is to create a self-service portal for Docker’s ecosystem partners to publish and distribute their software through Docker images — and for users to make it easier to deploy these applications.

    While Docker already offered its own registry for containers, too, the Docker Store is specifically geared toward the needs of enterprises. The store will offer enterprises “compliant, commercially supported software from trusted and verified publishers, that is packaged as Docker images,” the company says

    Reply
  24. Tomi Engdahl says:

    Microsoft shows off SQL Server in a Linux container, Docker Datacenter comes to Azure Marketplace
    http://venturebeat.com/2016/06/21/microsoft-shows-off-sql-server-in-a-linux-container-docker-datacenter-comes-to-azure-marketplace/

    Microsoft today is announcing new capabilities for running applications in containers — an alternative to more traditional virtual machines — both on its Azure public cloud and in companies’ on-premises data centers.

    Rather than simply drop the news in a blog post, today Mark Russinovich, Azure’s chief technology officer, is taking the stage at container startup Docker’s Dockercon conference in Seattle to demonstrate some of the new features.

    First, the Docker Datacenter software is coming to the Azure Marketplace, and so Russinovich is showing an example of a container cluster provisioned by Docker Datacenter that’s running on top of infrastructure in Azure and private-cloud infrastructure managed by Microsoft’s Azure Stack software.

    Reply
  25. Tomi Engdahl says:

    Docker builds container orchestration right into its core Docker Engine
    https://techcrunch.com/2016/06/20/docker-builds-swarm-right-into-its-core-core-tools/

    Docker, which is hosting its sold-out developer conference in Seattle this week, today announced a major addition to its core Docker Engine. While the company previously split up many of the features it takes to use containers in production (think building containers, deploying them and then orchestrating them), it is now building container orchestration right into the Docker Engine.

    The company is also making it easier to deploy its tools on Microsoft’s Azure and Amazon’s AWS cloud computing platforms.

    As Docker COO Scott John Johnston told me, he sees this move as the company’s attempt to extend its work on making containers easy to use to democratizing container orchestration as well.

    What Docker has essentially done here is build the core features of Docker Swarm and Compose, its existing clustering and orchestration services which came out of beta last November, right into its core Engine. Developers will now be able to turn on “swarm mode” to create self-healing clusters of Docker engines that can discover each other. Swarm mode includes support for an overlay network that allows for automatic service discovery and load balancing tools, as well as a new Service Deployment API that allows developers to declare which services, images and ports they want to use.
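
    In practice, turning a set of engines into a self-healing cluster is now a couple of CLI calls. A sketch in Docker 1.12-era syntax (the address and service are illustrative):

    docker swarm init --advertise-addr 192.168.1.10                # first node becomes a manager and prints a join token
    docker service create --name web --replicas 3 -p 80:80 nginx   # declare a replicated, load-balanced service
    docker service ls                                              # watch the swarm converge on 3 running replicas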

    Reply
  26. Tomi Engdahl says:

    Containerized Security: The Next Evolution of Virtualization?
    http://www.securityweek.com/containerized-security-next-evolution-virtualization

    We in the security industry have gotten into a bad habit of focusing the majority of our attention and marketing dollars on raising awareness of the latest emerging threats and new technologies being developed to detect them.

    For example, when security became virtualized, it brought with it the promise of several benefits, including increased speed and scalability with decreased overhead and costs of security infrastructure in virtualized data centers and cloud environments. There’s little doubt that this transition to virtualized security has been a positive one for many organizations who are now able to more effectively scale and customize security policies faster than ever before.

    But what can we do next to make sure we’re continuing to innovate and keep our security functions ahead of the curve?

    One of the most promising new approaches is putting security functions into containers. Just as containers provide a wide range of benefits for applications that need to migrate between computing environments, there are also benefits to using them to secure networks. The decrease in size and power needed to run security operations through a container using one operating system, as opposed to operations through several operating systems, can have a massive effect on cost and scalability, while providing an efficient way to secure your network.

    There are several benefits to containerizing your security functions. The most obvious of these is cost savings – with all of your operations able to run through only one container, you can decrease the amount you need to spend on multiple operating systems. From a performance standpoint, you will be able to achieve massive scalability and a significant increase in speed of services. Containers can be booted up almost immediately, while your average virtual machine (VM) may take several minutes to start.

    Just as we started with VMs on servers, there was a general perception that there was no need for security. But as adoption of containers progresses in data centers and clouds, many organizations have quickly realized the need to add security in the overall mindset of building virtualized environments.

    1) Are you already using Docker? If your organization is using containers for any other part of its infrastructure, it’s highly logical to extend this practice to security. Once containers are in place, their scalability makes it easy to add other features to their existing functions with minimal additional cost or impact on performance.

    2) What kind of environment are you looking to support?

    3) What is your long term strategic business direction?

    While using containers to secure your organization is a relatively novel approach, it can lead to cost savings and massive scalability. By considering containers for security, you could be an early adopter to an innovative new approach that will allow you to stay ahead of both the competition and the cybercriminals.

    Reply
  27. Tomi Engdahl says:

    New RancherOS Offers Lean Linux Functionality Within Docker Containers

    https://linux.slashdot.org/story/16/08/14/2241226/new-rancheros-offers-lean-linux-functionality-within-docker-containers?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    RancherOS is a lean Linux distribution aiming to offer “the minimum necessary to get Docker up and running,” and tucking many actual Linux services into Docker containers.

    What’s New in RancherOS v0.5.0
    http://rancher.com/whats-new-rancheros-v0-5/

    Official Raspberry Pi Image

    On our releases page you can now find an official Raspberry Pi image which is known to work on both Raspberry Pi 2 and 3. We’re especially excited about this since it offers users a cheap method of getting started with Docker and RancherOS. We’d like to thank the Hypriot team for their assistance on this feature.

    Our build process has been refactored to support multiple architectures. With a given kernel, RancherOS can now be built for both ARM and ARM64 devices.

    As always, we are keeping to the philosophy that RancherOS should only include the minimum necessary to get Docker up and running.

    Reply
  28. Tomi Engdahl says:

    ContainerCon Vendors Offer Flexible Solutions for Managing All Your New Micro-VMs
    http://www.linuxjournal.com/content/containercon-vendors-offer-flexible-solutions-managing-all-your-new-micro-vms

    As you might expect, this week’s LinuxCon and ContainerCon 2016, held in Toronto, is heavy on the benefits and pitfalls of deploying containers, but several vendors aim to come to the rescue with flexible tools to manage it all.

    Take Datadog, a New York-based company that offers scalable monitoring of your containerized infrastructure—and just about everything else—from a single interface. This is an off-premise, cloud-based tool that can monitor tens of thousands of your hosts and integrate with stuff you already know, like AWS, Cassandra, Docker, Kubernetes, Postgres and 150 other tools.

    Other LinuxCon/ContainerCon vendors are offering similar products that look to make life easier for anyone with hybrid environments of on-premise and cloud-based systems and applications.

    Virtuozzo’s latest offering is a comprehensive virtualization platform that allows you to deploy and manage any combination of operating systems, applications, VMs and containers. At the same time, Sysdig offers SaaS monitoring, troubleshooting and alerting for distributed containerized environments.

    Updates from LinuxCon and ContainerCon, Toronto, August 2016
    http://www.linuxjournal.com/content/updates-linuxcon-and-containercon-toronto-august-2016

    The Future of Linux: Continuing to Inspire Innovation and Openness

    The first 25 years of Linux has transformed the world, not just computing, and the next 25 years will continue to see more growth in the Open Source movement, The Linux Foundation Executive Director Jim Zemlin said during the opening keynote of LinuxCon/ContainerCon in Toronto on Monday, August 22, 2016.

    “Linux is the most successful software project in history”, Zemlin said, noting that the humble operating system created by Linus Torvalds 25 years ago this week is behind much of today’s software and devices.

    But the message of Linux is far more than software, Zemlin said. It’s about the open exchange of ideas that’s world-changing and inspiring. The concept of sharing has changed how the world thinks about technology and how it’s made, he said.

    “We’ve learned that you can better yourself while bettering others at the same time”, he said. “We’re building the greatest shared technology asset in the history of computing.”

    In the coming years, Zemlin predicts an even more rapid shift to open source, particularly in a world that now makes it nearly impossible to deploy software without collaborating and taking advantage of open resources.

    Anchore, one of the container-related sponsors at LinuxCon 2016 and ContainerCon being held this week in Toronto, seeks to change that. The company offers a product (now in beta) that aims to provide container image management and analytics.

    Reply
  29. Tomi Engdahl says:

    Raspberry Pi Hive Mind
    http://hackaday.com/2016/08/29/raspberry-pi-hive-mind/

    Setting up a cluster of computers used to be a high-end trick used in big data centers and labs. After all, buying a bunch of, say, VAX computers runs into money pretty quickly (not even counting the operating expense). Today, though, most of us have a slew of Raspberry Pi computers.

    Because the Pi runs Linux (or, at least, can run Linux), there are a wealth of tools out there for doing just about anything. The trick is figuring out how to install it. Clustering several Linux boxes isn’t necessarily difficult, but it does take a lot of work unless you use a special tool. One of those tools is Docker, particularly Docker Swarm Mode.

    It is easy to set up a swarm using the instructions on the Docker website.

    Docker is a container manager, which means it doesn’t pretend to be a piece of hardware, it pretends to be a running operating system. Programs see their own file system and other resources, but in reality, there is only one kernel running on the host hardware.

    The idea is similar to running something in a chroot jail.

    Getting started with swarm mode
    https://docs.docker.com/engine/swarm/swarm-tutorial/

    This tutorial introduces you to the features of Docker Engine Swarm mode. You may want to familiarize yourself with the key concepts before you begin.

    The tutorial guides you through the following activities:

    initializing a cluster of Docker Engines in swarm mode
    adding nodes to the swarm
    deploying application services to the swarm
    managing the swarm once you have everything running

    This tutorial uses Docker Engine CLI commands entered on the command line of a terminal window. You should be able to install Docker on networked machines and be comfortable running commands in the shell of your choice.
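
    On a pile of Raspberry Pis the tutorial boils down to something like this sketch (the token and manager address come from the output of docker swarm init on your first board; the web service is illustrative):

    docker swarm join --token <worker-token> 192.168.1.10:2377   # run on each additional Pi to enlist it as a worker
    docker node ls                                               # on the manager: list every node in the hive
    docker service scale web=10                                  # spread more replicas of a service across the cluster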

    Reply
  30. Tomi Engdahl says:

    Docker Overview
    https://docs.docker.com/engine/understanding-docker/

    Docker is an open platform for developing, shipping, and running applications. Docker is designed to deliver your applications faster. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker helps you ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running code.

    Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications.

    Reply
  31. Tomi Engdahl says:

    How to Use Docker to Cross Compile for Raspberry Pi (and More)
    http://hackaday.com/2016/09/01/how-to-use-docker-to-cross-compile-for-raspberry-pi-and-more/

    It used to be tedious to set up a cross compile environment. Sure, you can compile on the Raspberry Pi itself, but sometimes you want to use your big computer — and you can use it when your Pi is not on hand, like when you’re on an airplane with a laptop. It can be tricky to set up a cross compiler for any build tools, but if you go through one simple step, it becomes super easy regardless of what your real computer looks like. That one step is to install Docker.

    Docker is available for Linux, Windows, and Mac OS. It allows developers to build images that are essentially preconfigured Linux environments that run some service. Like a virtual machine

    The reality is, setting up the Raspberry Pi build environment isn’t any easier. It is just that with Docker, someone else has already done the work for you and you can automatically grab their setup and keep it up to date. If you are already running Linux, your package manager probably makes the process pretty easy too

    Docker maintains a repository of images on their website called the Hub.

    Here’s the command to run to get the rpxc script:

    sudo docker run sdthirlwall/raspberry-pi-cross-compiler:legacy-trusty > rpxc

    The rpxc script generally runs any command you like in the new environment. Since it runs Docker, you need to be root or in the Docker group, of course. All the usual build tools are prefixed with rpxc, so:

    rpxc rpxc-gcc -o hello-world hello-world.c

    Or, if you have a Makefile:

    rpxc make

    You might enjoy searching for ESP8266, too. There’s even a Docker image for Eagle PCB layout software.

    Reply
  32. Tomi Engdahl says:

    Secure Desktops with Qubes: Compartmentalization
    http://www.linuxjournal.com/content/secure-desktops-qubes-compartmentalization?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    The first concept to understand with Qubes is that it groups VMs into different categories based on their use. Here are the main categories of VMs I refer to in the rest of the article:

    Disposable VM: these also are referred to as dispVMs and are designed for one-time use. All data in them is erased when the application is closed.

    Domain VM: these also often are referred to as appVMs. They are the VMs where most applications are run and where users spend most of their time.

    Service VM: service VMs are split into subcategories of netVMs and proxyVMs. These VMs typically run in the background and provide your appVMs with services (usually network access).

    Template VM: other VMs get their root filesystem template from a Template VM, and once you shut the appVM off, any changes you may have made to that root filesystem are erased (only changes in /rw, /usr/local and /home persist). Generally, Template VMs are left powered off unless you are installing or updating software.

    Reply
  33. Tomi Engdahl says:

    Docker user? Haven’t patched Dirty COW yet? Got bad news for you
    Repeat after me, containerization isn’t protection, it’s a management feature
    http://www.theregister.co.uk/2016/11/01/docker_user_havent_patched_dirty_cow_yet_bad_news/

    Here’s another reason to pay attention to patching your Linux systems against the Dirty COW vulnerability: it can be used to escape Docker containers.

    That news comes from Paranoid Software’s Gabriel Lawrence, who describes the escape in the post linked below.

    Dirty COW is a race condition in Linux arising from how Copy-On-Write (the COW in the name) is handled by the kernel’s memory subsystem’s use of private mappings.

    Lawrence writes: “more interesting to me than a local privilege escalation, this is a bug in the Linux kernel, containers such as Docker won’t save us.”
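
    The underlying point is easy to check for yourself: a container shares the host’s kernel, so a kernel bug is reachable from inside the container. A quick demonstration (any small image will do):

    uname -r                          # kernel version on the host
    docker run --rm alpine uname -r   # prints the same version: the container runs on the host kernel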

    Dirty COW – (CVE-2016-5195) – Docker Container Escape
    https://blog.paranoidsoftware.com/dirty-cow-cve-2016-5195-docker-container-escape/

  34. Tomi Engdahl says:

    Hello Operator, automate my Kubernetes
    CoreOS is introducing software to simplify cluster configuration
    http://www.theregister.co.uk/2016/11/03/hello_operator_automate_my_kubernetes/

    CoreOS, which makes a container-oriented version of Linux and the Tectonic platform for Kubernetes, on Thursday plans to introduce software called “Operators” to make it easier to configure and manage distributed applications.

    Operators extend the Kubernetes API to specific applications, allowing multiple instances of those applications to be used in distributed clusters.

    “What we’re trying to do with Operators is to encode the operational knowledge people need to manage these distributed apps,” said Brandon Philips, CTO of CoreOS, in a phone interview with The Register.

    CoreOS is releasing two Operators, for etcd and Prometheus, as open source projects.

    etcd is a distributed key value store for storing data across a cluster of machines. It’s used by Kubernetes for service discovery and it stores cluster state and configuration data. The etcd Operator can be installed in a Kubernetes cluster with a single command, in order to allow the declarative management of etcd clusters.
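
    The declarative idea is that the desired etcd cluster is described as a Kubernetes resource and the Operator reconciles the running cluster to match it. A hypothetical manifest in that style; the apiVersion, kind, and field names changed across early releases, so treat them as assumptions:

    # example-etcd.yaml (hypothetical field names, for illustration only)
    apiVersion: etcd.coreos.com/v1beta1
    kind: Cluster
    metadata:
      name: example-etcd
    spec:
      size: 3          # desired number of etcd members; the Operator adds or removes pods to match
      version: 3.1.0   # the Operator can also drive rolling version upgrades

    kubectl apply -f example-etcd.yaml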

    Prometheus, an open-source monitoring and alerting toolkit, is also getting an Operator, which enables the deployment and management of Prometheus instances through Kubernetes resources.

  35. Tomi Engdahl says:

    Think GitHub and Git but for data – and you’ve got FlockerHub and fli
    ClusterHQ debuts information time machine for better production testing
    http://www.theregister.co.uk/2016/11/03/cluster_flocker_for_fewer_expletives_in_docker/

    Flocker is a mouthful. It’s an open-source container data volume orchestrator, which means it helps migrate data when containers shift hosts. It makes data volumes portable within clusters.
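
    For context, Flocker plugs into Docker’s volume driver interface, so a named data volume can follow a container between hosts. A sketch of the pattern; the volume and image names are illustrative:

    # ask the flocker volume driver for the volume, wherever it currently lives
    docker run -v my-data:/data --volume-driver=flocker redis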

    Two years into its life, it’s spawned a hosted service called FlockerHub. Its creator, ClusterHQ, describes it and its command line companion fli as the equivalent of GitHub and Git for data.

    “FlockerHub and fli, we think, can help DevOps teams test better,” said Michael Ferranti, ClusterHQ’s VP of marketing, in a phone interview with The Register.

    A developer survey conducted by the company suggests that code quality can be improved, a finding that could more or less be assumed. The firm polled 386 people, from companies large and small, 41 per cent of whom described themselves as DevOps team members, 37 per cent as developers, and the remainder cited association with operations, QA, and security.

    Among the respondents, 43 per cent said they spend between 10 and 25 per cent of their time debugging application errors discovered in production, a chore that cuts into time that might otherwise be used developing new features.

    The top challenges these IT professionals cited were:

    Inability to recreate production environments in testing (33 per cent).
    Testing difficulties arising from the interdependence of external systems (27 per cent).
    Testing against unrealistic data prior to production (26 per cent).

  36. Tomi Engdahl says:

    Microsoft ❤️ Linux? Microsoft ❤️ running its Windows’ SQL Server software on Linux
    http://www.theregister.co.uk/2016/11/18/microsoft_running_windows_apps_on_linux/

    In March, when Microsoft announced plans to release SQL Server for Linux, Scott Guthrie, EVP of Microsoft’s cloud and enterprise group, said, “This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud.”

    The release of the first public preview of SQL Server for Linux on Wednesday reveals just how consistent that platform is: It’s the Windows version of SQL Server running on the Windows NT kernel as a guest app, more or less.

    When Microsoft declared its love for Linux, it appears to have been looking in the mirror.

    Microsoft could have ported SQL Server to run as a native Linux application. Instead, it has chosen to use its Drawbridge application sandboxing technology.

    SQL Server for Linux runs atop a Drawbridge Windows library OS – a user-mode NT kernel – within a secure container called a picoprocess that communicates with the host Linux operating system through the Drawbridge application binary interface.

    In other words, Microsoft’s SQL Server for Linux is really the Windows SQL Server executable with a small Windows 8 kernel glued underneath, all running in a normal Linux process.
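
    Fittingly for this thread, the preview itself shipped as a Docker image. A sketch of how it was typically started, as best the preview’s documented settings can be recalled (treat the details as assumptions):

    # SQL Server on Linux, public preview; needs a few GB of RAM
    docker run -d -p 1433:1433 \
      -e 'ACCEPT_EULA=Y' \
      -e 'SA_PASSWORD=<YourStrong!Passw0rd>' \
      microsoft/mssql-server-linux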

    Virtualization has helped blur the boundaries between operating systems, a trend that’s been underway for years. Mac users have been able to boot into Windows through Boot Camp or virtualization software like Parallels. Linux users have been able to run Windows apps using Wine.

    Containerization has encouraged further levels of abstraction and cross-platform compatibility, even as it distances users from their software. It’s difficult to care much about operating systems when many containers get launched and shut down in less than a minute.

  37. Tomi Engdahl says:

    Docker Swarm Cluster with docker-in-docker on MacOS
    https://medium.com/@alexeiled/docker-swarm-cluster-with-docker-in-docker-on-macos-bdbb97d6bb07#.k0jv2zdnw

    Docker-in-Docker (dind) can help you run a Docker Swarm cluster on your MacBook with only Docker for Mac (v1.12+). No VirtualBox, docker-machine, vagrant or other app is required.

    One day, I decided to try running a Docker 1.12 Swarm cluster on my MacBook Pro. The Docker team did a great job releasing Docker for Mac

    Of course, it’s possible to create a Swarm cluster with the docker-machine tool, but it’s not MacOS friendly and requires installing additional VM software, like VirtualBox or Parallels (why? I already have xhyve!).

    The Idea

    The basic idea is to use Docker for Mac for running the Swarm master and several Docker-in-Docker containers for running Swarm worker nodes.
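
    A minimal sketch of that idea; the worker name is arbitrary, and the join token and manager address come from your own setup:

    # turn the local engine (Docker for Mac) into a swarm manager
    docker swarm init

    # start a worker node as a privileged Docker-in-Docker container
    docker run -d --privileged --name worker-1 docker:1.12-dind

    # join the worker to the swarm (token from: docker swarm join-token worker -q)
    docker exec worker-1 docker swarm join --token <worker-token> <manager-ip>:2377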

  38. Tomi Engdahl says:

    Ron Miller / TechCrunch:
    Docker open sources critical infrastructure component “containerd”; cloud vendors including Alibaba, AWS, Google, IBM, and Microsoft sign on to work on it

    Docker open sources critical infrastructure component
    https://techcrunch.com/2016/12/14/docker-open-sources-critical-infrastructure-component/

    Docker announced today that it was open sourcing containerd (pronounced Container D), making a key infrastructure piece of its container platform available for anyone to work on.

    Containerd, which acts as the core container runtime engine, is a component within Docker that provides “users with an open, stable and extensible base for building non-Docker products and container solutions,” according to the company. Leading cloud providers have signed on to work on it including Alibaba, AWS, Google, IBM and Microsoft.

    For now, Docker has not announced the foundation that will house the open source project, but it intends to move containerd into a neutral foundation sometime during the first quarter of 2017.

  39. Tomi Engdahl says:

    Runner: container-free filesystem virtualization
    http://gobolinux.org/runner.html

    Runner is a brand new filesystem virtualization tool, specifically designed for GoboLinux. It dynamically changes a process’ view of /System/Index based on the program’s Dependencies file.

    From day one, GoboLinux has always supported keeping multiple versions of a program installed on disk at the same time, but when two versions had conflicts, you had to choose which one would be activated in the system as the default.

    With Runner, you don’t need to worry about which version of a given dependency is currently linked (or activated) in /System/Index: Runner gives the process its own virtual /System/Index with all the right dependencies.

  40. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    ClusterHQ, an early player in the container ecosystem, shuts down operations

    ClusterHQ, an early player in the container ecosystem, calls it quits
    https://techcrunch.com/2016/12/22/clusterhq-hits-the-deadpool/

    ClusterHQ, an early player in the container DevOps ecosystem, today announced that it is shutting down operations immediately.

    “Unfortunately, it’s often the pioneers who end up with arrows in their backs. And sometimes, archery injuries are self-inflicted,” the company’s CEO Mark Davis writes in a blog post today. “For a confluence of reasons, the ClusterHQ board of directors, of which I am chairman, have decided it best to immediately shut down company operations.”

    The company’s shutdown does come as a surprise. Not only did the company raise $12 million in a Series A round led by Accel Partners London last year, but it also launched a number of new products last month. The company also opened up a new office in Silicon Valley in 2016.

    The company open sourced a lot of its products, including its Flocker container data volume manager, and that code will obviously outlive the company.

  41. Tomi Engdahl says:

    Containers will grow into a $2.6 billion software business by 2020, predicts research firm 451 Research. The firm estimates containerization at $762 million this year, so growth is projected to explode to 40 per cent a year over the coming years.

    “The sector has not yet seen a winning combination of software containers and the services and support addressed to them,”

    According to the report, each of the eight most important container companies will take in more than $20 million from container-related business. Among the companies listed are Docker, Red Hat, and Engine Yard.

    Cloud services are expected to grow 22 per cent annually; by 2020 the sector would grow from the current $22.2 billion to $46 billion.

    Source: http://www.tivi.fi/Kaikki_uutiset/koodin-kontitus-rajahtaa-kasvuun-6615431

  42. Tomi Engdahl says:

    Google opens cloudy cannery to let you cram code into containers
    ‘Cloud Container Builder’ offers 120 minutes of container creation, for any platform
    https://www.theregister.co.uk/2017/03/07/google_cloud_container_builder/

    Google’s found another way to wrap developers more closely into its warm embrace: a cloudy software build environment it reckons should be free for most users.

    The new “Cloud Container Builder” has reached general availability status after a year running Google App Engine’s gcloud app deploy operation.

    Described as a “stand-alone tool for building container images regardless of deployment environment”, Cloud Container Builder’s sweetener is 120 minutes a day of free build time. If you need more, it’ll set you back US$0.0034 per minute.

    The Chocolate Factory reckons this means “most users” can move builds to the cloud free, and get rid of the overhead of managing their own build servers.

    Specs of Cloud Container Builder include:

    A REST API and a gcloud command line interface;
    Two additions to the Google Cloud console, so users can track their build history, and create build triggers.

    “Build triggers lets you set up automated CI/CD workflows that start new builds on source code changes. Triggers work with Cloud Source Repository, Github, and Bitbucket on pushes to your repository based on branch or tag”, its blog note says.

    Not everybody wants Docker, so Mountain View also supports open source builders for “languages and tasks like npm, git, go and the gcloud command line interface”, and Docker Hub images for Maven, Gradle and Bazel “work out of the box.”
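
    A typical invocation, as an assumption based on the gcloud CLI of the time; the project ID and image name are placeholders:

    # build the Dockerfile in the current directory on Google's servers
    # and push the resulting image to Google Container Registry
    gcloud container builds submit --tag gcr.io/<project-id>/my-image .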

    Google Cloud Container Builder: a fast and flexible way to package your software
    https://cloudplatform.googleblog.com/2017/03/Google-Cloud-Container-Builder-a-fast-and-flexible-way-to-package-your-software.html?m=1

  43. Tomi Engdahl says:

    Performance made easy with Linux containers
    https://opensource.com/article/17/2/performance-container-world?sc_cid=7016000000127ECAAY

    Real-world applications are likely hosted on the cloud. An application can avail itself of very large (or conceptually infinite) amounts of compute resources. Its needs, in terms of both hardware and software, are met via the cloud. The developers working on it use cloud-offered features for faster coding and deployment. Cloud hosting doesn’t come free, but the cost overhead is proportional to the resource needs of the application.

    Software containers, backed by the merits of microservices design, or service-oriented architecture (SOA), improve performance because a system comprising smaller, self-sufficient code blocks is easier to code and has cleaner, well-defined dependencies on other system components.

  44. Tomi Engdahl says:

    Neglected Step Child: Security in DevOps
    http://www.securityweek.com/neglected-step-child-security-devops

    The use of microservices and containers like Docker has led to a revolution in DevOps. Providing the agility that businesses have long awaited, these new technologies also introduce inherent security implications that cannot be ignored at a time when the enterprise attack surface continues to grow wider. Let’s consider these risks and how organizations can minimize their exposure to them.

    According to a recent report by 451 Research, nearly 45% of enterprises have either already implemented or plan to roll out microservices architectures or container-based applications over the next 12 months. This confirms the hype surrounding these emerging technologies, which are meant to simplify the life of application developers and DevOps teams. Microservices can break down larger applications into smaller, distinct services, and containers in this context are viewed as a natural compute platform for microservices architectures.

    Microservices and containers enable faster application delivery and improved IT efficiency. However, the adoption of these technologies has outpaced security. A recent research study by Gartner (DevSecOps: How to Seamlessly Integrate Security into DevOps) shows that fewer than 20% of enterprise security teams have engaged with their DevOps groups to actively and systematically incorporate information security into their DevOps initiatives.

  45. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft to buy Kubernetes container-orchestration vendor Deis

    Microsoft to buy Kubernetes container-orchestration vendor Deis
    http://www.zdnet.com/article/microsoft-to-buy-kubernetes-orchestration-vendor-deis/

    Containers, containers, containers: Microsoft is buying Deis, a San Francisco-based Kubernetes orchestration specialist, for an undisclosed amount.

    Microsoft is looking to use Deis’ technology to round out its Windows and Linux container portfolio, said Scott Guthrie, executive vice president of Cloud and Enterprise, in an April 10 blog post. The acquisition is part of Microsoft’s quest to ensure Azure is the best place to run containerized workloads, Guthrie blogged.

    In his own blog post, Gabe Monroy, chief technology officer of Deis, said the Deis team will continue with its contributions to Workflow, Helm, and Steward, as well as “maintaining our deep engagement with the Kubernetes community.”

  46. Tomi Engdahl says:

    OpenJDK and Containers
    https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/?sc_cid=7016000000127ECAAY

    What can be done to help the OpenJDK JVM play well in the world of Linux Containers?
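
    One classic pain point behind that question: the JVM sizes its default heap from the host’s physical RAM, not from the container’s cgroup memory limit, so a memory-capped container can be killed by the OOM killer. Around JDK 8u131 an experimental flag was added to make the JVM respect the cgroup limit; a sketch (the image tag is illustrative):

    # cap the container at 512 MB and ask the JVM to derive its heap from that limit
    docker run -m 512m openjdk:8 java \
      -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
      -XX:+PrintFlagsFinal -version | grep MaxHeapSize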

  47. Tomi Engdahl says:

    Containers are Linux
    https://www.redhat.com/en/about/blog/containers-are-linux?sc_cid=7016000000127ECAAY#

    Containers are Linux. The operating system that revolutionized the data center over the past two decades is now aiming to revolutionize how we package, deploy and manage applications in the cloud.

    Ultimately, containers are a feature of Linux. Containers have been a part of the Linux operating system for more than a decade, and go back even further in UNIX. That’s why, despite the very recent introduction of Windows containers, the majority of containers we see are in fact Linux containers. That also means that if you’re deploying containers, your Linux choices matter a lot.

  48. Tomi Engdahl says:

    How to run Android OS and applications on any GNU/Linux distro
    https://www.linuxnewssite.com/how-to-run-android-os-and-applications-on-any-gnulinux-distro-16042017397.html

    Anbox uses LXC (“Linux containers”) to run the Android operating system on a Linux host.

  49. Tomi Engdahl says:

    Ubuntu 17.04 supports widest range of container capabilities
    https://insights.ubuntu.com/2017/04/13/ubuntu-17-04-supports-widest-range-of-container-capabilities/

    Ubuntu 17.04 was released today, supporting Kubernetes, Docker, LXD and Snaps. This is the 26th release of Ubuntu, the world’s most widely deployed Linux OS and the leading platform for cloud and IoT operations.
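
    For instance, LXD ships ready to use; a minimal first-container sketch (the container name is arbitrary):

    # one-time setup, then launch a system container from the ubuntu image server
    sudo lxd init
    lxc launch ubuntu:17.04 zesty-test
    lxc exec zesty-test -- bash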

