Docker (http://docker.com) is the new attraction in the software industry. Interest is taking off like crazy, spawning many projects, often with names containing puns around shipping containers.
Docker describes itself as an open platform for distributed applications, aimed at developers and sysadmins alike. It is designed around Linux containerization technology, and it focuses on describing the configuration of software on any variant of Linux.
Docker automates application deployment within software containers. The basic concepts behind Linux containers date back to the first chroot jail implementation in the late 1970s, and to later systems such as Solaris Zones. The Docker implementation creates a layer of software isolation and virtualization based on Linux cgroups, kernel namespaces, and union-capable filesystems, which blend together to make Docker what it is. That was some heavy geek-speak, so let's try a simpler explanation.
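To make the geek-speak a little more concrete, here is a minimal sketch of how you can peek at the kernel namespaces behind a running container. The container name `ns-demo` is an arbitrary example, and `nginx` is simply a convenient official image:

```
# Start a throwaway container to inspect
docker run -d --name ns-demo nginx

# Ask Docker for the host PID of the container's main process
PID=$(docker inspect --format '{{.State.Pid}}' ns-demo)

# Each symlink here is one of the kernel namespaces isolating the container
sudo ls -l /proc/$PID/ns
```

Every entry listed (mnt, net, pid, and so on) represents a view of system resources that the container sees differently from the host.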
A Docker container is a running instantiation of a Docker image. An image is a given Linux OS and application configuration designed by developers for whatever purpose they have in mind. Developers describe an image using a Dockerfile. The Dockerfile is a fairly simple-to-write script showing Docker how to build an image. Docker images are designed to be copied to any server, where the image is instantiated as a Docker container.
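As a minimal sketch of that workflow, here is an illustrative Dockerfile; the base image and installed software are examples, not taken from any particular project:

```
# Dockerfile -- a minimal, illustrative example
# Base Linux OS layer
FROM ubuntu:16.04
# Application configuration: install nginx into the image
RUN apt-get update && apt-get install -y nginx
# Document the port the service listens on
EXPOSE 80
# The process to run when the image is instantiated as a container
CMD ["nginx", "-g", "daemon off;"]
```

With that file in the current directory, `docker build -t myapp .` builds the image, and `docker run -d -p 80:80 myapp` instantiates it as a running container (the tag `myapp` is an arbitrary name).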
A running container will make you feel like you're inside a virtual server running on a virtual machine. But Docker containerization is very different from a virtual machine system such as VirtualBox. The processes running inside the container are actually running on the host OS. The containerization technology (cgroups, kernel namespaces, and so on) creates the illusion of running on the Linux variant specified in the Dockerfile, even if the host OS is completely different. Your host OS could be Ubuntu and the container OS could be Fedora or openSUSE; Docker makes it all work.
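You can see this for yourself. Assuming Docker is installed on, say, an Ubuntu host, the following commands report two different operating systems sharing a single kernel:

```
# On the host: report the host's OS (Ubuntu, in this example)
cat /etc/os-release

# Inside a Fedora container: a different OS userland...
docker run --rm fedora cat /etc/os-release

# ...but the same kernel version as the host, because containers share the host kernel
docker run --rm fedora uname -r
uname -r
```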
By contrast, with virtual machine software (VirtualBox and VMware, among others), you're using what feels like a real computer. There is a virtual BIOS and virtualized system hardware, and you must install a full-fledged guest OS. You must follow every ritual of computer ownership, including securing licenses if it's a closed-source system such as Windows.
While Docker is primarily targeted at x86 flavors of Linux, it is available for several ARM-based OSes, as well as for other processors. You can even run Docker on single-board computers, such as the Raspberry Pi, for hardware-oriented Internet of Things projects. Operating systems such as ResinOS (from the Resin.io project) are optimized solely to run Docker containers.
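If you want to try that, Docker's documented convenience script supports ARM devices such as the Raspberry Pi; the usual caveat applies about inspecting a downloaded script before running it:

```
# Download and run Docker's convenience installer
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```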
The Docker ecosystem contains many tools, and the list is growing quickly. For our purposes, we'll focus on the following three:
- Docker Engine: This is the core execution system that orchestrates everything. It runs on a Linux host system, exposing a network-based API that client applications use to make Docker requests, such as building, deploying, and running containers.
- Docker Machine: This is a client application for provisioning Docker Engine instances on host computers.
- Docker Compose: This helps you define a multi-container application, with all its dependencies, in a single file (see the sketch following this list).
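As a quick, hedged sketch of the latter two tools in action; the machine name, service names, and images below are invented for illustration, not taken from any particular project. First, Docker Machine provisioning a Docker Engine inside a local VirtualBox VM:

```
# Provision a Docker Engine on a new VirtualBox VM named "sandbox" (arbitrary name)
docker-machine create --driver virtualbox sandbox

# Point the local docker client at that engine
eval $(docker-machine env sandbox)
```

And a minimal docker-compose.yml describing a two-container application:

```
# docker-compose.yml -- an illustrative two-service application
version: '2'
services:
  web:
    image: nginx            # front-end service, published to the host
    ports:
      - "80:80"
  db:
    image: mysql            # back-end service, reachable from "web" by its name
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running `docker-compose up -d` then pulls the images and starts both containers with their connections wired up.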
With the Docker ecosystem, you can create a whole universe of subnets and services to implement your dream application. That universe can run on your laptop or be deployed to a globe-spanning network of cloud-hosting facilities. The attack surface exposed to miscreants is strictly defined by the developer. A multi-container application can even limit access between services so strictly that miscreants who do manage to break into one container will find it difficult to break out of that container.
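For example, here is one way to express that kind of segmentation with Docker's bridge networks; the network and container names, and the `myapi` image, are invented placeholders:

```
# Two private bridge networks: one for front-end traffic, one for the database
docker network create frontnet
docker network create dbnet

# The web container joins only frontnet; the db container joins only dbnet
docker run -d --name web --network frontnet nginx
docker run -d --name db --network dbnet -e MYSQL_ROOT_PASSWORD=example mysql

# A middle-tier container bridges the two, so "web" never reaches "db" directly
docker run -d --name api --network frontnet myapi
docker network connect dbnet api
```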
Using Docker, we'll first build the system shown in the previous diagram on our laptop. Then we'll migrate that system to a Docker instance on a server.