A look into Docker concepts for application deployment
Docker is a tool for deploying applications across host systems. Virtualization, union file systems, image registries, orchestration services - while Docker is a useful tool for staging and deployment, there is a real learning curve before you are comfortable with the whole ecosystem.
This is not an introduction to Docker, and it is not a tutorial. This is an ad-hoc collection of knowledge pulled from numerous sources across the internet, aimed at people who've started looking into Docker, possibly created their first containers - and are trying to understand what is happening.
This was written based on Docker 1.4.1.
The first, and most important, point to understand is that Docker containers are not virtual machines. Docker uses kernel isolation features (namespaces and cgroups) to separate process space, memory and file system between containers - but a Docker container does not run its own operating system kernel. It uses the host's kernel.
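A quick way to see this for yourself - a sketch, assuming Docker is installed and that the `busybox` image can be pulled - is to compare the kernel version reported on the host with the one reported inside a container:

```shell
# On the host: report the running kernel release.
uname -r

# Inside a container: busybox ships its own userland, but `uname -r`
# reports the exact same kernel release, because the container
# shares the host's kernel rather than booting one of its own.
docker run --rm busybox uname -r
```

The two commands print the same value; only the userland (libraries, binaries, files) differs between host and container.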
This means Docker containers are far more efficient than virtual machines - you do not need to emulate hardware, and you do not need to run a second operating system on top of the host. You also do not need to "boot" a container - startup times are very fast. A container does not automatically start any services - if you want to run a service inside a container, you must start it explicitly (see Running a container for more on this topic).
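For example - a sketch, where the image names and port numbers are illustrative - a container runs exactly the command you give it and nothing else; to run a service, you make that service the container's command:

```shell
# Nothing runs in a container unless you start it explicitly.
# This container lives only as long as the single command you give it:
docker run --rm ubuntu echo "this is the only process that ran"

# To run a service, start it as the container's long-running command.
# -d detaches the container; -p maps host port 8080 to container port 80
# (the nginx image is an illustrative choice here).
docker run -d -p 8080:80 nginx
```

Compare this with a virtual machine, where an init system starts a whole tree of services on boot whether you need them or not.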
If you want to deploy an application that has a hard dependency on a kernel feature, you must ensure the host kernel provides that feature. In practice, this is rarely an issue.
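When it does matter, the check happens on the host, not in the image. A minimal sketch of such a check - overlay filesystem support is just an example feature; substitute whatever your application actually requires:

```shell
# Print the host kernel release - every container sees this same kernel.
uname -r

# Check whether a given kernel feature is available on the host.
# /proc/filesystems lists the filesystems the running kernel supports,
# so grepping it tells us whether overlay can be used at all.
if grep -q overlay /proc/filesystems; then
    echo "overlay supported"
else
    echo "overlay missing"
fi
```

Because containers cannot load their own kernel, a missing feature here cannot be fixed inside the image - you must upgrade or reconfigure the host kernel instead.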