Anatomy of a Docker Container Execution

Containers, Swarm, plugins: there are just so many things to comprehend when trying to understand Docker and the Docker ecosystem. One of the biggest problems I had when learning the platform was understanding what actually happens when I execute a docker command to run a container. Today we’ll cover the very basics of what happens under the covers on a machine running the Docker engine and docker client. The beauty is that many of these commands translate directly to Swarm-enabled Docker clusters, which end up being just another endpoint, so we’ll be building on these basics.

A basic container execution:

To get started, I’m assuming that you have the Docker engine installed somewhere and that you’ve logged into that host to execute these commands. I’m using my Dell XPS Developer Edition laptop running Ubuntu Linux, but you could do this on any machine running the Docker engine and the docker client. Here’s the first command we’ll execute.

docker run -i -t centos

Here’s the output of that and a few others:

[Screenshot: docker run pulling the centos image, followed by docker ps output]

You’ll notice that when the command executes, the Docker engine realizes it needs to download the latest version of an image named centos (the official Docker centos image). The Docker engine pulls the latest image layers from Docker Hub by default because I didn’t specify a specific image version. You’ll see that I’m dropped into a CentOS container shell, which I then detach from by pressing “CTRL + P, CTRL + Q”, leaving the container running. Typing exit (or pressing CTRL + D) instead would have stopped the container process completely.

With the container process still running, you can show the running containers on the local Docker engine by executing:

docker ps

To list all of the containers, both running and stopped, execute:

docker ps -a

So that’s it, we have a basic container running on our docker engine host. But what actually happened? First let’s look at the components involved:

Docker Engine:

Under the covers, the machine is running the Docker engine (a daemonized Unix/Linux process). The Docker engine exposes a RESTful API on the local Unix socket /var/run/docker.sock, which the docker client, or any other client, can leverage to execute tasks on the engine.
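To make the “RESTful API on a Unix socket” idea concrete, here’s a minimal sketch of the raw HTTP request a client could write to that socket. The /containers/json endpoint is from the public Docker Engine API; building the request bytes requires no running daemon, and the helper function name is mine, not part of any Docker tooling.

```python
# Where the Docker engine listens locally (from the article above).
DOCKER_SOCK = "/var/run/docker.sock"

def build_api_request(method: str, path: str) -> bytes:
    """Build a minimal HTTP/1.1 request that could be written to the
    engine's Unix socket. Illustrative helper, not a Docker API."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        "Host: localhost\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

# `docker ps` ultimately boils down to a GET against /containers/json:
request = build_api_request("GET", "/containers/json")
```

Any HTTP client that can speak over a Unix socket (curl, for instance, or a language’s HTTP library) could send a request shaped like this and get the same JSON the docker client consumes.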

Docker Client:

Nearly every docker-based command begins with the command “docker”. Whenever that happens, what’s really going on is that the docker client is formatting a REST request (with a JSON body where needed) to send to the Docker engine. In fact, should the Docker port be exposed on a remote host (don’t do this without understanding the security implications of doing so), you can export DOCKER_HOST=”tcp://ip_address:port” as an environment variable to send your commands in that host’s direction.
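As a rough sketch of that client-side JSON formatting, here is an approximation of the body the client would POST to the engine’s /containers/create endpoint for our docker run -i -t centos command. The field names (Image, Tty, OpenStdin, and the Attach* flags) come from the Docker Engine API; this is an illustration of the shape of the request, not the client’s exact output.

```python
import json

def run_request_body(image: str, interactive: bool = True, tty: bool = True) -> str:
    """Approximate the JSON body for POST /containers/create that
    mirrors `docker run -i -t <image>`. Illustrative sketch only."""
    return json.dumps({
        "Image": image,
        "Tty": tty,                # -t: allocate a pseudo-TTY
        "OpenStdin": interactive,  # -i: keep STDIN open
        "AttachStdin": interactive,
        "AttachStdout": True,
        "AttachStderr": True,
    })

body = run_request_body("centos")
```

The takeaway: the docker CLI is a thin translator from flags like -i and -t into fields in a JSON document.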

Docker Hub:

Docker Hub is a remote repository for images. The “pulling” section of the output above, after the docker run command, is the Docker engine pulling the components (actually called layers, more on that later) needed to execute the container. There are two types of images on Docker Hub: Docker-maintained images and user-maintained images. Docker-maintained images have only a singular name; you’ll notice that centos or ubuntu, for example, are simply “centos” or “ubuntu”. User-created images are denoted by username/imagename. Docker Hub serves as a great location to store Docker images.
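That naming convention can be sketched as a small parsing function. The function and its tuple layout are mine, purely for illustration, but the rules it encodes are the ones described above: official images have no user part, and a missing tag defaults to latest, which is why our earlier docker run pulled centos:latest.

```python
def parse_image_ref(ref: str):
    """Split an image reference into (user, name, tag).
    Official images like "centos" have no user component, and the
    tag defaults to "latest" when none is given. Illustrative only."""
    name, _, tag = ref.partition(":")
    user, _, repo = name.rpartition("/")
    return (user or None, repo, tag or "latest")

official = parse_image_ref("centos")            # (None, "centos", "latest")
user_img = parse_image_ref("username/imagename:7")  # ("username", "imagename", "7")
```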

Docker Trusted Registry:

For enterprises, there’s a Docker-supported, commercially available alternative to Docker Hub called Docker Trusted Registry, which runs on-prem or on your own network. Many enterprises, for business or compliance reasons, choose to store code on-prem. For these businesses it’s a great option, providing the functionality of Docker Hub in a private fashion.

Putting it all together:

Now that we’ve looked at the bare-bones components, let’s look at what happened:

[Diagram: anatomy of a Docker container execution]

The process flow starts with the docker client:

  • The docker client requests that the Docker engine run the centos image, via a REST call to the engine’s API.
  • The Docker engine checks its local image cache, realizes it doesn’t have a locally cached copy of the centos image, and begins downloading the image’s layers from Docker Hub (or from a Docker Trusted Registry, if the engine is configured to use one).
  • The Docker engine uses the image to execute a new container on the Docker host.
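The three steps above can be sketched as a toy cache-or-pull model. The dictionaries standing in for Docker Hub and the engine’s local cache, and the function itself, are illustrative stand-ins, not real Docker internals.

```python
def run_container(image: str, local_cache: dict, registry: dict) -> dict:
    """Toy model of the engine's flow: check the local image cache,
    pull the layers from the registry on a miss, then start a container."""
    if image not in local_cache:              # cache miss
        local_cache[image] = registry[image]  # the "pulling" step
    return {"image": image, "layers": local_cache[image], "status": "running"}

hub = {"centos": ["base-layer", "centos-layer"]}  # stand-in for Docker Hub
cache = {}                                        # engine's local image cache

first = run_container("centos", cache, hub)   # triggers a pull
second = run_container("centos", cache, hub)  # served from the local cache
```

This is also why a second docker run of the same image starts much faster: the layers are already cached locally.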

The next steps:

There are several things that the Docker engine simply handles for you after creating the container, including:

  • Allocating a read/write filesystem for the container
  • Getting a network bridge interface to communicate to the outside world
  • Getting a network address from an IP address pool
  • Setting up logging for the container
  • Optionally, executing any process within the container that is specified at runtime
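Two of those defaults, the network bridge and the logging setup, have a visible home in the container-create request’s HostConfig section. NetworkMode and LogConfig are real Docker Engine API fields, and the values shown are the usual defaults, but treat this as an illustrative fragment rather than the engine’s exact internal representation.

```python
import json

def create_body_with_defaults(image: str) -> str:
    """Illustrative container-create body showing where networking and
    logging choices live. Values shown are Docker's usual defaults."""
    return json.dumps({
        "Image": image,
        "HostConfig": {
            "NetworkMode": "bridge",  # the default bridge network
            "LogConfig": {"Type": "json-file", "Config": {}},  # default log driver
        },
    })

body = create_body_with_defaults("centos")
```

When you don’t override these on the command line, the engine fills them in for you, which is exactly the “magic” the list above describes.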

We’ll get into some of these additional specifics in another post. For now, happy containerizing and happy automating.

 
