
Understanding Docker Architecture: A Beginner's Guide to How Docker Works

Prince Onyeanuna


Using Docker, developers can package their code and all of its dependencies into a unit called an image and then run that image as an isolated environment called a container. This makes it easy to move an application from a development server to a production server without worrying about compatibility issues.

Aside from knowing basic Docker commands, it is also important to understand how those commands work under the hood. In this article, you will learn about the fundamental components that make up Docker and how they work together.

What is a Docker container?

Containers are lightweight, standalone units containing all the code and libraries needed to run an application. Unlike a virtual machine, a Docker container runs directly on the host operating system, sharing the host's kernel with other containers instead of booting its own operating system.

Docker containers are designed to be moved around easily between different environments without changing the application or its dependencies.
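
A quick way to see this kernel sharing in practice, assuming Docker is already installed on a Linux host and using the public alpine image as an example, is to compare the kernel version reported inside a container with the host's:

$ uname -r                           # kernel version on the host
$ docker run --rm alpine uname -r    # kernel version inside a container
# Both commands report the same kernel, because the container shares
# the host's kernel rather than booting its own.

On macOS or Windows, Docker runs containers inside a small Linux virtual machine, so the kernel reported is that VM's kernel rather than the host's.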

What is a Docker engine?

The Docker engine is the core of the Docker platform. It manages the entire lifecycle of containers: building, running and shipping them. When you install Docker, the Docker engine is installed as well. The engine is the client-server technology that ties together all of Docker's services and components to manage containers.
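
A simple way to see this client-server split for yourself, assuming Docker is installed, is to ask Docker to describe its own components:

$ docker version   # prints separate Client and Server sections: one for the CLI, one for the engine/daemon
$ docker info      # reports details about the running daemon, such as the number of containers and images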

The 3 fundamental components of the Docker engine

The Docker engine consists of three fundamental components: the Docker daemon, the Docker API and the Docker client.

Docker daemon

The Docker daemon is a background process that listens for requests from the Docker client and manages the creation and running of Docker containers. It can be thought of as the engine that powers the Docker environment, allowing developers to build, run and manage containerized applications.

The Docker daemon pulls Docker images from registries and manages the resources needed to run Docker containers. Docker daemon functions include:

  • Image management: The Docker daemon manages images, including pulling and caching images for fast and efficient container creation.
  • Volume management: Persisting data in containers is possible due to the Docker daemon. It enables the creation and management of volumes, which ensures data is saved when containers are deleted.
  • Network management: The Docker daemon manages communication between containers and the outside world. It manages container network interfaces, ensuring they are isolated from each other and the host machine.
  • Container management: The Docker daemon manages starting, stopping and deleting containers.
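
Each of the functions listed above corresponds to everyday CLI commands that are carried out by the daemon behind the scenes. A few illustrative examples (the image and volume names here are arbitrary):

$ docker pull nginx:latest        # image management: pull and cache an image
$ docker volume create mydata     # volume management: create a named volume
$ docker network ls               # network management: list container networks
$ docker ps -a                    # container management: list all containers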

Docker API

The Docker API is a programmatic interface for communicating with the Docker daemon. With the Docker API, you can tell the Docker daemon to perform tasks such as starting, stopping and deleting containers, or pulling and pushing Docker images. The API also exposes endpoints for managing networks, volumes and other resources.

Every task the Docker daemon performs is exposed through the Docker API; without it, there would be no way to communicate with the daemon programmatically.
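
Because the API is served over the daemon's Unix socket (by default /var/run/docker.sock on Linux), you can call it directly with a tool like curl. Treat the snippet below as a sketch: the socket path is the Linux default, and depending on your setup you may need root privileges or membership in the docker group.

# List running containers (equivalent to `docker ps`)
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# Pull the nginx:latest image (equivalent to `docker pull nginx:latest`)
$ curl --unix-socket /var/run/docker.sock -X POST \
    "http://localhost/images/create?fromImage=nginx:latest"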

Docker client

The Docker client is the primary way of communicating with the Docker daemon. It is the command-line interface (CLI) developers use to interact with the daemon from their machines. When a user runs a command such as docker run, the Docker client translates it into a request and sends it to the Docker daemon.

With the Docker client, developers can manage Docker containers, images, networks, and volumes from the command line. Below are the key features of the Docker client:

  • Command line interface: The Docker client provides developers with a command-line interface for executing Docker commands.
  • Integration with development tools: With the Docker client, it is possible to manage Docker containers from popular development environments, including Visual Studio Code and IntelliJ IDEA.

Note: Although the Docker API and the Docker client may seem similar, since both are ways to interact with the Docker daemon, they differ in an important way. The Docker API is the RESTful HTTP interface the daemon exposes, over a local Unix socket or a network port, while the Docker client is a command-line tool that consumes that API, translating each command into one or more API requests.
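
You can see this relationship in the client itself: the docker CLI lets you choose which daemon endpoint it talks to, either the local Unix socket or a remote TCP address. In the sketch below, remote-host is a placeholder, and 2375 is the conventional (unencrypted) daemon port:

# Talk to the local daemon over its Unix socket (the default on Linux)
$ docker -H unix:///var/run/docker.sock ps

# Talk to a remote daemon over TCP
$ DOCKER_HOST=tcp://remote-host:2375 docker ps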

Docker workflow

The Docker client, the Docker API and the Docker daemon work together to form the complete Docker workflow. A practical way to see all these parts in action is to create a container.

To create a Docker container, you would follow the steps below:

Step 1

Using the Docker client, you can pull an image from a registry (such as Docker Hub) or build an image from a Dockerfile.

$ docker pull nginx:latest
# The above pulls the nginx image from Docker Hub

$ docker build -t <image_tag> .
# Run the above in a directory containing a Dockerfile to build an image from it and tag it

Step 2

Once you have an image, create a container using the Docker client.

$ docker run --name mycontainer -d nginx:latest

The above command is sent as a request through the Docker API to the Docker daemon, which creates the container. The daemon sets up a network interface that allows the container to communicate with other containers or the host system, and it can also set up volumes the container uses to persist data.
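
To confirm what the daemon set up for this container, you can query it with the client. These are standard docker commands; the field in the format string comes from Docker's inspect data for the default bridge network:

$ docker ps                                                               # confirm the container is running
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' mycontainer    # the container's IP on the default bridge network
$ docker port mycontainer                                                 # ports published to the host (none here, since no -p flag was used)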

Step 3

Once the container is running, you can interact with it using the Docker client. For example:

$ docker exec -it mycontainer bash # Execute a command inside a running container
$ docker logs mycontainer # View the logs of a container
$ docker stop mycontainer # Stop a container
$ docker start mycontainer # Start a container   

Step 4

You can also use the Docker client to stop or delete the container.

$ docker stop mycontainer # Stop a container
$ docker rm mycontainer # Remove a container

At every step, the Docker client sends commands as requests through the Docker API to the Docker daemon, which creates and manages the containers and their resources.

Alternatives to Docker

Docker isn't the first containerization platform, but it played a significant role in popularizing containers by simplifying the process of creating them and providing a user-friendly interface.

However, there are alternatives to Docker, including container runtimes such as containerd and CRI-O and image-building tools such as Buildah.

Containerd: This is an open-source container runtime that manages the container lifecycle. It provides a high-level API for managing containers, and both Docker and Kubernetes can use it as their underlying runtime.

CRI-O: This is an open-source container runtime designed for use with Kubernetes. It provides a lightweight and stable runtime environment for containers and implements the Kubernetes Container Runtime Interface (CRI), making it easy to integrate with Kubernetes.

Buildah: This is a lightweight, open-source command-line tool for building and managing container images, and an efficient alternative to Docker for that task. With Buildah, you can build images in several ways, including from a Dockerfile (or Containerfile) or by running Buildah commands directly against a working container. Buildah is a flexible, secure and powerful tool for building container images.
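
As a brief illustration of that second approach, here is a minimal sketch of building an image with individual Buildah commands instead of a Dockerfile. It assumes Buildah is installed and uses the public alpine image; the image name my-image is arbitrary:

$ ctr=$(buildah from docker.io/library/alpine)             # start a working container from a base image
$ buildah run "$ctr" -- sh -c 'echo hello > /hello.txt'    # run a command inside it
$ buildah commit "$ctr" my-image                           # commit the result as a new image
$ buildah rm "$ctr"                                        # clean up the working container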

Conclusion

Containerization is rapidly becoming the new standard for application deployment, and Docker is leading the way in this area. With its robust and flexible architecture, Docker makes it easy to build, deploy, and manage containerized applications on any platform.

If you're looking to improve your application deployment process or if you want to explore the exciting world of containerization, it is advised that you dive in and explore Docker.

To learn more about Docker, check out the official Docker documentation.


Prince Onyeanuna

Prince is a technical writer and DevOps engineer who believes in the power of showing up. He is passionate about helping others learn and grow through writing and coding.