Build a Docker Image and Push to Docker Hub: A Quick Guide
In the build process of modern applications, one important step you wouldn't want to skip is containerizing your app, with Docker being the go-to tool for this process. After containerization, your application will be tagged and pushed to a public repository like Docker Hub.
The workflow is usually similar to this:
- Building the Docker image from a Dockerfile
- Tagging the image
- Authenticating with a container registry, in this case, Docker Hub
- Pushing the Docker image to the container registry
In this article, you'll learn how Docker fits perfectly into your application build workflow. You'll go through the steps listed above and get a quick practical guide on building and pushing your Docker image to Docker Hub. By the end of this article, you'll be a pro at utilizing the Docker build workflow for your project.
Brief Intro to Docker
Docker is an open-source containerization tool that helps you create and manage lightweight and portable containers. These containers package your application and everything necessary (dependencies, configuration files, etc.) for it to run consistently across different environments.
Containerization addresses the classic problem of "but it works on my machine" by providing a standardized unit of software that behaves the same regardless of where it's deployed.
Why Docker is Important in Modern Software Development
- Consistency and Portability: Docker containers encapsulate your applications with their entire runtime environment, including libraries, configuration files, and dependencies. This encapsulation ensures that your applications run the same way on any system that supports Docker, whether a developer's laptop, a test server, or a production environment.
- Efficiency and Resource Management: Unlike traditional virtual machines (VMs), which include an entire operating system for each instance, Docker containers share the host OS kernel. This makes containers much more lightweight and efficient in terms of resource usage. They start up quickly and use less memory & storage compared to VMs.
- Simplified DevOps: Docker fits perfectly into DevOps practices, enabling more streamlined and automated CI/CD pipelines. You can build and test Docker images on your machine, which can be deployed directly to any production environment, reducing the time between writing and deploying code.
- Microservices Architecture: Aside from its easy integration with DevOps practices, Docker has played a pivotal role in the adoption of microservices. Each microservice can run in its own container, which allows you to manage, scale, and update services independently.
Role of Docker in Creating Containers
Docker has a dedicated set of tools that work hand-in-hand to create these lightweight containers. These tools include:
- Docker Engine: This core component creates, runs, and manages containers on a host machine.
- Docker Images: These are immutable templates for creating containers. These images are built from Dockerfiles, which specify the application's dependencies and setup instructions.
- Docker Hub: This is a Docker registry where users can share and access container images, promoting reuse and collaboration within the developer community.
How Does Docker Work?
The usual Docker workflow looks like this:
Figure 1: Docker Workflow
In this workflow, you'll write your Dockerfile, build the Docker image, and push it to Docker Hub. Once this workflow is integrated into your project, it'll look something like this:
Figure 2: Application Build Process Workflow with Docker integration
In this workflow, after pushing to Docker Hub, your image will be available for use in your project. You can then pull the image into your server (development or production) to run it.
From the images above, you can see that understanding how Docker works involves grasping three core concepts: containerization, images, and containers. That's what we'll focus on in this section.
Containerization
Containerization is a lightweight form of virtualization that involves packaging your application into a single unit called a container. Although containers share the host operating system's kernel, they run in isolated user spaces, ensuring they do not interfere with each other or the host system.
Docker Images
As mentioned earlier, a Docker image is an immutable template that contains everything needed to run a piece of software. Docker images are built using a Dockerfile - a text file containing a series of instructions on how to construct the image. The instructions in this file specify:
- Base Image: This is the starting point for the image, which is often a minimal OS or a pre-configured runtime environment.
- Commands to Install Dependencies: These are the steps to add necessary software, libraries, and dependencies.
- Application Code: These are instructions to copy the application code into the image.
- Configuration: Environment variables and configurations required for the application to run.
Docker images are stored in registries like Docker Hub, which acts as a repository for sharing and distributing images.
Docker Containers
A Docker container is an executable instance of a Docker image. While an image is a static blueprint, a container is a dynamic entity created from that blueprint. When you run a Docker image, it creates a container. Containers are ephemeral by nature, meaning they can be started, stopped, moved, or deleted as needed without affecting the image itself.
Interaction Between Images and Containers
- Building an Image: You build a Docker image from a Dockerfile.
- Storing and Sharing: Once the image is built, you'll push it to Docker Hub.
- Running a Container: When you pull an image from Docker Hub, you can run it to create a container. Each time you run an image, Docker creates a new container instance from that image.
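As a rough sketch of this lifecycle, the commands below build an image, push it to Docker Hub, and then pull and run it elsewhere. Note that my-username/my-node-app is a hypothetical image name, and pushing assumes you have already logged in, which is covered later in this article:
# Build an image from the Dockerfile in the current directory
docker build -t my-username/my-node-app:1.0 .
# Push the image to Docker Hub so others (or your servers) can pull it
docker push my-username/my-node-app:1.0
# On another machine: pull the image and start a container from it
docker pull my-username/my-node-app:1.0
docker run -d my-username/my-node-app:1.0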
Key Differences
- Docker Image: A static, read-only template containing the application and its dependencies. It is built once and can be reused many times to create containers. Images are versioned and immutable.
- Docker Container: A live, running instance of an image. Containers are mutable, meaning you can start, stop, and modify them during their lifecycle. However, any changes you make to a container do not affect the original image.
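One easy way to see this difference in practice is to start several containers from the same image; each gets its own ID and lifecycle while the image itself stays untouched. Here, my-node-app is a hypothetical image, like the one built later in this article:
# Start two independent containers from the same image
docker run -d --name app-one my-node-app
docker run -d --name app-two my-node-app
# List running containers: two instances, one underlying image
docker ps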
How to Build a Docker Image
Building a Docker image involves writing your Dockerfile to define your application's environment and dependencies. Below is a step-by-step process of achieving this:
Prerequisites
To follow through with this guide, you'll need the following:
- Docker: You'll need Docker installed on your machine. Go to the official Docker website and download the version that fits your machine. To confirm it's installed, run docker --version to see your current Docker version (a quick sanity check is also shown after this list).
- Docker Hub: Ensure you have a Docker Hub account. If you don't have one, you can create one on Docker Hub's website.
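Beyond checking the version, a quick way to confirm that your installation can actually pull and run containers is Docker's own hello-world test image:
docker run hello-world
If Docker is set up correctly, this pulls a tiny image and prints a confirmation message.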
Write a Dockerfile
Below are standard instructions that you'll find in a Dockerfile:
- FROM: Specifies the base image to use.
- LABEL: Adds metadata to the image.
- RUN: Executes commands in the container.
- COPY/ADD: Copies files/directories from the host to the container.
- WORKDIR: Sets the working directory inside the container.
- EXPOSE: This instruction informs Docker that the container listens on the specified network ports.
- CMD/ENTRYPOINT: Specifies the command to run when the container starts.
Create a file called Dockerfile and paste the instructions below into it. This Dockerfile packages a Node.js application:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "app.js"]
This Dockerfile says, "Use the Node.js version 14 image as the base. Set the working directory inside the container to /usr/src/app and copy the package.json and package-lock.json files to that directory. Run npm install to install the necessary dependencies, then copy the remaining application files into the working directory. Expose port 8080 to allow external access, and execute the app.js file using Node.js when the container starts."
Build the Docker Image
To build your Docker image from the Dockerfile, navigate to the directory containing your Dockerfile in the terminal and run the Docker build command:
docker build -t my-node-app .
This command builds the image and tags it as my-node-app. The trailing dot tells Docker to use the current directory as the build context.
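You can confirm the image was created by listing your local images:
docker images my-node-app
The new image should show up with the latest tag, since no explicit version was given.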
Run the Docker Container
After building the image, you can create and run a container from it using the command below:
docker run -p 8080:8080 my-node-app
This command will map port 8080 of the container to port 8080 on your host machine.
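To confirm the container is running and the app responds, you can list running containers and send a test request to the mapped port (this assumes app.js actually listens on port 8080):
# Show running containers and their port mappings
docker ps
# Send a test request to the containerized app
curl http://localhost:8080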
Some Things to Consider When Writing Dockerfiles
- Use a .dockerignore File: This file is like a .gitignore, telling Docker which files and directories to ignore when building your image. This can significantly reduce the build context size and speed up the build process.
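A typical .dockerignore for a Node.js project might look something like this (adjust the entries to match what your project actually generates):
node_modules
npm-debug.log
.git
.env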
- Minimize Layers: Each RUN, COPY, and ADD instruction creates a new layer in your image. Combine instructions where possible to reduce the number of layers. For example:
RUN apt-get update && apt-get install -y \
package1 \
package2 \
package3 && \
apt-get clean && rm -rf /var/lib/apt/lists/*
This example minimizes the number of layers by combining multiple commands into a single RUN instruction. Instead of having separate RUN instructions for each package installation and cleanup step, everything is done in one command.
- Use Caching: Docker caches the results of each instruction in a Dockerfile. To take advantage of this, place instructions less likely to change at the top and those that change frequently (like copying application code) at the bottom.
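As a rough illustration, the two fragments below produce the same image, but only the second ordering lets Docker reuse the cached npm install layer when only your source code changes:
# Cache-unfriendly: any source change invalidates the dependency-install layer
COPY . .
RUN npm install
# Cache-friendly: dependencies are reinstalled only when package*.json changes
COPY package*.json ./
RUN npm install
COPY . .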
- Use Multistage Builds: This technique helps reduce the final image size by using multiple FROM statements in a single Dockerfile. The first stage includes the dependencies and build tools, and the final stage copies only the necessary artefacts. For instance:
# Stage 1: Build
FROM node:14 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:14
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/dist ./dist
EXPOSE 8080
CMD ["node", "dist/app.js"]
- Choose Lightweight Base Images: Use minimal base images like alpine where possible to reduce the overall size of your Docker images. For example:
FROM node:14-alpine
How to Tag and Push a Docker Image to Docker Hub
After building your Docker image, the next step is usually tagging and pushing the image to a public registry, in this case, Docker Hub. This process involves a few steps, and we'll go through them in this section.
Tagging the Docker Image
Tagging an image attaches a name and version of your choice to it. This version information helps you manage different versions of your image.
There are two ways you can tag your image. As demonstrated above, you can build and tag your image simultaneously using the -t option with the docker build command. For example:
docker build -t my-username/my-node-app:1.0 .
In this example, my-username is your Docker Hub username, my-node-app is the name of your application, and 1.0 is the version tag.
The second way is to tag an existing image with a new tag using the docker tag command. For example:
docker tag my-node-app:latest my-username/my-node-app:1.0
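You can confirm the new tag exists, and that it points at the same underlying image as the original, by listing your local images:
# Both my-node-app:latest and my-username/my-node-app:1.0 should share the same IMAGE ID
docker images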
Authenticate with Docker Hub
Before you can push an image, you'll need to authenticate with Docker Hub. To authenticate, run the Docker login command:
docker login
You will be prompted to enter your Docker username and password, after which you'll get a message that looks like this:
Login Succeeded
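For scripts and CI pipelines where an interactive prompt isn't available, docker login can also read the password, or preferably a Docker Hub access token, from standard input. In this sketch, DOCKER_HUB_TOKEN is a placeholder environment variable holding your token:
# Non-interactive login; DOCKER_HUB_TOKEN is a placeholder for your access token
echo "$DOCKER_HUB_TOKEN" | docker login -u my-username --password-stdin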
Pushing the Docker Image to Docker Hub
After running the Docker login command, you can push your image to Docker Hub. To do so, run the following command:
docker push my-username/my-node-app:1.0
This uploads the image to your Docker Hub repository under the specified name and version tag.
You should get an output similar to the following:
The push refers to repository [docker.io/my-username/my-node-app]
1.0: digest: sha256:abcd1234... size: 1234
Let's go through the complete example:
Build and Tag the Image:
docker build -t my-username/my-node-app:1.0 .
docker tag my-username/my-node-app:1.0 my-username/my-node-app:latest
Log in to Docker Hub:
docker login
Enter your Docker Hub credentials when prompted.
Push the Image to Docker Hub:
docker push my-username/my-node-app:1.0
docker push my-username/my-node-app:latest
Some Things to Consider for Managing Docker Images on Docker Hub
- Descriptive Tags: Use descriptive tags to indicate the purpose of each version, such as 1.0, 1.1-beta, 2.0-alpha, etc.
- Automate with CI/CD: Integrate Docker image building, tagging, and pushing into your CI/CD pipeline to automate the process and ensure consistency (a sample script is shown after this list).
- Repository Management: Regularly clean up old or unused tags from your Docker Hub repository to maintain clarity and manage storage efficiently.
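As a rough sketch of the CI/CD automation mentioned above, the script below shows the sequence of commands a pipeline job might run after your tests pass. The image name and the CI_DOCKER_TOKEN variable are placeholders you'd replace with your own values or CI secrets:
#!/bin/sh
# Sketch of a CI job step: authenticate, build, tag, and push in one go
set -e
echo "$CI_DOCKER_TOKEN" | docker login -u my-username --password-stdin
docker build -t my-username/my-node-app:1.0 .
docker tag my-username/my-node-app:1.0 my-username/my-node-app:latest
docker push my-username/my-node-app:1.0
docker push my-username/my-node-app:latest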
Conclusion
In this article, you have learned how to build, tag, and push a Docker image to Docker Hub. You also picked up some tips on writing Dockerfiles and managing Docker images. With these tips, you'll be well-equipped to build and push Docker images in your own projects.
Like this article? Sign up for our newsletter below and become one of over 1000 subscribers who stay informed on the latest developments in the world of DevOps. Subscribe now!
The Practical DevOps Newsletter
Your weekly source of expert tips, real-world scenarios, and streamlined workflows!