5 Docker Best Practices I Wish I Knew When I Started

Written by Bobby Iliev on Aug 23rd, 2024

Introduction

Hey! I'm Bobby, a Docker Captain and the author of the free Introduction to Docker eBook.

In this article, I'll share five Docker best practices that I wish I knew when I first started using Docker. These tips will help you avoid common mistakes and improve your Docker workflows, making your containerization journey smoother and more efficient.

Prerequisites

To follow along, you should have:

  • Basic knowledge of Docker
  • Docker installed on your system

If you're new to Docker, I encourage you to check out my free Introduction to Docker eBook. It covers all the basics you need to get started with Docker and will provide a solid foundation for understanding the best practices we'll discuss in this article.

Step 1 — Use Multi-Stage Builds for Smaller Images

Multi-stage builds are a powerful feature in Docker that help you create smaller, more efficient Docker images. This is crucial because smaller images offer several advantages:

  1. They're faster to push and pull from registries, reducing deployment times.
  2. They use less storage space, which can lead to cost savings in cloud environments.
  3. They have a smaller attack surface, potentially improving security.

Here's an example of a multi-stage Dockerfile for a Go application:

# Build stage
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
# Disable CGO so the binary is statically linked and runs on musl-based Alpine
RUN CGO_ENABLED=0 go build -o main .

# Final stage
FROM alpine:3.20
WORKDIR /root/
COPY --from=builder /app/main .
CMD ["./main"]

Let's break this down:

  1. The first stage, labeled as builder, uses the official Go image to compile our application. This stage includes all the necessary build tools and dependencies.

  2. The second stage starts from a minimal Alpine Linux image. We copy only the compiled binary from the builder stage using the COPY --from=builder command.

  3. The final image contains only the compiled application and the minimal runtime requirements, resulting in a much smaller image.

By using this approach, you can significantly reduce your image size. For example, a Go application image could shrink from several hundred megabytes to just a few megabytes. This not only saves space but also reduces the time it takes to deploy your application.
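
To see the savings for yourself, build the image and check its size. This assumes the multi-stage Dockerfile above is saved in the current directory; the image names go-app and go-app-builder are just placeholders:

# Build the final image and list its size
docker build -t go-app .
docker images go-app

# For comparison, build only the first stage to see the much larger intermediate image
docker build --target builder -t go-app-builder .
docker images go-app-builder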

Step 2 — Use .dockerignore Files

A .dockerignore file is to Docker what a .gitignore file is to Git. It helps you exclude files and directories from your Docker build context. This is important for several reasons:

  1. It speeds up the build process by reducing the amount of data sent to the Docker daemon.
  2. It prevents sensitive or unnecessary files from being inadvertently included in your Docker image.
  3. It helps reduce the final size of your Docker image.

Here's an example of a .dockerignore file:

.git
*.md
*.log
node_modules
test

Let's go through each line:

  • .git: Excludes the entire Git repository, which is usually not needed in the Docker image.
  • *.md: Ignores all Markdown files, typically documentation that's not required for running the application.
  • *.log: Prevents any log files from being included in the image.
  • node_modules: For Node.js projects, this excludes all dependencies, which should be installed fresh during the build process.
  • test: Excludes the test directory, as tests typically aren't needed in production images.

By using a .dockerignore file, your Docker builds will be faster and your images will be cleaner and more secure. It's a simple step that can make a big difference in your Docker workflow.
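
If you want to verify what actually ends up in your build context, one quick trick is a throwaway build stage that copies the whole context and lists it. This is just a debugging sketch; the file name Dockerfile.inspect is arbitrary:

# Throwaway Dockerfile: copy the entire build context and print its contents
FROM busybox
COPY . /context
RUN find /context | sort

Build it with docker build --no-cache --progress=plain -f Dockerfile.inspect . and the output of the RUN step shows exactly which files survived your .dockerignore rules.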

For more information, check out the official .dockerignore documentation.

Step 3 — Implement Health Checks in Your Dockerfiles

Health checks are an important Docker feature that ensure your containers are not only running, but actually working as expected. They allow Docker to periodically verify that your application is functioning correctly.

Here's an example of how to add a health check to a Dockerfile for an Nginx server:

FROM nginx:latest
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1

Let's break down the health check options:

  • --interval=30s: Docker will run the health check every 30 seconds.
  • --timeout=3s: The health check must complete within 3 seconds, or it's considered failed.
  • --start-period=5s: Docker will wait 5 seconds before running the first health check, giving the application time to start up.
  • --retries=3: If the health check fails 3 times in a row, the container is considered unhealthy.

The actual health check command (curl -f http://localhost/ || exit 1) attempts to make an HTTP request to the server. If the request fails, the health check fails. Keep in mind that this assumes curl is available inside the image; if your base image doesn't ship it, install it in the Dockerfile or use another lightweight check.
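
Once the container is running, you can watch the health status change over time. Assuming you built the Dockerfile above into an image called my-nginx (the name is just a placeholder):

# Start the container, then inspect the health state Docker reports
docker run -d --name web my-nginx
docker ps --filter name=web
docker inspect --format '{{.State.Health.Status}}' web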

Implementing health checks offers several benefits:

  1. It allows Docker to automatically restart containers that have become unhealthy.
  2. In a swarm or orchestrated environment, it enables automatic rerouting of traffic away from unhealthy containers.
  3. It provides a clear indicator of application health, making it easier to monitor and troubleshoot issues.

By implementing health checks, you're adding an extra layer of reliability to your containerized applications.

Step 4 — Use Docker Compose for Local Development

Docker Compose is a tool for defining and running multi-container Docker applications. It's especially useful for local development environments where you might need to run several interconnected services.

With Docker Compose V2, the default path for a Compose file is compose.yaml (preferred) or compose.yml in the working directory. Docker Compose also supports docker-compose.yaml and docker-compose.yml for backwards compatibility. If both files exist, Compose prefers the canonical compose.yaml.

Here's an example of a compose.yaml file:

services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/code
    environment:
      - DEBUG=True
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=secret

This compose.yaml file defines two services:

  1. web: This is your application. It builds from the current directory (.), maps port 8000, mounts the current directory as a volume for live code reloading, and sets an environment variable.

  2. db: This is a PostgreSQL database using the official image. It sets up a database named myapp with a password.

To run this multi-container setup, you simply use:

docker compose up
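
A few companion commands are worth knowing for day-to-day development:

# Start everything in the background, follow one service's logs, then tear it all down
docker compose up -d
docker compose logs -f web
docker compose down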

Docker Compose offers several advantages for local development:

  1. It allows you to define your entire application stack in a single file.
  2. You can start all services with a single command.
  3. It provides an isolated environment that closely mimics production.
  4. It's easy to add or remove services as your application evolves.

By using Docker Compose, you can significantly simplify your development workflow and ensure consistency across your team.

Step 5 — Be Cautious with the Latest Tag

While using the latest tag might seem convenient, it can lead to unexpected issues and make your builds less reproducible. Here's why you should be cautious with the latest tag:

  1. Unpredictability: The latest tag is just a default tag name, not a guarantee of a specific version. It usually points to the most recent build, which can change without warning.
  2. Lack of consistency: Different team members or environments might pull the latest image at different times, potentially getting different versions.
  3. Difficulties in debugging: If an issue arises, it's harder to pinpoint which exact version of the image is causing the problem.

Instead of using latest, it's better to specify exact versions in your Dockerfile or compose file. For example:

FROM node:18.20.4

Or in your compose.yaml:

services:
  db:
    image: postgres:13.3

By specifying exact versions:

  1. You ensure that everyone on your team is using the same version.
  2. You can easily reproduce builds and deployments.
  3. You have more control over when and how you update your dependencies.

When you're ready to update to a new version, you can do so intentionally by changing the version number in your Dockerfile or compose file. This gives you the opportunity to test the new version thoroughly before deploying it to production.
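
If you want to go a step further, you can pin an image by its digest, which stays immutable even if a tag is re-pushed. The sha256 value below is a placeholder; after pulling, look up the real digest:

# Pull the pinned tag and display its immutable digest
docker pull node:18.20.4
docker images --digests node

# Then reference the digest in your Dockerfile (placeholder shown):
# FROM node:18.20.4@sha256:<digest>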

Bonus Tip: Implement Regular Security Scans

Security should be a top priority when working with Docker images. Implementing regular security scans can help you identify and address vulnerabilities in your containers before they become a problem.

One powerful tool for this purpose is Docker Scout, which is integrated directly into Docker Desktop and the Docker CLI. Here's how you can use it:

  1. First, ensure you have the latest version of Docker installed.

  2. To scan an image, use the following command:

    docker scout cves <image_name>

    For example:

    docker scout cves nginx:latest

  3. This command will provide a detailed report of any known vulnerabilities in the image, including severity levels and available fixes.
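
If you just want a high-level summary instead of the full CVE list, Docker Scout also provides a quickview subcommand:

# One-screen overview of vulnerabilities and base image recommendations
docker scout quickview nginx:latest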

You can also integrate Docker Scout into your CI/CD pipeline to automatically scan images before deployment. Here's an example of how you might do this in a GitHub Actions workflow:

name: Docker Image CI

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Build the Docker image
      # Tag with the commit SHA so the scan step references the exact same image
      run: docker build . --file Dockerfile --tag my-image:${{ github.sha }}
    - name: Scan the Docker image
      run: docker scout cves my-image:${{ github.sha }}

Tagging with the commit SHA (rather than something like a timestamp evaluated separately in each step) guarantees the scan step sees the same image that was just built. Depending on your runner, you may also need to install the Docker Scout CLI plugin and log in to Docker Hub first.

By implementing regular security scans, you can:

  1. Identify vulnerabilities early in your development process
  2. Keep your production environments more secure
  3. Stay informed about the security status of your Docker images
  4. Make informed decisions about when to update your base images or dependencies

Regular scans, combined with prompt updates and patches, will help keep your Docker environments secure.

Conclusion

These tips - using multi-stage builds, leveraging .dockerignore files, implementing health checks, using Docker Compose for local development, being cautious with the latest tag, and running security scans - will help you create more efficient, reliable, and maintainable Docker workflows.

If you're looking to dive deeper into Docker, I encourage you to check out my free Introduction to Docker eBook.

Also, if you're setting up your Docker environment and need a reliable host, consider using DigitalOcean. You can get a $200 free credit to get started!

Happy Dockerizing, and may your containers always run smoothly!

- Bobby
