Jenkins Pipeline: Docker in Docker

Ever found yourself wrestling with the complexities of running Docker within a Jenkins pipeline? You’re not alone. Many DevOps engineers face this challenge, often hitting roadblocks when trying to weave Docker into their continuous integration and continuous delivery workflows. The key lies in understanding the nuances of Jenkins Docker in Docker—a powerful but sometimes tricky setup. Let’s delve deep into how you can harness this capability to supercharge your CI/CD pipelines.

What is Jenkins Docker in Docker?

The term “Docker in Docker,” often shortened to DinD, refers to a technique that lets you run a Docker daemon inside a Docker container. This might sound a bit like inception—a container within a container—and that’s a fair assessment. Now, imagine placing that setup within the realm of a Jenkins pipeline. This is where things get interesting.

When you use Jenkins, you’re typically spinning up agents to perform tasks. These agents can be physical machines, virtual machines, or even Docker containers themselves. If you want to build and manage Docker images as part of your pipeline, the usual way is to install the Docker client on the agent and point it at the Docker daemon of the machine the agent runs on.
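
To make the contrast concrete, here is a minimal sketch of that conventional setup, run from a shell on the agent host. It assumes the official docker:cli image and a standard Docker installation with the daemon socket at /var/run/docker.sock:

# The container only has the Docker client; because the host's socket is
# bind-mounted in, every command is served by the host's Docker daemon.
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:cli docker version

Anything a client like this creates (images, containers, volumes) lands on the host daemon and outlives the container, which is exactly the carry-over that DinD is designed to avoid.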

However, what if you want to make sure your agents are ephemeral, clean, and completely isolated each time? This is where DinD shines. It lets you bring up a whole Docker environment—daemon and all—inside a container. So, when your Jenkins job is over, that container can be destroyed, ensuring a pristine setup next time.

Why Use Jenkins Docker in Docker?

You might be thinking, “Why go to all this trouble?” There are several compelling reasons why engineers choose to use DinD in their Jenkins pipelines:

  • Isolation: The big one is isolation. Every build job gets its own independent Docker daemon, so concurrent builds can’t interfere with one another and nothing carries over from earlier jobs.
  • Clean Builds: Each pipeline run starts with a fresh Docker environment: no leftover images, containers, or volumes from previous runs. Builds become more consistent and predictable, and any error you see belongs to the current job rather than to stale state left behind by a past one.
  • Reproducibility: Because you’re spinning up a full Docker environment from scratch each time, the builds become much more reproducible. You know exactly what to expect from one run to the next, which is a boon for troubleshooting and testing.
  • Simplified Configuration: You don’t have to configure the Docker client on your agents to connect to the host. The Docker daemon lives inside the agent itself. This is useful when you’re dealing with multiple agent types or running your Jenkins setup in cloud-based environments.

The Challenges of Jenkins Docker in Docker

While the benefits of DinD are hard to ignore, it isn’t a walk in the park. There are some known hurdles:

  • Security: DinD, by design, gives containers significant power: a fully nested daemon typically has to run in a privileged container, which is close to root access on the host. If a malicious image gets into your DinD setup, it could compromise the host system, so you need to harden the agent container.
  • Performance: Running a Docker daemon inside a container can add a layer of overhead that might impact performance. This means builds might take longer to complete, which can affect your pipeline speed, especially in high volume builds.
  • Complexity: Setting up DinD isn’t as straightforward as simply installing the Docker client. You’ll need to configure the Docker daemon inside your agent, which can be a little tricky, depending on your environment and experience.
  • Image Size: Your DinD image needs to contain both the Docker client and the daemon, so the agent image is larger, and it has to be pulled whenever a build runs on a node that doesn’t already have it.

Let’s explore ways to overcome these challenges and set up DinD in Jenkins.

Setting Up Jenkins Docker in Docker

Here are two approaches you can take to configure DinD in a Jenkins pipeline. Both follow the same principle: use a Docker image built to work as a DinD agent, then use a pipeline script to set up and trigger the needed actions.

Approach 1: Using a Docker-in-Docker Image Directly

This approach is the simplest to set up, but it is less flexible than the second one. You directly use a pre-built Docker image that contains everything you need.

Step 1: Choose a Docker-in-Docker Image

Several images are specifically designed to run DinD. The most common is the official docker:dind image, which is generally the best bet for security and stability; trusted community-built images also exist. Here’s how you could use it in your Jenkinsfile, the file that defines your pipeline.

pipeline {
    agent {
        docker {
            image 'docker:dind'
            args  '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build') {
            steps {
              sh 'docker version'
              sh 'docker build -t my-app .'
            }
        }
    }
}

Let’s break down what this Jenkinsfile does:

  • agent: This defines where the pipeline will execute. Here, it’s a Docker container.
  • docker: This specifies that we are using Docker and provides details for creating the container.
  • image 'docker:dind': This declares we’re using the docker:dind image from Docker Hub.
  • args '-v /var/run/docker.sock:/var/run/docker.sock': This line is essential for this approach. It bind-mounts the host’s Docker socket into the agent container, so the Docker client inside the container talks to the host’s Docker daemon rather than to a nested one (this socket-sharing variant is often called “Docker outside of Docker”). If you want a fully isolated daemon inside the container instead, the daemon has to be started in a privileged container; the sketch after this list shows that variant.
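
For comparison, here is a minimal sketch of a genuinely nested daemon, run from a shell with plain Docker commands. It starts the official docker:dind image as a privileged container and points a client at it over a user-defined network; the network name, container name, and the disabled-TLS setting (DOCKER_TLS_CERTDIR left empty, which makes the daemon listen on plain port 2375) are illustrative choices, not requirements:

# Create an isolated network for the nested daemon and its clients
docker network create dind-net

# Start the nested daemon; --privileged is required for dockerd to run inside a container
docker run -d --name dind --privileged \
    --network dind-net --network-alias docker \
    -e DOCKER_TLS_CERTDIR= \
    docker:dind

# Once the daemon has had a few seconds to start, any client on the same
# network can target it instead of the host's daemon
docker run --rm --network dind-net \
    -e DOCKER_HOST=tcp://docker:2375 \
    docker:cli docker version

When the dind container is removed, every image and container it built disappears with it, which is the isolation described earlier.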

Step 2: Create a Jenkins Pipeline Job

  1. Log in to your Jenkins dashboard.
  2. Click “New Item” to create a new job.
  3. Choose “Pipeline” and give it a name.
  4. In the “Pipeline” section, select “Pipeline script” from the “Definition” drop-down.
  5. Copy and paste the Jenkinsfile code provided into the “Script” field.
  6. Click “Save”.

Step 3: Run the Pipeline

  1. Go back to the job you have just created.
  2. Click “Build Now” to start the pipeline.

You’ll now be running your pipeline inside the docker:dind agent container.

Pros of Approach 1:

  • Very simple to set up, just use a pre-built image.
  • Minimal configuration required in your Jenkinsfile.

Cons of Approach 1:

  • Fewer customization options.
  • Depends on the Docker Hub image being up to date.

Approach 2: Using a Custom Agent Image

This approach is more powerful but also requires a bit more setup. You will be creating a custom Docker agent that already has the DinD functionality, which is more tailored to your needs.

Step 1: Create a Dockerfile for Your Custom Agent

First, you need a Dockerfile that sets up your agent image with DinD. This Dockerfile extends the base docker:dind image with your own customizations: it behaves like the image we were using, but lets you add extra tools, packages, and configuration as needed.

FROM docker:dind

# Install extra tools (docker:dind is Alpine-based, so use apk, not apt-get)
RUN apk add --no-cache \
    curl \
    git

# Add your custom scripts or configurations here

This is what that Dockerfile does:

  • FROM docker:dind: This states that our custom image is based on the docker:dind image, which already contains the Docker daemon.
  • RUN apk add --no-cache ...: This line installs some extra tools you might need, such as curl and git. The docker:dind image is based on Alpine Linux, so packages are installed with apk rather than apt-get.
  • Add your custom scripts or configurations here: This is where you can add your own configuration files or scripts, for example a Docker daemon configuration, as sketched below.
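
As one illustration of that last point, you could bake a Docker daemon configuration into the image. The fragment below is a sketch that assumes you keep a daemon.json next to the Dockerfile, for example one that configures a registry mirror; the file name, its contents, and the mirror URL are your own choices:

# A daemon.json kept next to the Dockerfile might contain, for example:
#   { "registry-mirrors": ["https://mirror.example.com"] }
COPY daemon.json /etc/docker/daemon.json

The Docker daemon reads /etc/docker/daemon.json by default, so the nested daemon picks up the configuration without any extra flags.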

Step 2: Build and Push the Image

Build your custom image, tag it appropriately, and push it to a Docker registry. For this example, we will use Docker Hub.

docker build -t your-docker-hub-username/your-custom-dind-agent .
docker push your-docker-hub-username/your-custom-dind-agent
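
If you prefer reproducible agent versions over a floating latest tag, a common refinement is to push an explicit version alongside latest; the 1.0.0 tag below is only an example:

docker login
docker build -t your-docker-hub-username/your-custom-dind-agent:1.0.0 .
docker tag your-docker-hub-username/your-custom-dind-agent:1.0.0 \
    your-docker-hub-username/your-custom-dind-agent:latest
docker push your-docker-hub-username/your-custom-dind-agent:1.0.0
docker push your-docker-hub-username/your-custom-dind-agent:latest

Pinning the versioned tag in your Jenkinsfile then keeps builds reproducible even when latest moves.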

Step 3: Update Your Jenkinsfile

Update your Jenkinsfile to use your custom image:

pipeline {
    agent {
        docker {
            image 'your-docker-hub-username/your-custom-dind-agent'
            args  '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build') {
            steps {
              sh 'docker version'
              sh 'docker build -t my-app .'
            }
        }
    }
}

Notice that the changes are minimal; only the image name has changed.

Step 4: Create or Update Your Jenkins Pipeline Job

If you already created your pipeline job, as indicated in the previous approach, you only need to update the Jenkinsfile. Otherwise, create it as described before.

Step 5: Run Your Pipeline

As before, go back to your job, and click “Build Now” to start the pipeline.

Pros of Approach 2:

  • More control over the agent environment.
  • Ability to add custom tools and configurations.

Cons of Approach 2:

  • Requires more effort to set up.
  • You need to maintain your own Docker image.

Security Best Practices

Running DinD introduces security risks. Here are some steps to take to improve the security of your setup:

  • Use Official Images: Stick to the official docker:dind image or trusted, reputable community-built images. Avoid images from unknown sources.
  • Regular Updates: Keep the Docker image you use up to date so it has the latest security patches. Tools like Dependabot or Renovate can help automate base-image updates.
  • Limit Privileges: Avoid privileged mode wherever you can. A fully nested Docker daemon does require a privileged container, and privileged mode gives the container broad access to the host’s devices and kernel capabilities, so if you rely on it, restrict which agents and jobs are allowed to run privileged.
  • Image Scanning: Scan all Docker images you use for vulnerabilities. Several tools can help you here, like Snyk, Trivy, or Clair.
  • Network Isolation: Isolate your DinD environment from the rest of your infrastructure. That can be done with the use of Docker networks.
  • Resource Limits: Set resource limits for your agent containers so they can’t hog host resources. This helps with stability and limits the blast radius of a compromised agent container.
  • Secrets Management: Use Jenkins’ secrets management to protect sensitive information such as API keys and passwords. Never store sensitive information directly in your Jenkinsfile; the sketch after this list shows one way to bind credentials at push time.
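
As a sketch of that last point, here is a stage you could drop into the pipeline above. It uses the standard credentials-binding step to pull a username/password credential from Jenkins at push time; the credential ID dockerhub-creds and the image name my-app are assumptions for illustration:

stage('Push') {
    steps {
        // 'dockerhub-creds' is a hypothetical username/password credential stored in Jenkins
        withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                          usernameVariable: 'DOCKER_USER',
                                          passwordVariable: 'DOCKER_PASS')]) {
            // Single-quoted strings let the shell expand the variables, so the
            // secret is never interpolated into the Groovy script itself
            sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
            sh 'docker push my-app'
        }
    }
}

Jenkins also masks bound credentials in the console output if they are echoed accidentally.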

Troubleshooting Common Issues

When setting up DinD, you might hit some roadblocks. Here are some common issues and fixes:

  • “Cannot connect to the Docker daemon”: This means the Docker client inside the agent can’t find a daemon to talk to. Make sure you’re either mounting the host’s Docker socket (-v /var/run/docker.sock:/var/run/docker.sock) or actually starting a nested daemon inside a privileged docker:dind container.
  • Permission Errors: These usually mean the user inside the agent container lacks the rights to use the Docker socket or daemon. Check which user the build runs as and who owns the socket; the sketch after this list shows a quick way to diagnose it.
  • Slow Builds: DinD can introduce some overhead. Make sure your agent has enough resources, and optimize your Docker builds: for example, order your Dockerfile so files that rarely change sit in early layers, letting Docker’s layer cache reuse them across builds instead of copying large, unchanged files on every run.
  • Image Pull Errors: Check if your agent image exists in the registry and has the right permissions so it can be pulled by your build jobs.
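
For the permission errors above, a quick way to diagnose the problem is to compare the user running the build with the ownership of the Docker socket. The commands below are a sketch; the paths are standard, but the IDs you see will depend on your host and image:

# Inside the agent container: which user runs the build, and who owns the socket?
id
ls -l /var/run/docker.sock

# On the host: find the GID of the group that owns the socket, so a matching
# group can be added to a custom agent image (the GID varies between distributions)
stat -c %g /var/run/docker.sock

The usual fixes are to run the agent container as root (simple, but weaker isolation) or to give the container’s build user a supplementary group with that GID.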

Performance Optimization for Jenkins Docker in Docker

Optimizing performance keeps your pipelines fast and your resources used efficiently. Here are a few recommendations:

  • Use Docker Layer Caching: Docker caches each Dockerfile step as a layer. Make sure you’re leveraging this effectively: keep the instructions that change least often near the top of the Dockerfile so their layers stay cached (see the Dockerfile sketch after this list).
  • Minimize Image Sizes: Smaller images build, pull, and push faster. Use multi-stage builds to reduce the size of the final image and drop tooling you won’t need at runtime.
  • Optimize Dockerfile: Make sure your Dockerfile is well optimized. Remove any unnecessary steps and use best practices in the Dockerfile itself. The Docker documentation is a very valuable resource for this task.
  • Resource Allocation: Make sure your Jenkins agent containers get enough resources. The default is often too little, and you might find out that allocating more RAM and CPU can boost the performance of your builds.
  • Use BuildKit: Docker’s BuildKit is a newer build backend that can improve build speed and efficiency, with features such as better caching and parallel execution of independent build stages.
  • Parallelize Builds: Break your builds into parallel stages when possible. This can significantly reduce overall build time, if you have the hardware available.
  • Use SSD Storage: When possible, host your Docker storage on fast SSDs; it makes a noticeable difference compared to spinning disks.
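
To make the layer-caching point concrete, here is a minimal Dockerfile sketch for a hypothetical Node.js application; the base image and build commands are assumptions, and the ordering is the point:

FROM node:20-alpine
WORKDIR /app

# Dependency manifests change rarely, so copy them first: this layer and the
# npm ci layer below stay cached until package.json actually changes
COPY package.json package-lock.json ./
RUN npm ci

# Application source changes on nearly every build, so it comes last:
# only the layers from here down are rebuilt
COPY . .
RUN npm run build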

When Not to Use Docker in Docker

While DinD is a very useful technique, it’s not the right fit for every situation. In some cases, it makes more sense to use other approaches.

  • Simple Builds: If you only need to run a few simple Docker commands, using the Docker client on the host might be easier and faster than setting up a full DinD environment.
  • Host Access: If your builds depend on direct access to the host machine, then a DinD setup may be unnecessarily complicated.
  • Resource Constraints: If you’re dealing with limited resources or have a large number of concurrent builds, the extra overhead of DinD may not be worth it.
  • Complex Networking: If your builds need complicated network setups, it might be more straightforward to rely on the host’s network directly.

Alternative Approaches to Docker in Jenkins

Before committing to a DinD setup, it’s worth considering a few alternative approaches:

  • Docker Client on the Agent: Install the Docker client on your Jenkins agent and configure it to connect to the host’s Docker daemon. This is the simplest approach, but it can introduce isolation and dependency issues.
  • Docker Compose: Use Docker Compose to define your build environment. It can bring up a full multi-container environment for your builds, which is useful when a single container isn’t enough.
  • Kaniko: Kaniko builds Docker images without a Docker daemon running inside the container, which can be a safer and simpler way to build images; a minimal example follows this list.
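
As a quick taste of that last alternative, here is a minimal Kaniko invocation run with plain Docker from the host; the local directory context and the --no-push flag keep the sketch self-contained, and in a real pipeline you would pass --destination plus registry credentials instead:

# Kaniko builds the image entirely in user space: no Docker daemon, no --privileged
docker run --rm \
    -v "$PWD":/workspace \
    gcr.io/kaniko-project/executor:latest \
    --dockerfile=/workspace/Dockerfile \
    --context=dir:///workspace \
    --no-push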

Docker in Docker: A Powerful Tool When Used Correctly

Jenkins Docker in Docker gives you a way to create isolated and reproducible builds. While it may come with some overhead and complexities, it can greatly enhance your CI/CD workflows if implemented right. You now have a strong grasp of how DinD works, when to use it, and how to configure it securely. Always prioritize security, optimize for performance, and carefully select your tools. Armed with the above information, you’re set to make informed decisions and harness the full potential of DinD in your Jenkins pipelines.