Security Patching Docker Containers

In the final section of my series on creating a comprehensive security program around Docker, I'll be looking at some ideas and best practices around patching running containers.

In the previous articles, I talked about running static analysis on containers and rolling out intrusion prevention and detection.

Containers are designed to be immutable, meaning they should not be patched in place the way VMs or physical servers would be. Instead, updating a container means building an updated image, deploying a new container from it, and destroying the old one.
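
In practice, that replace-rather-than-patch workflow is just a rebuild and a re-run. Here is a minimal sketch; the image and container names are placeholders:

# Rebuild the image so it picks up freshly patched base layers
$ docker build --pull -t myapp:latest .

# Replace the running container with one started from the new image
$ docker stop myapp && docker rm myapp
$ docker run -d --name myapp myapp:latest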

Container Patching Challenges

If your organization is anything like the ones I have worked with, it wouldn't be surprising to find very old servers, operating systems, applications, and everything else.

Containers are really no different in this respect: many teams become wary of destroying a running container that is working well. If a team has not already worked through challenges such as data migration and persistence, a patch cycle can require significantly more effort.

In addition, I have not yet found a reliable way to identify containers running vulnerable and out-of-date software for patch cycle targeting.
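
The closest I have come is a rough heuristic: list the running containers along with the build dates of their images, so anything built long ago stands out as a candidate. A quick sketch of that idea is below; it only flags old images, not specific vulnerabilities.

# List each running container with its image and the image's build date
$ docker ps --format '{{.Names}} {{.Image}}' | while read name image; do
    echo "$name $image $(docker inspect --format '{{.Created}}' "$image")"
  done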

On the other hand, teams leveraging Docker in a full DevOps cycle are likely to be releasing continuously with the latest security patches, removing this as a concern.

Let's take a look at both approaches: running a proper DevOps cycle, and some potential tools for monitoring!

DevOps Teams Patch With Every Deploy

If your development teams are practicing DevOps, with automated integration and deployment solutions, and releasing production changes at least as frequently as your patch cycle, then full container patching can be baked right into that release process with no further effort.

All that is needed is to ensure that your CI/CD pipeline is rebuilding your deployment image with every build. Below, I show a very simple example using Jenkins.

Automated Container Updates with Jenkins

Let's set up a simple example to see how this works. If you aren't familiar with Jenkins, this is an easy place to start. We will base our image on the official Jenkins container and extend it to work with Docker. Create the following Dockerfile in a new directory:

# Start with a Jenkins image
FROM    jenkins/jenkins:lts

# Install Docker to our image
USER    root
RUN     apt-get update
RUN     apt-get install -y apt-transport-https ca-certificates wget software-properties-common
RUN     wget https://download.docker.com/linux/debian/gpg
RUN     apt-key add gpg
RUN     echo "deb https://download.docker.com/linux/debian stretch stable" > /etc/apt/sources.list.d/docker.list
RUN     apt-get update
RUN     apt-get install -y docker-ce

# Jenkins runs with the Jenkins user, and it will need
# access to the docker unix socket, which is owned by root.
# Normally, it should have group docker, but here it is 
# owned by root:root. We could modify this, but for testing purposes
# I just add the jenkins user to the root group.
RUN     usermod -G root -a jenkins

# Switch back to jenkins user
USER    jenkins

We have to do a little fudging to give the jenkins user access to the Docker socket (which we will mount from the host system). Adding jenkins to the root group isn't something I'd normally do, but it's fine for this testing example.
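
If you'd rather avoid the root group entirely, an alternative (sketched here, not what the Dockerfile above does) is to create a docker group inside the image with the same GID that owns the socket on your host, and add jenkins to that instead:

# On a typical Linux host, find the GID that owns the Docker socket
$ stat -c '%g %G' /var/run/docker.sock
999 docker

# Then, in the Dockerfile, instead of adding jenkins to root:
# (999 here is just an example GID; use whatever your host reports)
# RUN groupadd -g 999 docker && usermod -aG docker jenkins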

Now let's start up this container:

$ docker build . -t jenkins_test
$ mkdir -p ~/jenkins_test
$ docker run \
    -p 8080:8080 \
    -p 50000:50000 \
    --name jenkins \
    --privileged=true \
    -v /home/nullsweep/jenkins_test:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins_test

Notice a few things we are doing with this container to ensure that it can build and run Docker images:

  • Running in privileged mode, to allow it to spin up sibling containers
  • Mounting a local volume for test configuration files
  • Mounting the Docker socket, giving the Jenkins container control over the host's Docker install

Now log in to localhost:8080 and follow the steps to set up Jenkins, then create a pipeline called test. We will use this name later to stage project files. In your pipeline configuration, set the definition type to "Pipeline script" and add the simple script below in the "Pipeline" section:

node {
    
    stage('Build image') {
        sh "docker build -t test --pull ."
    }
}

Note that this uses a shell command to build the image rather than the Docker plugin. When using the plugin, I would sometimes get errors related to Jenkins bug JENKINS-31507 (java.io.IOException: Cannot retrieve .Id from 'docker inspect base AS final'). Replacing the Docker pipeline step with a shell command fixed this.

In the test directory you created, make a sample Dockerfile to build. This would generally be pulled down in a build step prior to building the Docker image, but in our case we'll just place it there manually to keep things as simple as possible.

# "test" in the path below is the name of the pipeline we set up in Jenkins
$ cd ~/jenkins_test/workspace/test
$ cat Dockerfile
FROM   nullsweep/test:latest
CMD    echo "I ran!"

Back in Jenkins, run the build by clicking "Build Now"; it should succeed. The pipeline runs a shell step that calls docker build with the --pull flag to force re-pulling the base image.

Now we have the beginnings of a pipeline that will ship the latest OS patches with every code release (assuming the base images are kept up to date in their repositories). No more separate patch rollouts!
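
To turn this into an actual rollout rather than just a rebuild, later pipeline stages would tag and push the freshly built image, then redeploy from it. A rough sketch of those follow-on shell steps, using a hypothetical registry path:

# Tag and push the rebuilt image (the registry path is hypothetical)
$ docker tag test registry.example.com/myteam/test:latest
$ docker push registry.example.com/myteam/test:latest

# Redeploy wherever the container runs (orchestrator-specific in real life)
$ docker pull registry.example.com/myteam/test:latest
$ docker stop test && docker rm test
$ docker run -d --name test registry.example.com/myteam/test:latest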

Monitoring With WatchTower

What if you aren't releasing code at least monthly? This is often the case for containers such as databases or fully baked applications like WordPress.

In these cases, we can consider using a tool like WatchTower to monitor and automatically upgrade our containers.

WatchTower Overview

WatchTower is a ride-along container that monitors all running containers on a host for updates to their images. If an update is found, it can automatically stop the container, pull the new image, and restart the container from it.

It has a respectable number of options for monitoring, remote monitoring, chatops, and notifications.

WatchTower Limitations

WatchTower is not a final solution to the patching problem, but it is the best I have been able to find. Here are a few of the gaps I have identified while testing it for my own use:

  • Ongoing updates and maintenance seem to have stopped around April 2018 (it's open source, so an internal team could close this gap)
  • It does not monitor upstream containers. If your team has a custom container that inherits from Ubuntu and the Ubuntu base image changes, WatchTower won't notify you; until you push a new application image to the repository, WatchTower won't know you are behind (see the digest check sketched after this list)
  • WatchTower does not allow you to set up alert-only monitoring, meaning it is risky to deploy into a production environment (though a couple of unmerged pull requests seek to address this)
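
For the upstream-image gap, the only workaround I know of is to check the base image yourself and trigger a rebuild when it changes. A manual sketch of that check, using Ubuntu as the example base:

# Show the digest of the base image currently in the local cache
$ docker images --digests ubuntu:latest

# Pull again: if the registry serves a newer digest, new layers come down,
# otherwise Docker reports the image is already up to date
$ docker pull ubuntu:latest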

Given the above, I can't recommend deploying it in its current state. If you do want to try it out, I put together a simple example:

WatchTower Example

We'll create a custom base image with a version file:

FROM    alpine:latest

RUN     echo "1.0" >> version.txt
CMD     echo "Version: " && cat version.txt

We will push this out to a public Docker repository called watch_base.

$ docker build . -t nullsweep/watch_base
$ docker push nullsweep/watch_base

And we will inherit from this in a new image that we seek to deploy as our application:

FROM    nullsweep/watch_base

RUN     printf "while [ 1 ] \n \
           do \n \
               echo 'Version: ' \n \
               cat version.txt \n \
               sleep 5s \n \
           done \n" >> version.sh
CMD     source version.sh

This image creates a shell script that prints out the version.txt file every 5 seconds. Let's push this to a new public repo.

$ docker build . -t nullsweep/watch
$ docker push nullsweep/watch

I had trouble getting WatchTower to work properly when I was building the base images on the same Docker host (it would find no difference between the local Docker cache and the remote repo), so I spun up a VM with no local images and started the watch image along with WatchTower.

# In terminal one: pull and start the container to monitor:
$ docker pull nullsweep/watch
$ docker run --name watch nullsweep/watch:latest
Version:
1.0

# In a second terminal, start WatchTower:
$ docker pull v2tec/watchtower
$ docker run \
    -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    v2tec/watchtower -i 5 --debug watch

This starts WatchTower with a couple of custom flags: the trailing watch argument restricts monitoring to our own container, and -i 5 checks for image updates every 5 seconds instead of the default 5 minutes. I also enable debug logging in case anything goes wrong that you want to troubleshoot. At any time you can view the WatchTower logs by running:

$ docker logs watchtower

Back outside the VM, you can now run two tests:

  1. Update the watch_base image to bump the version number and push it out to the repo. WatchTower will not notice this change.
  2. Rebuild the app image (the one I named watch above) and push it to its repo. WatchTower will pull it down and restart the container with the new version. We now have a patched container! (The commands are sketched below.)
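
For the second test, the rebuild-and-push is just the earlier build run again, with --pull so the new base layers are picked up:

# Rebuild the app image on top of the updated base and push it
$ docker build . -t nullsweep/watch --pull
$ docker push nullsweep/watch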

Wrapping Up

Containers present some unique patching challenges, but can really simplify security work when put in a CI/CD pipeline that automatically builds the latest containers.

Although I don't feel we have a proper solution at this time for patch monitoring across containers, WatchTower is a candidate that may evolve in that direction given time.

What do your teams do? Did I miss a tool or strategy that might help? I would love to hear about your ideas and challenges in this space!
