In the past couple of years, I have worked with teams trying to figure out how to adapt security tools to their Docker deployments, with varying degrees of success.
Over the next several articles, I will build up a series of security tools and practices that can help secure your containers during development and deployment, culminating in a comprehensive security program that can be rolled out piece by piece.
I wrote this series assuming you have some experience with Docker. In general, I focus on a single container and try to make sure everything can be run locally.
In this series, I'll be covering the following:
- A comprehensive Docker security strategy (this article)
- Docker static analysis and trusted images
- Host-based intrusion prevention, detection, and container run-time protection
- A container patching & reporting strategy
So let's jump in with some background on Docker and an initial strategy.
1. A (Basic) Docker Threat Assessment
I tend to think of containers as just another server to secure, with all the considerations that come along with it. What makes Docker somewhat different is that sometimes what needs to be secured is the server running Docker and not the container.
We can make that distinction as we solve each potential threat vector. For now, let's think about some threats facing a single container, and we can figure out the right solution later.
The biggest concerns I hear from security colleagues center on the fact that teams can effectively define their own OS configurations without outside input, so let's start there.
- Docker containers allow teams to include known insecure components, such as out-of-date software.
- Docker base images from public repositories are a potential attack vector, and should be considered low trust. How can their usage be controlled?
- Docker containers by default don't follow the standard company OS hardening procedures or include the standard tooling.
The other common concerns relate to the specific container technology and how it's leveraged.
- Docker images can (and should) be immutable. How will the organization handle patching?
- Dockerfiles are plain text and checked into source control, so teams may be tempted to include secrets in these files (see the sketch after this list).
- Deployment of the container may bypass existing security controls on server changes.
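For the secrets concern above, one mitigation worth piloting is Docker BuildKit's build secrets, which expose a credential to a single RUN step without writing it into the Dockerfile or the image layers. Here is a minimal sketch, assuming a reasonably recent Docker with BuildKit available; the `api_token.txt` file and `myorg/myapp` image name are hypothetical placeholders.

```bash
# Hypothetical example: pass an API token to the build without baking it into a layer.
# Requires BuildKit; the Dockerfile needs the syntax directive below.
cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM alpine:3.18
# The secret is mounted at /run/secrets/<id> for this RUN step only;
# it never lands in the image layers, build history, or source control.
RUN --mount=type=secret,id=api_token \
    cat /run/secrets/api_token > /dev/null  # placeholder for a real "fetch dependencies" step
EOF

DOCKER_BUILDKIT=1 docker build --secret id=api_token,src=./api_token.txt -t myorg/myapp:dev .
```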
Strategic Considerations
Our container strategy should mirror other organizational security policies.
This means writing policies and practices, considering industry best practices, working with development teams on an acceptable rollout, and making sure we enable rapid development and deployment.
With this in mind, I'll walk through each of the broad areas I recommend planning for, and do a hands-on technical deep dive in subsequent articles.
Industry Best Practices
Docker security receives a fair amount of attention. Here are a few widely used resources you may find helpful:
- The CIS Docker Benchmark
- NIST SP 800-190 (Application Container Security Guide)
- Docker's official security documentation
Each of these will give you many specific recommendations on your security setup and are worth understanding. As you write your initial policies, it may make sense to start with one of these as a baseline, and modify for your specific use cases.
The three pillars of our security strategy
I like strategies that can be explained in simple terms and that are easy to roll out in pieces and phases, showing success quickly and building toward a large, automated practice.
A key component of a successful rollout will be having good working relationships with the development teams writing and deploying Dockerfiles. If you don't have development teams you already work with and trust, I talk about some potential strategies you can use in starting a secure DevOps program.
There are three pillars which will shape our work.
- Container file static analysis.
- Run-time protection & detection.
- Ongoing maintenance.
Let's go into a little detail on each.
Pillar one: Container static analysis
I usually start new security programs with some form of static scans. It's unobtrusive, can be started nearly immediately, and yields results.
The Docker ecosystem has plenty of static scanners to choose from. My preferred way of managing static analysis is the following:
- Compare tools on the market. In this series, I use Clair.
- Pilot your selected tool manually with a set of trusted teams.
- Craft an automated solution for your organization.
A few of the tools in this space that you might consider:
- Clair: An open source image vulnerability scanner, and the one I will be using in this series.
- Docker Bench: An excellent script for checking best practices, based on the CIS benchmarks (a quick run is sketched after this list).
- Anchore: A commercial image scanner.
- Black Duck: Another commercial offering, with more than static analysis.
- Twistlock: A commercial unified container security product.
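Docker Bench in particular is easy to trial before any formal tooling decision is made, since it is just a shell script run on the Docker host. A quick look, assuming you can clone from GitHub and have root on the host:

```bash
# Run the CIS-based checks against the local Docker daemon and its containers.
# Root (or sudo) is needed so the script can read the daemon configuration.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```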
In section one, I walk through setting up a Clair pilot and scanning the images your Dockerfiles produce.
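As a preview of that pilot, here is a rough sketch of one common local setup. It leans on the community clair-scanner project and its prebuilt arminc/clair-db and arminc/clair-local-scan images; treat the image names, port, and flags as assumptions and check that project's README for current usage. The `myorg/myapp:dev` image is a hypothetical stand-in for one of your own.

```bash
# Start a vulnerability database and a local Clair instance (community images).
docker run -d --name clair-db arminc/clair-db:latest
docker run -d --name clair --link clair-db:postgres -p 6060:6060 arminc/clair-local-scan:latest

# Scan a locally built image; clair-scanner needs an IP the Clair container can reach back on,
# so replace <your-host-ip> with your machine's address on the Docker network.
clair-scanner --ip=<your-host-ip> --clair=http://localhost:6060 myorg/myapp:dev
```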
Pillar two: Run-time protection
Once our containers are running, we will need a way to protect them from 0-days and from any vulnerabilities that slipped through our static analysis protections.
As with static analysis, our roll out strategy will be critical. We must be careful not to negatively affect our product teams while supporting their technology choices.
Some things to consider as you evaluate run-time protection:
- Deployment model: Does it require an agent on the container itself (poor practice), an agent on the underlying Docker host (may not work with all hosting solutions), a ride-along container (sub-optimal for some deployment models), or a combination of these?
- How does it compare and integrate with the existing HIDS, alerting, and detection solutions your organization employs for non-Docker deployments? (Will it play nicely with those, or will your security analysts have to check yet another monitoring tool?)
- Will it work on all of the deployment models your teams are likely to use? Some common ones include AWS Fargate and/or Elastic Beanstalk, Kubernetes, and perhaps Mesos or Swarm.
The commercial vendors listed above all have run-time protection offerings as well. In section two, I walk through setting up the open source Wazuh and using HIDS agents on the Docker host.
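Whichever product you evaluate, it helps to know what "good" looks like before any agents go in. The following is not the Wazuh setup (that comes in section two), just a couple of manual spot checks, using only the standard docker CLI, for the riskiest run-time configurations any of these tools should also flag. It assumes at least one container is running on the host.

```bash
# Flag containers running privileged or sharing the host PID namespace.
docker ps --quiet | xargs docker inspect \
  --format '{{.Name}} privileged={{.HostConfig.Privileged}} pid={{.HostConfig.PidMode}}'

# Containers that bind-mount /var/run/docker.sock effectively control the host's Docker daemon.
docker ps --quiet | xargs docker inspect \
  --format '{{.Name}} binds={{.HostConfig.Binds}}' | grep docker.sock
```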
Pillar three: Keeping containers up to date
Containers should be immutable. That means patching is a little different.
To "patch" a container, the correct process is to update the baseline image (for OS patches) or the layer that contains the vulnerable software version, then redeploy the container.
If you've been in enterprise systems for a while, you have probably come across some older servers with uptime in the thousands of days, so expecting teams to re-deploy every patch cycle sounds a bit scary.
With that in mind, we need to make sure that our Docker deployment architecture can handle re-deploys on demand, which can actually help the security and sysadmin teams stay up to date.
Some things to consider:
- Base OS patching on the Docker host is the same as it's always been. Work to build stateless and redundant deployment models so the loss of any given Docker host during patching is acceptable.
- Build patching and static analysis into a CI/CD pipeline so that every container deploy contains the latest security patches across the stack. For teams doing daily deployments, you're probably done at this step!
- Consider a container monitoring solution that alerts when baseline images are updated, and maintain an update rollout process.
In section three, I walk through a simple Jenkins setup that patches containers with every release, and a sample run-time patch monitoring tool called Watchtower.
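For reference, Watchtower itself runs as a single container that watches the local Docker daemon and recreates containers when their image tags receive new pushes. A minimal sketch of running it is below; note that the image has moved between maintainers over time (v2tec/watchtower historically, containrrr/watchtower more recently), so check the project's documentation for the current name before relying on it.

```bash
# Watchtower needs the Docker socket so it can pull updated images and
# recreate the containers that use them.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```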
Final Thoughts
We have the basic outlines of a container security strategy. In the following articles, I get hands-on with tools you can use right now to start improving your container security posture.
Did I miss anything or get something wrong? I want to hear from you on what has worked for you, what additional challenges you face, or what other threats I should consider.