A look at container security assumptions and realities.

Many people still parrot the “containers are fundamentally insecure” mantra while clearly not understanding the current state of container development. Since the start of the Docker-driven container revival, a host of useful tools have been developed to enhance container security. Nowadays, the larger security risks when deploying containers to production are less about the technology and more about the culture and process of how we treat application development and deployment pipelines.

Show Notes
Mentioned Projects
Runtime Security
Kata Containers – https://katacontainers.io/
Google gVisor – https://github.com/google/gvisor
Communications Security
Istio Project – https://istio.io/
LinkerD Project – https://linkerd.io/
Container Operations
Kubernetes – https://kubernetes.io/

Kumulus Tech’s Istio Course on Udemy
Use this link to get our Introduction to Istio course for only $9.99 on Udemy

Edited Transcript
For those of you that prefer reading, we’ve run the transcript past our editors for a smoother, cleaner read, below.

What is container security? We’re looking at a number of different factors that impact the security of code that is either written and running inside of a container or is being deployed into a production environment in a containerized format. There are a number of aspects to this environment, but I think it really comes down, at its base, to a shift in how people think about applications, and how that shift impacts the ability to secure your application code in a way that is different from what we did in the past with monolithic applications.

Now some of the things just don’t change. We should still be doing static and even dynamic code analysis. This is just good practice for any code that we write, and in most cases, when people are looking at continuous integration and continuous deployment pipelines, that’s exactly what they’re doing.
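As a concrete illustration, these checks are typically wired directly into the pipeline so every push gets analyzed. This is a minimal sketch of a CI job in GitHub Actions syntax, assuming a Python project; the tool choices (flake8 for static linting, bandit for security-oriented static analysis) and the src/ path are illustrative assumptions, not from the episode:

```yaml
# Hypothetical CI job running static analysis on every push.
# Tool choice (flake8, bandit) and the src/ path are assumptions;
# substitute whatever your stack uses.
name: code-checks
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install flake8 bandit
      - run: flake8 src/     # style and basic static checks
      - run: bandit -r src/  # security-focused static analysis
```

The point is less the specific tools than that the checks run automatically, on every change, before anything is packaged.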

Now the next thing that comes into play is when we start thinking about code from the perspective of how we actually manage the code lifecycle. We start thinking about the packaging process, and with containerization, one of the things that I think happens is that people get a little lazy. They start saying, “Oh well, I’m building this in an Ubuntu environment initially (or some other Linux environment), so I’ll just use that container model. I can get an Ubuntu image (or a CentOS image, or a RHEL image for that matter), start there, and just put my code on top of that, because I know all the core libraries I need are going to be there, and everything else should be there if I then actually want to operate this.” That’s effectively the lazy approach, and one of the problems is that you inherit exactly the same security issues that a full operating system has, and you put those on every single container that you deploy. This is just bad practice.
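One common way to avoid that “full OS base image” habit is a multi-stage build: build in a full image, then copy only the compiled artifact into a minimal runtime image that carries no package manager or shell. A sketch, assuming a Go application; the paths and image tags are illustrative:

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: minimal runtime image with no shell or package
# manager, so there is far less OS surface to patch or exploit.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains essentially just the binary, so the per-container OS vulnerability surface largely disappears.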

Another problem, of course, is that once we’ve decided to bundle all our resources together, we don’t necessarily think about the environment where we’re running these containers. Up until probably two years ago, for the most part, that was a single machine running a Docker environment. If we were looking at a cloud target, we were probably looking at a shared container runtime environment where we would just upload our code and hope that somebody was handling the security on the back end in some reasonable fashion. It really wasn’t clear how that was being done.

The problem, of course, is that containers themselves have some known issues in terms of how the runtime interacts with the underlying operating system kernel. Many of those issues have been patched over the years, and I would say that in general the interface between a container and the kernel is as secure as we can probably make it today. Still, there are some interactions that go directly from an application to the running live kernel, and that creates potential vulnerabilities: things that we might not have determined how to break today but that might come tomorrow. So there are some tools that we can implement to help secure that interface in a more effective fashion. Things like the Kata Containers project or the Google gVisor components all fit into that layer.
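In Kubernetes, sandboxed runtimes like Kata Containers or gVisor are exposed through a RuntimeClass, so individual pods can opt in to the stronger isolation. A minimal sketch, assuming gVisor’s runsc handler is already installed on the nodes; the handler name depends on your installation, and the pod and image names are illustrative:

```yaml
# RuntimeClass wiring a sandboxed runtime into Kubernetes.
# Assumes the gVisor runsc handler is installed on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
# A pod that opts in to the sandboxed runtime.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # run this pod inside the gVisor sandbox
  containers:
    - name: app
      image: registry.example.com/app:1.0
```

The same pattern works for Kata Containers; only the handler name changes.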

In addition, we can start thinking more about how we actually treat the container itself. Even if we’re going to stick with the Linux-based container model, when we store that image in a repository or a container registry, we can start checking the code at that point. So even if our developers didn’t do good dynamic and static checking as they were developing their application, at least we can scan their image and all of the different libraries and components within it for vulnerabilities at the time it is uploaded into a registry. Now this doesn’t by itself make things more secure, because it’s just going to give you a report of where there are potential vulnerabilities, where there are outstanding CVEs (Common Vulnerabilities and Exposures), but it gets us one step closer.
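That registry-side scan can also be enforced earlier, as a pipeline gate before the image is ever pushed. This sketch uses Trivy as one example scanner (the episode doesn’t name a specific tool), again in GitHub Actions syntax; the image name is hypothetical:

```yaml
# Hypothetical CI job: scan the image for known CVEs before it
# reaches the registry. Trivy is one example scanner; the image
# name is illustrative.
scan-image:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: docker build -t registry.example.com/app:1.0 .
    - uses: aquasecurity/trivy-action@master
      with:
        image-ref: registry.example.com/app:1.0
        severity: HIGH,CRITICAL
        exit-code: "1"   # fail the build on outstanding serious CVEs
```

Failing the build on serious CVEs turns the scan report from advice into policy.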

And finally, when we think about where we’re deploying, if we want to start limiting the scope of a security impact, and if we start to think about how our containers are tied together, we can start thinking about the runtime environment. Most clouds today provide a Kubernetes-based model where you can either run your own Kubernetes on top of VMs or use their managed control plane with your own VMs, and I think that’s the important part: the underlying compute remains your own in a Kubernetes-based deployment environment. Then we can also start thinking about how those different services interact.

If it’s still effectively a fairly monolithic app (one application, one database), maybe there’s not a whole lot that needs to be done there. But we can start securing the communication between these resources using tools like service meshes; specifically, Istio and Linkerd are, I think, the frontrunners in the space today for providing communication security between the different services and resources. Now we still have to think about how traffic comes into our environment and how we limit that. This can also be done through a service mesh, effectively via a proxy, so we have only one ingress path. We still have to think about stack overflows and all the other sorts of issues that can come in through that one communications path, but at least it has been secured: it’s running over TLS, and we’ve done as much as we can beyond what the application itself needs to deal with.
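With Istio, for example, service-to-service encryption can be switched on mesh-wide with a single policy. A minimal sketch, assuming Istio’s default install namespace; applying the policy in the root namespace makes it cover every workload in the mesh:

```yaml
# Mesh-wide mutual TLS policy for Istio (a minimal sketch).
# Placing it in the istio-system root namespace applies it to
# all workloads in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext traffic between services
```

With STRICT mode, every service-to-service hop is mutually authenticated and encrypted without any application code changes.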

So there’s a lot going on in the security space. Static and dynamic code checking is just standard, container or not. Registry-based scanning is a very powerful tool for helping you at least understand what vulnerabilities you may be exposed to. Running a Kubernetes environment, either your own or in the cloud but on your own VMs, at least gives you that VM layer of security. If you are running your own environment, you can also add in a layer like Kata Containers or gVisor to provide additional runtime security, so that if a container is somehow compromised, if that process is compromised, you’re limiting the impact on the rest of your system. And of course, securing the communication, not just outside of the cluster but across the different components of the cluster, is a very powerful way of adding another layer of security enhancement, especially if you start thinking about using that as a site-to-site capability and interaction as well.