Kubernetes developers and platform engineers are typically under a metric ton of pressure to keep app deployments humming at a brisk pace, and compromises always get made in the interest of speed and schedules. Platform teams are increasingly held responsible for ensuring that those compromises — such as hastily managed Ingress — don’t result in consequences like customer data being exposed to the entire internet.
Fortunately, Kubernetes provides the ability to set policies that can limit those consequences by checking for — and preventing — deployment mistakes before they ever make it into production. To ensure that your teams’ apps aren’t more consequence than confidence, here are the top five Kubernetes admission control policies that you should have running in your cluster right now.
Note: Each of the example policies below can be implemented via Open Policy Agent (OPA), the open source, de facto policy engine for cloud native. Even simpler, all these OPA policies can be implemented across clusters in literally minutes with Styra Declarative Authorization Service (DAS) Free.
1. Trusted Repo
This policy is simple, but powerful: only allow container images that are pulled from trusted repositories and, optionally, pull only those that match a list of approved repo image paths.
Of course, pulling unknown images from the internet (or anywhere besides trusted repos) comes with risks — such as malware. But there are other good reasons to maintain a single source of truth, such as enabling supportability in the enterprise. By ensuring that images only come from trusted repos, you can closely control your image inventory, mitigate the risks of software entropy and sprawl, and increase the overall security of your cluster.
Related policies:
- Prohibit all images with the “latest” image tag
- Only allow signed images, or images that match a specific hash/SHA
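As a sketch, a trusted-repo check might look like the following Rego, written for OPA running as a validating admission controller (the registry names are illustrative — substitute your own):

```rego
package kubernetes.admission

# Illustrative allow-list of trusted registries -- replace with your own.
trusted_registries := {"registry.example.com/", "quay.io/myorg/"}

# Deny any Pod whose container image is not pulled from a trusted registry.
deny[msg] {
	input.request.kind.kind == "Pod"
	container := input.request.object.spec.containers[_]
	not image_trusted(container.image)
	msg := sprintf("image %q is not from a trusted registry", [container.image])
}

image_trusted(image) {
	startswith(image, trusted_registries[_])
}
```

A denied request is rejected at admission time, so an untrusted image never reaches a node.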
2. Label Safety
This policy requires all Kubernetes resources to include a specified label and do so with the appropriate format. Since labels determine the groupings of Kubernetes objects and policies, including where workloads can run — front end, back end, data tier — and which resources can send traffic, getting labeling wrong leads to untold deployment and supportability issues in production. Moreover, without access controls over how labels are applied, you lack fundamental security over your clusters. Finally, the danger with manual label entry is that errors creep in, especially because labels are both extremely flexible and extremely powerful in Kubernetes. Apply this policy and ensure that your labels are configured correctly and consistently.
Related policies:
- Ensure that every workload requires specific annotations
- Specify taints and tolerations to restrict where images can be deployed
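A minimal sketch of a label-safety policy in Rego, again assuming OPA as a validating admission controller (the `department` label and its `dept-` format are purely illustrative):

```rego
package kubernetes.admission

# Require the (illustrative) "department" label on every Deployment.
deny[msg] {
	input.request.kind.kind == "Deployment"
	not input.request.object.metadata.labels.department
	msg := "every Deployment must carry a 'department' label"
}

# If the label is present, enforce a consistent format with a regex.
deny[msg] {
	input.request.kind.kind == "Deployment"
	value := input.request.object.metadata.labels.department
	not regex.match(`^dept-[a-z]+$`, value)
	msg := sprintf("label department=%q does not match the required format", [value])
}
```

The same pattern extends to any label or annotation your platform depends on for scheduling, traffic routing, or cost attribution.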
3. Prohibit (or Specify) Privileged Mode
This policy ensures that, by default, containers cannot run in privileged mode — unless you carve out specific circumstances (typically rare) when it is allowed.
Generally, of course, you want to avoid running containers in privileged mode, because it provides access to the host’s resources and kernel capabilities — including the ability to disable host-level protections. While containers are isolated to some extent, they ultimately share the same kernel. This means that if a privileged container is compromised, it can become a jumping-off point to compromise an entire system. Still, there are legitimate reasons to run in privileged mode — just ensure that these times are the exception, not the rule.
Related policies:
- Prohibit insecure capabilities
- Prohibit containers from running as root (run as non-root)
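A sketch of the core rule in Rego, assuming the same admission-controller setup as above:

```rego
package kubernetes.admission

# Deny any Pod containing a container that requests privileged mode.
deny[msg] {
	input.request.kind.kind == "Pod"
	container := input.request.object.spec.containers[_]
	container.securityContext.privileged
	msg := sprintf("container %q must not run in privileged mode", [container.name])
}
```

To carve out the rare legitimate exceptions, you could extend the rule to skip namespaces on an explicit allow-list rather than weakening the default.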
4. Define and Control Ingress
Ingress policy allows you to expose specific services (allow Ingress) as needed, or alternatively to expose none at all. In Kubernetes, it is all too easy to accidentally spin up a service that talks to the public internet (there are many examples of this on Kubernetes Failure Stories). At the same time, overly permissive Ingresses can cause you to spin up unnecessary external LoadBalancers, which can also become very expensive (as in monthly budget spend) very fast! Furthermore, when two services try to share the same Ingress, it can just plain break your application.
The policy example below prevents Ingress objects in different namespaces from sharing the same hostname. With this common issue, new workloads “steal” internet traffic from existing workloads, with negative consequences ranging from service outages to data exposure and beyond.
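A sketch of that policy in Rego. It assumes existing Ingress objects are replicated into OPA under `data.kubernetes.ingresses` (for example via the kube-mgmt sidecar), so the policy can compare the incoming Ingress against what is already in the cluster:

```rego
package kubernetes.admission

import data.kubernetes.ingresses

# Deny an Ingress whose hostname is already claimed by an Ingress
# in a different namespace.
deny[msg] {
	input.request.kind.kind == "Ingress"
	newhost := input.request.object.spec.rules[_].host
	some other_ns, other_name
	other := ingresses[other_ns][other_name]
	other_ns != input.request.object.metadata.namespace
	other.spec.rules[_].host == newhost
	msg := sprintf("ingress host %q conflicts with ingress %v/%v", [newhost, other_ns, other_name])
}
```

Without the replicated-data assumption, the rule has nothing to compare against — admission control only sees the incoming object.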
Related policy:
- Prohibit/Allow specific ports
5. Define and Control Egress
Every app needs guardrails to control how egress traffic can flow, and this policy lets you govern both intra- and extra-cluster communication. As with Ingress, it is easy to accidentally “allow Egress” to every IP in the entire world by default. Sometimes that’s not even an accident — a blanket allow can be a last-ditch effort to make sure that a newly deployed app can be accessed, even if it is too permissive or introduces risk. There is also the potential, at an intra-cluster level, of unintentionally sending data to services that shouldn’t have it. Both of these situations carry the risk of data exfiltration and theft, if your services are ever compromised. Being overly restrictive with Egress, on the other hand, can sometimes cause misconfigurations that break your application. Achieving the best of both worlds means using this policy to be selective and specific about when Egress is allowed to happen, and to which services.
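One way to sketch this in Rego is to reject NetworkPolicy objects that open egress to the entire internet, forcing teams to name their destinations explicitly (the specific rule shape here is an illustration, not the only approach):

```rego
package kubernetes.admission

# Deny NetworkPolicies whose egress rules allow traffic to every IP.
deny[msg] {
	input.request.kind.kind == "NetworkPolicy"
	egress := input.request.object.spec.egress[_]
	egress.to[_].ipBlock.cidr == "0.0.0.0/0"
	msg := "NetworkPolicy egress to 0.0.0.0/0 (the entire internet) is not allowed"
}
```

Pairing this admission-time check with default-deny NetworkPolicies in each namespace gives you both the guardrail and the enforcement.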
Related policies:
- See the Ingress policies above
With these policies in place, you can focus on building a world-class platform — one that prevents your app devs from accidentally bringing the whole thing down, exposing data to would-be thieves, or generally invoking the specter of manual remediation for you and your team. And of course, if you want to add more essential policies for Kubernetes, check out openpolicyagent.org or explore the library of plug-and-play policies that comes with the free tier of Styra DAS.