
NSA & CISA Kubernetes Security Guidance – A Critical Review

09 September 2021

By Iain Smart

Last month, the United States’ National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Technical Report (CTR) detailing the security hardening they recommend be applied to Kubernetes clusters, which is available here. The guidance the document contains is generally reasonable, but there are several points which are either incorrect or do not provide sufficient context for administrators to make good security-focused decisions.

In this blog post, we begin by outlining the general guidance (“The Good”), and then highlight some points where the CTR is either misleading or incorrect (“The Bad” and “The Complex”). This post is split into three parts:

  1. The Good
  2. The Bad: Places where the NSA/CISA guidance overlooked important aspects of Kubernetes security, or where the guidance was out of date at time of publication.
  3. The Complex: Considerations for some of the more common complex use cases not covered by the CTR guidance, including useful audit configurations that won’t require excessive levels of compute power or storage, handling external dependencies, and some notes around the complex mechanisms of Kubernetes RBAC.

The Good

On the whole, the guidance provided is sensible and will help administrators bring their clusters to a reasonable and secure state.

The high-level guidance from the document is as follows:

  • Scan containers and Pods for vulnerabilities or misconfigurations
  • Run containers and Pods with the least privileges possible
  • Use network separation to control the amount of damage a compromise can cause
  • Use firewalls to limit unneeded network connectivity and encryption to protect confidentiality
  • Use strong authentication and authorization to limit user and administrator access as well as to limit the attack surface
  • Use log auditing so that administrators can monitor activity and be alerted to potential malicious activity
  • Periodically review all Kubernetes settings and use vulnerability scans to help ensure risks are appropriately accounted for and security patches are applied

Each of these points relates back to the generic guidance for almost any platform, regardless of the technology in use: restrict access through authentication and network filtering, log what is permitted and what gets requested, apply security options where available, and keep components up to date.

The guidance also calls out the common sources of compromise, identifying supply chain attacks, malicious threat actors, and insider threats. Despite “malicious threat actors” covering a fairly broad scope as an answer to “who wants to hack us?”, these three sources of compromise cover the majority of the attack paths we have followed when reviewing customer environments.

High Level Recommendations

Scan Container Images

Vulnerability scanning is a key component of staying secure, regardless of the platform used. Regularly scanning your container images is a good way to catch newly identified vulnerabilities in the software you are running.

Patching container images generally needs to happen in two stages: downloading fresh versions of the published image from the vendor, and applying, through a package manager such as apt or yum, any patches released since the image was last built. As with patching any other system, care should be taken to ensure that all of your software still works as intended after patches have been applied. You should also make sure all images are pulled from trusted sources (such as official Docker images, or images from verified publishers). Additionally, any programming-language-based dependencies (Ruby/Bundler, Python/Pip, etc.) should be updated regularly.

Follow the Principle of Least Privilege

Running containers with the lowest level of privileges possible helps to reduce the blast radius of a compromise, limiting what an attacker is able to do should they gain access to a single pod through a remote code execution (RCE) vulnerability in a microservice or similar. This can be accomplished in a few ways, the simplest of which is to build container images to run as a non-root user.

Kubernetes can also force a container to run as a specific user through the use of the SecurityContext directive. This approach is effective, but may result in file permission problems in images which expect to be run as UID 0.
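As a minimal sketch (the image name and UID below are placeholders chosen for illustration, not recommendations), a pod-level securityContext that forces a non-root user might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-example
spec:
  securityContext:
    runAsNonRoot: true   # reject the pod if its image would otherwise start as UID 0
    runAsUser: 10001     # arbitrary non-root UID, chosen for illustration
    runAsGroup: 10001
  containers:
    - name: app
      image: registry.example.com/team/app:1.0.0   # placeholder image
```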

The NSA/CISA guidance also mentions the possibility of using a rootless Docker engine. While this can work for a standalone Docker instance, support for running a rootless engine under Kubernetes is not widely available, and doing so is generally not advised on production systems.

Another option is user namespacing, which effectively allows contained processes to run “as root” inside the container while mapping to a non-zero UID on the host. This feature is supported by Docker, but is not yet natively supported in Kubernetes.

The same least-privilege concepts applied to running pods should also be applied anywhere authentication and authorization take place, such as Kubernetes RBAC and service account configuration. A service account's permissions are effectively granted to any running pod configured to use that account. These permissions are rarely required by standard applications, and most service accounts do not need any permissions at all.
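In practice this means creating dedicated service accounts with no RBAC bindings, and disabling token automounting so that pods which never talk to the apiserver receive no credentials at all. A minimal sketch, with placeholder names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa          # placeholder name
  namespace: team-a     # placeholder namespace
automountServiceAccountToken: false   # no API token injected unless explicitly required
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: team-a
spec:
  serviceAccountName: app-sa
  containers:
    - name: app
      image: registry.example.com/team/app:1.0.0   # placeholder image
```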

Isolate Networks

Enforcing isolation between components is common security advice. Recommending both host/network firewalling and Kubernetes network policies to provide this isolation is good advice, and something we generally don't see enough of on customer engagements. It's not uncommon for us to see environments where the Kubernetes apiserver is exposed to the internet and the cluster itself is completely flat. This means a compromise of one service can provide an attacker with a path to other services, some of which may be running without any authentication requirements.

As a minimum, we recommend isolating namespaces through a default deny-all network policy for both ingress and egress, then only permitting the connections that are explicitly required (a minimal example is sketched below). For more sensitive environments we often see customers choose a service mesh such as Istio or Linkerd to provide further filtering and additional encryption; however, these options do tend to increase operational complexity significantly (and aren't without their own security issues).
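As a minimal sketch of the deny-all approach (the namespace name is a placeholder), a policy like the one below is applied in each namespace, with explicit allow policies layered on top of it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a          # placeholder namespace
spec:
  podSelector: {}            # select every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress or egress rules are defined, so all traffic is denied
  # until more specific allow policies are added alongside this one
```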

Logging Configuration

As with network isolation, we regularly see customer deployments running without any logging or auditing enabled. Enabling these features will greatly help in identifying ongoing attacks should your cluster be targeted, as well as providing essential forensic information should the worst happen and the cluster be compromised by a malicious user.

Regular Configuration Review

Kubernetes clusters require ongoing maintenance and are not systems which can simply be set up and forgotten. Between the relentless patching cycle, the steady stream of security releases and bugfixes, and changes to API versions, regular configuration changes will be required. Checking that security options are applied correctly, and tweaking configurations to improve the security posture over time, should be expected as part of routine maintenance.

As discussed extensively in the released document, Kubernetes clusters are rarely configured securely out of the box. For organisations running multiple clusters, having a standardised deployment process with known-secure options applied will help you ensure consistency.

The Bad

Some of the advice contained in the CTR is not as accurate or up to date as guidance from other sources. In addition, some core concepts of Kubernetes security are either not discussed in the document or not given the attention they deserve.

PSP Deprecation

The biggest issue I have with the guidance is its over-reliance on Pod Security Policy (PSP) as the solution to a range of problems. When first released, PSP was the only available control to prevent a user with the ability to create pods from trivially compromising an entire node. The guidance correctly points out the many features of PSP, but it does not call out that PSP is officially deprecated and will be removed in the 1.25 release. This may be because the authors did not want to recommend a specific third-party alternative, and the official replacement was only recently formalised.

Several technologies have appeared over the last few years aiming to fix holes in the PSP model, including OPA Gatekeeper, Kyverno, and k-rail. The official replacement for PSP, a newly introduced alpha feature called PodSecurity, was added in Kubernetes 1.22, released in August 2021.

The removal of PSP has only been a matter of time, and our advice for some time now has been to implement one of the PSP replacements rather than spend large amounts of engineering time on a feature that will be removed within the next year. Until the PodSecurity admission controller matures, this remains our recommendation to customers.
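For reference, the replacement mechanism is driven by namespace labels rather than cluster-wide policy objects. A minimal sketch, assuming a cluster new enough to have the PodSecurity admission plugin enabled:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                       # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant pods
    pod-security.kubernetes.io/warn: restricted      # warn clients about violations
    pod-security.kubernetes.io/audit: restricted     # record violations in audit logs
```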

Admission Controllers

Pod Security Policy, and each of the alternatives, is implemented as an admission controller in a Kubernetes cluster. Admission controllers are an “extra” approval step that a request must pass after the authentication and authorization checks. These controllers can provide a vast amount of security hardening in an automated manner. Despite this, they are mentioned only once in the released guidance, as part of the image scanning section.

A well-configured admission controller can automatically enforce a significant number of the pod security checks, for instance by blocking pods which attempt to run with the privileged flag, or by programmatically adding the runAsUser field to force every pod to run as a non-root user.
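As a rough illustration of the first case, the policy below is modelled on Kyverno's published sample for disallowing privileged containers; treat it as a sketch and check the Kyverno documentation for the exact syntax of your version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: enforce   # reject non-compliant pods rather than just logging them
  rules:
    - name: deny-privileged
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"   # if securityContext is set, privileged must be false
```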

Inconsistencies/Incorrect Information

The CTR did contain a couple of inconsistencies, or pieces of information which were not correct. For example, page 4 of the guidance states that both the Kubelet and the Scheduler listen on TCP 10251 by default. In fact, the Kubelet defaults to TCP port 10250 for its read-write port and 10255 for the soon-to-be-deprecated read-only port. Page 15 does give the correct information for the Kubelet's read-write port, but makes no mention of the read-only port. Similarly, in modern installs the kube-scheduler listens on TCP port 10259, not 10251, and the controller-manager on 10257.

Kubernetes also has an insecure API mode, which the CTR correctly identifies as bypassing all AuthN/AuthZ checks and not using TLS encryption. However, this insecure port was deprecated in version 1.20; since that release, the --insecure-port flag is only accepted as a parameter if its value is set to 0. If you are running a cluster with the insecure port enabled, access to it should be tightly locked down and well monitored/audited, but in general there is no reason for this port to be enabled in a modern cluster.

Authentication Issues

When it comes to authentication, the CTR is largely incorrect when it states that Kubernetes does not provide an authentication method by default. While the specifics vary from install to install, the majority of clusters we review support both token and certificate authentication, both of which are supported natively. That said, we generally advise against using either for production workloads, as each has its downsides. In particular, client certificate authentication causes problems when access needs to be removed, for example to cut off a departing cluster administrator, as Kubernetes does not support certificate revocation. This becomes even more of an issue if an attacker manages to gain access to a certificate issued for the group system:masters, as this group has hard-coded administrative access to the apiserver.

The Complex

With a project as complicated as Kubernetes, it is not possible to cover every option and every edge case in a single document, so no piece of one-size-fits-all guidance can hope to be complete. Here, I would like to offer considerations for some of the more common complex use cases not covered by the CTR. These include coming up with a useful audit configuration that won't require excessive levels of compute power or storage, handling external dependencies, and some notes around the complex mechanisms of Kubernetes RBAC.

Levels of Audit Data

While enabling auditing is an excellent idea for any cluster, Kubernetes relies heavily on control loops that constantly generate HTTP requests. Logging every request, and particularly logging request and response bodies as suggested in Appendix L of the released guidance, would result in massive amounts of data being stored, the vast majority of which is entirely expected and of little use for forensics or debugging. Similarly, the guidance explicitly suggests logging all pod creation requests, which will store a large volume of data generated by routine actions such as pods scaling or being moved from one node to another by the scheduler.

Instead of logging full requests, we recommend writing a tailored policy which records metadata for most requests and only stores full request/response information for particularly sensitive calls. The exact logging requirements will vary from deployment to deployment in line with your security standards, but in general logging everything is not essential and can have adverse effects on storage requirements and processing time. In some cases it can drastically increase operational costs, particularly if logs are ingested into a cloud or managed service which charges per log entry.
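As a hedged sketch of such a tailored policy (which resources deserve full request/response logging is an illustrative choice, not a prescription):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"   # avoid a duplicate entry for every request
rules:
  # keep full request/response bodies only for particularly sensitive objects;
  # RBAC changes are used here as an example
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  # record that secrets and configmaps were accessed, but never log their contents
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # everything else: metadata only
  - level: Metadata
```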

Sidecar Resource Requirements

Similarly, the CTR advises that logging can be performed in a number of ways. This is true, and again the option you choose will depend on your specific setup. However, running a logging sidecar in every pod does come with an increase in deployment complexity and a significant increase in per-pod resource requirements.

Again, there is no “correct” logging deployment, and each option will have pros and cons. That said, it may be more efficient to have containers log to a specific directory on each node and use either a daemonset or some component of the host configuration to pick up these logs and pass them to a centralised logging location.
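As an illustrative sketch only (the fluent-bit image and mount path are example choices rather than recommendations), a node-level collector might look like this:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: logging               # placeholder namespace
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.8   # example collector; pin and scan whatever you choose
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log             # pick up container and node logs written to the host
```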

External Dependencies are essential

The core components of a Kubernetes cluster are distributed as container images, which are generally pulled from an external source; the apiserver, for example, is usually retrieved from k8s.gcr.io/kube-apiserver. Even excluding the core components, most cloud-native technologies tend to assume they are running in a highly connected environment where updates can be retrieved trivially.

Most Kubernetes clusters can be reconfigured to require only local images, but if you decide to enable such restrictions, performing updates will become much more difficult. Given the update cadence of Kubernetes, increasing upgrade friction may not be something you want, leading to the old tradeoff of usability vs security. On the other hand, always pulling the latest version of an external dependency without performing any validation and security testing may open an organisation to supply-chain compromise. The likelihood of this varies, as some repositories will be better monitored and are less likely to be compromised in the first place, but it’s still something to consider.
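Returning to the local-images-only option: at the pod level, one mechanism is imagePullPolicy: Never, which makes the kubelet refuse to pull and fail the pod unless the image is already present on the node. A sketch with placeholder registry and image names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-only-example
spec:
  containers:
    - name: app
      image: internal-registry.local/team/app:1.0.0   # placeholder; must already exist on the node
      imagePullPolicy: Never                          # never pull from a remote registry
```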

Container signing is still very much not a solved problem. Historically, Docker Content Trust was viewed as the best option where it was enabled, but that mechanism was not without its problems and is no longer maintained. Several solutions are being worked on, including sigstore.

As well as verifying that your external dependencies are legitimate and have not been altered from the original packages, bear in mind that these containers may themselves contain security vulnerabilities. At the time of writing, the image k8s.gcr.io/etcd:3.5.0-0 (the newest version packaged for Kubernetes) contains packages vulnerable to CVE-2020-29652, which shows as a high-risk vulnerability when scanned with an image scanner such as Trivy or Clair. Again, you could take on the task of patching these images yourself, but that leads to further problems: do you want to be responsible for all of the testing that patching will require, and what will you do when images contain vulnerabilities for which no patch exists?

RBAC is hard

Kubernetes RBAC is difficult to configure well, and on engagements we regularly see customers whose RBAC configuration allows users or attackers to escalate their permissions, often to the point of gaining full administrative control over the cluster. Plenty of guidance is available online on how to configure Kubernetes RBAC securely, and a full treatment is well beyond the scope of this post.
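That said, the general shape of a least-privilege configuration is worth showing: grant narrowly scoped, namespaced roles, and avoid handing out verbs such as create on pods, or access to pods/exec, unless they are genuinely needed. A sketch with placeholder names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a          # placeholder namespace
  name: deployment-reader
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]   # read-only; no create/update/delete, no pods/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: deployment-reader-binding
subjects:
  - kind: User
    name: alice                       # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
```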

Patching Everything is hard

This post has already discussed the difficulties of keeping the software inside containers patched, but patching the worker nodes themselves is equally important. Unless you're running a cluster backed by something like AWS Fargate, your containers are still running on a computer that you need to keep updated. Vulnerabilities have historically been identified at every step of the execution chain, from Kubernetes to containerd to runc and the Linux kernel. Keeping all of these components updated can be challenging, especially as there's always the chance of breaking changes and requirements for downtime.

This is something Kubernetes can help with, as the whole concept of orchestration is intended to keep services running even as nodes come online and go offline. Despite this, we still regularly see customers running nodes that haven't had patches applied in months, or even years. (As a tip, server uptime isn't the badge of honour it used to be; more likely it indicates you're running an outdated kernel.)

Closing Thoughts

The advice issued in this NSA/CISA document has a solid technical grounding and should generally be followed, but some aspects are outdated or missing key context. That is almost always the case with anything Kubernetes-related, given the rapid pace at which the project is still developing. As with any technical security guidance, documents such as this CTR should be treated as guidance and reviewed with suitable business and security context, because when it comes to container security, one size definitely does not fit all.