Organizations are no strangers to security incidents in their Kubernetes environments. In its State of Container and Kubernetes Security Fall 2020 survey, StackRox found that 90% of respondents had suffered a security incident in their Kubernetes deployments in the last year. Two-thirds of respondents said they had weathered a misconfiguration incident, followed by vulnerabilities, runtime incidents and failed audits at 22%, 17% and 16%, respectively.
Misconfiguration incidents are so common because they can crop up in many different parts of an organization’s Kubernetes environment. For instance, they can affect the Kubernetes control plane. As the Kubernetes documentation notes, this part of a deployment is responsible for making global decisions about a cluster as well as for detecting and responding to cluster events.
This raises an important question: how can organizations harden the Kubernetes control plane against digital attacks?
To answer that question, this blog post will discuss five components within the Kubernetes control plane that require special attention in organizations’ security strategies: the kube-apiserver, etcd, the kube-scheduler, the kube-controller-manager and the cloud-controller-manager. It will then provide recommendations on how organizations can secure each of these components.
Per Kubernetes’ documentation, kube-apiserver is the front end for the Kubernetes control plane. It functions as the main implementation of a Kubernetes API server. Organizations can scale kube-apiserver horizontally by deploying more instances.
The Container Journal noted that attackers routinely scan the web for publicly accessible API servers. Acknowledging that reality, organizations need to make sure they don’t leave their kube-apiserver instances exposed to the Internet. If they do, they could give attackers an opening to compromise a Kubernetes cluster.
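One quick sanity check is to confirm, from a machine outside the internal network, that the API server’s port is not reachable at all. Below is a minimal reachability probe in Python; the hostname is a hypothetical placeholder, and port 6443 is assumed to be the cluster’s API server port (the common default). A successful connection from an external network suggests the API server is more exposed than intended.

```python
# A minimal reachability probe, assuming the conventional secure port 6443;
# the hostname is a hypothetical placeholder for your cluster endpoint.
import socket

API_HOST = "k8s.example.com"  # hypothetical API server hostname
API_PORT = 6443               # default kube-apiserver secure port

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if is_reachable(API_HOST, API_PORT):
        print("kube-apiserver port is reachable from this network")
    else:
        print("kube-apiserver port is not reachable from this network")
```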
Administrators can follow the Container Journal’s advice by configuring their API servers to allow cluster API access only via the internal network or a corporate VPN. Once they’ve implemented that security measure, they can use RBAC authorization to further limit who has access to the cluster. RBAC is enabled on the kube-apiserver itself, by including RBAC in the server’s --authorization-mode flag.
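Once RBAC is on, access can be narrowed with Roles and RoleBindings. The sketch below uses the official kubernetes Python client (pip install kubernetes) to grant one user read-only access to pods in a single namespace; the namespace, role name and user are hypothetical examples that would need to be adapted to a real cluster. The same pattern applies to the service accounts used by workloads.

```python
# A minimal RBAC sketch using the official `kubernetes` Python client
# (pip install kubernetes). The namespace, role name and user below are
# hypothetical examples, not values from the article.
from kubernetes import client, config

def grant_read_only_pod_access(namespace: str = "demo") -> None:
    """Create a namespaced read-only Role and bind it to a single user."""
    config.load_kube_config()  # authenticate with your local kubeconfig
    rbac = client.RbacAuthorizationV1Api()

    # Role: read-only access to pods in one namespace.
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "pod-reader", "namespace": namespace},
        "rules": [
            {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
        ],
    }
    rbac.create_namespaced_role(namespace=namespace, body=role)

    # RoleBinding: grant that Role to one (hypothetical) user.
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "pod-reader-binding", "namespace": namespace},
        "subjects": [
            {"kind": "User", "name": "jane@example.com", "apiGroup": "rbac.authorization.k8s.io"}
        ],
        "roleRef": {"kind": "Role", "name": "pod-reader", "apiGroup": "rbac.authorization.k8s.io"},
    }
    rbac.create_namespaced_role_binding(namespace=namespace, body=binding)

if __name__ == "__main__":
    grant_read_only_pod_access()
```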
Kubernetes uses etcd as its key-value backing store for cluster data. Organizations that rely on etcd therefore need a backup plan for the highly sensitive configuration data that this store holds.
As with kube-apiserver, organizations might accidentally leave etcd exposed to the Internet. The New Stack covered the work of one software developer who conducted a search on Shodan to look for exposed etcd servers. This investigation uncovered 2,284 etcd servers that malicious actors could access through the Internet.
Kubernetes notes in its cluster administration resources that access to etcd is equivalent to root permission in the cluster. In response, administrators should grant access only to the nodes that require it. They should also use firewall rules as well as etcd’s built-in security features, notably the peer.key/peer.cert and client.key/client.cert pairs, to secure communications between etcd members as well as between etcd and its clients.
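The sketch below ties those two points together. It is a minimal example, assuming the third-party etcd3 Python client (pip install etcd3) and its documented snapshot() helper: it connects to etcd over mutual TLS using the client certificate pair and streams a snapshot to disk as part of the backup plan mentioned above. The endpoint and certificate paths are placeholders. Keep in mind that a snapshot file contains the cluster’s secrets and should be protected as carefully as etcd itself.

```python
# A minimal sketch, assuming the third-party `etcd3` Python client
# (pip install etcd3) and its documented snapshot() helper. The endpoint
# and certificate paths are placeholders for your own cluster's values.
import etcd3

def backup_etcd_over_tls(snapshot_path: str = "etcd-backup.db") -> None:
    """Connect to etcd with mutual TLS and write a snapshot to disk."""
    etcd = etcd3.client(
        host="etcd.internal.example.com",                 # hypothetical internal endpoint
        port=2379,                                        # default etcd client port
        ca_cert="/etc/kubernetes/pki/etcd/ca.crt",        # CA that signed etcd's certs
        cert_cert="/etc/kubernetes/pki/etcd/client.crt",  # client certificate
        cert_key="/etc/kubernetes/pki/etcd/client.key",   # client private key
    )
    # Stream a point-in-time snapshot of the keyspace to a local file.
    with open(snapshot_path, "wb") as f:
        etcd.snapshot(f)
    print(f"etcd snapshot written to {snapshot_path}")

if __name__ == "__main__":
    backup_etcd_over_tls()
```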
The kube-scheduler is a component within the control plane that watches for newly created pods with no assigned node and selects a node for each of them to run on. It makes these decisions by weighing individual and collective resource requirements, data locality and other factors, per Kubernetes’ website.
Any compromise of the kube-scheduler could affect the performance and availability of a cluster’s pods, explains Packt. Such an event could thereby cause disruptions in an organization’s Kubernetes environment that undermine business productivity.
Administrators can follow Packt’s advice to secure the kube-scheduler by disabling profiling, a feature which exposes system details. They can do this by setting the --profiling flag to false. Additionally, they can disable external connections to the kube-scheduler using the “AllowExtTrafficLocalEndpoints” configuration to prevent outside attackers from gaining access to this control plane component.
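In kubeadm-based clusters the kube-scheduler typically runs as a static pod, so one way to verify the profiling setting is to inspect its manifest. The following is a rough audit sketch, assuming the conventional manifest path and PyYAML; it only checks the container’s command list, so adapt it if your flags live elsewhere.

```python
# A rough audit sketch, assuming a kubeadm-style static pod manifest at the
# conventional path and PyYAML (pip install pyyaml). It only inspects the
# container's command list, so adapt it if your flags live elsewhere.
import yaml

MANIFEST = "/etc/kubernetes/manifests/kube-scheduler.yaml"

def check_scheduler_profiling(path: str = MANIFEST) -> None:
    """Warn if the kube-scheduler command line does not disable profiling."""
    with open(path) as f:
        pod = yaml.safe_load(f)

    command = pod["spec"]["containers"][0].get("command", [])
    if "--profiling=false" in command:
        print("OK: profiling is disabled on kube-scheduler")
    else:
        print("WARNING: add --profiling=false to the kube-scheduler command")

if __name__ == "__main__":
    check_scheduler_profiling()
```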
The kube-controller-manager lives up to its name in that it runs controller processes. Each controller, including the node controller, the replication controller and others, is logically a separate process, but the kube-controller-manager compiles them all together and runs them as a single process.
A security issue in the kube-controller-manager could negatively affect the scalability and resilience of applications that are running in the cluster. Such an event could thus have an effect on the organization’s business.
Organizations can secure the kube-controller-manager by monitoring the number of instances of this component deployed in their environments. They can also follow the recommendations that StackRox made in September 2020 by restricting the component’s file permissions, configuring it to serve only HTTPS, binding it to a localhost interface and using Kubernetes RBAC to allow access to individual service accounts per controller.
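Two of those recommendations lend themselves to a quick scripted check. The sketch below is a minimal example, assuming a kubeadm-style static pod manifest at the conventional path; the expected file mode and flag value follow common hardening guidance rather than the article itself. It flags overly permissive manifest permissions and verifies that the --bind-address flag pins the component to localhost.

```python
# A minimal sketch of two of those checks, assuming a kubeadm-style static
# pod manifest at the conventional path; the expected file mode and flag
# value follow common hardening guidance rather than the article itself.
import os
import stat

MANIFEST = "/etc/kubernetes/manifests/kube-controller-manager.yaml"

def check_controller_manager(path: str = MANIFEST) -> None:
    """Check the manifest's file permissions and its bind-address flag."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    # Any permission bit beyond rw-r--r-- (644) is treated as too permissive.
    if mode & ~0o644:
        print(f"WARNING: {path} has mode {oct(mode)}; tighten it to 644 or stricter")
    else:
        print("OK: manifest file permissions are restrictive")

    with open(path) as f:
        manifest_text = f.read()
    if "--bind-address=127.0.0.1" in manifest_text:
        print("OK: kube-controller-manager binds to localhost only")
    else:
        print("WARNING: set --bind-address=127.0.0.1 on kube-controller-manager")

if __name__ == "__main__":
    check_controller_manager()
```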
Last but not least, the cloud-controller-manager enables administrators to link their cluster to their Cloud Service Provider’s (CSP’s) API. They can then use that component to separate the elements that interact with the CSP’s cloud platform from those that interact only with the cluster. Per Kubernetes’ documentation, the cloud-controller-manager functions similarly to the kube-controller-manager in its ability to compile multiple processes into one. The difference is that the cloud-controller-manager runs only controllers that are specific to an organization’s CSP.
Issues involving the cloud-controller-manager pose a similar threat to organizations as those that affect the kube-controller-manager.
Given the similarities between the kube-controller-manager and the cloud-controller-manager, organizations can use the same measures to secure both.
The five control plane components discussed above all demand attention as part of an organization’s overall Kubernetes security efforts. Even so, organizations’ work to secure their Kubernetes architecture doesn’t end there. There are also the Node components.
For information on how to secure that part of a Kubernetes cluster, click here.
About the Author: David Bisson is an information security writer and security junkie. He’s a contributing editor to IBM’s Security Intelligence, Tripwire’s The State of Security Blog, and a contributing writer to Bora. He also regularly produces written content for Zix and a number of other companies in the digital security space.
(SecurityAffairs – hacking, Kubernetes)