Kubernetes Vulnerability Puts Clusters at Risk of Takeover (CVE-2020-8558)

July 2020 by Yuval Avrahami and Ariel Zelivansky, Palo Alto Networks Unit 42

A security issue, assigned CVE-2020-8558, was recently discovered in the kube-proxy, a networking component running on Kubernetes nodes. The issue exposed internal services of Kubernetes nodes, which often run without authentication. On certain Kubernetes deployments, this could have exposed the api-server, allowing an unauthenticated attacker to gain complete control over the cluster. An attacker with this sort of access could steal information, deploy crypto miners or remove existing services altogether.

The vulnerability exposed nodes’ localhost services – services meant to be accessible only from the node itself – to hosts on the local network and to pods running on the node. Localhost-bound services expect that only trusted, local processes can interact with them, and thus often serve requests without authentication. If your nodes run localhost services without enforcing authentication, you are affected.
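To picture the trust model at stake, here is a minimal sketch in Go of such a localhost-bound service (the port and endpoint are hypothetical): it binds only to 127.0.0.1 and serves requests with no authentication at all, on the assumption that only local processes can reach it.

```go
// localhost_service.go - a minimal sketch of a typical localhost-bound
// service. The /metrics endpoint and port 9999 are hypothetical.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		// No authentication check: the service assumes the caller is a
		// trusted local process - exactly the assumption CVE-2020-8558 breaks.
		fmt.Fprintln(w, "node_internal_metric 42")
	})
	// Binding to 127.0.0.1 (rather than 0.0.0.0) is meant to keep this
	// service reachable from the node itself only.
	log.Fatal(http.ListenAndServe("127.0.0.1:9999", nil))
}
```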

The issue details were made public on April 18, 2020, and a patch was released on June 1, 2020. We worked to assess additional impact to Kubernetes clusters and found that some Kubernetes installations don’t disable the api-server insecure-port, which is normally only accessible from within the master node. By exploiting CVE-2020-8558, attackers can reach the insecure-port and gain full control over the cluster.
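One quick way to see whether a master serves the insecure-port is to send it a request with no credentials attached. The Go sketch below assumes the historical defaults (insecure port bound to 127.0.0.1 on port 8080); adjust the address for your deployment.

```go
// probe_insecure_port.go - a sketch that checks whether an api-server
// insecure port answers unauthenticated requests. 127.0.0.1:8080 is an
// assumption based on the historical kube-apiserver defaults.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	// No bearer token or client certificate is attached to this request.
	resp, err := client.Get("http://127.0.0.1:8080/version")
	if err != nil {
		fmt.Println("insecure port not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 200 response here means the API is being served with no authentication.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```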

We alerted the Kubernetes security team of the potential impact of this vulnerability. In turn, the team rated the vulnerability’s impact as High in clusters where the api-server insecure-port is enabled, and otherwise Medium. Luckily, CVE-2020-8558’s impact is somewhat reduced on most hosted Kubernetes services like Azure Kubernetes Service (AKS), Amazon’s Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). CVE-2020-8558 was patched in Kubernetes versions v1.18.4, v1.17.7, and v1.16.11 (released June 17, 2020). All users are encouraged to update.

Prisma Cloud customers are protected from this vulnerability through the capabilities described in the Conclusion section.

The kube-proxy

kube-proxy is a network proxy running on each node in a Kubernetes cluster. Its job is to manage connectivity among pods and services. Kubernetes services expose a single clusterIP, but may consist of multiple backing pods to enable load balancing. A service may consist of three pods – each with its own IP address – but will expose only one clusterIP, for example, 10.0.0.1. Pods accessing that service will send packets to its clusterIP, 10.0.0.1, but must somehow be redirected to one of the pods behind the service abstraction.

That’s where the kube-proxy comes in. It sets up routing tables on each node, so that requests targeting a service will be correctly routed to one of the pods backing that service. It’s commonly deployed as a static pod or as part of a DaemonSet.
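As a rough mental model (not kube-proxy’s actual code), the Go sketch below shows the mapping kube-proxy maintains: one clusterIP fronting several backing pod IPs, with each connection steered to one of them. In practice, kube-proxy programs this mapping into the kernel through iptables or IPVS rules rather than proxying connections itself.

```go
// A toy illustration of the service abstraction: one clusterIP fronts
// several backing pods, and each connection is directed to one of them.
// All IP addresses here are hypothetical.
package main

import (
	"fmt"
	"math/rand"
)

// serviceBackends maps a service's clusterIP to its backing pod IPs.
var serviceBackends = map[string][]string{
	"10.0.0.1": {"10.244.1.5", "10.244.2.7", "10.244.3.9"},
}

// pickBackend mimics the load-balancing decision: a packet sent to the
// clusterIP is redirected to one of the pods behind the service.
func pickBackend(clusterIP string) (string, bool) {
	pods := serviceBackends[clusterIP]
	if len(pods) == 0 {
		return "", false
	}
	return pods[rand.Intn(len(pods))], true
}

func main() {
	for i := 0; i < 3; i++ {
		if pod, ok := pickBackend("10.0.0.1"); ok {
			fmt.Println("10.0.0.1 ->", pod)
		}
	}
}
```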

There are networking solutions, such as Cilium, that can be configured to fully replace the kube-proxy.

The Culprit Is route_localnet

As part of its job, the kube-proxy configures several network parameters through sysctl files. One of those is net.ipv4.conf.all.route_localnet – the culprit behind this vulnerability. Sysctl documentation states, “route_localnet: Do not consider loopback addresses as martian source or destination while routing. This enables the use of 127/8 for local routing purposes. default FALSE.”
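On Linux, this parameter is exposed under /proc/sys, so you can inspect a node’s current setting directly. A minimal Go sketch:

```go
// check_route_localnet.go - reads the sysctl that kube-proxy sets.
// net.ipv4.conf.all.route_localnet maps to the /proc/sys path below.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	if err != nil {
		fmt.Println("could not read sysctl:", err)
		return
	}
	// "0" is the kernel default; kube-proxy sets it to "1".
	fmt.Println("net.ipv4.conf.all.route_localnet =", strings.TrimSpace(string(data)))
}
```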

Let’s unpack that explanation. For IPv4, the loopback addresses consist of the 127.0.0.0/8 address block (127.0.0.1-127.255.255.255), although commonly only 127.0.0.1 is used, with the hostname “localhost” mapped to it. These are addresses your machine uses to refer to itself. Packets targeting a local service will be sent to IP 127.0.0.1 through the loopback network interface, with their source IP set to 127.0.0.1 as well.
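A quick Go sketch confirms that the entire 127.0.0.0/8 block, not just 127.0.0.1, is classified as loopback:

```go
// Demonstrates that every address in 127.0.0.0/8 counts as loopback.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, s := range []string{"127.0.0.1", "127.45.6.7", "127.255.255.254", "10.0.0.1"} {
		ip := net.ParseIP(s)
		fmt.Printf("%-16s loopback: %v\n", s, ip.IsLoopback())
	}
}
```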

Setting route_localnet instructs the kernel not to treat 127.0.0.0/8 IP addresses as martian. What does “martian” mean in this context? Well, some packets arrive at a network interface and make claims about their source or destination IP that just don’t make sense. For example, a packet could arrive with a source IP of 255.255.255.255. That packet shouldn’t exist: 255.255.255.255 can’t identify a host, as it’s a reserved address used to indicate broadcast. So what’s going on? Your kernel can’t know for sure and has no choice but to conclude the packet came from Mars and should be dropped.

Martian packets often hint that someone malicious on the network is trying to attack you. In the example above, the attacker may want your service to respond to IP 255.255.255.255, causing routers to broadcast the response. A fishy destination IP can also cause a packet to be deemed martian, such as a packet arriving at an external network interface with a destination IP of 127.0.0.1. Again, that packet doesn’t make sense – 127.0.0.1 is used for internal communication through the loopback interface and shouldn’t arrive from a network-facing interface. For more details on martian packets, refer to RFC 1812.

In some complicated routing scenarios, you might want the kernel to let certain martian packets pass through. That’s what route_localnet is for: it instructs the kernel not to consider 127.0.0.0/8 addresses as martian (as it normally would, as in the case discussed in the previous paragraph). The kube-proxy enables route_localnet to support a bunch of routing magic that I won’t get into, but route_localnet is disabled by default for a reason. Unless proper mitigation is set up alongside it, attackers on the local network can exploit route_localnet to perform several attacks, the most impactful being reaching localhost-bound services.

