Pentesting the Kubernetes cluster allows for the identification of security vulnerabilities that an attacker could exploit, either from outside or inside the network.

What items should you look for as part of an initial test?

Below is a list of items to look for in your test.

Open ports (e.g., ports serving API calls)

In a typical Kubernetes cluster, the API server serves HTTPS on port 443 (6443 is also a common default).

Other port assignments can be gleaned either from the documentation or from the defaults stated in the configuration file. When conducting an nmap scan, these defaults indicate which Kubernetes services may be listening.
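A scan of the default ports discussed below can be sketched with nmap. The target address and output filename are placeholders (adjust them to your scope), and the command is echoed rather than executed so the sketch has no side effects:

```shell
# Hypothetical target; replace with the node or load balancer in scope.
TARGET="10.0.0.5"

# Default Kubernetes control-plane and node ports worth probing
# (API server, etcd, kubelet, kube-proxy, controller managers).
PORTS="443,2379,6443,8080,10248,10250,10251,10256,10257,10258"

# -sV grabs service banners to help identify what is listening;
# drop the echo to actually run the scan.
echo "nmap -sV -p ${PORTS} -oN k8s-ports.txt ${TARGET}"
```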

From Prometheus:

When installed as a Kubernetes add-on, the router listens for metrics requests on one port and the Network Policy Controller on another (the specific ports are given in the respective documentation); no other requests are served on these endpoints.

Deprecated ports include 10251, formerly the insecure serving port of kube-scheduler; ports 10248 and 10250 are used by the kubelet (the health check endpoint and the kubelet API, respectively).

Port 10256 is the port to bind the health checker for kube-proxy (see documentation).

Port 10257 is the port on which to serve HTTPS with authentication and authorization for kube-controller-manager (see documentation).

Port 10258 is the corresponding port for the cloud-controller-manager (see documentation).

Port 6443 is the port on which to serve HTTPS with authentication and authorization by kube-apiserver.

Unsecured API ports (HTTP)

The API server can serve on two ports. Knowing the defaults mentioned in the documentation indicates which service is running, including whether an unsecured API port is open: 8080 is the insecure localhost port, typically used for testing.
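Probing both ports can be sketched as follows. The node address is a placeholder, and the curl commands are built and echoed so the sketch runs without touching a real cluster:

```shell
# Placeholder API server address.
NODE="10.0.0.5"

# Insecure port: plain HTTP, no authentication (default 8080, localhost-only
# unless explicitly bound elsewhere). Any response here is a serious finding.
INSECURE_CMD="curl -s --max-time 5 http://${NODE}:8080/api/v1/namespaces"

# Secure port: HTTPS with authentication/authorization (default 6443).
# -k skips TLS verification when probing self-signed cluster certificates.
SECURE_CMD="curl -sk --max-time 5 https://${NODE}:6443/version"

# Drop the echos to actually probe the endpoints.
echo "$INSECURE_CMD"
echo "$SECURE_CMD"
```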

Misconfigured RBAC

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In Kubernetes, RBAC is the recommended form of authorization.
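As a concrete contrast to misconfigured RBAC, a least-privilege role can be created and bound imperatively with kubectl's create role/rolebinding subcommands. The role, binding, and service account names below are placeholders, and the commands are echoed rather than executed:

```shell
# Namespaced role that may only read pods (placeholder name "pod-reader").
ROLE_CMD="kubectl create role pod-reader --verb=get,list,watch --resource=pods"

# Bind it to a single service account ("myapp" in namespace "default")
# instead of granting broad, cluster-wide rights.
BIND_CMD="kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=default:myapp"

echo "$ROLE_CMD"
echo "$BIND_CMD"
```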

Misconfiguring RBAC can lead to the following:

You can obtain tokens associated with a container. An attacker can run kubectl describe pod on a pod they can reach and inspect the mounts, for example whether a service account token is mounted from the default service account. If so, the default service account is attached to this application.

If an attacker obtains the access token for that application, they can use it to authenticate to the cluster. A typical use case is extracting the secrets and then presenting the token to the API server, which responds according to the role-based access granted to that account. Whoever compromises the application therefore gains access to the same resources.

Finding the rights associated with application service accounts, including default service accounts

When pentesting from outside the network, the goal is to reveal what an external attacker could view or access in a cluster.

If the service account policy is (mis)configured to allow all service accounts to act as cluster administrators (i.e., permissive RBAC permissions), then any application running in a container automatically receives cluster-admin credentials via its service account.
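One way to audit this is kubectl auth can-i, impersonating the service account in question. A sketch with placeholder namespace and account names, echoed so it runs without a cluster:

```shell
# The default service account in the default namespace is what
# unconfigured pods receive automatically.
SA="system:serviceaccount:default:default"

# Enumerate everything this subject is allowed to do...
LIST_CMD="kubectl auth can-i --list --as=${SA}"
# ...and ask a pointed question: can it read secrets?
SECRETS_CMD="kubectl auth can-i get secrets --as=${SA}"

# Drop the echos to run the checks against a live cluster.
echo "$LIST_CMD"
echo "$SECRETS_CMD"
```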

To find the token:

  • Run kubectl describe pod NAME | less
  • Run kubectl exec -ti NAME -- bash to get a shell as root@NAME
  • Run cat /run/secrets/
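The steps above can be sketched end to end. The full mount point below is the standard Kubernetes service account path (a subdirectory of the /run/secrets/ location mentioned above), the API server address is a placeholder, and the commands are echoed rather than executed:

```shell
# Standard in-pod service account mount path.
SA_DIR="/var/run/secrets/kubernetes.io/serviceaccount"

# Placeholder; from inside a pod this is usually https://kubernetes.default.svc
APISERVER="https://10.0.0.5:6443"

# Read the mounted token, then present it to the API server, which will
# answer according to the RBAC bindings of that service account.
echo "TOKEN=\$(cat ${SA_DIR}/token)"
echo "curl -s --cacert ${SA_DIR}/ca.crt -H 'Authorization: Bearer '\$TOKEN ${APISERVER}/api/v1/namespaces"
```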

Pentesting from outside the network

When pentesting from outside the network, running a scan (e.g., nmap) against the default ports associated with Kubernetes services reveals which of those services can be reached externally.

Pentesting the Kube Hunter Way

I don’t yet have the steps and screenshots for this, but if there is enough interest, I can recreate them and capture screenshots.

There are tools provided by different vendors. kube-hunter, developed by Aqua Security, is one such open source tool. kube-hunter can be run as a pod to test what a compromised container could discover in your cluster.

There are also other tools such as kube-bench, which runs checks according to the CIS security controls/benchmark for Kubernetes.
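Running kube-bench can be sketched as follows; the flags follow its documented CLI (verify against your installed version), and the commands are echoed rather than executed:

```shell
# Check the local node against the CIS Kubernetes Benchmark.
RUN_CMD="kube-bench run --targets node"
# JSON output is convenient for feeding findings into other tooling.
JSON_CMD="kube-bench run --targets node --json"

# Drop the echos to run the benchmark on a real node.
echo "$RUN_CMD"
echo "$JSON_CMD"
```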

When pentesting from inside the network, the test is confined to revealing weaknesses available to an attacker after they have successfully broken into an application.

Run kube-hunter --remote NODE

Then the following type of log will be generated

- Started
- Discovering Open Kubernetes Services...

  Etcd:
    type: open service
    service: Etcd
    location: NODE:2379
  Kubelet API:
    type: open service
    service: Kubelet API
    location: NODE:10250
  API Server:
    type: open service
    service: API Server
    location: NODE:6443

It will then refer to specific vulnerabilities like below:

  Possible Ping Flood Attack:
    type: vulnerability
    location: NODE:6443
      Node not patched for CVE-2019-9512. An
      attacker could cause a Denial of Service by
      sending specially crafted HTTP requests.

You can then see that NODE on port 6443, which serves HTTPS with authentication and authorization for the API server, is vulnerable to this attack.

There will also be a table of vulnerabilities indicating the location, the category (e.g., Information Disclosure, Access Risk), the vulnerability itself (e.g., K8s Version Disclosure, Read access to pod’s service account token), a description, and evidence.

Collecting logs from Kubernetes cluster and from Kubernetes API server

Running kube-hunter will generate logs. However, I am trying to see what the logs will look like on the Kubernetes API server side, provided that the Kubernetes audit flags are already set up on the API server.

Run kubectl cluster-info dump, which dumps everything to stdout. I would look into collecting these logs in an output directory using --output-directory (see the documentation). After a scan, you will see kube-hunter activity in the logs.
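That collection step can be sketched as follows. The output directory is a placeholder, and the grep pattern (system:anonymous, the username Kubernetes assigns to unauthenticated requests) is only an illustrative starting point for spotting scanner traffic; the commands are echoed rather than executed:

```shell
# Dump cluster state into files instead of stdout.
OUTDIR="/tmp/cluster-dump"   # placeholder path
DUMP_CMD="kubectl cluster-info dump --output-directory=${OUTDIR}"

# Unauthenticated probes, such as those kube-hunter performs, appear in
# audit logs under the system:anonymous user.
GREP_CMD="grep -r 'system:anonymous' ${OUTDIR}"

# Drop the echos to collect and search a real dump.
echo "$DUMP_CMD"
echo "$GREP_CMD"
```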

The stderr and audit logs are not enough, though. Falco is an open source project for intrusion and abnormality detection on cloud native platforms such as Kubernetes, Mesosphere, and Cloud Foundry. It can also run natively on a Linux host.

For Kubernetes, Falco can consume K8s Audit Events as a secondary event source, provided that auditing is set up on the API server.
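Setting that up means adding audit flags to the kube-apiserver. A minimal sketch; the file paths are placeholders, and the flags themselves are documented under Kubernetes auditing:

```
# kube-apiserver flags (on kubeadm clusters these live in the static pod
# manifest, e.g. /etc/kubernetes/manifests/kube-apiserver.yaml)
--audit-policy-file=/etc/kubernetes/audit-policy.yaml   # which events to record
--audit-log-path=/var/log/kubernetes/audit.log          # where to write them
--audit-log-maxage=30                                   # days of retention
```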

I was not able to find logs that specifically pinpoint kube-hunter (and similar) activity, but this can definitely be useful to alert on.