The Trivy Breach: Why Network Egress Controls Matter More Than Ever

March 27, 2026

It's tough to watch a security organization get hit by a supply chain attack, especially an organization that gives so much back to the community through open source tooling. Aqua Security's Trivy is the most widely adopted open-source vulnerability scanner in the cloud-native ecosystem. It runs in thousands of CI/CD pipelines. None of what follows is about blame.

What happened to Trivy

On March 19, 2026, Trivy was compromised in a supply chain attack. A threat actor used stolen credentials to push malicious releases across multiple Trivy components, including the main binary, trivy-action, and setup-trivy. The malicious code ran silently before the real scanner, so workflows appeared to complete normally. CVE-2026-33634 covers the full details, Aqua Security published their incident report, and the GitHub Security Advisory has the technical breakdown. CrowdStrike and Microsoft both published detailed analyses for defenders.

The supply chain compromise was the entry point. But the actual damage, the credential theft, happened one step later: the malware harvested secrets, cloud credentials, SSH keys, and Docker configurations from the CI/CD environment, then attempted to exfiltrate them to attacker-controlled infrastructure over the network.

That exfiltration step is where this story gets interesting from a network security perspective.

This is where network egress controls are critical

The malware could compromise the runner. It could harvest every secret in the environment. But it still needed to send those secrets somewhere. That "somewhere" is a domain or IP address that the attacker controls, and that your CI/CD runner has no legitimate reason to reach.

```mermaid
flowchart TD
    A[Malicious trivy-action runs in CI/CD pipeline] --> B[Harvests secrets, cloud credentials, SSH keys]
    B --> C[Encodes and stages stolen data]
    C --> D[Attempts outbound connection to C2 domain]
    D --> E[Credentials exfiltrated to attacker]
    style D fill:#d4534b,color:#fff
    style E fill:#d4534b,color:#fff
```

Every step before the exfiltration is preparation. The exfiltration is where the attacker actually gets paid. Block that step and the compromise becomes a noisy failure instead of a silent breach.

Default-deny egress would have stopped it

If your pipeline environments enforce outbound deny-by-default, that exfiltration call never lands. The malware runs, finds secrets, and then has nowhere to send them. The runner can only reach a defined allowlist of destinations: your package registries, your container registries, your artifact stores, and the specific APIs your build process needs. An attacker-controlled domain isn't on that list.
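The deny-by-default decision is simple to reason about. Here is a minimal sketch in Python of how an egress policy engine might evaluate an outbound destination against an allowlist; the domain patterns are illustrative, not a recommended policy, and a real enforcement point would sit in a proxy or firewall rather than application code:

```python
# Minimal sketch of deny-by-default egress policy evaluation.
# The allowlist entries below are illustrative examples only.
from fnmatch import fnmatch

ALLOWED_EGRESS = [
    "registry.npmjs.org",
    "pypi.org",
    "files.pythonhosted.org",
    "*.docker.io",
    "ghcr.io",
]

def egress_allowed(host: str) -> bool:
    """Deny by default: permit only hosts matching an allowlist pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_EGRESS)

# A legitimate dependency fetch succeeds...
assert egress_allowed("pypi.org")
assert egress_allowed("registry-1.docker.io")
# ...but an attacker-controlled C2 domain is simply not on the list.
assert not egress_allowed("exfil.attacker-c2.example")
```

The key property is that the attacker's domain fails not because anyone knew it was malicious, but because nobody ever had a reason to allow it.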

```mermaid
flowchart TD
    A[Malicious trivy-action runs in CI/CD pipeline] --> B[Harvests secrets, cloud credentials, SSH keys]
    B --> C[Encodes and stages stolen data]
    C --> D[Attempts outbound connection to C2 domain]
    D --> E[Egress firewall blocks unknown destination]
    E --> F[Exfiltration fails, alert fires on blocked connection]
    style D fill:#c9913e,color:#fff
    style E fill:#4a8c6f,color:#fff
    style F fill:#4a8c6f,color:#fff
```

The malware still ran. The runner was still compromised in the moment. But the attacker got nothing out. And the blocked connection attempt gives your security team a signal to investigate, rotate secrets proactively, and contain the incident before any damage is done.

This is the core argument for egress controls. You can't always prevent the initial compromise. Supply chain attacks are hard to detect because you're running code you chose to trust. But you can control what your infrastructure is allowed to talk to after the compromise happens. That's a layer most organizations have in production but not in their build environments.

Why many organizations miss egress controls

There's a practical reason most CI/CD environments have unrestricted outbound access: builds need to reach a lot of things. Package registries like npm, PyPI, and Maven Central. Container registries like Docker Hub and GitHub Container Registry. APIs for deployment, notification, and monitoring. Tools that get downloaded at build time.

Restricting outbound access feels like it would break everything. And maintaining an allowlist of legitimate destinations is real work. Every new dependency, every new integration, every new tool means updating the egress policy. That's administrative overhead that most teams don't want to own, and it's overhead that doesn't produce visible value until the day something bad happens.

So the runner gets full internet access. And because it works, nobody questions it. The build infrastructure becomes the least segmented part of the environment, even though it handles some of the most sensitive credentials in the organization. Production environments get microsegmentation, zero trust policies, and east-west traffic controls. The CI/CD pipeline that deploys to production gets an allow-any outbound rule. That asymmetry is the real gap.

The overhead is real. But it's worth comparing that ongoing maintenance cost against the cost of rotating every secret in your organization because a compromised build tool phoned home to an attacker's domain. One of those costs is predictable and manageable. The other is an incident.

The case for doing it anyway

Yes, maintaining egress allowlists takes effort. But the approach doesn't have to be all-or-nothing.

Start with logging, not blocking. Run your pipelines with egress monitoring for a week or two. Capture every outbound connection your builds make. You'll quickly see that the list of legitimate destinations is finite and relatively stable: a handful of registries, a few APIs, your cloud provider endpoints. That's your baseline.
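The monitoring phase reduces to counting destinations. Here is a sketch of turning raw connection logs into a baseline; the log format is an illustrative assumption, so adapt the parsing to whatever your proxy or firewall actually emits:

```python
# Sketch: derive an egress baseline from observed connection logs.
# The log line format here is a hypothetical example.
from collections import Counter

log_lines = [
    "2026-03-01T10:02:11Z runner-7 CONNECT registry.npmjs.org:443",
    "2026-03-01T10:02:15Z runner-7 CONNECT pypi.org:443",
    "2026-03-01T10:02:15Z runner-7 CONNECT pypi.org:443",
    "2026-03-02T09:41:03Z runner-3 CONNECT ghcr.io:443",
]

def baseline(lines: list[str]) -> Counter:
    """Count outbound destinations; frequent, stable hosts become allowlist candidates."""
    hosts = Counter()
    for line in lines:
        dest = line.rsplit(" ", 1)[-1]      # "host:port"
        hosts[dest.rsplit(":", 1)[0]] += 1  # strip the port
    return hosts

print(baseline(log_lines).most_common())
# [('pypi.org', 2), ('registry.npmjs.org', 1), ('ghcr.io', 1)]
```

After a week or two of real traffic, the frequent and stable hosts in this tally are the allowlist; anything that shows up once, late, and unexplained is worth a look before it ever gets blocked.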

Build your allowlist from real traffic. Once you have the baseline, create the allowlist and enforce it. New destinations will come up, and your team will need a process for reviewing and adding them. That process is the overhead. But it's the same kind of overhead as reviewing firewall rules for production, and nobody argues that production shouldn't have firewall rules.

Apply the same standards you use for production. The firewall policies, segmentation principles, and compliance frameworks you apply to production traffic belong here too. If your production environment has a documented egress policy reviewed against CIS or NIST controls, your build infrastructure should too.

Consider DNS filtering as a low-effort additional layer. DNS filtering alone wouldn't have guaranteed protection against the Trivy breach. If the malware used a hardcoded IP address or an established domain for exfiltration, DNS controls wouldn't have stopped the connection. But many C2 operations rely on newly registered or disposable domains, and that's where DNS filtering is most effective. Protective DNS resolvers that block newly registered domains, known malicious domains, and threat-intel-flagged infrastructure will catch a meaningful percentage of exfiltration attempts. Services like Cisco Umbrella, Cloudflare Gateway, or Quad9 offer this commercially, and open source tools like Pi-hole give you domain-level filtering you can run yourself with minimal effort. Domain flipping is still a real risk: attackers can rotate through clean-looking domains to slip past blocklists, which is exactly why DNS filtering alone is not enough. The answer is layered defenses: DNS filtering, egress allowlists, and SHA-pinned dependencies all working together. Each layer fails differently, and that's the point.
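The decision a protective resolver makes can be sketched in a few lines. The domain names, registration dates, and the 30-day "newly registered" window below are all illustrative assumptions, not any vendor's actual policy:

```python
# Sketch of protective-DNS decision logic: block known-bad domains
# and domains registered very recently. All data here is hypothetical;
# real resolvers pull this from threat-intel feeds and WHOIS data.
from datetime import date, timedelta

KNOWN_MALICIOUS = {"exfil.attacker-c2.example"}
REGISTRATION_DATES = {
    "pypi.org": date(1999, 1, 1),
    "fresh-c2.example": date(2026, 3, 18),
}

def dns_blocked(domain: str, today: date = date(2026, 3, 20)) -> bool:
    if domain in KNOWN_MALICIOUS:
        return True
    registered = REGISTRATION_DATES.get(domain)
    # Block anything registered within the last 30 days.
    if registered and today - registered < timedelta(days=30):
        return True
    return False

assert dns_blocked("fresh-c2.example")  # newly registered: blocked
assert not dns_blocked("pypi.org")      # long-established: resolves
```

Note what this sketch cannot do: a connection to a hardcoded IP never touches DNS at all, and an unknown domain with no intel on it resolves normally. That is exactly the gap the egress allowlist closes.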

Pin GitHub Actions to full commit SHAs. This doesn't replace egress controls, but it eliminates the specific attack vector that made the Trivy compromise possible at scale. A mutable version tag like v1 can be force-pushed to point at any commit. A pinned SHA cannot be redirected.
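Checking for unpinned actions is easy to automate. Here is a sketch that flags `uses:` references that are not full 40-character commit SHAs; the workflow snippet and the SHA in it are made-up examples, and a production linter would parse the YAML properly rather than pattern-match:

```python
# Sketch: flag GitHub Actions referenced by mutable tags instead of
# full commit SHAs. The workflow text and SHA below are illustrative.
import re

workflow = """
steps:
  - uses: actions/checkout@v4
  - uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2
"""

USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w./-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(text: str) -> list[str]:
    """Return action references whose ref is not a full commit SHA."""
    return [
        f"{action}@{ref}"
        for action, ref in USES_RE.findall(text)
        if not SHA_RE.match(ref)
    ]

print(unpinned_actions(workflow))  # ['actions/checkout@v4']
```

Run a check like this in CI itself and a mutable tag never makes it past review, which is the property that matters: a `v1` tag can be force-pushed at any commit the attacker likes, a pinned SHA cannot.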

If you were affected

If your pipelines touched the compromised Trivy components between March 19 and March 23, treat every secret accessible to those workflows as compromised and rotate immediately. The Aqua Security discussion thread has the latest updates, affected versions, safe versions, and indicators of compromise. Docker published guidance for users who pulled affected images from Docker Hub.

Network Control & Chill

You can't prevent every supply chain compromise. The software you depend on will occasionally get breached, and there's a limit to how much you can verify before you have to trust something. That's the nature of software supply chains.

But you can control what happens after the breach. The Trivy compromise required outbound network access to succeed. Default-deny egress would have turned it into a contained incident instead of a credential theft. The administrative overhead of maintaining egress allowlists is real, but it's predictable and manageable. The cost of not having them shows up as an incident response.

Network security controls aren't just for production. Your build pipeline deserves the same rigor. And for most organizations, that layer doesn't exist on their build infrastructure yet.