This is part 4 of a series on the hidden costs of poorly written firewall requests. Start with the overview if you haven’t read it.
The 15-port request for a 3-port deployment
A team is deploying a vendor product. The vendor’s documentation has a “Network Requirements” page listing 15 different ports. What the documentation also explains, if you read beyond the table, is that those ports cover every feature the product offers: the web UI, the REST API, the agent communication channel, the optional LDAP integration, the clustering protocol, the debug interface, the legacy connector, and so on. For this particular deployment, the team is only using the web UI and the REST API. They need three ports.
The firewall request that lands in the security team’s queue asks for all 15.
Nobody on the implementation team read far enough to understand which ports mapped to which features. They saw a table of ports in the vendor docs, figured they were all required, and copied the entire list into the request. In a way, they did the right thing by checking the documentation. They just didn’t understand what they were reading.
This happens constantly. And it happens even more often when teams don’t open the docs at all.
How it actually goes wrong
There are two flavors of this problem, and both lead to the same result.
The first is the scenario above: the team opens the vendor documentation, finds a ports table, and requests everything in it without understanding which ports apply to their specific deployment. The documentation technically exists, but it requires context to interpret. Which features are being enabled? Which integrations are being used? Which ports are only needed for clustering in a multi-node setup they’re not doing? Without that understanding, the safe move feels like requesting everything.
The second flavor is worse: the team never opens the documentation at all. The person filling out the firewall request is usually not the person who evaluated the product. They’re on the implementation team. They got handed a project plan that says “deploy Product X by March 15.” Somewhere in there is a line item that says “request firewall changes.” It doesn’t say “read the product’s network requirements documentation first.”
So the implementer opens the firewall request form and tries to fill it out from memory. They remember the product has a web interface, so ports 80 and 443 seem right. They know there’s a backend component, so maybe 8080 or 8443. They think there’s a database involved, so 3306 for MySQL or 5432 for PostgreSQL, or maybe both, because they’re not sure which one this product uses. And then they add a range like 8000-8100 because someone mentioned the product uses “high ports” and they want to cover their bases.
In both cases, the errors all lean in the same direction: too broad. That’s because the person filling out the form is optimizing for one thing, making sure the application works on go-live day. The cost of requesting too many ports is invisible to them. The cost of requesting too few, an app that doesn’t work while the project manager is watching, is very visible.
The incentive problem
Here’s why telling people to “be more precise” doesn’t work.
From the requester’s perspective, the risk calculation is simple. If they request exactly two ports and the application doesn’t work, they’re the bottleneck. They have to go back to the firewall team, submit another request, wait for another review cycle, and explain to the project manager why the deployment is delayed. That’s painful and visible.
If they request fifteen ports and only two are needed, nothing bad happens to them. The firewall team implements the rules. The application works. The project ships on time. Nobody calls them up to say “hey, you requested thirteen ports you didn’t need.” The excess ports are invisible waste. They show up as increased attack surface in a security audit six months later, but that’s someone else’s problem.
Until you change this incentive structure, people will keep over-requesting. It’s rational behavior in a system that punishes under-requesting and ignores over-requesting.
What this looks like at scale
A single bloated request is a minor issue. At scale, it gets serious.
Consider an organization doing a cloud migration. Dozens of applications are moving, each one generating firewall requests. If each request includes five or six unnecessary ports, and there are 200 requests over the course of the migration, you’ve just punched at least 1,000 unnecessary holes in your network perimeter. Each one was individually approved, documented, and implemented according to process. Each one looks legitimate in an audit. And collectively, they represent a massive expansion of your attack surface that happened gradually enough that nobody noticed.
Then someone has to clean it up. Which means going back through every rule, mapping it to the application it serves, checking whether each port is actually in use, and deciding which ones can be removed without breaking anything. That’s months of work. And it’s work that was entirely avoidable if the original requests had been accurate.
The documentation is there, but it requires effort
For the vast majority of commercial products, the vendor provides documentation of required ports. It’s in the installation guide. It’s in the knowledge base. It’s often in a dedicated “Network Requirements” or “Firewall Configuration” page. The problem is that this documentation usually lists every port the product could ever use across all features and deployment modes. Reading it correctly requires understanding your specific deployment, which features you’re enabling, and which components you’re actually installing.
For internal applications, the development team knows exactly what ports their services use. It’s in the code. It’s in the Docker compose file. It’s in the Kubernetes service definitions.
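For instance, a hypothetical compose file (the service names, images, and ports here are invented for illustration) already answers the question precisely, including which components never need a perimeter rule at all:

```yaml
# Hypothetical docker-compose.yml for an internal app; everything here is
# illustrative. The `ports` entries are the authoritative answer to
# "what does this service actually listen on from outside".
services:
  web:
    image: example/web-ui:1.4
    ports:
      - "443:8443"    # HTTPS front end: the only externally exposed web port
  api:
    image: example/rest-api:1.4
    ports:
      - "8080:8080"   # REST API
  db:
    image: postgres:16
    # no `ports` entry: the database is reachable only on the compose-internal
    # network, so it never needs a firewall rule at the perimeter
```

Reading two `ports` lines out of a file like this takes a minute, and it produces a two-port request instead of a guess.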
The information exists. It’s just not making it into the firewall request in the right form. The gap isn’t access to documentation. It’s understanding what the documentation means for your specific case, and nobody made that interpretation an explicit step before submitting the request.
Fixing the over-request problem
Make documentation review a required step. Add a field to the firewall request form: “Link to vendor documentation showing required ports” or “Reference for requested ports.” It doesn’t have to be onerous. A URL to the vendor’s network requirements page is enough. The act of requiring a reference forces the requester to look it up, which means they’ll see the actual list instead of guessing.
Flag unusual requests automatically. This is one of the reasons I created netbobr: to help teams pre-validate their firewall requests and see the concerns a security reviewer would have before the ticket is even submitted. A request for a hundred-port range is probably wrong. A request for both MySQL and PostgreSQL default ports for the same application is worth questioning. A request that includes well-known ports for services unrelated to the stated application should trigger a warning. netbobr catches these patterns and sends them back with specific questions: “You’ve requested port range 8000-8100. Can you confirm which specific ports within this range are required?” The requester gets to see what the security team would flag, and fix it before the ticket enters the queue.
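The kinds of checks described above can be expressed as simple heuristics. The sketch below is not netbobr’s actual logic; it is a hypothetical Python illustration, with the threshold and port list chosen purely for the example:

```python
# Illustrative pre-validation heuristics for a firewall request.
# Not any real tool's implementation; threshold and DB port list are
# assumptions made for this sketch.

WIDE_RANGE_THRESHOLD = 20  # flag any contiguous range wider than this
DB_DEFAULT_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1433: "SQL Server"}

def flag_request(ports):
    """Return warning strings for a requested list of port ranges.

    `ports` is a list of (start, end) tuples; a single port is (p, p).
    """
    warnings = []
    for start, end in ports:
        width = end - start + 1
        if width > WIDE_RANGE_THRESHOLD:
            warnings.append(
                f"You've requested port range {start}-{end} ({width} ports). "
                "Can you confirm which specific ports are required?"
            )
    # Default ports for more than one database in the same request is a
    # classic sign of guessing.
    requested_dbs = [name for port, name in DB_DEFAULT_PORTS.items()
                     if any(s <= port <= e for s, e in ports)]
    if len(requested_dbs) > 1:
        warnings.append(
            "Request includes default ports for multiple databases "
            f"({', '.join(requested_dbs)}). Which one does this app use?"
        )
    return warnings
```

Running this on the guessed-from-memory request above, `flag_request([(8000, 8100), (3306, 3306), (5432, 5432)])` produces two warnings: one for the wide range and one for the MySQL-plus-PostgreSQL combination.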
Create a consequence for over-requesting. This doesn’t mean punishing people. It means visibility. If the security team tracks how many requested ports are actually used (through traffic logging after implementation), they can report on the ratio of requested ports to utilized ports. When teams see that they’re consistently requesting five times more access than they use, it creates natural pressure to be more accurate. It also gives the security team data to push back on overly broad requests: “Your team’s last three requests included an average of eight unused ports. Can you verify this one against the documentation before we implement it?”
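One possible shape for that measurement, assuming you can export the granted ports from tickets and the observed destination ports from post-implementation traffic logs (the function and variable names here are made up for the sketch):

```python
def utilization_ratio(requested_ports, observed_ports):
    """Fraction of requested ports actually seen in traffic logs.

    requested_ports: set of ports granted in the firewall request
    observed_ports: set of destination ports seen in traffic logs
    """
    if not requested_ports:
        return 1.0  # nothing requested, nothing wasted
    used = requested_ports & observed_ports
    return len(used) / len(requested_ports)

# A team requested a 15-port range, but logs show traffic on only 3 of them:
requested = set(range(8000, 8015))
observed = {8000, 8001, 8002}
# utilization_ratio(requested, observed) is 0.2, i.e. 80% of the granted
# access is unused and reportable back to the requesting team
```

Even this crude ratio, reported per team per quarter, turns the invisible waste into a number someone has to explain.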
Build a port reference into your process. For commonly deployed products, maintain a simple internal page that lists the required ports. When a new product is being evaluated, part of the evaluation should include documenting its network requirements. This list becomes the reference that requesters use instead of guessing. It takes about 15 minutes per product to create and saves hours of back-and-forth over the product’s lifetime.
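Such a reference entry can be very light. A hypothetical example (the product, URL, and ports are invented for illustration) might look like:

```yaml
# Sketch of one internal port-reference entry; all values are illustrative.
product: Example Product X
docs: https://vendor.example.com/network-requirements   # link to vendor page
deployments:
  web-ui-and-api:          # the common case in our environment
    - {port: 443,  proto: tcp, purpose: web UI}
    - {port: 8080, proto: tcp, purpose: REST API}
  full-cluster:            # only if clustering is enabled
    - {port: 7000, proto: tcp, purpose: cluster communication}
```

The key design choice is splitting ports by deployment mode, so the requester picks the profile that matches their rollout instead of copying the union of everything.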
It’s not about blame
Telling requesters “you should have read the docs” after the fact doesn’t help. They were under deadline pressure, they had a dozen other tasks to complete, and the firewall request was one line item among many. The problem isn’t that people are lazy. The problem is that the process doesn’t make it easy, expected, or natural to look up the right answer before submitting.
Build the lookup into the process. Make it a two-minute step, not a research project. The requests will get better immediately.
Next in this series: Part 5: How Deadline Pressure Turns Firewall Rules Into Technical Debt