Database Ports Don't Belong on the Open Network


April 11, 2026

The request that starts the problem

A developer submits a firewall ticket: "Please open TCP 3306 from the office subnet (10.20.0.0/16) to the MySQL server at 10.50.12.8 for testing." The justification says they need to run queries during development. The source is internal. The destination is a single server. The port is specific. It looks like a clean, well-written request.

The firewall team approves it. The developer runs their queries, finishes the testing sprint, and moves on. The rule stays. Three months later, a contractor's laptop on the same office subnet gets compromised through a phishing email. The attacker runs a port scan from the compromised machine, discovers TCP 3306 is open to 10.50.12.8, connects with default credentials that were never changed after the initial deployment, and exfiltrates 340,000 customer records. The breach notification costs alone exceed what the entire security team earns in a year.

This is not a dramatic hypothetical. It is the pattern behind a significant number of database breaches. The database was never meant to be internet-facing, and nobody thought of it as "exposed." But reachable from a /16 subnet means reachable from every device on that subnet, up to 65,534 hosts, including compromised ones.

Why databases are different from other services

Web servers are designed to handle connections from untrusted sources. That is their entire purpose. Load balancers, API gateways, and CDN endpoints all expect traffic from anywhere. Databases are fundamentally different. They store your most sensitive data, accept powerful query languages that can read or modify anything, and were architecturally designed to sit behind application servers, never to face users or broad networks directly.

The ports involved are well-known and consistently targeted:

  • MSSQL (TCP 1433) powers most enterprise Windows environments. SQL Server supports linked servers, xp_cmdshell for OS command execution, and CLR assemblies that can run arbitrary code. An attacker with database access can often escalate to operating system access.
  • MySQL (TCP 3306) and PostgreSQL (TCP 5432) are the backbone of most web applications. Both support file read/write operations and, in certain configurations, operating system command execution through extensions or user-defined functions.
  • Oracle (TCP 1521) runs critical enterprise applications. Its TNS listener has a history of vulnerabilities, and Oracle databases often hold the most sensitive financial and HR data in the organisation.
  • MongoDB (TCP 27017) ships with no authentication enabled by default. Until version 3.6, it listened on all network interfaces with no access control. Thousands of MongoDB instances were ransomed in 2017 because they were deployed with defaults and reachable from the internet.
  • Redis (TCP 6379) is an in-memory data store that also ships with no authentication by default and no encryption. Exposed Redis instances have been used to write SSH keys to servers, deploy cryptominers, and establish persistent backdoors, all without any credentials.
  • Elasticsearch (TCP 9200) exposes a REST API over HTTP with no authentication in its default configuration. An exposed Elasticsearch instance means anyone who can reach port 9200 can query, modify, or delete every index. Billions of records have been exposed through misconfigured Elasticsearch instances.

The flat network problem

The common thread in database exposure incidents is not that organisations intentionally put databases on the internet. Most do not. The problem is the flat network, an architecture where network segmentation is minimal or absent, and most internal hosts can reach most other internal hosts on any port.

In a flat network, the office Wi-Fi, the developer workstations, the CI/CD build servers, the HR department laptops, and the production database servers all share the same broad network space. Firewalls exist at the perimeter, but internal traffic flows freely. When someone asks "can our office subnet reach the database," the answer is yes, along with everything else on the network.

This architecture made sense 20 years ago when the threat model assumed that everything inside the perimeter was trusted. It does not survive contact with modern threats. Phishing, supply chain attacks, insider threats, and compromised endpoints all originate from inside the network. If a single compromised workstation can reach every database in the environment, your perimeter firewall is protecting nothing that matters.

How breaches actually happen through database ports

The attack patterns are consistent and well-documented. Understanding them helps explain why even seemingly low-risk internal exposure is dangerous.

The Redis and MongoDB pattern. Automated scanners continuously sweep network ranges looking for Redis on 6379 and MongoDB on 27017. When they find an instance with no authentication (which is the default for both), the attack is trivial. For Redis, the attacker writes a cron job or an SSH authorised key directly through the Redis command interface. For MongoDB, they dump every collection, delete the data, and leave a ransom note. Exposed instances are typically compromised within hours of becoming reachable. In some documented cases, the time from exposure to compromise was under 30 minutes.

The Elasticsearch data harvest. Exposed Elasticsearch instances do not require exploitation in the traditional sense. The attacker simply queries the API. A single GET request to the _cat/indices endpoint lists every index. Another request dumps the contents. Researchers have found Elasticsearch instances exposing patient medical records, financial transactions, user credentials, and government identification numbers, all accessible with a web browser and no authentication.

The SQL Server escalation chain. An attacker who gains access to MSSQL with a privileged account can enable xp_cmdshell and execute operating system commands as the SQL Server service account. From there, they can move laterally through the network, dump credentials from memory, and compromise additional systems. The database is not the end goal. It is the pivot point for a broader compromise.

The slow credential attack. For databases that do require authentication, attackers use low-and-slow brute-force attacks that stay below lockout thresholds. Against MySQL, they might try 3 passwords per hour per account, cycling through common defaults: root with no password, root/root, admin/admin, dbuser/dbuser. At that rate, account lockout policies never trigger, and most monitoring systems never alert. Given enough time, and with a broad enough source range allowed by the firewall, they find an account that works.
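The arithmetic behind low-and-slow attacks is easy to check. A quick sketch (the lockout threshold and window are illustrative values, not taken from any specific product):

```python
# Low-and-slow brute force: stay under a lockout policy of
# N failures within a rolling window (illustrative numbers).
ATTEMPTS_PER_HOUR = 3    # attacker's pace per account
LOCKOUT_FAILURES = 5     # lockout after 5 failures...
LOCKOUT_WINDOW_MIN = 15  # ...within 15 minutes

# Failures the attacker accumulates inside one lockout window.
failures_in_window = ATTEMPTS_PER_HOUR * LOCKOUT_WINDOW_MIN / 60
print(f"failures per {LOCKOUT_WINDOW_MIN}-min window: {failures_in_window:.2f}")
# Well under the threshold of 5, so lockout never triggers.

# Total guesses the attacker gets per account over 90 days.
guesses = ATTEMPTS_PER_HOUR * 24 * 90
print(f"guesses per account in 90 days: {guesses}")
```

At 0.75 failures per window against a threshold of 5, the policy never fires, yet the attacker still gets thousands of guesses per account over a quarter.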

What proper segmentation looks like

The solution is not complicated in concept, even if it takes effort to implement. Database servers should only be reachable from the application tier that legitimately needs them. Everything else should be blocked.

```mermaid
flowchart TD
    subgraph flat["Flat Network (Risky)"]
        direction TB
        W1[Web Server] --> DB1[(Database)]
        A1[App Server] --> DB1
        D1[Developer Laptop] --> DB1
        H1[HR Workstation] --> DB1
        C1[Contractor VPN] --> DB1
    end
    subgraph tiered["Tiered Architecture (Secure)"]
        direction TB
        W2[Web Server] -->|HTTPS 443| A2[App Server]
        A2 -->|TCP 3306 only| DB2[(Database)]
        D2[Developer Laptop] -.->|Blocked| DB2
        H2[HR Workstation] -.->|Blocked| DB2
    end
    style DB1 fill:#d4534b,color:#fff
    style D1 fill:#d4534b,color:#fff
    style H1 fill:#d4534b,color:#fff
    style C1 fill:#d4534b,color:#fff
    style DB2 fill:#4a8c6f,color:#fff
    style A2 fill:#4a8c6f,color:#fff
    style D2 fill:#c9913e,color:#fff
    style H2 fill:#c9913e,color:#fff
```

In the tiered model, a firewall separates each tier. The web tier can reach the application tier on specific ports. The application tier can reach the database tier on specific database ports. Nothing else can reach the database directly. Developer workstations, corporate laptops, and contractor VPNs are all blocked from direct database access.

When a developer genuinely needs to query the database for troubleshooting, they connect through a designated database administration tool in the application tier, a bastion host with session logging, or a read-only replica in a non-production segment. The key principle is that no ad-hoc direct connection from a broad source range should ever reach a production database.
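A tiered policy like this boils down to a single question per flow: is the source in the application tier? A minimal sketch of that check (the subnet and host addresses are hypothetical, reusing the article's example IPs):

```python
import ipaddress

# Hypothetical tiers: only the app tier may reach the database port.
APP_TIER = ipaddress.ip_network("10.50.10.0/24")
DB_HOST = ipaddress.ip_address("10.50.12.8")
DB_PORT = 3306

def db_access_allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Permit the database port only from the application tier."""
    if ipaddress.ip_address(dst_ip) != DB_HOST or dst_port != DB_PORT:
        return True  # not a database flow; other rules decide
    return ipaddress.ip_address(src_ip) in APP_TIER

# An app server is allowed; an office laptop on 10.20.0.0/16 is not.
print(db_access_allowed("10.50.10.7", "10.50.12.8", 3306))   # True
print(db_access_allowed("10.20.33.41", "10.50.12.8", 3306))  # False
```

Real firewalls evaluate ordered rule lists rather than a function call, but the membership test is the same: a specific application subnet in, everything else out.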

The compliance dimension

Every major compliance framework treats database exposure as a significant finding, and for good reason.

netbobr maps these requirements to specific rule patterns. PCI-NET-050 flags database ports accessible from broad source ranges because PCI DSS requires that access to systems storing cardholder data be restricted to only those individuals and systems with a legitimate business need. A /16 office subnet is never a legitimate business need for database access.

CIS-NET-022 identifies direct database access from non-application sources. The CIS Controls call for strict network segmentation between different trust levels, and a workstation subnet connecting directly to a database server violates that boundary. CIS-NET-033 goes further and flags any database port exposed to the internet, which should never occur under any circumstances but still shows up in firewall rulesets more often than anyone would like to admit.

NIST-NET-043 covers database ports permitted to reach external destinations, which catches the reverse scenario: a database server with outbound access to the internet. This is a common exfiltration path. If a compromised database server can reach the internet, the attacker can send the stolen data directly out. Blocking outbound access from the database tier is just as important as restricting inbound access to it.

Why "but developers need access" is not an argument

The most common pushback against database segmentation comes from development teams who rely on direct database connections for their daily work. The argument goes: "If I can't connect to the database from my laptop, I can't do my job." This concern is real and deserves a thoughtful answer, but the answer is never "open the production database to the office network."

Local development databases. Most development work should happen against a local database instance running on the developer's machine or in a containerised environment. This requires no firewall rules at all and gives the developer full control over their test data without any risk to production.

Non-production replicas in a dev segment. For testing that requires realistic data volumes or schema complexity, provision a database replica in a dedicated development network segment. Developers can reach this replica freely. If it gets compromised, it contains synthetic or anonymised data and has no path to production systems.

Database administration tools through a bastion. When a database administrator genuinely needs to run queries against production, they should connect through a bastion host in the application tier. The bastion logs every query, enforces multi-factor authentication, and provides a controlled access path. The firewall rule allows the bastion host's IP to reach the database, not the DBA team's entire subnet.

Connection pooling from the application tier. Applications should connect to databases through connection pools managed by the application server, not through individual developer connections. The application server handles authentication, connection limits, and query logging. The firewall rule for database access references the application servers' IP addresses, typically two or three hosts, not a broad subnet.

Encrypted connections are not optional

Even within a properly segmented network, database connections should be encrypted. MySQL, PostgreSQL, and MSSQL all support TLS encryption for client connections. Enabling it prevents an attacker who gains access to the application tier's network segment from sniffing database credentials and query results out of plaintext traffic.

The argument against database TLS is usually performance. And it is true that TLS adds overhead: roughly a 2-5% CPU increase for most workloads, plus slightly higher latency at connection establishment. But the alternative is sending every password, every query, and every result set across the network in cleartext. In an environment where you have gone to the trouble of segmenting your database tier behind firewalls, sending cleartext traffic across that segment undermines the very protection you built.
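Server-side, enforcing encryption is typically a one-line setting. Illustrative fragments (the option names are the documented ones for MySQL 5.7.8+ and PostgreSQL; file locations and certificate paths vary by deployment):

```ini
# my.cnf — reject any client connection that does not use TLS
[mysqld]
require_secure_transport = ON

# postgresql.conf — enable TLS for client connections
ssl = on
ssl_cert_file = 'server.crt'   # relative to the data directory
ssl_key_file = 'server.key'
```

Pair the server setting with client-side verification (for example, `sslmode=verify-full` in PostgreSQL connection strings) so clients refuse to talk to an impostor, not just refuse plaintext.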

For Redis, which historically had no TLS support at all, version 6.0 added native TLS. If you are running Redis in production with sensitive data and have not enabled TLS, the traffic between your application servers and Redis is readable by anyone with access to the network segment, which after a compromise includes the attacker.

Finding the rules that create the exposure

The challenge for most organisations is not understanding that database ports should be segmented. It is finding the specific firewall rules in a ruleset of hundreds or thousands that create unwanted exposure. Rules accumulate over years. Original justifications are forgotten. Server names change but the underlying IP addresses in the rules do not get updated.

This is where netbobr provides practical value. Upload your firewall ruleset and it identifies every rule that permits database port access from sources that should not have it. PCI-NET-050 finds the rules allowing MSSQL from a broad office subnet. CIS-NET-033 flags the rule someone added three years ago allowing PostgreSQL from an IP range that turned out to include an internet-facing DMZ. NIST-NET-043 catches the outbound rule that lets the database server reach the internet on any port.

The findings come prioritised by risk. A rule allowing MongoDB from the internet is critical. A rule allowing MySQL from a slightly-too-broad application subnet is medium. You get a clear picture of where to focus remediation effort rather than auditing every rule manually.

A realistic remediation path

Segmenting database access in an environment that currently runs flat takes planning. Attempting to lock down everything at once will break applications and create an emergency that results in all the new rules being rolled back. Here is a more sustainable approach.

Map your database flows first. Before changing any firewall rule, capture what is actually connecting to your databases. Enable connection logging on the database servers for two weeks. The output will show you every source IP, destination port, and connection frequency. You will almost certainly discover connections you did not know about: automated jobs, monitoring systems, backup agents, and sometimes very old applications that nobody remembers deploying.

Identify legitimate application sources. From the connection logs, separate the traffic into application servers (which should keep access), administrative tools (which should go through a bastion), and everything else (which should be blocked). "Everything else" will include developer laptops, CI/CD systems connecting to production instead of staging, and monitoring tools that could use a read-only replica instead.
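That triage step can be a small script over the connection log. A sketch, assuming the log format and tier assignments below (all addresses are hypothetical, reusing the article's example ranges):

```python
import ipaddress
from collections import Counter

# Tier assignments discovered during the mapping phase (hypothetical).
APP_SERVERS = ipaddress.ip_network("10.50.10.0/24")
BASTION = ipaddress.ip_address("10.50.11.5")

def classify(src_ip: str) -> str:
    """Sort a logged source into keep / bastion / block buckets."""
    ip = ipaddress.ip_address(src_ip)
    if ip == BASTION:
        return "bastion"
    if ip in APP_SERVERS:
        return "keep (app tier)"
    return "block"

# Source IPs pulled from two weeks of database connection logs.
log_sources = ["10.50.10.7", "10.50.10.8", "10.20.33.41",
               "10.50.11.5", "10.20.7.19", "10.50.10.7"]
print(Counter(classify(ip) for ip in log_sources))
```

The "block" bucket is the interesting output: each entry there is either a rule to remove or an application owner to talk to before you remove it.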

Build the bastion path before closing the direct path. Give database administrators a bastion host with the tools they need before you remove their direct access. If you take away access first and provide the alternative second, you will face resistance that derails the project.

Phase the firewall changes. Start by adding explicit allow rules for the legitimate application sources. Then add the deny rules that block everything else. Monitor for a week. Fix the applications that break. Then move to the next database server. A phased rollout over 6 to 8 weeks is far more likely to succeed than a single change window that tries to segment every database at once.

The cost of leaving databases reachable

The maths on database exposure is stark. The average cost of a data breach involving database compromise is measured in millions, factoring in incident response, legal costs, regulatory fines, notification expenses, and reputational damage. The cost of proper segmentation (deploying bastion hosts, tightening firewall rules, enabling TLS) is a fraction of that.

More importantly, database exposure is one of the few risks where the fix is entirely within your control. You do not need to wait for a vendor patch. You do not need to buy new hardware. You need to change firewall rules so that database ports are only reachable from the hosts that legitimately need them, and verify with a tool like netbobr that no rule in your environment quietly contradicts that intent.

The databases hold your most valuable data. The firewall rules around them deserve to be the most carefully reviewed rules in your entire ruleset. If they are not, that is the gap to close next.