The Problem DBAs Inherit
AWS Landing Zone and AWS Control Tower have pushed most mid-size-and-up organizations into multi-account architectures: separate accounts for logging, security, shared services, production workloads, and non-production workloads. The account-per-boundary pattern is the right answer — for isolation, cost allocation, and blast radius containment.
The team that inherits the friction in this architecture is almost always the database team: the database has to be reachable from workloads in other accounts, and the networking team that designed the Landing Zone usually didn't optimize for database access patterns.
This article walks through the three patterns that work, when to use each, and the audit implications that separate a compliant architecture from one that will cost you a quarter of remediation work.
Pattern 1: VPC Peering
VPC peering is the simplest cross-account networking primitive. Two VPCs, one peering connection, route table entries on both sides, security group rules referencing the peer VPC CIDR.
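As a sketch, the moving parts of one peering connection can be written down as plain data: the connection itself, a route on each side, and a security group rule on the database side referencing the peer CIDR. `build_peering_plan` and the `pcx-...` placeholder are illustrative, not part of any AWS SDK.

```python
# Illustrative model of what one VPC peering connection requires on each side.
# build_peering_plan is a hypothetical helper, not an AWS API.

def build_peering_plan(consumer_cidr: str, database_cidr: str, db_port: int = 5432) -> dict:
    """Return the routes and SG rule both sides need for one peering pair."""
    return {
        # Each VPC's route table needs a route to the peer CIDR via the peering connection.
        "consumer_routes": [{"destination": database_cidr, "target": "pcx-..."}],
        "database_routes": [{"destination": consumer_cidr, "target": "pcx-..."}],
        # The database security group admits the consumer VPC's CIDR on the DB port.
        "database_sg_rules": [{"port": db_port, "source": consumer_cidr}],
    }

plan = build_peering_plan("10.0.0.0/16", "10.1.0.0/16")
print(plan["database_sg_rules"][0]["source"])  # 10.0.0.0/16
```

The point of writing it out: every consumer pair repeats all of this, which is why the pattern stops scaling.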
When It's Right
- Small number of account pairs (typically ≤10 peering connections across the org)
- A DR replica in a second account needs access from the primary
- Early-stage organizations before the Landing Zone networking team has landed on a standard
- Temporary migration bridges
Why It Doesn't Scale
VPC peering is non-transitive. If account A peers with account B, and B peers with C, then A cannot reach C through B. You need a direct A-to-C peer. With N accounts, this is N×(N-1)/2 connections. At 20 accounts, that's 190 peering connections. Route tables become unmanageable. Overlap risks compound.
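The full-mesh growth is easy to sanity-check:

```python
def full_mesh_peers(n_accounts: int) -> int:
    """Peering connections needed for any-to-any reachability among N VPCs."""
    return n_accounts * (n_accounts - 1) // 2

for n in (5, 10, 20):
    print(n, full_mesh_peers(n))  # 20 accounts -> 190 connections
```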
If your organization is growing past 10 accounts, don't double down on peering. Migrate to a Transit Gateway.
Pattern 2: Transit Gateway (The Default for Landing Zones)
AWS Transit Gateway is the hub-and-spoke networking primitive that AWS Landing Zone's reference architecture deploys by default. One TGW in a central networking account, every workload VPC attaches, route tables control who can reach what.
The DBA's View of TGW
From the database team's perspective, TGW changes the provisioning pattern. Rather than building a peering connection per consumer, you request that the consumer VPC be attached to the appropriate TGW route table; once the attachment and route propagation are in place, the consumer can reach the database subnets, gated as always by security groups.
The practical consequence: the networking team controls who can reach your database VPC via route table propagation. This is both a feature (centralized governance) and a friction point (another team's ticket to open).
TGW Route Table Strategy
A pattern that works well for database workloads:
- A "database" TGW route table that all database VPCs associate with
- A "consumer-nonprod" route table that non-prod workloads associate with; routes propagate from non-prod databases only
- A "consumer-prod" route table with stricter propagation rules for production data access
- Deny-by-default; explicit propagation adds access
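The deny-by-default policy above can be modeled as a small decision function. The route-table names and the shape of the propagation sets are assumptions for illustration; real TGW enforcement happens via route table associations and propagations, not application code.

```python
# Sketch of the deny-by-default route-table policy described above.
# Table names and structure are illustrative, not an AWS API.

TGW_ROUTE_TABLES = {
    "consumer-nonprod": {"propagates_from": {"database-nonprod"}},
    "consumer-prod":    {"propagates_from": {"database-prod"}},
}

def can_reach(consumer_table: str, database_env: str) -> bool:
    """Deny by default: a route exists only if the DB env propagates to the table."""
    table = TGW_ROUTE_TABLES.get(consumer_table)
    if table is None:
        return False  # unknown table: no association, no routes
    return f"database-{database_env}" in table["propagates_from"]

print(can_reach("consumer-nonprod", "nonprod"))  # True
print(can_reach("consumer-nonprod", "prod"))     # False: no propagation, no route
```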
Cost Considerations
TGW has two cost components: per-attachment per-hour ($0.05/hr at current pricing, roughly $36/month per attachment), and per-GB data processed ($0.02/GB). For high-throughput database workloads (think bulk ETL across accounts), the data processing charge becomes non-trivial. A 50 TB/month cross-account DB replication workload is $1,000/month in TGW data processing. Model it.
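A back-of-envelope model using the article's numbers (list prices vary by region; check current pricing, and note the article uses decimal TB, i.e. 1 TB = 1,000 GB):

```python
ATTACH_HOURLY_USD = 0.05  # per attachment per hour (list price, region-dependent)
DATA_PER_GB_USD = 0.02    # per GB processed by the TGW

def tgw_monthly_cost(attachments: int, gb_per_month: float, hours: int = 730) -> float:
    """Rough monthly TGW cost: attachment-hours plus data processing."""
    return attachments * ATTACH_HOURLY_USD * hours + gb_per_month * DATA_PER_GB_USD

# Two attachments plus 50 TB/month of cross-account replication:
print(tgw_monthly_cost(attachments=2, gb_per_month=50_000))  # 1073.0
```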
Pattern 3: PrivateLink / VPC Endpoint Services
PrivateLink exposes a service (via a Network Load Balancer in your VPC) as a VPC endpoint service that consumers in other accounts can create interface endpoints against. The consumer never sees your VPC. They only have a private connection to your specific endpoint.
When PrivateLink Is the Right Answer for DBAs
- When you want to expose a database read replica to multiple consumer accounts with minimum VPC visibility
- When the consumer accounts are not under your organization's control (customers, partners)
- When compliance or security mandates strict isolation (the consumer account cannot see your database subnets, CIDRs, or any other VPC artifact)
- When you want to simplify consumer-side networking (no peering, no TGW attachment)
How It Looks in Practice
You put an NLB in front of your database listener (port 3306, 1521, 50000, 5432, etc.). You create a VPC endpoint service backed by that NLB. You add principal ARNs to the endpoint service allow-list. Consumers in other accounts create interface endpoints referencing your service name. DNS resolves to the interface endpoint. Connection flows over AWS backbone, TLS-encrypted.
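The allow-list step is the access-control heart of the pattern. AWS evaluates it server-side when a consumer creates an interface endpoint; the decision it makes can be modeled like this (the ARNs are made-up examples):

```python
# Model of the endpoint-service allow-list check; the real enforcement happens
# inside AWS when the consumer creates an interface endpoint.

ALLOWED_PRINCIPALS = {
    "arn:aws:iam::111122223333:root",           # whole consumer account (example ARN)
    "arn:aws:iam::444455556666:role/etl-role",  # one specific role (example ARN)
}

def connection_permitted(principal_arn: str) -> bool:
    """Interface-endpoint creation succeeds only for allow-listed principals."""
    return principal_arn in ALLOWED_PRINCIPALS

print(connection_permitted("arn:aws:iam::111122223333:root"))  # True
print(connection_permitted("arn:aws:iam::999999999999:root"))  # False
```

Note the grant is by principal ARN, not by IP, which is exactly what the audit checklist later in this article asks for.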
For databases that require connection pooling or proxy-side routing, the NLB can point to RDS Proxy or a PgBouncer/ProxySQL instance. PrivateLink becomes part of the standard database connection architecture.
Latency note: PrivateLink adds ~1 ms of latency for the NLB hop versus a direct VPC peer or TGW path. For most OLTP workloads that's irrelevant. For latency-sensitive high-frequency trading or real-time analytics, measure before you commit — the isolation benefit may not be worth the latency hit.
The Antipattern We See Every Week
Database replicas reached over the public internet with security-group-level IP allowlists. The justification is usually: "the networking team hasn't architected cross-account access yet, this is temporary." Three years later, it's still the architecture.
This will fail your next SOC 2 or PCI audit. Auditors are specifically trained to flag public database endpoints. The remediation work — migrating all consumer applications to an appropriate private path — is almost always larger than the work would have been to do it correctly the first time.
If your database has a public endpoint, or an SG allowlist of public IPs from partner networks, the two-day engagement to move to TGW or PrivateLink is not a luxury. It's a control you don't have yet.
Decision Matrix
| Scenario | Recommended |
|---|---|
| < 5 accounts, stable topology | VPC Peering |
| 5+ accounts, growing org | Transit Gateway |
| Cross-org / partner access | PrivateLink |
| DR replica, second account same org | VPC Peering or TGW |
| Shared read replica to multiple product teams | PrivateLink |
| Public internet + SG allowlist | Never |
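The matrix reduces to a few axes, which makes it easy to encode as a default-picker for provisioning tooling. This is a simplification of the table above (it collapses the DR and shared-replica rows into the main axes), not an official decision procedure:

```python
def recommend(accounts: int, cross_org: bool, public_endpoint: bool = False) -> str:
    """Encode the decision matrix, simplified to its main axes."""
    if public_endpoint:
        return "never"           # public endpoint + SG allowlist is always wrong
    if cross_org:
        return "privatelink"     # partner/customer accounts: strict isolation
    return "vpc-peering" if accounts < 5 else "transit-gateway"

print(recommend(accounts=3, cross_org=False))   # vpc-peering
print(recommend(accounts=12, cross_org=False))  # transit-gateway
print(recommend(accounts=2, cross_org=True))    # privatelink
```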
Audit and Compliance Checklist
- No database has a public endpoint — verified monthly via AWS Config rule
- Security groups do not allow 0.0.0.0/0 on database ports
- Cross-account access paths are documented and reviewed quarterly
- VPC Flow Logs enabled on all database VPCs, centralized in log-archive account
- Database connections are TLS-encrypted in transit
- PrivateLink endpoint policies restrict access by principal ARN, not by IP
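The first two checklist items are mechanically checkable. An AWS Config custom rule (or a one-off audit script) could evaluate a security group like this; the input dict mirrors the shape of the EC2 `describe_security_groups` response, and the port list is an assumption to adjust for your engines:

```python
# Flag ingress rules that expose a database port to the world.
# Input mirrors the EC2 describe_security_groups response shape.

DB_PORTS = {1521, 3306, 5432, 50000}  # Oracle, MySQL, Postgres, Db2 (adjust for your fleet)

def open_db_rules(security_group: dict) -> list:
    """Return ingress rules that allow 0.0.0.0/0 on a database port."""
    findings = []
    for rule in security_group.get("IpPermissions", []):
        ports = range(rule.get("FromPort", 0), rule.get("ToPort", 0) + 1)
        world_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        if world_open and DB_PORTS.intersection(ports):
            findings.append(rule)
    return findings

sg = {"IpPermissions": [
    {"FromPort": 5432, "ToPort": 5432, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443,  "ToPort": 443,  "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]}
print(len(open_db_rules(sg)))  # 1: the world-open Postgres rule fails the check
```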
The Bottom Line
Cross-account database access in AWS is a solved problem — VPC peering for small footprints, Transit Gateway for the majority of multi-account orgs, PrivateLink for strict isolation or cross-org access. The only wrong answers are public endpoints and SG IP allowlists.
If your database team is still defaulting to public endpoints because "the network team hasn't gotten to us yet," that conversation is a two-day engagement. Not a quarter of remediation. The longer it waits, the more expensive the migration becomes.
Auditing your AWS database networking?
We architect cross-account database access for regulated workloads. 30-min scoping call, written recommendation in 5–7 business days.