
How We Cut AWS Bills by 43% Without Touching a Single Workload

8 min read · March 5, 2026 · Laniakea Engineering Team

Most cloud cost problems aren't architecture problems — they're configuration problems. After auditing dozens of AWS environments, we've found the same five issues showing up again and again. They're costing companies millions every year, and the best part? None of them require touching a single workload. Just configuration fixes.

This is the playbook we use to eliminate them, starting on day one.

Average spend reduction: 43%
Total savings over the last 12 months: $4.2M
Typical time to first savings: 14 days

The Five Cost Patterns We Always Find

1. Idle and Underutilized EC2 Instances

This is the most common pattern we see. In most fleets we audit, 10-25% of instances run at less than 5% CPU utilization. These are instances that were spun up for a temporary project, a test that never got cleaned up, or a "just in case" deployment that turned into the default.

The fix is ruthless: analyze 30 days of CloudWatch metrics, identify everything below 5% CPU and 10% network utilization, and schedule it for termination. Set a 14-day warning period so teams can reclaim instances if needed. In 90% of cases, nothing breaks.
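The selection logic is simple enough to sketch. This is an illustration of the thresholds above applied to pre-aggregated 30-day stats, not a complete tool; a real implementation would pull the averages from CloudWatch first, and the `InstanceStats` shape and sample fleet are our own invention.

```python
from dataclasses import dataclass

@dataclass
class InstanceStats:
    instance_id: str
    avg_cpu_pct: float        # 30-day average CPUUtilization
    avg_network_pct: float    # network throughput as % of the instance's baseline

def termination_candidates(fleet, cpu_threshold=5.0, network_threshold=10.0):
    """Flag instances below BOTH thresholds, per the idle criteria above."""
    return [
        s.instance_id
        for s in fleet
        if s.avg_cpu_pct < cpu_threshold and s.avg_network_pct < network_threshold
    ]

fleet = [
    InstanceStats("i-0aaa", 2.1, 4.0),    # idle on both axes: flag it
    InstanceStats("i-0bbb", 3.9, 35.0),   # low CPU but busy network: keep
    InstanceStats("i-0ccc", 61.0, 80.0),  # clearly in use: keep
]
print(termination_candidates(fleet))  # → ['i-0aaa']
```

Requiring both low CPU and low network matters: plenty of proxies and caches idle on CPU while moving real traffic.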

2. Oversized RDS Instances

We regularly see databases running on db.r5.8xlarge instances that could run on db.r5.2xlarge with no performance impact. The original sizing made sense at launch, but queries got optimized, the workload pattern changed, or the team just never revisited it.

A db.r5.8xlarge in us-east-1 costs approximately $3.26 per hour on-demand. Downsize to db.r5.2xlarge and that drops to $0.815 per hour. Over a year, that's roughly $21,000 saved per instance. Add Reserved Instances on top of the right-sizing and the savings multiply.
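The arithmetic is worth making explicit (the hourly rates are the article's us-east-1 figures; check current AWS pricing before relying on them):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

r5_8xl = 3.26   # db.r5.8xlarge, $/hr on-demand, us-east-1
r5_2xl = 0.815  # db.r5.2xlarge, a quarter of the size and the price

annual_savings = (r5_8xl - r5_2xl) * HOURS_PER_YEAR
print(f"${annual_savings:,.0f} saved per instance per year")  # → $21,418 saved per instance per year
```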

3. Orphaned EBS Volumes and Snapshots

EC2 instances get terminated, but their attached EBS volumes and associated snapshots persist. In nearly every audit, we find forgotten volumes billing at $0.10 per GB per month. A single 2TB volume that nobody remembers attaching costs roughly $200 per month. Across 50 unattached volumes, you're looking at thousands per month.

The audit command we use:

aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].{ID:VolumeId,Size:Size}' \
  --output table

Run that, review the list with your infrastructure team, and delete what you don't need. Do the same with old snapshots and you'll be surprised what you find.
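To put a dollar figure on the list before the review meeting, you can price the same query's output (run with `--output json` instead of `--output table`). The sample payload below is invented for illustration, and the $0.10/GB-month rate is the gp2 figure quoted above; gp3 and other volume types bill differently.

```python
import json

GP2_PRICE_PER_GB_MONTH = 0.10  # us-east-1 gp2 rate, per the figure above

# Trimmed sample of what the describe-volumes query returns as JSON
describe_volumes_output = json.loads("""
[
  {"ID": "vol-0aaa", "Size": 2048},
  {"ID": "vol-0bbb", "Size": 500}
]
""")

monthly_waste = sum(v["Size"] for v in describe_volumes_output) * GP2_PRICE_PER_GB_MONTH
print(f"${monthly_waste:,.2f}/month in unattached volumes")  # → $254.80/month in unattached volumes
```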

4. Data Transfer Costs Hiding in Plain Sight

Cross-AZ traffic within a region costs $0.01 per GB in each direction. NAT Gateway processing sits at $0.045 per GB. Neither looks expensive until you realize a single misbehaving application sending traffic across AZs can cost $500+ per day in transfer alone.

Meanwhile, gateway VPC endpoints for S3 and DynamoDB are free, and interface endpoints for other services run about $7 per endpoint per month. The math is simple: if you're moving more than a couple hundred GB per month through a NAT Gateway to reach S3, an endpoint pays for itself and then some. With a free gateway endpoint, the savings start at the first gigabyte.
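For the paid case, the break-even is a one-liner. This sketch assumes interface-endpoint pricing of roughly $7/month plus $0.01 per GB processed against the NAT Gateway's $0.045 per GB; gateway endpoints for S3 and DynamoDB skip the fixed cost entirely, so they win at any volume.

```python
NAT_PER_GB = 0.045          # NAT Gateway processing, $/GB
ENDPOINT_MONTHLY = 7.00     # interface endpoint fixed cost, approx $/month
ENDPOINT_PER_GB = 0.01      # interface endpoint processing, $/GB

# Break-even volume: fixed endpoint cost / per-GB saving versus NAT
breakeven_gb = ENDPOINT_MONTHLY / (NAT_PER_GB - ENDPOINT_PER_GB)
print(f"interface endpoint breaks even at {breakeven_gb:.0f} GB/month")  # → ... 200 GB/month
```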

5. Inefficient Savings Plans and RI Coverage

Most organizations buy Savings Plans and Reserved Instances aggressively, trying to discount everything. The problem is that coverage beyond your stable baseline creates waste: you end up committing to capacity for spiky workloads that your scaling policies should be shedding during off-peak hours.

The rule: buy Savings Plans only against your stable baseline — the minimum capacity you run 24/7/365. That's typically 70% of peak. Everything above that should be on-demand so your scaling policies actually save money.
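One way to operationalize the rule is to commit only to the floor of your usage curve, the capacity that runs every hour of the period. The `stable_baseline` helper and the sample usage series are ours, shown only to make the idea concrete:

```python
def stable_baseline(hourly_usage, coverage=1.0):
    """Commit only to the usage floor: capacity that runs every single hour."""
    return min(hourly_usage) * coverage

# A week of hourly vCPU counts: 70 overnight, peaking at 100 in business hours
usage = [70] * 100 + [85] * 40 + [100] * 28
print(stable_baseline(usage))  # → 70
```

Here the floor is 70 vCPUs against a 100 vCPU peak, matching the "typically 70% of peak" rule of thumb; the `coverage` knob lets you commit slightly under the floor for extra safety margin.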

The Audit Process We Use

We start every engagement with a methodical audit:

  1. Pull Cost and Usage reports for the last 90 days
  2. Map spend by service, then by instance type, then by running patterns
  3. Cross-reference with actual CloudWatch utilization data
  4. Identify anomalies and low-hanging fruit
  5. Validate with your infrastructure team before making changes
  6. Implement fixes in low-risk order

Cost optimization is not a project — it's a practice. The wins we find in month one are real, but they're also the obvious ones. The sustainable 20-30% reduction comes from building cost awareness into your normal operating procedures: tagging policies that let you track spend by team, automated cleanup rules for temporary resources, and quarterly audits that catch drift before it becomes expensive.

The Typical Timeline

Day 1-3: Audit discovery. We're analyzing your actual spend and usage patterns.

Day 4-7: Validation with your team. We're confirming that each identified issue is actually safe to fix.

Day 8-14: Implementation. We're making changes in order of safety and impact.

Day 14+: Verification. We're watching CloudWatch, ensuring nothing broke, and documenting what changed and why.

Most clients see measurable savings within the first 14 days. The full 43% reduction happens over 4-8 weeks as we handle the more complex optimizations.

Curious what's hiding in your AWS bill?

We'll run the same audit we describe here on your environment, find the real opportunities, and share exactly what they are. No obligation. No sales pressure. Just honest analysis and clear next steps.

Get Your Free Cloud Audit