The $12,000 AWS Bill That Changed Everything

TL;DR

I accidentally pushed AWS credentials to a public repository. Within 6 hours, attackers spun up dozens of high-powered EC2 instances to mine cryptocurrency. The bill hit $12,000 before I noticed. AWS eventually credited back most of it, but only after a stressful week of support tickets and security audits.

The Setup

I was building a side project, a simple image processing app. Nothing fancy. I needed AWS S3 for storage and EC2 for some compute tasks. Like many developers, I was working fast and cutting corners to ship quickly.

The credentials were stored in a config file. I had added it to .gitignore, or so I thought. One typo in the ignore pattern, and the file was being tracked.
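This class of mistake is easy to catch before it bites: `git check-ignore` exits non-zero when no pattern matches a file. A minimal sketch reproducing the typo scenario (file names are illustrative):

```shell
# Reproduce the failure mode: a one-character typo in .gitignore
# means the secrets file is NOT matched and will be committed.
repo=$(mktemp -d)
cd "$repo" && git init -q
echo 'confg.json' > .gitignore              # typo: should be config.json
echo '{"aws_secret": "..."}' > config.json
# check-ignore exits non-zero when no pattern matches the file
git check-ignore config.json || echo "DANGER: config.json is NOT ignored"
```

Running `git check-ignore -v <file>` also prints which pattern matched (or nothing at all), which makes a typo like this obvious immediately.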

"It's just a side project," I rationalized. "Who's going to find my random GitHub repo?"

Turns out, everyone. Or at least, every automated bot scanning for exposed AWS keys.

The Discovery

I was checking email on a Saturday morning when I saw the AWS Cost Alert. The message said my account had exceeded the billing threshold I'd set.

My heart rate doubled when I logged into the AWS Console.

$12,347.82
Current month charges

My average monthly bill was around $15. This was more than 800 times that.

What the Attackers Did

I immediately went to the EC2 dashboard. What I found was terrifying:

  • 47 EC2 instances running in regions I'd never used
  • All were p3.16xlarge instances (the most expensive GPU instances)
  • They were running in us-east-1, eu-west-1, ap-northeast-1, and several other regions
  • Total running time: approximately 18 hours

Each p3.16xlarge instance costs about $24.48 per hour on demand. Forty-seven of them running at once burn through more than $1,100 an hour; because the fleet ramped up over the roughly 18-hour window rather than all launching at once, the total came to about $12,000 instead of the $20,000-plus a full-duration run would have cost. Either way, a very expensive weekend.

Why GPU instances? Cryptocurrency mining is computationally intensive. GPU instances like the p3 series are perfect for mining operations. Attackers specifically target these because they're expensive and effective for their purposes.

The Timeline

Friday 4:23 PM

I pushed code to GitHub, including the config file with AWS credentials.

Friday 4:31 PM

First unauthorized API call detected in CloudTrail logs.

Friday 4:35 PM

First batch of EC2 instances launched in us-east-1.

Friday 5:00 PM

Instances spreading to other regions. Total: 12 instances.

Saturday 10:30 AM

I discovered the breach and terminated all instances.

The attackers found my credentials within 8 minutes of the push. That's how fast these automated systems work.

The Response

Immediate Actions

  1. Terminated all EC2 instances in every region
  2. Revoked the compromised access keys in IAM
  3. Enabled MFA on the root account (should have done this day one)
  4. Changed my AWS root password
  5. Reviewed CloudTrail logs for any other suspicious activity
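Step 1 is tedious to do by hand across every region. A sketch of the kind of cleanup loop I mean, wrapped in a function so nothing runs until you call it with valid admin credentials (the key ID at the end is a placeholder):

```shell
# Terminate every running instance in every enabled region,
# then deactivate the compromised access key.
nuke_all_instances() {
  for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
    ids=$(aws ec2 describe-instances --region "$region" \
          --filters Name=instance-state-name,Values=running \
          --query 'Reservations[].Instances[].InstanceId' --output text)
    if [ -n "$ids" ]; then
      aws ec2 terminate-instances --region "$region" --instance-ids $ids
    fi
  done
}
# nuke_all_instances
# aws iam update-access-key --access-key-id AKIA................ --status Inactive
```

Deactivating the key (rather than deleting it) keeps the key ID around, which you'll want for the support case.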

Contacting AWS Support

I opened a support case explaining the situation. AWS has dealt with this before (many times), and they have a process:

  1. They reviewed my CloudTrail logs to verify unauthorized access
  2. They confirmed the activity matched known crypto-mining patterns
  3. They issued a credit for the fraudulent charges

The process took about a week. I was sweating the entire time, not knowing if I'd be stuck with a $12,000 bill.

Good news: AWS typically credits back charges from credential theft, especially for first-time incidents. But it's not guaranteed, and they may not cover the full amount if they detect negligence.

What I Did Wrong

Looking back, I made several critical mistakes:

1. Billing Alerts Set Too High

I had set a billing alert, but at $50. By the time it triggered, the charges were already in the thousands. I should have set multiple alerts at lower thresholds ($5, $10, $25).
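Layered thresholds can be scripted with one CloudWatch alarm per level, so the cheapest one fires first. A sketch, assuming billing metrics are enabled in the account (the AWS/Billing metric only exists in us-east-1 and must be switched on in the billing console); the SNS topic ARN is a placeholder:

```shell
# One alarm per dollar threshold on the estimated-charges metric.
create_billing_alarm() {
  threshold="$1"   # dollars
  aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name "billing-over-${threshold}-usd" \
    --namespace AWS/Billing --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold "$threshold" --comparison-operator GreaterThanThreshold \
    --alarm-actions "$SNS_TOPIC_ARN"   # placeholder: your SNS topic ARN
}
# for t in 5 10 25 50; do create_billing_alarm "$t"; done
```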

2. Root Credentials in Code

I was using the root account's access keys. This gave attackers unlimited access to do anything in my account. I should have created an IAM user with limited permissions.
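Creating that limited IAM user takes a few CLI calls. A sketch, assuming the app only needs to read and write one S3 bucket (the user, policy, and bucket names here are made up):

```shell
# An IAM user whose leaked keys could touch one bucket, not the account.
create_app_user() {
  aws iam create-user --user-name image-app
  cat > /tmp/image-app-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-image-app-bucket/*" }
  ]
}
EOF
  aws iam put-user-policy --user-name image-app \
    --policy-name image-app-s3 \
    --policy-document file:///tmp/image-app-policy.json
  aws iam create-access-key --user-name image-app
}
# create_app_user   # then delete the root account's access keys entirely
```

With a policy like this, a stolen key can read and write one bucket. It cannot launch a single EC2 instance.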

3. No MFA

Multi-factor authentication wasn't enabled. MFA alone wouldn't have blocked API calls made directly with the leaked access keys, but it protects console access, and IAM policies can be written to deny sensitive actions unless the caller has authenticated with MFA.

4. All Regions Enabled

AWS has regions I've never used and never will use. The attackers exploited this by launching instances in regions I wouldn't think to check. Newer opt-in regions can be disabled outright; the original default regions can't be, but policies can deny activity in them.

The Security Overhaul

After this incident, I implemented these changes:

  • IAM users only: Never use root credentials. Create IAM users with minimum necessary permissions.
  • MFA everywhere: Required on root account and all IAM users.
  • AWS Organizations: Created separate accounts for different projects.
  • Service Control Policies: Applied through Organizations to restrict which services and regions each account can use.
  • CloudWatch Alarms: Alerts for unusual EC2 activity, not just billing.
  • Pre-commit hooks: Scan for AWS credentials before any commit.
  • AWS Secrets Manager: No more credentials in code, ever.
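The pre-commit hook in that list can be as small as a grep over the staged diff: AWS access key IDs start with `AKIA` followed by 16 uppercase alphanumerics. A minimal sketch (dedicated scanners like git-secrets or gitleaks are more thorough):

```shell
# Save as .git/hooks/pre-commit and `chmod +x` it.
scan_staged_for_keys() {
  if git diff --cached -U0 | grep -E 'AKIA[0-9A-Z]{16}'; then
    echo "Possible AWS access key in staged changes; commit blocked." >&2
    return 1
  fi
}
# scan_staged_for_keys || exit 1
```

A pattern match only catches key IDs, not secret keys or other credentials, which is why Secrets Manager (keeping credentials out of the repo entirely) is the stronger fix.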

The Silver Lining

This was an expensive lesson ($12,000 is scary, even if most of it was credited back). But it fundamentally changed how I think about cloud security.

Before this, security felt optional. Something I'd "do later" when the project got serious. Now I know that attackers don't wait for your project to be serious. They're scanning constantly, and they'll exploit any credential they find within minutes.

Will AWS always refund fraudulent charges?

Not always. AWS reviews each case individually. They're more likely to credit first-time incidents and cases where you respond quickly. Repeated incidents or evidence of negligence may not be credited.

How can I limit the damage if my credentials are stolen?

Use IAM users with minimal permissions instead of root credentials. Enable Service Control Policies to restrict which services can be used. Set up billing alerts at multiple low thresholds. Disable unused AWS regions.
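The Service Control Policy part of that answer can be sketched as follows. SCPs apply through AWS Organizations, and a production policy needs `NotAction` carve-outs for global services like IAM; the region list here is illustrative:

```shell
# Deny any API call outside the regions the project actually uses.
cat > /tmp/deny-other-regions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Sid": "DenyOutsideAllowedRegions",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "aws:RequestedRegion": ["us-east-1", "eu-west-1"] }
      } }
  ]
}
EOF
# aws organizations create-policy --name deny-other-regions \
#   --type SERVICE_CONTROL_POLICY \
#   --content file:///tmp/deny-other-regions.json
```

With this attached, the 47-instance spread across ap-northeast-1 and friends would have been denied at the API level, regardless of what the stolen credentials were allowed to do.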

How fast do attackers find exposed credentials?

Studies show exposed AWS keys are typically discovered within 1-10 minutes. Automated bots constantly scan GitHub and other code hosting platforms for credential patterns.

What should I do immediately if I find exposed AWS credentials?

Revoke the credentials in IAM immediately. Check CloudTrail for any unauthorized activity. Terminate any resources you didn't create. Change your password and enable MFA if not already done. Contact AWS support if you see unauthorized charges.
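The first two steps map to two CLI calls. A sketch, wrapped in a function so nothing runs until invoked (the key ID argument is whatever leaked; deactivate rather than delete so the ID is still available for the support case):

```shell
# Deactivate the leaked key, then pull the CloudTrail events it made.
contain_leaked_key() {
  key_id="$1"
  aws iam update-access-key --access-key-id "$key_id" --status Inactive
  aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=AccessKeyId,AttributeValue="$key_id" \
    --max-results 50
}
# contain_leaked_key AKIA................
```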
