The Weekend Hack Attempt I Almost Missed

TL;DR

While casually checking analytics on a Sunday morning, I noticed our login endpoint had 50x normal traffic. Someone was running a credential stuffing attack against our app. Because we had rate limiting and account lockouts, the attack failed. But if I hadn't randomly checked that morning, I might never have known it happened.

A Lazy Sunday Check

I don't usually check metrics on weekends. But that Sunday, I was drinking coffee and idly opened our analytics dashboard. Something about the numbers looked off.

Our login endpoint normally gets about 200 requests per hour. The dashboard was showing 10,000+ requests in the last hour. And it was still climbing.

My first thought: "Did we get featured somewhere? Is this a traffic spike?" My second thought: "Wait, why would a traffic spike only hit the login endpoint?"

What I Found

I dug into the logs. The pattern was clear:

  • Thousands of login attempts from a few IP ranges
  • Different email addresses on each attempt
  • Passwords that looked like leaked credentials (common patterns from known breaches)
  • No successful logins from these IPs

This was a credential stuffing attack. Someone had obtained a list of email/password combinations (probably from another site's data breach) and was testing them against our app.
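
The pattern above is easy to spot programmatically. A minimal sketch (in Python, with made-up log tuples; the field names and thresholds are my assumptions, not the app's actual log schema): flag any IP with many failed attempts, each against a different account, and no successes.

```python
from collections import defaultdict

# Hypothetical parsed log entries: (ip, email, login_succeeded).
attempts = [
    ("203.0.113.5", "alice@example.com", False),
    ("203.0.113.5", "bob@example.com", False),
    ("203.0.113.5", "carol@example.com", False),
    ("198.51.100.7", "dave@example.com", True),
]

def flag_stuffing_ips(attempts, min_failures=3):
    """Flag IPs showing the credential-stuffing signature:
    many failures, a different email each time, zero successes."""
    stats = defaultdict(lambda: {"emails": set(), "failures": 0, "successes": 0})
    for ip, email, success in attempts:
        s = stats[ip]
        s["emails"].add(email)
        if success:
            s["successes"] += 1
        else:
            s["failures"] += 1
    return [
        ip for ip, s in stats.items()
        if s["failures"] >= min_failures
        and len(s["emails"]) >= min_failures  # distinct account per attempt
        and s["successes"] == 0
    ]

print(flag_stuffing_ips(attempts))  # → ['203.0.113.5']
```

In a real pipeline you'd feed this from your web server's access logs rather than an in-memory list, but the signature being matched is the same.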

Why It Didn't Work

Three things saved us:

1. Rate Limiting

We had rate limiting on the login endpoint: 5 attempts per IP per minute. After that, requests got a 429 response. This slowed the attack dramatically.
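
A sliding-window limiter like ours can be sketched in a few lines. This is an illustrative in-memory version (my own sketch, not our production code, which would need shared state like Redis behind multiple servers):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5  # matches the 5-per-IP-per-minute limit above

_attempts = defaultdict(deque)  # ip -> timestamps of recent attempts

def allow_login_attempt(ip, now=None):
    """Return False (caller responds with HTTP 429) once an IP
    exceeds MAX_ATTEMPTS within the last WINDOW_SECONDS."""
    now = time.monotonic() if now is None else now
    window = _attempts[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts older than the window
    if len(window) >= MAX_ATTEMPTS:
        return False
    window.append(now)
    return True

# Demo with explicit timestamps: five allowed, then throttled.
for t in range(6):
    print(allow_login_attempt("203.0.113.5", now=float(t)))
# → True five times, then False
```

Because only *allowed* attempts are recorded, the IP recovers once its window of recorded attempts ages out.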

2. Account Lockout

After 5 failed attempts for any email address, that account was temporarily locked for 15 minutes. Even if attackers had valid credentials, they'd trigger lockouts before succeeding.
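
The lockout logic is equally small. A hedged sketch of the idea (again in-memory Python for illustration; a real app would persist this per account):

```python
MAX_FAILURES = 5
LOCKOUT_SECONDS = 15 * 60  # 15-minute lockout, as described above

_failures = {}  # email -> timestamps of recent failed logins

def record_failure(email, now):
    _failures.setdefault(email, []).append(now)

def is_locked(email, now):
    """Locked while MAX_FAILURES or more failures fall inside the window."""
    recent = [t for t in _failures.get(email, []) if now - t < LOCKOUT_SECONDS]
    _failures[email] = recent  # prune expired failures
    return len(recent) >= MAX_FAILURES

for _ in range(5):
    record_failure("victim@example.com", now=0.0)
print(is_locked("victim@example.com", now=1.0))  # → True
```

Note the lockout keys on the *account*, not the IP, which is what makes it effective against distributed attacks that rotate source addresses.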

3. Password Hashing

We use bcrypt with a high cost factor. Each password comparison takes about 100ms. This made the attack much slower than if we'd used a fast hash.
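
We use bcrypt, but the tunable-cost idea is easy to demonstrate with Python's standard library alone. The sketch below uses PBKDF2 as a self-contained stand-in (the iteration count is my assumption; you'd tune it until one hash takes roughly your target latency):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # tune so one hash costs ~100 ms on your hardware

def hash_password(password, salt=None):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("hunter2", salt, digest))  # → False
```

Each verification costs the attacker the same work it costs you, which is exactly the point: at ~100ms per guess, a credential list that would take minutes against a fast hash takes days.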

The result: Over 50,000 login attempts across 6 hours. Zero successful compromises. The security measures we'd implemented (almost as an afterthought) worked exactly as designed.

What I Did Next

Immediate Response

  1. Blocked the attacking IP ranges at the firewall level
  2. Reviewed successful logins from the past 24 hours for anything suspicious
  3. Checked for any accounts with unusual activity patterns
  4. Confirmed that no accounts had been compromised, and therefore no data accessed

Longer-term Changes

  • Added alerting: Automatic notifications when login failures exceed normal thresholds
  • Implemented CAPTCHA: After 3 failed attempts, users must complete a CAPTCHA
  • Added geographic analysis: Login attempts from unusual locations require additional verification
  • Encouraged 2FA: Started prompting users more aggressively to enable two-factor authentication
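
The alerting rule from the first bullet can be as simple as a baseline-multiple check. A sketch (the baseline and multiplier are illustrative assumptions; the 200/hour figure is this endpoint's normal traffic from earlier in the post):

```python
BASELINE_PER_HOUR = 200   # normal login traffic for this endpoint
SPIKE_MULTIPLIER = 5      # how far above baseline before we page someone

def should_alert(failed_logins_last_hour):
    """Fire an alert when failures exceed a multiple of normal traffic."""
    return failed_logins_last_hour > BASELINE_PER_HOUR * SPIKE_MULTIPLIER

print(should_alert(10_000))  # → True  (the Sunday spike)
print(should_alert(150))     # → False (normal background noise)
```

A static threshold like this is crude but catches exactly the scenario in this post; a fancier version would compute the baseline from a rolling average per hour of day.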

The Scary Part

The attack started around 2 AM on Saturday. I didn't notice it until 10 AM on Sunday. That's 32 hours of active attack.

If we hadn't had rate limiting, if we hadn't had account lockouts, if we'd used a fast password hash... the attackers would have had 32 hours to compromise accounts. At their request rate, they could have tested millions of credentials.

The uncomfortable truth: I got lucky. I happened to check metrics on a day I normally don't. Without proper alerting, this attack could have gone unnoticed for days. Detection is just as important as prevention.

Lessons for Other Builders

Rate Limiting Is Non-Negotiable

Every endpoint that involves authentication needs rate limiting. Not just login, but password reset, API token generation, and any endpoint that could be used to enumerate users.

Slow Down Your Password Hashes

Use bcrypt, scrypt, or Argon2 with appropriate cost factors. Yes, it makes legitimate logins slightly slower. But it makes brute force attacks much harder.

Set Up Alerting

Monitoring without alerting is just data collection. Set up alerts for unusual patterns: spike in failed logins, requests from new geographic regions, traffic at unusual hours.

Assume Attacks Will Happen

It's not if, it's when. Every public-facing app gets attacked. The question is whether you're prepared.

The meta-lesson: Security features often feel like overkill when you implement them. "Do we really need rate limiting? We only have 500 users." Yes. Yes, you do. Because when an attack comes, it won't care how many users you have.

FAQ

What is credential stuffing?

Credential stuffing is when attackers use lists of username/password combinations (usually from other data breaches) to try to access accounts on your service. Many people reuse passwords, so credentials from one breach often work on other sites.

How do I know if I'm being attacked?

Look for unusual patterns: spikes in failed login attempts, requests from unexpected IP ranges or geographic locations, many attempts for accounts that don't exist, or traffic at unusual hours.

What rate limits should I set for login endpoints?

Common starting points: 5-10 attempts per IP per minute, 5 failed attempts per account before lockout, with lockout periods of 15-30 minutes. Adjust based on your legitimate traffic patterns.

Should I notify users about credential stuffing attacks?

If no accounts were compromised, notification isn't usually necessary. But it's a good opportunity to remind users about password security and encourage 2FA adoption.
