You asked Cursor to add Stripe payments. It generated beautiful checkout code in under a minute. But buried in that code is your live secret key, hardcoded right in the client-side JavaScript. One git push later, it's public.
This isn't a rare edge case. Exposed API keys are the #1 security finding in vibe-coded apps. AI code generators are optimized to produce code that works, not code that's secure. And the gap between "it works" and "it's safe" is where your keys get leaked.
TL;DR
AI code generators frequently hardcode API keys in client-side code, skip .gitignore patterns, and blur the line between server and browser. Five strategies stop this: use environment variables correctly, keep secrets server-side, add .gitignore rules before writing code, run secret scanning in CI, and scan your deployed app with CheckYourVibe.
Why AI Tools Get This Wrong
AI models are trained on millions of code samples — tutorials, Stack Overflow answers, README examples, and open-source projects. Most of those samples use hardcoded keys for simplicity. The AI learns this pattern and reproduces it.
There's no malicious intent. The model simply doesn't distinguish between "demo code" and "production code." When you say "add Stripe payments," it generates the most common pattern it's seen:
// AI-generated code — looks correct, works immediately
const stripe = new Stripe("sk_live_51abc123...");
const session = await stripe.checkout.sessions.create({
  line_items: [{ price: "price_1234", quantity: 1 }],
  mode: "payment",
  success_url: "https://yourapp.com/success",
});
That sk_live_ key is your Stripe secret key. If this code ends up in a client-side bundle or a public repository, anyone can charge your customers, issue refunds, or access your financial data.
5 Common Exposure Patterns
Pattern 1: Hardcoded Keys in Source Files
The most straightforward mistake. The AI writes a key directly in the code:
// AI generates this — key is now in your git history forever
const openai = new OpenAI({ apiKey: "sk-proj-abc123..." });
Even if you replace it later, the key lives in your git history. Anyone with repo access can find it with git log -p.
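You can see this for yourself in a throwaway repo. The key value, file name, and commit messages below are made up for illustration:

```shell
# Demonstrate that a "removed" key survives in git history
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo

echo 'const key = "sk_live_demo123";' > app.js
git add app.js && git commit -qm "add payments"

# "Fix" the leak in a follow-up commit
echo 'const key = process.env.STRIPE_SECRET_KEY;' > app.js
git add app.js && git commit -qm "move key to env var"

# The key is gone from HEAD, but the history still shows it
git log -p --all | grep sk_live_demo123
```

The only real fix is rotating the key; rewriting history doesn't help once the repo has been cloned or pushed.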
Pattern 2: Secret Keys in Client-Side Bundles
AI tools often don't distinguish between server and client code. In Next.js, for example:
// AI puts this in a client component — key ships to every browser
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY, // This should NEVER be public
);
Any environment variable accessed in client-side code gets bundled into JavaScript that's sent to every user's browser. View Source or browser DevTools is all it takes to extract it.
Pattern 3: .env Files Committed to Git
AI tools generate .env files with real-looking keys but don't always create or update .gitignore:
# AI creates this file
DATABASE_URL=postgresql://user:password@db.example.com:5432/prod
STRIPE_SECRET_KEY=sk_live_51abc...
OPENAI_API_KEY=sk-proj-abc...
If .env isn't in .gitignore, your next git add . commits every secret to your repository.
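You can verify the ignore rule actually applies before anything is committed with `git check-ignore` (the key value below is a placeholder):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
printf ".env\n.env.local\n" > .gitignore
printf "STRIPE_SECRET_KEY=sk_test_placeholder\n" > .env

# Prints the path and exits 0 if .env is ignored; fails otherwise
git check-ignore .env && echo "OK: .env will not be committed"
```

Run this check before your first `git add .`, not after.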
Pattern 4: Keys in AI Chat Context
When you paste error messages or code snippets into AI tools, you may include keys in the prompt. These can end up in:
- The AI provider's training data (depending on their data policy)
- Chat history logs that other team members can access
- Screenshot-based bug reports
Pattern 5: Overly Permissive Public Keys
AI tools often reach for full-access keys when a scoped, least-privilege key would suffice:
// AI uses the admin key when a read-only key would work
const firebase = initializeApp({
  apiKey: "AIza...",
  // AI doesn't set up Security Rules or use restricted keys
});
Real-World Consequences
Exposed keys aren't theoretical risks. Here are real incidents that cost companies millions:
Moltbook: 1.5 million API keys exposed from a vibe-coded app. Moltbook, an AI social network built entirely through vibe coding, launched in January 2026. Within five days, Wiz researchers found a Supabase API key in client-side JavaScript that granted full unauthenticated access to the production database — no Row Level Security configured. The exposed data included 1.5 million API auth tokens, 35,000 email addresses, and private messages containing plaintext OpenAI, Anthropic, and AWS credentials.
AWS crypto mining: $45K-$89K overnight bills. This happens constantly. One founder woke up to a $45,000 AWS bill after attackers found his keys on GitHub and installed crypto miners on Lambda. Palo Alto Networks documented an organized campaign called EleKtra-Leak that automatically harvests AWS keys from GitHub within 5 minutes of being pushed.
Stripe key exposures enable direct fraud. Truffle Security identified 5 distinct attack paths from a single leaked Stripe secret key — including querying all customer PII, issuing unauthorized refunds, and manipulating pricing. A separate web skimming campaign leveraged Stripe's own API to validate stolen payment cards across 49 merchants.
OpenAI keys stolen via Chrome extensions. A fake Chrome extension called "H-Chat Assistant" harvested over 459 API keys from 10,000+ users. Stolen keys gave attackers access to billing accounts — one case involved a key with a $150,000 usage limit.
The scale: 23.8 million secrets leaked on GitHub in 2024. GitGuardian's annual State of Secrets Sprawl report found the number has quadrupled since 2021. Over 70% of secrets leaked in 2022 were still active two years later, and .env files have a 54% chance of containing at least one secret.
Even the big guys mess up. In 2016, attackers found hardcoded AWS credentials in Uber's private GitHub repos and used them to download personal data of 57 million users. The result: a $148 million settlement with all 50 US states and a criminal conviction for Uber's former CSO. If Uber's engineers can leak keys, so can anyone.
How Each AI Tool Handles Secrets
Not all AI code generators handle secrets equally. Here's how the major tools compare:
Cursor: Uses your local codebase and .env files. Respects .gitignore and won't send ignored files to the AI — but it can still generate code that hardcodes keys if your prompt doesn't specify otherwise.
Bolt: Generates full-stack apps entirely in-browser. No built-in secret management — keys frequently end up in client-side code. You must manually move secrets to environment variables after generation.
Lovable: Generates and deploys full applications. Has basic environment variable support, but the line between server and client code is blurred. Secrets can easily leak to the client bundle.
GitHub Copilot: Inline code completion only. Won't autocomplete recognized API key patterns. GitHub's push protection can block commits containing detected secrets.
v0: Focused on UI component generation. Rarely generates backend or secret-handling code, so exposure risk is minimal.
Replit: Full IDE with built-in deployment. Has a dedicated Secrets tab for managing environment variables — but the AI assistant may still hardcode keys directly in source files.
Bolt and Lovable deserve extra caution. Because they generate entire applications including backend and frontend together, the boundary between server-side and client-side code is easily blurred. Always review generated code for secret placement before deploying.
5 Strategies to Stop Key Exposure
Set Up Environment Variables Before Writing Code
Create your .env.local and .gitignore files before you start prompting the AI. This establishes the pattern the AI will follow:
# Create .gitignore first (printf interprets \n portably; plain echo may not)
printf ".env\n.env.local\n.env.production\n" >> .gitignore
# Create .env.local with your keys
touch .env.local
# Create .env.example with placeholders (safe to commit)
printf "STRIPE_SECRET_KEY=sk_test_your_key_here\nOPENAI_API_KEY=sk-your_key_here\n" > .env.example
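With the files in place, a startup guard makes a missing or misplaced variable fail loudly instead of silently shipping `undefined`. A minimal sketch — the variable names are examples, so match them to your own .env.example:

```javascript
// env-check.js — fail fast at startup if required secrets are missing
const REQUIRED = ["STRIPE_SECRET_KEY", "OPENAI_API_KEY"];

function assertEnv(env = process.env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

module.exports = { assertEnv };
```

Call `assertEnv()` once at server startup so a bad deploy crashes immediately rather than failing on the first request.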
Keep Secret Keys Server-Side Only
Every framework has a convention for what reaches the browser. Know yours:
| Framework | Server-only | Client-safe (public) |
|---|---|---|
| Next.js | process.env.SECRET | process.env.NEXT_PUBLIC_* |
| Nuxt 3 | runtimeConfig.secret | runtimeConfig.public.* |
| Vite/Vue | Not accessible by default | import.meta.env.VITE_* |
| SvelteKit | $env/static/private | $env/static/public |
Tell the AI explicitly. Include "use server-side API route" or "keep this key server-side only" in your prompt. AI tools follow instructions better when security requirements are stated upfront.
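A defensive pattern that catches mistakes at runtime is to gate every secret read on the environment. In browser bundles `window` is defined, so a guard like this throws the moment server-only config is touched client-side. This is a sketch, not a framework feature:

```javascript
// server-secret.js — throw if a server-only variable is read in the browser
function serverOnlySecret(name, env = process.env) {
  if (typeof window !== "undefined") {
    throw new Error(`${name} must never be read in client-side code`);
  }
  const value = env[name];
  if (!value) {
    throw new Error(`${name} is not set`);
  }
  return value;
}

module.exports = { serverOnlySecret };
```

Note that this only limits the blast radius of a runtime read; a bundler can still inline a literal key at build time, which is why the public-prefix conventions above matter.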
Use Server-Side API Proxy Routes
Instead of calling third-party APIs directly from the browser, create a server-side route:
// app/api/generate/route.js — runs on the server, key never reaches browser
import OpenAI from "openai";
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Server-only, no NEXT_PUBLIC_ prefix
});

export async function POST(request) {
  const { prompt } = await request.json();
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
  });
  return Response.json({ result: completion.choices[0].message.content });
}
// components/Generator.jsx — browser code, no secrets here
async function generate(prompt) {
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return res.json();
}
Add Secret Scanning to Your Workflow
Catch leaked keys before they reach production:
# .github/workflows/secret-scan.yml
name: Secret Scanning
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Also enable GitHub Push Protection in your repository settings. It blocks pushes that contain detected secrets.
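If you want a zero-dependency check before wiring up Gitleaks, a small scanner over file contents catches the most common key prefixes. The regexes below are illustrative, not exhaustive; real scanners ship hundreds of rules:

```javascript
// scan.js — minimal secret scanner sketch
const PATTERNS = [
  /sk_live_[0-9a-zA-Z]{10,}/g,   // Stripe live secret key
  /sk-proj-[0-9a-zA-Z_-]{10,}/g, // OpenAI project key
  /AKIA[0-9A-Z]{16}/g,           // AWS access key ID
];

function findSecrets(text) {
  // match() with the g flag returns all matches, or null for none
  return PATTERNS.flatMap((re) => text.match(re) ?? []);
}

module.exports = { findSecrets };
```

Run it over staged files in a pre-commit hook and treat any hit as a hard failure.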
Scan Your Deployed Application
Secret scanning catches keys in your source code, but some keys only appear in the built, deployed application — embedded in JavaScript bundles, visible in network requests, or leaked through error pages.
Run a CheckYourVibe scan to detect secrets in your live application. The free tier checks for exposed API keys in client-side bundles, misconfigured environment variables, and publicly accessible configuration endpoints.
API Key Security Checklist
Before every deploy, confirm:
- .env, .env.local, and .env.production are listed in .gitignore
- No hardcoded keys in source files (search for sk_live, sk-proj, AKIA, service_role)
- Secret keys are read only in server-side code or API routes, never in components that ship to the browser
- Client-exposed variables use your framework's public prefix (NEXT_PUBLIC_, VITE_, etc.) and contain only publishable keys
- Secret scanning (Gitleaks, GitHub Push Protection) runs on every push
- The deployed app has been scanned for secrets in client-side bundles
FAQ
Do AI code generators intentionally expose API keys?
No. AI models optimize for functional code, not secure code. They generate patterns that work in development without considering production security implications like secret management.
Are public keys like Supabase anon keys safe to expose?
Supabase anon keys and Firebase public config are designed to be client-facing when used with Row Level Security (RLS). But secret keys like service_role, sk_live, or any key prefixed with "secret" must never appear in client code.
What should I do if my API key is already exposed?
Rotate the key immediately in the service's dashboard. Don't try to remove it from git history — assume it's compromised. Check service logs for unauthorized usage, update production environment variables with the new key, and set up monitoring for anomalies.
Can I use environment variables in frontend code?
Only for public keys. In Next.js, only NEXT_PUBLIC_ prefixed vars reach the browser. In Vite/Nuxt, only VITE_ or NUXT_PUBLIC_ prefixed vars are exposed. Secret keys must stay server-side and be accessed through API routes.
How do I know if my deployed app has exposed keys?
Run a free CheckYourVibe scan. It checks your deployed application for API keys in client-side JavaScript bundles, exposed environment variables, and other secret leakage patterns.
Is Your App Leaking API Keys?
AI-generated code may have exposed your secrets. Run a free CheckYourVibe scan to find API keys in your client-side bundles, misconfigured environment variables, and more.
Start Free Scan