TL;DR
Vibe hacking is how attackers target apps built with AI coding tools. Research shows 80% of AI-generated applications contain at least one exploitable vulnerability, and attackers are scanning for these at scale. This post breaks down the 6 main attack vectors, with real examples and concrete fixes for each one.
Your Bolt.new app works. Users are signing up. You're getting ready to post it on Product Hunt.
Meanwhile, someone is running a script that searches for the exact Supabase URL pattern that Bolt generates by default. They already know your database tables probably don't have Row Level Security enabled. They know because the last 50 Bolt apps they scanned didn't either.
This is vibe hacking. It's not sophisticated. It's pattern recognition at scale.
What Vibe Hacking Actually Means
The term started circulating in early 2026 as security researchers noticed something: AI-built apps fail in predictable ways. When thousands of developers use the same tools with the same prompts, the resulting code shares the same blind spots.
Vibe hacking is the practice of exploiting predictable security patterns in applications built with AI coding tools. Attackers don't need to find unique vulnerabilities. They scan for known weaknesses that AI tools consistently produce, then exploit them across many targets at once.
Traditional hacking requires finding a specific flaw in a specific app. Vibe hacking flips that: find one pattern, exploit thousands of apps that share it.
HP's 2026 threat research found that attackers are now using AI themselves to generate ready-made exploit scripts targeting these patterns. The barrier to entry has dropped to near zero.
The 6 Attack Vectors
1. Exposed API Keys in Client-Side Code
This is the most common vulnerability in AI-built apps, and the easiest to exploit.
AI coding tools frequently place API keys directly in frontend code. Supabase anon keys, OpenAI keys, Stripe secret keys, Firebase configs. The AI puts them wherever the code needs them to work, which is often the wrong place.
// Patterns that automated scanners look for:
"sk-" // OpenAI API keys
"sk_live_" // Stripe secret keys
"supabase" // Supabase URLs and keys
"eyJhbGciOiJ" // Base64-encoded JWT tokens
"NEXT_PUBLIC_" // Environment variable patterns
"firebase" // Firebase configuration objects
An attacker doesn't need to read your source code manually. Tools like trufflehog and custom scripts scan your deployed JavaScript bundles in seconds.
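A sketch of what such a scan amounts to, in a few lines of JavaScript (the patterns and the sample bundle below are illustrative, not a real leak):

```javascript
// Minimal secret scanner: run the same kind of regexes an attacker would
// against a downloaded JS bundle. Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS = [
  { name: 'OpenAI API key', regex: /sk-[A-Za-z0-9]{20,}/g },
  { name: 'Stripe live secret key', regex: /sk_live_[A-Za-z0-9]{10,}/g },
  { name: 'JWT token', regex: /eyJhbGciOiJ[A-Za-z0-9_-]+/g },
];

function scanBundle(bundleText) {
  const findings = [];
  for (const { name, regex } of SECRET_PATTERNS) {
    for (const match of bundleText.matchAll(regex)) {
      findings.push({ name, value: match[0] });
    }
  }
  return findings;
}

// Example: a bundle that shipped with a (fake) Stripe secret key inlined
const bundle = 'const stripe = Stripe("sk_live_abc123def456ghi789");';
console.log(scanBundle(bundle));
```

Running the same script against your own deployed bundle before launch is a cheap way to see exactly what an attacker would see.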
The fix: Move all secret keys to server-side environment variables. Your Supabase anon key is designed to be public, but your service role key and any third-party API keys must never appear in client code. If you're using Bolt or Lovable, run through a security checklist before you deploy.
2. Missing Database Row Level Security
This one has caused the biggest real-world incidents so far. In one case, a misconfigured Supabase database exposed 1.5 million authentication tokens and 35,000 email addresses to the public internet. The site was entirely AI-generated; the developer stated they never wrote a single line of code.
The problem: AI tools create Supabase tables and write queries against them, but they rarely enable Row Level Security (RLS) policies. Without RLS, anyone with your Supabase URL and anon key (which is probably in your frontend code, see attack vector #1) can read every row in every table.
In February 2026, security researchers found 170+ Lovable-built databases that were fully readable by anyone on the internet. The common thread: RLS was never enabled.
The attack is trivially simple:
// If RLS is disabled, this returns ALL users, not just the logged-in one
const { data } = await supabase
.from('users')
.select('*')
The fix: Enable RLS on every table, then write policies that restrict access. If you're using Supabase, our guide on setting up RLS policies walks through it step by step. This is the single highest-impact fix for most AI-built apps.
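For reference, enabling RLS and adding an owner-only read policy looks roughly like this in Supabase's SQL editor (the table and column names are illustrative; adapt them to your schema):

```sql
-- Turn on RLS: with no policies, this denies all access by default.
alter table public.users enable row level security;

-- Then allow each authenticated user to read only their own row.
create policy "Users can read own row"
  on public.users
  for select
  using (auth.uid() = id);
```

With this in place, the `select('*')` query above returns only the logged-in user's row instead of the whole table.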
3. Client-Side-Only Authentication
AI tools love to generate authentication guards that only exist in the frontend. A React component checks if the user is logged in and conditionally renders the admin page. The API behind it? No auth check at all.
Here's what that looks like in practice:
// This is the ONLY thing preventing unauthorized access
function AdminPage() {
const { user } = useAuth()
if (!user?.isAdmin) return <Redirect to="/login" />
return <AdminDashboard />
}
An attacker doesn't need to bypass your React router. They just call your API directly:
# The API endpoint has no server-side auth check
curl https://yourapp.com/api/admin/users
# Returns full user list, including emails and payment data
The fix: Every API endpoint needs its own authentication and authorization check on the server. Frontend route guards are for user experience, not security. If someone removes your JavaScript entirely, your API should still reject unauthorized requests.
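A minimal sketch of the server-side counterpart, as a plain Node.js handler (the session store here is a fake stand-in for your real session or JWT validation):

```javascript
// Server-side guard: the request is rejected here no matter what the
// frontend renders. verifySession is a stand-in for real JWT/session
// validation -- the lookup table below is illustrative only.
function verifySession(token) {
  const sessions = { 'admin-token': { id: 1, isAdmin: true } };
  return sessions[token] ?? null;
}

function handleAdminUsers(req) {
  const user = verifySession(req.headers['authorization']);
  if (!user) return { status: 401, body: 'Unauthorized' };     // not logged in
  if (!user.isAdmin) return { status: 403, body: 'Forbidden' }; // not an admin
  return { status: 200, body: JSON.stringify([{ id: 1, email: 'a@b.c' }]) };
}
```

The point is the ordering: authentication and authorization happen inside the handler, so the `curl` request above gets a 401 even with zero frontend code running.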
4. Predictable API Routes and Admin Paths
AI tools generate consistent route patterns. /api/auth/login, /api/users, /api/admin, /dashboard/admin. Attackers know these patterns and scan for them automatically.
Worse, AI tools sometimes generate admin endpoints that were never intended to be publicly accessible. A Cursor-generated Next.js app might include /api/admin/reset-database or /api/seed in development code that makes it to production.
Automated scanners check for hundreds of common API paths in seconds. If your AI tool generated a route, there's a good chance it follows a pattern that's already in the scanner's dictionary.
The fix: Audit your route files. Remove any development-only endpoints before deploying. Add authentication middleware to every admin route. Consider prefixing internal routes with an unpredictable path segment rather than the obvious /api/admin/.
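One way to make that audit mechanical is a single centralized guard in front of every route, instead of per-route checks scattered through the codebase. A sketch (the path lists are illustrative):

```javascript
// Central route guard: anything under a protected prefix requires an
// admin user, and known dev-only endpoints are blocked outright in
// production. Prefixes and paths are illustrative.
const PROTECTED_PREFIXES = ['/api/admin', '/dashboard/admin'];
const DEV_ONLY_PATHS = ['/api/seed', '/api/admin/reset-database'];

function routeDecision(path, user, env) {
  if (env === 'production' && DEV_ONLY_PATHS.includes(path)) {
    return 'blocked'; // dev endpoints must never ship
  }
  if (PROTECTED_PREFIXES.some((prefix) => path.startsWith(prefix))) {
    return user?.isAdmin ? 'allowed' : 'denied';
  }
  return 'allowed'; // public route; per-endpoint auth still applies
}
```

Wired in as middleware, a new admin route an AI tool generates tomorrow is protected by default instead of relying on someone remembering to add a check.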
5. Missing Security Headers and Rate Limits
AI tools write application code. They don't configure infrastructure. That means your AI-built app almost certainly ships without security headers (Content-Security-Policy, X-Frame-Options, Strict-Transport-Security) and without rate limiting.
Without rate limits, an attacker can:
- Brute-force login credentials at thousands of attempts per minute
- Abuse your AI-powered features (running up your OpenAI bill)
- Scrape your entire user directory through paginated API calls
Without security headers, your app is vulnerable to clickjacking, XSS via inline scripts, and protocol downgrade attacks.
The fix: Add security headers at the deployment level (Vercel, Netlify, and Cloudflare all support this in config files). Add rate limiting to your authentication and AI endpoints. Even basic rate limiting (100 requests per minute per IP) blocks most automated attacks. Check out our Supabase security best practices for database-specific configurations.
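Basic rate limiting doesn't require infrastructure. A fixed-window in-memory counter like this sketch blocks the crudest automated attacks (for multi-instance deployments you'd want a shared store like Redis instead):

```javascript
// Fixed-window rate limiter: at most `limit` requests per IP per window.
// In-memory only, so it resets on restart and isn't shared across
// instances -- fine as a first line of defense on a single server.
function createRateLimiter({ limit = 100, windowMs = 60_000 } = {}) {
  const hits = new Map(); // ip -> { count, windowStart }
  return function allow(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

// Usage: call allow(ip) at the top of auth and AI endpoints,
// and return HTTP 429 when it comes back false.
```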
6. Hardcoded Credentials and Debug Endpoints
AI tools sometimes generate code with placeholder credentials that look like they should be replaced but never are. Default passwords for admin accounts, test API keys, debug endpoints that return internal state.
A Lovable vulnerability (CVE-2025-48757, CVSS score 9.3) allowed remote unauthenticated attackers to read or write to arbitrary database tables of generated sites. The issue: a default configuration that gave too much access out of the box.
# Search your codebase for these before deploying
grep -r "password.*=.*['\"]" --include="*.ts" --include="*.js"
grep -r "TODO\|FIXME\|HACK\|XXX" --include="*.ts" --include="*.js"
grep -r "debug\|verbose\|devMode" --include="*.ts" --include="*.js"
The fix: Before deploying, grep your entire codebase for hardcoded strings. Replace placeholder credentials with environment variables. Remove debug endpoints and development routes. This takes 10 minutes and closes a door that attackers know to check first.
Why AI Apps Are Targeted at Scale
Vibe hacking isn't about targeting your app specifically. It's about volume.
When security researchers scanned 200 sites built with Cursor, Bolt, Lovable, and v0, the average security score was 52 out of 100. In other words, the typical AI-built app ships with exploitable weaknesses.
Attackers build scanners that identify AI-built sites through telltale signatures: default Supabase URLs, boilerplate meta tags, identical error page formats, and common file structures. Once identified, the same set of exploits gets run against every match.
The economics favor the attacker. Finding one working exploit template and running it against thousands of targets is far more profitable than crafting a custom attack against a single well-defended site.
Your Defense Checklist
You don't need to become a security expert. You need to fix the patterns that vibe hackers specifically target.
Anti-Vibe-Hacking Essentials
- Move every secret key out of client code and into server-side environment variables
- Enable Row Level Security on every database table, with a policy for each
- Add server-side auth checks to every API endpoint, not just frontend route guards
- Remove dev-only and debug endpoints before deploying
- Configure security headers and rate limiting at the deployment level
- Grep the codebase for hardcoded credentials and placeholder passwords
Run a Scan Before They Do
Every exploit described in this post can be detected by an automated security scanner. The question is whether you find these issues first, or an attacker does.
A scan takes 60 seconds. Fixing the critical issues takes maybe 30 minutes. Compare that to the cost of a breach: exposed user data, regulatory headaches, and lost trust that's nearly impossible to rebuild.
What is vibe hacking?
Vibe hacking is a set of attack techniques that target applications built with AI coding tools like Cursor, Bolt.new, Lovable, and v0. Attackers exploit the predictable patterns these tools produce, such as hardcoded API keys, missing database security policies, and client-side-only authentication. Because AI tools generate similar code structures, finding one vulnerability often means the same exploit works across thousands of similar apps.
Are AI coded apps safe to deploy?
AI-coded apps can be safe to deploy, but they need a security review first. Research from Stanford shows 80% of AI-generated applications contain at least one exploitable vulnerability. The code works, but AI tools optimize for functionality, not security. A security scan before launch catches the gaps that AI tools consistently miss.
How do hackers find vibe coded apps?
Hackers identify AI-built apps through predictable signatures: default Supabase or Firebase URLs in JavaScript bundles, boilerplate HTML meta tags, identical error page formats, and common API route patterns like /api/auth, /api/users, /api/admin. Automated scanners search for these patterns across the internet and flag matching sites for further exploitation.
How do I protect my AI-built app from vibe hacking?
Start with the basics: move all API keys to server-side environment variables, enable Row Level Security on your database, add server-side authentication checks (not just frontend route guards), configure security headers, and rate-limit your API endpoints. Then run an automated security scan to catch anything you missed. Most vibe hacking exploits target low-hanging fruit that takes minutes to fix.
Find out if your AI-built app is vulnerable to vibe hacking. CheckYourVibe scans for exposed keys, missing RLS, authentication gaps, and more in 60 seconds.