TL;DR
AI coding tools generate the same vulnerability patterns across thousands of apps. Attackers know these patterns and actively hunt for them. This guide covers the eight most exploited attack vectors in AI-built apps, with concrete examples and fixes for each. Most are fixable in under an hour.
A security researcher recently published a script that scans GitHub for repositories created with popular AI coding tools. It checks for five specific patterns: exposed Supabase keys, missing RLS policies, hardcoded API tokens, default admin routes, and unprotected file upload endpoints. The script found exploitable vulnerabilities in 34% of the repositories it scanned.
That's not a hypothetical scenario. It happened in January 2026, and the researcher (responsibly) disclosed the findings to affected app owners. Many had no idea their apps were vulnerable. They'd used AI to build something that worked, shipped it, and moved on.
This guide breaks down exactly how AI-generated apps get attacked, why these specific vulnerabilities keep appearing, and what you can do to fix each one.
Why AI Code Has Predictable Vulnerabilities
Before we get into specific attacks, it helps to understand why AI-generated code is vulnerable in the first place. The root cause isn't that AI writes "bad" code. It writes code that works. The problem is that "works" and "secure" are different goals, and AI tools optimize for the first one.
Three factors create predictable vulnerability patterns:
Training data includes insecure patterns. AI models learn from millions of public repositories. A significant percentage contain outdated practices, deprecated methods, and known vulnerabilities. The AI doesn't distinguish between "this compiles" and "this is safe."
AI optimizes for the happy path. Ask for a login form, and you'll get one that handles successful logins well. Error handling, rate limiting, brute force protection, and session management are often missing or incomplete. The AI solved the problem you asked about, not the edge cases an attacker would exploit.
Default configurations favor convenience. AI tools default to the simplest working configuration: broad database permissions, permissive CORS policies, verbose error messages, no authentication middleware. These defaults are fine for local development. They're dangerous in production.
Attack 1: Exposed API Keys and Secrets
This is the most common vulnerability in AI-built apps, and the easiest to exploit.
How common is this? CheckYourVibe scans detect exposed secrets in approximately 1 out of every 3 AI-built apps. These include API keys, database connection strings, JWT signing secrets, and third-party service credentials sitting in frontend code or committed to version control.
How the attack works
AI tools routinely place API keys directly in frontend JavaScript. When you prompt "connect my app to Stripe" or "add OpenAI integration," the generated code often includes the API key as a string literal in client-side code. Anyone who opens browser DevTools can see it.
Even when the AI places keys in .env files, it frequently references them in client-side code (using NEXT_PUBLIC_ or VITE_ prefixes) or fails to add .env to .gitignore. The keys end up in your public GitHub repository, indexed by automated scanners within minutes.
What attackers do with exposed keys
The damage depends on the key. An exposed OpenAI key means someone racks up API charges on your account. An exposed Stripe secret key means they can issue refunds, access customer payment data, or create charges. An exposed Supabase service role key means full read/write access to your entire database, bypassing all row-level security.
How to fix it
Audit your frontend code for hardcoded secrets
Search your codebase for API key patterns. Look for strings starting with sk_, pk_, or eyJ, for the word supabase, and for any 32+ character alphanumeric literals in .js, .ts, .vue, or .jsx files.
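A quick way to automate that search is a small scanner you run over your source files before committing. This is a sketch with illustrative patterns, not an exhaustive list:

```javascript
// Sketch: flag likely hardcoded secrets in source text before committing.
// The patterns below are illustrative starting points, not a complete set.
const SECRET_PATTERNS = [
  /sk_(live|test)_[A-Za-z0-9]{16,}/,            // Stripe secret keys
  /eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}/,  // JWT-shaped tokens (e.g. Supabase keys)
  /['"][A-Za-z0-9]{32,}['"]/,                   // generic long alphanumeric literals
];

function findLikelySecrets(sourceText) {
  return SECRET_PATTERNS
    .map((pattern) => sourceText.match(pattern))
    .filter(Boolean)
    .map((match) => match[0]);
}
```

Run it over every file the frontend bundle includes; any hit deserves a manual look even if it turns out to be a false positive.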
Move all secrets to server-side environment variables
Secrets should only exist in server-side code. If your frontend needs to call an external API, proxy the request through your backend. Never expose secret keys in client-accessible code.
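As a sketch of the proxy pattern: the server reads the secret from its own environment and attaches it only when forwarding the request upstream. The `OPENAI_API_KEY` variable name and the endpoint URL here are assumptions; swap in whatever service you're proxying.

```javascript
// Sketch: the secret stays server-side and is attached only when proxying.
// OPENAI_API_KEY is an assumed environment variable name.
function buildUpstreamRequest(clientBody) {
  const apiKey = process.env.OPENAI_API_KEY; // read server-side only
  if (!apiKey) throw new Error('Missing server-side API key');
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`, // never shipped to the browser
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(clientBody),
    },
  };
}
```

Your API route calls this, forwards the request with fetch(), and returns the response to the client; the browser only ever talks to your own backend.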
Rotate compromised keys immediately
If a key was ever in your frontend code or a public repository, assume it has been compromised. Rotate it now, even if you've since moved it. Automated scanners archive GitHub history.
Attack 2: Missing Database Access Controls
AI tools love Supabase. They generate full-stack apps with Supabase backends in minutes. But they almost never set up Row Level Security (RLS) correctly.
How the attack works
Supabase exposes your database through a REST API using a public "anon key." This key is meant to be public (it's in your frontend code by design). The security model relies entirely on RLS policies to control what each user can access.
When AI generates a Supabase app, it creates tables and inserts data. But it often skips RLS entirely, or enables it without creating policies (which blocks all access, causing the AI to then disable RLS to "fix" the error).
Without RLS, anyone with your Supabase URL and anon key (both visible in your frontend) can query any table, read any row, and modify any data. That includes other users' profiles, messages, payment records, and anything else in your database.
How to fix it
Enable RLS on every table. Create policies that restrict each operation (SELECT, INSERT, UPDATE, DELETE) to the appropriate user. Test by trying to access another user's data while authenticated as a different user.
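A minimal policy setup looks like the following. It assumes a profiles table with a user_id column; adjust the table and column names to your schema.

```sql
-- Enable RLS and scope each operation to the row's owner.
-- Assumes a "profiles" table with a user_id column; adjust to your schema.
alter table profiles enable row level security;

create policy "profiles_select_own" on profiles
  for select using (auth.uid() = user_id);

create policy "profiles_update_own" on profiles
  for update using (auth.uid() = user_id);
```

Repeat for INSERT and DELETE as your app requires; a table with RLS enabled but no policy for an operation rejects that operation entirely.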
Quick RLS test: Open your browser console, paste your Supabase URL and anon key into a fetch() call that queries a table without an auth token. If you get data back, your RLS is broken.
Attack 3: Broken Authentication Flows
AI-generated authentication looks correct on the surface. You get a login page, a registration flow, maybe even a "forgot password" feature. But the implementation often has gaps that aren't visible from the UI.
Common patterns attackers exploit
No server-side session validation. The AI checks authentication in the frontend (using a JWT stored in localStorage) but doesn't verify the token on API endpoints. An attacker can call your API directly without any authentication.
Predictable password reset tokens. AI tools sometimes generate reset tokens using weak randomness or predictable patterns (timestamps, sequential IDs). An attacker can guess valid reset tokens and take over accounts.
Missing rate limiting on login. Without rate limiting, an attacker can brute-force passwords by trying thousands of combinations per minute. AI-generated login endpoints rarely include rate limiting by default.
JWT secrets in source code. The AI needs a JWT signing secret, so it generates one inline: const JWT_SECRET = "super-secret-key-change-me". If this isn't changed (and it usually isn't), anyone can forge valid authentication tokens.
How to fix it
Verify that every API endpoint checks authentication server-side. Implement rate limiting on login, registration, and password reset endpoints. Use cryptographically secure random tokens. Store JWT secrets in environment variables and make them long, random strings.
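For the rate-limiting piece, even a minimal fixed-window limiter blocks naive brute-forcing. This in-memory sketch works for a single process; multi-instance deployments need shared state such as Redis.

```javascript
// Minimal in-memory rate limiter sketch (per-IP, fixed window).
// Single-process only; use shared storage (e.g. Redis) across instances.
const attempts = new Map();

function isRateLimited(ip, { limit = 5, windowMs = 60_000 } = {}, now = Date.now()) {
  const entry = attempts.get(ip);
  if (!entry || now - entry.start >= windowMs) {
    attempts.set(ip, { start: now, count: 1 }); // new window
    return false;
  }
  entry.count += 1;
  return entry.count > limit; // block once the window's limit is exceeded
}
```

Call it at the top of your login handler and return a 429 when it fires.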
Attack 4: SQL Injection and NoSQL Injection
AI-generated database queries are often vulnerable to injection attacks because the AI constructs queries using string concatenation or template literals instead of parameterized queries.
How the attack works
When the AI generates code like:
```javascript
const user = await db.query(
  `SELECT * FROM users WHERE email = '${req.body.email}'`
);
```
An attacker can submit an email like ' OR '1'='1 and retrieve all users in the database. More sophisticated injections can drop tables, extract data from other tables, or bypass authentication entirely.
This isn't limited to SQL databases. NoSQL databases like MongoDB are vulnerable too, especially when AI generates queries that accept user-controlled operators.
How to fix it
Use parameterized queries or your ORM's built-in query builder. Never concatenate user input into database queries. If you're using Prisma, Drizzle, or another ORM (which AI tools often generate), you're mostly safe. But check any raw query escape hatches the AI may have added for edge cases, such as Prisma's $queryRaw.
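A safe version of the query from the example above, sketched in node-postgres style (assuming a pg-compatible client): user input travels as a bound parameter, never as part of the SQL string.

```javascript
// Sketch: parameterized query in node-postgres style (assumes a pg client).
// The $1 placeholder is bound by the driver; input is never spliced into SQL.
function findUserByEmailQuery(email) {
  return {
    text: 'SELECT * FROM users WHERE email = $1',
    values: [email], // the driver binds this safely
  };
}

// Usage: const { rows } = await db.query(findUserByEmailQuery(req.body.email));
```

Even a payload like ' OR '1'='1 is just a literal string value here; it can never change the query's structure.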
Attack 5: Cross-Site Scripting (XSS)
AI-generated apps frequently render user input without proper sanitization, creating XSS vulnerabilities.
How the attack works
When your app displays user-provided content (comments, profile names, form submissions) without escaping HTML, an attacker can inject malicious JavaScript. This script runs in other users' browsers, allowing the attacker to steal session tokens, redirect users to phishing pages, or perform actions on their behalf.
AI tools are especially prone to this when generating features like:
- Comment sections or chat interfaces
- User profile pages that display bios or descriptions
- Search results that echo the search query
- Admin dashboards that display user-submitted data
How to fix it
Use your framework's built-in escaping. In React, JSX escapes by default (but watch for dangerouslySetInnerHTML). In Vue, use {{ }} interpolation (never v-html with user data). Set a Content Security Policy (CSP) header as an additional layer of defense.
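If you do need to render strings outside your framework's default escaping (server-rendered templates, email bodies), a minimal escaper looks like this sketch:

```javascript
// Sketch: minimal HTML escaping for contexts without framework auto-escaping.
const HTML_ESCAPES = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
};

function escapeHtml(input) {
  return String(input).replace(/[&<>"']/g, (ch) => HTML_ESCAPES[ch]);
}
```

This covers HTML body and attribute contexts; URLs and inline JavaScript contexts need their own context-specific encoding.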
Attack 6: Insecure File Upload
When you ask AI to "add file upload functionality," the generated code typically accepts any file type, stores files in a publicly accessible location, and doesn't validate file size or content.
How the attack works
An attacker uploads a PHP shell, a malicious SVG with embedded JavaScript, or an HTML file that executes scripts when viewed. If the file is stored in a public directory and served with its original content type, the attacker can execute code on your server or attack other users who view the file.
Even with cloud storage (S3, Cloudflare R2), AI-generated upload code often creates public buckets or sets permissive CORS policies that allow any domain to access uploaded files.
How to fix it
Validate file types against an allowlist (not a blocklist). Rename uploaded files to random strings. Set appropriate Content-Type headers when serving files. Store uploads in a non-public bucket and serve them through a controlled endpoint. Limit file sizes. Scan uploaded files for malicious content if your app handles user uploads at scale.
Attack 7: Missing Security Headers
AI-generated apps almost never include security headers. This is one of the simplest vulnerabilities to exploit and one of the simplest to fix.
What's missing
Content-Security-Policy (CSP): Prevents XSS by restricting where scripts can load from. Without it, injected scripts run freely.
Strict-Transport-Security (HSTS): Forces HTTPS connections. Without it, attackers can intercept traffic via man-in-the-middle attacks on HTTP.
X-Frame-Options: Prevents your site from being embedded in iframes. Without it, attackers can use clickjacking to trick users into performing unintended actions.
X-Content-Type-Options: Prevents MIME type sniffing. Without it, browsers may interpret uploaded files as executable scripts.
How to fix it
Add security headers in your hosting configuration or application middleware. This is typically a 5-minute fix that significantly improves your security posture.
Good news: Security headers don't require code changes to your application logic. They're configured at the server or CDN level. Vercel, Netlify, Cloudflare Pages, and most hosting platforms support adding custom headers through configuration files.
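As a sketch, here is what those headers might look like in a vercel.json (the CSP value is a deliberately strict starting point that you will likely need to loosen for your own scripts and styles):

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "Strict-Transport-Security", "value": "max-age=63072000; includeSubDomains" },
        { "key": "X-Frame-Options", "value": "DENY" },
        { "key": "X-Content-Type-Options", "value": "nosniff" },
        { "key": "Content-Security-Policy", "value": "default-src 'self'" }
      ]
    }
  ]
}
```

Netlify and Cloudflare Pages use their own config formats (_headers files), but the header names and values carry over unchanged.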
Attack 8: Verbose Error Messages and Debug Mode
AI tools often leave debug mode enabled and return detailed error messages in production. These error messages reveal your tech stack, database schema, file paths, and sometimes even source code.
How the attack works
An attacker triggers an error (by submitting malformed data, accessing a non-existent endpoint, or sending an oversized payload) and reads the error response. A typical AI-generated error response might include:
- Full stack traces with file paths
- Database table and column names
- SQL queries that failed (revealing schema)
- Framework version numbers
- Environment variable names (sometimes values)
This information helps attackers craft targeted attacks against your specific tech stack and database structure.
How to fix it
Return generic error messages in production. Log detailed errors server-side (where only you can see them) and return user-friendly messages to the client. Disable debug mode and ensure framework-specific development flags are off in production.
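The split between what you log and what you return can be sketched as a small helper; wire it into whatever error middleware your framework uses.

```javascript
// Sketch: build the client-facing error payload based on environment.
// Full details go to server logs; production clients get a generic message.
function buildErrorResponse(err, isProduction) {
  const status = err.statusCode || 500;
  if (isProduction) {
    return { status, body: { error: 'Something went wrong' } };
  }
  // Development only: surface the detail you'd otherwise read from logs.
  return { status, body: { error: err.message, stack: err.stack } };
}

// In your error middleware:
//   console.error(err);  // detailed, server-side only
//   const { status, body } = buildErrorResponse(err, process.env.NODE_ENV === 'production');
//   res.status(status).json(body);
```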
The Pattern Behind the Patterns
If you read through all eight attack vectors, you'll notice a theme. AI-generated vulnerabilities aren't sophisticated. They're configuration mistakes and missing defaults. The AI gets the application logic right but skips the security infrastructure that experienced developers add automatically.
This is actually good news. It means:
- The fixes are straightforward. You don't need deep security expertise to address most of these issues. You need a checklist and the discipline to work through it.
- Automated scanning catches most of them. These are known patterns with known signatures. A security scanner can identify all eight of these vulnerability categories in your app in minutes.
- You can fix them once and move on. Unlike architectural security flaws, these are configuration issues. Fix them, verify the fix, and they stay fixed unless someone (or some AI) introduces a new instance.
Don't wait for a breach. The cost of fixing these vulnerabilities proactively is measured in hours. The cost of a data breach averages $4.88 million (IBM, 2025). For startups, a breach often means losing user trust permanently.
Why are AI-generated apps more vulnerable than hand-coded apps?
AI coding tools optimize for working functionality, not security. They learn from millions of public repositories (including insecure ones), reproduce deprecated patterns, skip input validation, use overly permissive defaults, and hardcode secrets in frontend code. The result is predictable vulnerability patterns that attackers can target at scale.
What is the most common vulnerability in AI-built apps?
Exposed API keys and secrets in frontend code. AI tools routinely place API keys, database connection strings, and third-party credentials directly in client-side JavaScript or environment files that get committed to public repositories. This gives attackers direct access to your backend services.
Can AI-built apps be made secure?
Yes. Most vulnerabilities in AI-generated code are configuration mistakes, not architectural flaws. Fixing exposed secrets, enabling database row-level security, adding authentication to API endpoints, and setting security headers addresses the majority of issues. An automated security scan identifies these problems in minutes.
How do attackers find vulnerable AI-built apps?
Attackers use automated scanners that look for signatures of AI-generated code: predictable file structures, default framework configurations, exposed .env files, and known vulnerable patterns from popular AI tools. Some scanners specifically search GitHub for repositories created with Bolt, Cursor, or Lovable to find apps with default insecure configurations.
How often should I scan my AI-built app for vulnerabilities?
At minimum, scan before every deployment and on a weekly schedule. Because AI-coded apps change rapidly (you might add features daily), continuous scanning is the gold standard. This catches issues from new code, dependency updates, and newly discovered vulnerability patterns before attackers find them.
These eight vulnerability patterns exist in thousands of AI-built apps right now. Find out if yours is one of them. A free scan takes 60 seconds and checks for every attack vector covered in this guide.