TL;DR
Veracode's research found that 45% of AI-assisted code contains security flaws. Stanford confirmed that developers using AI assistants write less secure code. The most common issues are injection vulnerabilities, hardcoded secrets, and missing access controls. You don't need to stop using AI tools. You need to scan what they produce.
The numbers are in. Multiple independent research groups have now studied the security quality of AI-generated code, and the results should concern anyone shipping AI-assisted applications without a review process.
Veracode's State of Software Security report analyzed applications built with AI assistance and found that 45% contain security flaws. That's not 45% of apps having minor warnings. That's 45% with exploitable vulnerabilities.
And Veracode isn't alone in these findings.
What the Research Shows
Stanford University researchers conducted a controlled study where developers completed security-sensitive programming tasks. One group used AI coding assistants. The other wrote code manually. The results: developers using AI assistants produced significantly less secure code than those working without AI help.
What makes this finding worse is that the AI-assisted group was also more confident in their code's security. The tools gave them a false sense of safety.
NIST and OWASP have both flagged AI-generated code as a growing concern. OWASP added "Insecure Output Handling" and "Insecure Code Generation" to their risk frameworks specifically because AI tools are introducing predictable vulnerability patterns at scale.
The Five Most Common Flaw Categories
Not all AI-generated flaws are created equal. Research and scanning data show clear patterns in what goes wrong most often.
1. Injection Vulnerabilities (SQL Injection, XSS)
AI tools frequently generate code that concatenates user input directly into queries or HTML output. This is the textbook definition of an injection vulnerability.
```python
# AI often generates this pattern
def get_user(username):
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return db.execute(query)
```
The AI produces code that works. Enter a username, get back a user. But an attacker entering ' OR '1'='1 gets back every user in your database.
```python
# Parameterized query prevents injection
def get_user(username):
    query = "SELECT * FROM users WHERE username = :username"
    return db.execute(query, {"username": username})
```
2. Hardcoded Secrets and API Keys
This one shows up constantly. Ask an AI to integrate a third-party API, and it often places the API key directly in the source code.
AI tools routinely hardcode API keys, database credentials, and JWT secrets in frontend JavaScript or configuration files. These end up in public Git repositories and are trivially discoverable by automated scanners.
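The fix is mechanical: read secrets from the environment (or a secrets manager) instead of the source tree. A minimal sketch of the before-and-after; `PAYMENTS_API_KEY` and the fake key string are illustrative names, not tied to any real provider:

```python
import os

# What AI tools often generate: the key lives in source control forever
STRIPE_API_KEY = "sk_live_EXAMPLE_DO_NOT_USE"  # visible to anyone who can read the repo

# Safer: read the secret from the environment at startup and fail loudly if missing
def get_api_key():
    key = os.environ.get("PAYMENTS_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENTS_API_KEY is not set")
    return key
```

Failing loudly at startup is deliberate: a missing secret should stop deployment, not silently fall back to an empty string.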
3. Missing or Broken Access Control
AI generates endpoints that work but skips authorization checks. A user creation endpoint might not verify that the requester has admin privileges. A data retrieval endpoint might return records belonging to other users.
```javascript
// AI generates working CRUD but forgets ownership checks
app.get('/api/documents/:id', async (req, res) => {
  const doc = await Document.findById(req.params.id);
  res.json(doc); // Any logged-in user can access any document
});
```
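The missing piece is a single ownership comparison before the record is returned. A framework-agnostic sketch in Python; the `documents` store, `NotFound`, and `Forbidden` are stand-ins for whatever your framework provides:

```python
class NotFound(Exception):
    pass

class Forbidden(Exception):
    pass

def get_document(doc_id, current_user, documents):
    """Return a document only if the authenticated requester owns it."""
    doc = documents.get(doc_id)
    if doc is None:
        raise NotFound(doc_id)
    # The authorization check AI-generated handlers tend to omit:
    if doc["owner_id"] != current_user["id"]:
        raise Forbidden(doc_id)
    return doc
```

Raising `Forbidden` (or returning 404 to avoid leaking existence) is a design choice; the point is that the check exists at all.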
4. Insecure Default Configurations
AI tools default to the most permissive settings: CORS: *, verbose error messages in production, debug mode enabled, no rate limiting. These defaults make development easy, but they also make attacks easy.
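These defaults are easy to audit precisely because they're enumerable. A rough sketch of what such a configuration check might look like; the setting names and hardened values are illustrative, not tied to any specific framework:

```python
# Permissive defaults AI tools often emit
INSECURE_DEFAULTS = {
    "CORS_ORIGINS": "*",     # any site can call the API from a browser
    "DEBUG": True,           # stack traces leak to users
    "RATE_LIMIT": None,      # brute force runs unthrottled
}

# Hardened equivalents
HARDENED = {
    "CORS_ORIGINS": ["https://app.example.com"],  # explicit allow-list
    "DEBUG": False,                               # never in production
    "RATE_LIMIT": "100/minute",                   # especially on auth endpoints
}

def audit_config(cfg):
    """Return a list of findings for permissive settings."""
    findings = []
    if cfg.get("CORS_ORIGINS") == "*":
        findings.append("CORS allows any origin")
    if cfg.get("DEBUG"):
        findings.append("debug mode enabled")
    if not cfg.get("RATE_LIMIT"):
        findings.append("no rate limiting configured")
    return findings
```

Running a check like this in CI is a cheap way to keep permissive defaults from reaching production.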
5. Missing Input Validation
AI-generated form handlers and API endpoints frequently skip input validation entirely. No length checks, no type validation, no sanitization. This opens the door to XSS, oversized or malformed payloads, and data corruption.
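A minimal validation pass covers most of this. A sketch using only the standard library; the field names and length limits are made up for illustration:

```python
import html

def validate_comment(data):
    """Validate and sanitize a comment payload before it touches storage.

    Returns (cleaned_dict, []) on success or (None, errors) on failure.
    """
    errors = []
    name = data.get("name")
    body = data.get("body")
    if not isinstance(name, str) or not (1 <= len(name) <= 80):
        errors.append("name must be a string of 1-80 characters")
    if not isinstance(body, str) or not (1 <= len(body) <= 5000):
        errors.append("body must be a string of 1-5000 characters")
    if errors:
        return None, errors
    # Escape HTML so the stored value can't carry a script tag into a page
    return {"name": html.escape(name), "body": html.escape(body)}, []
```

Type checks, length bounds, and output escaping are three distinct defenses; AI-generated handlers routinely skip all three.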
Why AI Code Generators Produce Insecure Code
Understanding the root causes helps you know what to watch for.
They learned from insecure examples. AI models train on millions of public repositories. A huge percentage of that training data contains security mistakes. Stack Overflow answers with SQL injection. Tutorial code with hardcoded keys. The AI reproduces what it learned.
They optimize for "works," not "safe." When you prompt an AI to build a login form, it generates code that successfully logs users in. Rate limiting, brute force protection, secure session handling, and account lockout are secondary concerns that the AI doesn't add unless you specifically ask.
They lack security context. An AI doesn't know your threat model. It doesn't know whether your app handles financial data, health records, or public blog posts. It generates the same code regardless of the security requirements, because it treats every prompt as an isolated coding task.
They can't see the full picture. Even when generating a single function, the AI doesn't understand how that function fits into your application's security architecture. It can't check whether authentication middleware exists upstream or whether the database connection uses least-privilege permissions.
The confidence trap: Stanford's research found that developers using AI tools were more likely to believe their code was secure, even when it wasn't. AI-generated code looks clean, well-structured, and professional. That polish masks the security gaps underneath.
Patterns to Watch For in Cursor, Bolt, and Claude Output
If you're using AI coding tools (and you probably should be, for productivity), here are the specific patterns to check every time.
AI Code Security Review Checklist
- No API keys, tokens, or secrets in source files
- Database queries use parameterized statements, not string concatenation
- Every API endpoint has authentication and authorization checks
- User input is validated and sanitized before use
- CORS is restricted to specific origins, not wildcard
- Error messages don't leak stack traces or internal details in production
- File uploads validate type, size, and content
- Rate limiting exists on authentication and sensitive endpoints
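The secrets item in particular can be partially automated with a pattern scan over your source tree. A rough sketch; these regexes are illustrative and far from exhaustive (real secret scanners ship hundreds of rules):

```python
import re

# Illustrative patterns only; real scanners use much larger rule sets
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "possible AWS access key ID"),
    (re.compile(r"sk_live_[0-9a-zA-Z]{10,}"), "possible live Stripe key"),
    (re.compile(r"""(?i)(api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{8,}['"]"""),
     "hardcoded credential assignment"),
]

def scan_source(text):
    """Return (line_number, description) pairs for suspicious lines."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, desc in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append((lineno, desc))
    return hits
```

A scan like this produces false positives, which is fine: for secrets, reviewing a few extra lines beats missing one real key.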
What to Ask Your AI Tool
You can improve AI output by being explicit about security in your prompts. Instead of "build a login form," try "build a login form with rate limiting, bcrypt password hashing, and CSRF protection." The AI will include those features if you ask for them. It just won't add them on its own.
Prompt pattern: After generating any feature, follow up with "Review this code for security vulnerabilities, specifically checking for injection, broken access control, exposed secrets, and missing input validation." The AI is actually good at reviewing code for security. It's just not good at writing secure code unprompted.
The Fix: Scan What AI Produces
The research is clear. AI-generated code has a measurably higher flaw rate than manually written code. But the solution isn't to abandon AI tools. They're too useful for that.
The solution is to add a security check to your workflow. Treat AI output like code from a prolific but security-unaware junior developer. It writes fast, it writes a lot, and it needs review.
- Use Cursor, Bolt, Claude, or whatever tool fits your workflow. Let the AI handle the boilerplate and feature logic.
- Check authentication, authorization, data handling, and API integrations manually. These are where AI flaws concentrate.
- Run an automated scanner to catch the patterns that slip past manual review: missing headers, exposed endpoints, insecure configurations, leaked secrets.
- Address the findings and verify the fixes. Most AI-introduced flaws take minutes to fix once identified.
The 45% flaw rate in AI-assisted code is a process problem, not a talent problem. The code works. It just needs a security pass that the AI didn't provide. Automated scanning fills that gap.
Is AI-generated code less secure than human-written code?
Research suggests yes. Veracode's State of Software Security report found a 45% flaw rate in AI-assisted code. Stanford researchers found that developers using AI coding assistants produced less secure code than those coding without AI help. The core issue is that AI models optimize for functionality, not security.
What are the most common security flaws in AI-generated code?
The most frequent categories are injection vulnerabilities (SQL injection, XSS), hardcoded secrets and API keys, missing or broken access control checks, and insecure default configurations. These are all patterns the AI learned from public repositories that contain the same mistakes.
Should I stop using AI coding tools because of security risks?
No. AI coding tools are genuinely useful for productivity. The solution is to treat AI output the way you'd treat code from a junior developer: review it, test it, and scan it for security issues before deploying. Automated security scanning catches the most common AI-introduced flaws in minutes.
How do I scan AI-generated code for vulnerabilities?
Use an automated security scanner designed for web applications. CheckYourVibe scans your deployed app for the exact vulnerability patterns that AI tools commonly introduce, including exposed secrets, missing auth, injection flaws, and insecure headers. A free scan takes about 60 seconds.
45% of AI-assisted code contains security flaws. Find out if yours does. A free CheckYourVibe scan takes 60 seconds and checks for the exact vulnerability patterns that AI tools commonly introduce.