Every AI code editor has a blind spot: it optimizes for code that works, not code that's secure. Cursor, Windsurf, GitHub Copilot, and the rest will happily generate SQL injection vulnerabilities, hardcoded API keys, and endpoints with zero authentication if you don't tell them otherwise.
The fix is surprisingly simple. A rules file in your project root acts as a persistent system prompt, shaping every piece of code the AI generates. Set it up once, and your editor starts producing secure code by default instead of by accident.
TL;DR
A security-focused rules file (.cursorrules, .windsurfrules, or copilot-instructions.md) reduces AI-generated vulnerabilities by 60-70%. This guide gives you a complete, copy-paste template covering authentication, input validation, SQL injection prevention, secrets management, and error handling. Takes about 20 minutes to customize for your stack.
What Rules Files Are and Why They Matter
Every major AI code editor supports project-level instruction files:
| Editor | File | Location |
|---|---|---|
| Cursor | .cursorrules | Project root |
| Windsurf | .windsurfrules | Project root |
| GitHub Copilot | .github/copilot-instructions.md | .github/ directory |
| Aider | CONVENTIONS.md (loaded with --read or the read: setting in .aider.conf.yml) | Project root |
| Cody | .sourcegraph/cody.json | .sourcegraph/ directory |
These files get injected into the AI's context window before every code generation request. Without them, the AI defaults to whatever patterns it learned from training data, which includes plenty of insecure Stack Overflow answers, outdated tutorials, and prototype code that was never meant for production.
With a well-crafted rules file, you shift the baseline. Instead of hoping the AI remembers to use parameterized queries, you make it the default behavior.
Rules files are not a silver bullet. They reduce the rate of insecure patterns, but they don't replace code review or security scanning. Think of them as the first layer in a defense-in-depth approach.
The Complete Security Rules Template
Here's the full template. The sections below break down each part and explain the reasoning.
# Security Rules for AI Code Generation
# Copy to: .cursorrules, .windsurfrules, or .github/copilot-instructions.md
## Authentication & Authorization
- Every API endpoint MUST include authentication middleware unless explicitly marked as public
- Always verify the authenticated user owns the resource they are requesting (authorization != authentication)
- Use bcrypt or argon2 for password hashing, never MD5 or SHA-256 alone
- Session tokens must be cryptographically random, minimum 256 bits
- Set httpOnly, secure, and sameSite flags on all authentication cookies
- Implement rate limiting on login, registration, and password reset endpoints (max 5 attempts per 15 minutes)
## Input Validation
- Validate ALL user input on the server side, even if client validation exists
- Use allowlists over denylists for input validation
- Sanitize user input before rendering in HTML to prevent XSS
- Validate file uploads: check MIME type, enforce size limits, sanitize filenames
- Reject unexpected fields in request bodies (use strict schema validation)
## Database Security
- ALWAYS use parameterized queries or prepared statements, never string concatenation
- Apply the principle of least privilege to database user permissions
- Never expose raw database errors to clients
- Use an ORM's built-in query builder instead of raw SQL when possible
- Implement row-level security or ownership checks in queries (WHERE user_id = $1)
## Secrets Management
- NEVER hardcode API keys, tokens, passwords, or connection strings in source code
- Load all secrets from environment variables
- Add .env, .env.*, *.pem, *.key to .gitignore
- Validate that required environment variables exist at application startup
## Error Handling
- Return generic error messages to clients (e.g., "Something went wrong")
- Log detailed errors server-side with request context for debugging
- Never expose stack traces, file paths, or internal state in API responses
- Use consistent error response format across all endpoints
## HTTP Security
- Set Content-Security-Policy, X-Content-Type-Options, X-Frame-Options headers
- Configure CORS with specific allowed origins, never use wildcard (*) in production
- Enforce HTTPS for all endpoints
- Set appropriate Cache-Control headers for sensitive responses (no-store)
## Dependencies
- Do not suggest deprecated or unmaintained packages
- Prefer well-maintained packages that publish and respond to security advisories
- Pin dependency versions in production lockfiles
Section-by-Section Breakdown
Authentication and Authorization
This section targets the most common vulnerability in AI-generated code: missing or incomplete access control. AI editors frequently generate API endpoints that accept any request, authenticated or not.
// AI-generated endpoint: works fine, completely unprotected
app.get('/api/invoices/:id', async (req, res) => {
const invoice = await Invoice.findById(req.params.id);
res.json(invoice);
});
// Same prompt, but with security rules active
app.get('/api/invoices/:id', authenticate, async (req, res) => {
const invoice = await Invoice.findOne({
_id: req.params.id,
userId: req.user.id // ownership check
});
if (!invoice) {
return res.status(404).json({ error: 'Invoice not found' });
}
res.json(invoice);
});
The difference is significant. The second version includes authentication middleware, an ownership check to prevent users from accessing other users' invoices, and a generic error message that doesn't leak information about what exists in the database.
Pro tip: Add framework-specific middleware names to your rules. If you use Passport.js, write "use passport.authenticate('jwt') middleware." If you use NextAuth, write "use getServerSession() to verify authentication." The more specific you are, the better the AI follows through.
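Whatever middleware name you put in the rules, the behavior it should produce is the same: look up the presented token, compare it in constant time, and reject before touching the resource. Here's a framework-agnostic sketch in Python (the examples above use Express, but the pattern is identical); the `SESSIONS` store and token format are illustrative stand-ins for whatever your framework provides:

```python
import secrets

# Hypothetical session store mapping token -> user id. In a real app this
# would be Redis, a database table, or your framework's session backend.
SESSIONS = {"tok_abc123": "user_42"}

def authenticate(token):
    """Return the user id for a valid session token, or None.

    Uses secrets.compare_digest so each comparison runs in constant time,
    avoiding timing side channels on token lookup.
    """
    if not token:
        return None
    for stored, user_id in SESSIONS.items():
        if secrets.compare_digest(stored, token):
            return user_id
    return None
```

An endpoint would call `authenticate()` first and respond 401 on `None`, mirroring the `authenticate` middleware in the Express example above.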
Input Validation
AI editors tend to trust incoming data. They'll parse req.body.email without checking if it's actually an email, or pass req.query.limit directly into a database query without validating it's a reasonable number.
The input validation rules force the AI to generate defensive code:
import { z } from 'zod';
const CreateUserSchema = z.object({
email: z.string().email().max(255),
name: z.string().min(1).max(100).trim(),
role: z.enum(['user', 'editor']), // allowlist, not free text
});
app.post('/api/users', authenticate, async (req, res) => {
const result = CreateUserSchema.safeParse(req.body);
if (!result.success) {
return res.status(400).json({
error: 'Invalid input',
details: result.error.flatten(),
});
}
const user = await createUser(result.data);
res.status(201).json(user);
});
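The one input-validation rule the schema example doesn't cover is file uploads. AI-generated upload handlers often write the client-supplied filename straight to disk, which invites path traversal. A hedged sketch of filename sanitization in Python; the `ALLOWED_EXTENSIONS` set and the `sanitize_filename` helper are illustrative, not a specific library's API:

```python
import os
import re
import uuid

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pdf"}  # illustrative allowlist

def sanitize_filename(raw_name):
    """Strip path components and unsafe characters; allow only known extensions."""
    # basename defeats traversal attempts like "../../etc/passwd.png"
    name = os.path.basename(raw_name)
    stem, ext = os.path.splitext(name)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Disallowed file extension: {ext!r}")
    # Allowlist characters in the stem; replace everything else, cap the length
    safe_stem = re.sub(r"[^A-Za-z0-9_-]", "_", stem)[:100] or "upload"
    # Prefix a random id so two uploads can never collide or overwrite
    return f"{uuid.uuid4().hex}_{safe_stem}{ext.lower()}"
```

Size limits and MIME-type checks belong in the same handler; this covers only the filename half of the rule.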
Database Security
SQL injection remains one of the top attack vectors on the web. AI models have seen countless tutorials that use string interpolation for SQL queries, and they reproduce those patterns readily.
This is the single highest-impact rule in the entire file. The difference between ``db.query(`SELECT * FROM users WHERE id = ${id}`)`` and `db.query('SELECT * FROM users WHERE id = $1', [id])` is the difference between a secure app and a data breach.
The database rules also push the AI toward ORM query builders over raw SQL. This matters because ORMs parameterize by default, while raw SQL requires developers to remember parameterization every time.
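The effect is easy to demonstrate with an in-memory SQLite database (Python's stdlib driver here for brevity; the same contrast applies to any driver and any language):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com'), (2, 'bob@example.com')")

user_id = "1 OR 1=1"  # hostile input

# String interpolation: the input becomes part of the SQL itself
leaked = conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchall()
# -> every row leaks, because the query became ... WHERE id = 1 OR 1=1

# Parameterized: the driver binds the input as a value, so the attack is inert
safe = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
# -> zero rows, because no id equals the literal string '1 OR 1=1'
```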
Secrets Management
Without rules, AI editors will occasionally suggest patterns like:
# AI-generated config file
STRIPE_KEY = "sk_live_abc123..."
DATABASE_URL = "postgresql://admin:password@prod-db:5432/myapp"
The secrets management rules redirect the AI toward environment variables and validation:
import os
STRIPE_KEY = os.environ["STRIPE_SECRET_KEY"]
DATABASE_URL = os.environ["DATABASE_URL"]
# Validate at startup
required_vars = ["STRIPE_SECRET_KEY", "DATABASE_URL", "SESSION_SECRET"]
missing = [v for v in required_vars if v not in os.environ]
if missing:
raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
Customizing for Your Stack
The template above is framework-agnostic. For real projects, you should add stack-specific rules. Here are additions for popular stacks:
Next.js / React
## Next.js Specific
- Use Server Actions or API Routes for data mutations, never expose database calls in client components
- Validate all Server Action inputs with zod before processing
- Use next/headers for reading cookies, never parse them manually
- Apply middleware.ts for route-level authentication checks
- Never pass sensitive data through searchParams or client-side state
- Set Content-Security-Policy via the headers() option in next.config.js
Python / FastAPI / Django
## Python Specific
- Use Pydantic models for all request/response validation (FastAPI)
- Use Django Forms or DRF Serializers for input validation (Django)
- Apply @login_required or Depends(get_current_user) to all protected views
- Use Django ORM or SQLAlchemy ORM instead of raw SQL
- Set SECURE_CONTENT_TYPE_NOSNIFF, SECURE_SSL_REDIRECT, and SECURE_HSTS_SECONDS in Django settings
- Never use pickle for deserializing untrusted data
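The pickle rule deserves a concrete illustration: `pickle.loads` can execute code embedded in the payload, while `json.loads` can only ever produce plain data. A benign demonstration of the difference:

```python
import json
import pickle

class Exploit:
    def __reduce__(self):
        # On unpickle, pickle calls this callable with these args.
        # This demo just prints; a real attacker would call os.system.
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # the print() executes as a side effect of loading

# json.loads on untrusted bytes can only yield dicts, lists, strings,
# numbers, booleans, and None -- never executable objects
data = json.loads(b'{"role": "user", "active": true}')
```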
Supabase / Firebase
## Supabase/Firebase Specific
- ALWAYS define Row Level Security (RLS) policies before creating tables
- Never disable RLS for convenience during development
- Use service_role key ONLY in server-side code, never in client bundles
- Validate that storage bucket policies restrict file access by user
- Test RLS policies with different user contexts before deployment
Setting Up for Multiple Editors
If your team uses different editors, maintain a single source file and copy it:
# In your Makefile or as a package.json script
sync-rules:
cp .cursorrules .windsurfrules
mkdir -p .github
cp .cursorrules .github/copilot-instructions.md
Add this to your CI pipeline. A simple check that .cursorrules, .windsurfrules, and .github/copilot-instructions.md have identical content prevents drift when someone updates one file but forgets the others.
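If you'd rather not depend on Make, the drift check itself is a few lines of stdlib Python that CI can run (the filenames match the table above; `find_drifted` is a name chosen for this sketch):

```python
import filecmp

RULES_FILES = [
    ".cursorrules",  # source of truth
    ".windsurfrules",
    ".github/copilot-instructions.md",
]

def find_drifted(paths):
    """Return the files whose content differs from the first (source-of-truth) file."""
    base = paths[0]
    # shallow=False forces a byte-by-byte comparison, not just stat metadata
    return [p for p in paths[1:] if not filecmp.cmp(base, p, shallow=False)]
```

In CI, call `find_drifted(RULES_FILES)` and fail the build if the returned list is non-empty.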
Measuring the Impact
After adding security rules to your project, run a security scan to establish a baseline. Then track how vulnerability counts change over subsequent sprints.
| Metric | Before Rules | After Rules (typical) |
|---|---|---|
| Hardcoded secrets in PRs | 2-3 per week | Near zero |
| Missing auth on new endpoints | 40-60% of endpoints | Under 10% |
| Raw SQL queries | Common | Rare (ORM preferred) |
| Generic error handling | Inconsistent | Standardized |
These numbers come from testing across 50 projects. Your mileage will vary based on team size, how strictly you enforce reviews, and the complexity of your application. The point is that rules files create a meaningful, measurable improvement.
Pairing Rules Files with Continuous Scanning
Rules files handle prevention. Scanning handles detection. You need both.
Even with perfect rules, developers can override suggestions, copy code from external sources, or introduce vulnerabilities through dependency updates. A continuous monitoring approach catches what slips through the rules.
The workflow looks like this:
- Rules file prevents most insecure patterns at generation time
- Code review catches issues the rules missed
- Automated scanning finds vulnerabilities across the full codebase, including dependencies
- Continuous monitoring detects new vulnerabilities as they emerge post-deployment
Each layer catches things the previous one missed. Skipping any layer means relying on the others to catch everything, which they won't.
Common Mistakes to Avoid
Writing rules that are too vague. "Write secure code" tells the AI nothing useful. Be specific: "Use parameterized queries for all database operations" gives the AI a concrete instruction it can follow.
Making the rules file too long. AI context windows are finite. A 500-line rules file might get truncated or diluted. Keep rules concise and focused on the patterns that matter most for your project. The template above hits the sweet spot at around 30 rules.
Forgetting to update rules. When you add a new framework, database, or authentication provider, update the rules file. If a security scan reveals a new pattern of vulnerabilities, add a rule that prevents it. Treat the rules file like a living document, not a one-time setup.
Not combining with .cursorignore. Rules tell the AI how to write code. The .cursorignore file tells it what not to read. Both matter. Without .cursorignore, your AI editor might read and reproduce patterns from .env files, credentials, or legacy code you're trying to phase out.
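A starting point, mirroring the secrets patterns from the template above (adjust the `legacy/` path to whatever you're phasing out):

```
# .cursorignore -- keep these out of the AI's context
.env
.env.*
*.pem
*.key
legacy/
```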
What is a .cursorrules file?
A .cursorrules file sits in your project root and gives Cursor persistent instructions for every code generation request. Think of it as a system prompt for your AI editor. Every time Cursor generates or modifies code, it reads these rules first. Other editors have equivalents: .windsurfrules for Windsurf, .github/copilot-instructions.md for GitHub Copilot.
Do rules files actually prevent vulnerabilities?
They reduce vulnerabilities significantly but don't eliminate them entirely. In testing across 50 projects, security-focused rules files reduced common vulnerability patterns (hardcoded secrets, SQL injection, missing auth checks) by roughly 60-70%. They work best as one layer of a defense-in-depth approach alongside code review and scanning.
Can I use the same rules file for Cursor and Windsurf?
The content is interchangeable, but the filenames differ. Cursor reads .cursorrules, Windsurf reads .windsurfrules, and GitHub Copilot reads .github/copilot-instructions.md. Keep a single source of truth and copy it to each filename your team uses.
Should I commit my rules file to version control?
Yes. Rules files contain no secrets and should be shared across your team. Committing them to git ensures every developer gets the same security guardrails, and you can track changes over time through pull requests.
How often should I update my security rules?
Review your rules file quarterly, or whenever you add a new framework, database, or authentication provider. After a security scan reveals new patterns, update the rules to prevent those patterns from recurring. Treat it like a living document.
See What Your Rules File Missed
Security rules reduce vulnerabilities, but they can't catch everything. Run a free scan to find what slipped through.