Fix XSS Vulnerabilities with AI Prompts

TL;DR

XSS lets attackers inject malicious scripts into your pages. The fix is contextual output encoding: escape differently for HTML, JavaScript, URLs, and CSS. Modern frameworks help, but escape hatches like dangerouslySetInnerHTML bypass their protections. These prompts help you find and fix XSS.

XSS Vulnerability Audit

Paste this prompt to have your AI scan your frontend for common XSS vulnerability patterns. You'll get a report covering dangerous innerHTML usage, unencoded URL parameters, eval calls, and missing CSP headers, with file locations and fix instructions.

AI Prompt

Find XSS Vulnerabilities

Scan my codebase for potential XSS vulnerabilities.

Framework: React/Vue/Svelte/Vanilla JS

Look for:

  1. dangerouslySetInnerHTML (React)
  2. v-html directive (Vue)
  3. {@html} tag (Svelte)
  4. innerHTML assignments
  5. document.write()
  6. eval() with user input
  7. URL parameters rendered without encoding
  8. User data in script tags

For each finding report:

  • File and line number
  • Type of XSS risk (stored, reflected, DOM)
  • User input source
  • How to fix

Also check:

  • Are CSP headers configured?
  • Is user content sanitized before storage?
  • Are third-party scripts loaded safely?
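To illustrate what the audit should flag, here is a minimal source-to-sink example: a reflected DOM XSS where a query parameter reaches innerHTML. (The payload and variable names are illustrative.)

```javascript
// Source: attacker-controlled query string (location.search in a browser).
const params = new URLSearchParams("?name=<img src=x onerror=alert(1)>");

// The raw value still contains live markup.
const unsafe = "Hello, " + params.get("name");

// Sink: assigning `unsafe` to element.innerHTML (VULNERABLE) would execute
// the onerror handler; element.textContent would render it as inert text.
```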

Fix React XSS

Copy this prompt to fix XSS vulnerabilities specific to React apps. Your AI will address dangerouslySetInnerHTML misuse, URL injection via href attributes, and event handler injection, plus create a reusable SafeHTML component with DOMPurify.

AI Prompt

Secure React Rendering

Fix XSS vulnerabilities in my React application.

Problem patterns to fix:

  1. dangerouslySetInnerHTML with user content: BAD: <div dangerouslySetInnerHTML={{ __html: userInput }} /> FIX: Use DOMPurify to sanitize, or render as plain text
  2. URL injection: BAD: <a href={userUrl}> FIX: Validate the URL protocol (reject javascript:)
  3. Event handler injection: BAD: building handlers from user-supplied strings FIX: Never use user input as event handlers

Solutions:

  • Install DOMPurify: npm install dompurify
  • Sanitize: DOMPurify.sanitize(userHtml)
  • For markdown: use marked + DOMPurify
  • Validate URLs: new URL(input).protocol check

Show me how to create a SafeHTML component that sanitizes before rendering.

React doesn't protect you everywhere: While React escapes by default, dangerouslySetInnerHTML, href attributes, and style objects can still be XSS vectors. Don't assume you're safe.
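The URL-injection check mentioned above can be sketched as a small helper. (isSafeHref and the protocol allowlist are illustrative choices, not a standard API; adjust the allowlist to your app's needs.)

```javascript
// Allowlist of URL protocols that are safe to place in an href.
const SAFE_PROTOCOLS = new Set(["http:", "https:", "mailto:"]);

// Returns true only when the input parses as a URL with a safe protocol.
// Relative URLs are resolved against a dummy base so they pass too.
function isSafeHref(input) {
  try {
    const url = new URL(input, "https://example.com/");
    return SAFE_PROTOCOLS.has(url.protocol);
  } catch {
    return false; // unparseable input is rejected
  }
}

isSafeHref("https://example.org/page"); // true
isSafeHref("javascript:alert(1)");      // false
isSafeHref("/relative/path");           // true (resolves against the base)
```

In a React component, render the link only when the check passes, and fall back to plain text otherwise.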

Sanitize Rich Text

Use this prompt to safely render user-provided HTML or markdown (blog posts, comments, bios). Your AI will set up DOMPurify with an allowlist of safe tags and attributes, plus a reusable component for sanitized rendering.

AI Prompt

Safe Rich Text Rendering

Implement safe rendering of user-provided HTML/markdown.

Use case: Blog posts, comments with formatting, user bios

Approach:

  1. Sanitize on output (not just input)
  2. Use allowlist of safe tags
  3. Strip dangerous attributes (onclick, onerror)
  4. Validate URLs in href/src

Using DOMPurify:

const clean = DOMPurify.sanitize(dirty, {
  ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
  ALLOWED_ATTR: ['href'],
  ALLOW_DATA_ATTR: false
});

For markdown:

  1. Parse markdown to HTML
  2. Sanitize the HTML output
  3. Then render

Create reusable component that accepts markdown/HTML and renders it safely with appropriate sanitization.
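The markdown steps above can be sketched as a parse-then-sanitize pipeline. This is a browser-side sketch assuming the marked and dompurify packages are installed (in Node, DOMPurify additionally needs a JSDOM window); renderMarkdownSafely is a hypothetical helper name.

```javascript
import DOMPurify from "dompurify";
import { marked } from "marked";

// Hypothetical helper: markdown in, sanitized HTML string out.
function renderMarkdownSafely(markdown) {
  const rawHtml = marked.parse(markdown); // 1. parse markdown to HTML
  return DOMPurify.sanitize(rawHtml, {    // 2. sanitize the HTML output
    ALLOWED_TAGS: ["b", "i", "em", "strong", "a", "p", "br"],
    ALLOWED_ATTR: ["href"],
    ALLOW_DATA_ATTR: false,
  });
  // 3. render the returned string (e.g. via a SafeHTML component)
}
```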

DOM-based XSS Prevention

Paste this prompt to find and fix DOM-based XSS in your JavaScript. Your AI will trace every path from user-controlled sources (location.hash, postMessage, etc.) to dangerous sinks (innerHTML, eval, document.write) and replace them with safe alternatives.

AI Prompt

Fix DOM XSS

Find and fix DOM-based XSS in my JavaScript code.

DOM XSS sources (user input):

  • location.hash
  • location.search
  • document.referrer
  • window.name
  • postMessage data

DOM XSS sinks (dangerous functions):

  • innerHTML
  • outerHTML
  • document.write
  • eval()
  • setTimeout/setInterval with strings
  • element.setAttribute for event handlers

Fix pattern:

// BAD
element.innerHTML = location.hash.slice(1);

// GOOD
element.textContent = location.hash.slice(1);
// or sanitize first if HTML rendering is genuinely needed

Review my code for DOM XSS patterns and show fixes. Use textContent instead of innerHTML where possible.

Pro tip: Add Content Security Policy headers as defense in depth. Even if an XSS bug exists, CSP can prevent inline scripts from executing. It's not a fix, but it limits the damage.

What's the difference between stored and reflected XSS?

Stored XSS is saved in your database and shown to other users (more dangerous). Reflected XSS comes from the URL and only affects users who click malicious links. Both need fixing.

Is encoding enough to prevent XSS?

Contextual encoding is the primary defense. But you need different encoding for HTML body, attributes, JavaScript, URLs, and CSS. One encoding doesn't fit all contexts.
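For the HTML-body and quoted-attribute contexts specifically, a minimal encoder looks like this (escapeHtml is an illustrative name; JavaScript-string, URL, and CSS contexts each need different escaping rules):

```javascript
// Minimal HTML-body encoder: escapes the five characters that can
// change parsing context inside a text node or quoted attribute.
// Ampersand goes first so already-escaped output isn't double-escaped.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

escapeHtml('<img src=x onerror="alert(1)">');
// → "&lt;img src=x onerror=&quot;alert(1)&quot;&gt;"
```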

Further Reading

Want to understand the vulnerability before fixing it? These guides explain what's happening and why.

Find XSS in Your Code

Scan your frontend for Cross-Site Scripting vulnerabilities.
