Fix XSS Vulnerabilities with AI Prompts


TL;DR

XSS lets attackers inject malicious scripts into your pages. The fix is contextual output encoding: escape differently for HTML, JavaScript, URLs, and CSS. Modern frameworks help, but dangerouslySetInnerHTML and similar APIs bypass their protections. These prompts help you find and fix XSS.

XSS Vulnerability Audit

Find XSS Vulnerabilities

Scan my codebase for potential XSS vulnerabilities.

Framework: React/Vue/Svelte/Vanilla JS

Look for:

  1. dangerouslySetInnerHTML (React)
  2. v-html directive (Vue)
  3. {@html} tag (Svelte)
  4. innerHTML assignments
  5. document.write()
  6. eval() with user input
  7. URL parameters rendered without encoding
  8. User data in script tags

For each finding, report:

  • File and line number
  • Type of XSS risk (stored, reflected, DOM)
  • User input source
  • How to fix

Also check:

  • Are CSP headers configured?
  • Is user content sanitized before storage?
  • Are third-party scripts loaded safely?
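As a first pass over the sink list above, a few grep-style patterns catch most of these. A minimal Node sketch (the `findXssSinks` helper and its pattern list are illustrative, not exhaustive — a real audit still needs manual review):

```javascript
// Flag lines of source code that match common XSS sink patterns.
// Mirrors the audit checklist above; extend the list as needed.
const SINK_PATTERNS = [
  { name: 'dangerouslySetInnerHTML', re: /dangerouslySetInnerHTML/ },
  { name: 'v-html', re: /v-html\s*=/ },
  { name: '{@html}', re: /\{@html\s/ },
  { name: 'innerHTML assignment', re: /\.(inner|outer)HTML\s*=/ },
  { name: 'document.write', re: /document\.write\s*\(/ },
  { name: 'eval', re: /\beval\s*\(/ },
];

// Returns { line, name, text } for each suspicious line in a source string.
function findXssSinks(source) {
  const findings = [];
  source.split('\n').forEach((text, i) => {
    for (const { name, re } of SINK_PATTERNS) {
      if (re.test(text)) findings.push({ line: i + 1, name, text: text.trim() });
    }
  });
  return findings;
}
```

Regex scanning only finds the sinks; whether user input actually reaches them is what the prompt asks the AI to trace.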

Fix React XSS

Secure React Rendering

Fix XSS vulnerabilities in my React application.

Problem patterns to fix:

  1. dangerouslySetInnerHTML with user content. Fix: sanitize with DOMPurify or render as plain text
  2. URL injection via href/src. Fix: validate the URL protocol (reject javascript:)
  3. Event handler injection. Fix: never use user input as event handlers

Solutions:

  • Install DOMPurify: npm install dompurify
  • Sanitize: DOMPurify.sanitize(userHtml)
  • For markdown: use marked + DOMPurify
  • Validate URLs: new URL(input).protocol check

Show me how to create a SafeHTML component that sanitizes before rendering.
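The URL-validation fix can be sketched with the built-in URL constructor. The `safeHref` name and the `https://example.com` base are illustrative assumptions:

```javascript
// Only allow http(s) links in user-supplied hrefs. Anything that fails
// to parse, or uses another scheme (javascript:, data:), falls back to '#'.
function safeHref(input) {
  try {
    const url = new URL(input, 'https://example.com'); // base resolves relative paths
    return ['http:', 'https:'].includes(url.protocol) ? url.href : '#';
  } catch {
    return '#';
  }
}
```

Checking `url.protocol` after parsing beats regex matching on the raw string, which attackers can dodge with tricks like `java\tscript:`.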

  React doesn't protect you everywhere: While React escapes by default, dangerouslySetInnerHTML, href attributes, and style objects can still be XSS vectors. Don't assume you're safe.


Sanitize Rich Text



    Safe Rich Text Rendering

  Implement safe rendering of user-provided HTML/markdown.

Use case: Blog posts, comments with formatting, user bios

Approach:

  1. Sanitize on output (not just input)
  2. Use allowlist of safe tags
  3. Strip dangerous attributes (onclick, onerror)
  4. Validate URLs in href/src

Using DOMPurify: const clean = DOMPurify.sanitize(dirty, { ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'], ALLOWED_ATTR: ['href'], ALLOW_DATA_ATTR: false });

For markdown:

  1. Parse markdown to HTML
  2. Sanitize the HTML output
  3. Then render

Create reusable component that accepts markdown/HTML and renders it safely with appropriate sanitization.
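The allowlist idea behind steps 2 and 3 looks like this in plain JavaScript. This is only an illustration of the principle (the tag and attribute sets are assumptions matching the config above) — in practice, let DOMPurify enforce it:

```javascript
// Illustrative allowlist checks — not a full sanitizer.
const ALLOWED_TAGS = new Set(['b', 'i', 'em', 'strong', 'a', 'p', 'br']);
const ALLOWED_ATTRS = new Set(['href']);

function isAllowedTag(tag) {
  return ALLOWED_TAGS.has(tag.toLowerCase());
}

function filterAttributes(attrs) {
  // Drop anything not on the allowlist, and every on* event handler outright.
  return attrs.filter(
    (name) => !/^on/i.test(name) && ALLOWED_ATTRS.has(name.toLowerCase())
  );
}
```

The key design choice is the allowlist direction: unknown tags and attributes are rejected by default, so new attack vectors don't slip through a blocklist you forgot to update.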

DOM-based XSS Prevention



    Fix DOM XSS

  Find and fix DOM-based XSS in my JavaScript code.

DOM XSS sources (user input):

  • location.hash
  • location.search
  • document.referrer
  • window.name
  • postMessage data

DOM XSS sinks (dangerous functions):

  • innerHTML
  • outerHTML
  • document.write
  • eval()
  • setTimeout/setInterval with strings
  • element.setAttribute for event handlers

Fix pattern:

  // BAD
  element.innerHTML = location.hash.slice(1);

  // GOOD
  element.textContent = location.hash.slice(1); // or sanitize if HTML needed

Review my code for DOM XSS patterns and show fixes. Use textContent instead of innerHTML where possible.
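When you genuinely must build an HTML string from user text (e.g., server-side templating) rather than assign via textContent, escape it first. A minimal escaper for the five HTML-significant characters, as one sketch of the idea:

```javascript
// Minimal HTML entity escaper for text placed in element content or
// double-quoted attribute values. In the browser, prefer textContent.
function escapeHtml(input) {
  return String(input).replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[ch]));
}
```

Note this covers the HTML-body and quoted-attribute contexts only; JavaScript, URL, and CSS contexts each need their own encoding.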

  Pro tip: Add Content Security Policy headers as defense-in-depth. Even if XSS exists, CSP can prevent inline scripts from executing. It's not a fix, but limits damage.
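A starting-point policy that blocks inline scripts might be assembled like this. The `buildCsp` helper and the directive set are illustrative assumptions, not a universal recommendation — tune the sources to your app:

```javascript
// Assemble a Content-Security-Policy header value from a directives object.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(' ')}`)
    .join('; ');
}

const cspHeader = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'"], // no 'unsafe-inline': injected inline scripts won't run
  'object-src': ["'none'"],
  'base-uri': ["'self'"],
});
// Send as: res.setHeader('Content-Security-Policy', cspHeader)
```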



  Frequently Asked Questions

    What's the difference between stored and reflected XSS?
    Stored XSS is saved in your database and shown to other users (more dangerous). Reflected XSS comes from the URL and only affects users who click malicious links. Both need fixing.


    Is encoding enough to prevent XSS?
    Contextual encoding is the primary defense. But you need different encoding for HTML body, attributes, JavaScript, URLs, and CSS. One encoding doesn't fit all contexts.




  Related Prompts

    Sanitize User Input: Input validation patterns
    Add CSP Headers: Defense in depth



