How to Secure Image Uploads
Handle user images safely without exposing your app to attacks
TL;DR
Validate images using magic bytes, not just extensions. Check dimensions before processing to prevent decompression bombs. Strip EXIF metadata to protect user privacy. Reprocess images through sharp to generate clean files. Use cloud storage with a CDN for secure delivery.
Prerequisites
- Node.js 18+ installed
- A Next.js, Express, or similar Node.js project
- Cloud storage account (S3, R2, or similar) - recommended
- npm or yarn package manager
Why Image Security Matters
Images can contain more than meets the eye: embedded scripts, malicious metadata, polyglot files that are valid as multiple formats, and decompression bombs that crash your server. EXIF data can leak user location and device information.
Real Attack Examples:
- EXIF data leak: Photos contain GPS coordinates revealing user's home address
- Polyglot attack: A file that's both a valid JPEG and valid JavaScript
- Decompression bomb: A 42KB PNG that expands to 4.5GB when decompressed
- XSS in SVG: SVG files can contain embedded JavaScript
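To see why extension checks alone fail, here is a minimal, dependency-free sketch of what "magic bytes" means: the first bytes of a file identify its real format regardless of its name. The helper names are illustrative, not from any library:

```typescript
// Magic bytes: well-known signatures at the start of a file. A PHP script
// renamed to .jpg will not start with these bytes.
const JPEG_MAGIC = Buffer.from([0xff, 0xd8, 0xff]);
const PNG_MAGIC = Buffer.from([0x89, 0x50, 0x4e, 0x47]);

function looksLikeJpeg(buffer: Buffer): boolean {
  return buffer.subarray(0, 3).equals(JPEG_MAGIC);
}

function looksLikePng(buffer: Buffer): boolean {
  return buffer.subarray(0, 4).equals(PNG_MAGIC);
}
```

In practice, use a maintained library like file-type (installed below) rather than hand-rolled checks; it knows hundreds of signatures and their edge cases.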
Step-by-Step Guide
Install required libraries
Install sharp for image processing, file-type for magic-byte validation, and uuid for generating unique file names:
npm install sharp file-type uuid
Sharp is a high-performance image processing library that handles resizing, format conversion, and metadata stripping.
Create image validation utilities
Validate images thoroughly before processing:
// lib/image-validation.ts
import { fileTypeFromBuffer } from 'file-type';
import sharp from 'sharp';
const ALLOWED_IMAGE_TYPES = [
'image/jpeg',
'image/png',
'image/webp',
'image/gif',
] as const;
const MAX_FILE_SIZE = 5 * 1024 * 1024; // 5MB
const MAX_DIMENSION = 4096; // Prevent decompression bombs
const MIN_DIMENSION = 10;
interface ImageValidationResult {
valid: boolean;
error?: string;
mimeType?: string;
width?: number;
height?: number;
}
export async function validateImage(buffer: Buffer): Promise<ImageValidationResult> {
// Check file size first (fast check)
if (buffer.length > MAX_FILE_SIZE) {
return {
valid: false,
error: `Image exceeds maximum size of ${MAX_FILE_SIZE / 1024 / 1024}MB`,
};
}
if (buffer.length === 0) {
return { valid: false, error: 'File is empty' };
}
// Detect actual file type using magic bytes
const detectedType = await fileTypeFromBuffer(buffer);
if (!detectedType) {
return { valid: false, error: 'Could not determine file type' };
}
// Check if it's an allowed image type
if (!(ALLOWED_IMAGE_TYPES as readonly string[]).includes(detectedType.mime)) {
return {
valid: false,
error: `File type ${detectedType.mime} is not allowed. Allowed: JPEG, PNG, WebP, GIF`,
};
}
// Get image dimensions (also validates it's a real image)
try {
const metadata = await sharp(buffer).metadata();
if (!metadata.width || !metadata.height) {
return { valid: false, error: 'Could not read image dimensions' };
}
// Check for decompression bombs
if (metadata.width > MAX_DIMENSION || metadata.height > MAX_DIMENSION) {
return {
valid: false,
error: `Image dimensions exceed maximum of ${MAX_DIMENSION}x${MAX_DIMENSION}`,
};
}
if (metadata.width < MIN_DIMENSION || metadata.height < MIN_DIMENSION) {
return {
valid: false,
error: `Image dimensions must be at least ${MIN_DIMENSION}x${MIN_DIMENSION}`,
};
}
return {
valid: true,
mimeType: detectedType.mime,
width: metadata.width,
height: metadata.height,
};
} catch (error) {
return { valid: false, error: 'Invalid or corrupted image file' };
}
}
Create image processing functions
Process images to strip metadata and generate clean files:
// lib/image-processing.ts
import sharp from 'sharp';
interface ProcessedImage {
buffer: Buffer;
width: number;
height: number;
format: 'jpeg' | 'png' | 'webp';
}
interface ProcessingOptions {
maxWidth?: number;
maxHeight?: number;
quality?: number;
format?: 'jpeg' | 'png' | 'webp' | 'original';
}
export async function processImage(
buffer: Buffer,
options: ProcessingOptions = {}
): Promise<ProcessedImage> {
const {
maxWidth = 1920,
maxHeight = 1080,
quality = 85,
format = 'webp',
} = options;
// sharp strips EXIF, ICC profiles, and all other metadata by default on
// output — just don't call .withMetadata(), which would preserve it.
let sharpInstance = sharp(buffer)
.rotate() // Physically rotate pixels based on EXIF orientation first
.removeAlpha(); // Flattens transparency; drop this line if you need alpha
// Resize if larger than max dimensions
sharpInstance = sharpInstance.resize(maxWidth, maxHeight, {
fit: 'inside',
withoutEnlargement: true,
});
// Convert to desired format
let outputBuffer: Buffer;
let outputFormat: 'jpeg' | 'png' | 'webp';
switch (format) {
case 'jpeg':
outputBuffer = await sharpInstance.jpeg({ quality, mozjpeg: true }).toBuffer();
outputFormat = 'jpeg';
break;
case 'png':
outputBuffer = await sharpInstance.png({ quality, compressionLevel: 9 }).toBuffer();
outputFormat = 'png';
break;
case 'webp':
default:
outputBuffer = await sharpInstance.webp({ quality }).toBuffer();
outputFormat = 'webp';
break;
}
const metadata = await sharp(outputBuffer).metadata();
return {
buffer: outputBuffer,
width: metadata.width!,
height: metadata.height!,
format: outputFormat,
};
}
// Generate multiple sizes for responsive images
export async function generateImageVariants(buffer: Buffer): Promise<{
thumbnail: ProcessedImage;
medium: ProcessedImage;
large: ProcessedImage;
}> {
const [thumbnail, medium, large] = await Promise.all([
processImage(buffer, { maxWidth: 150, maxHeight: 150, quality: 80 }),
processImage(buffer, { maxWidth: 600, maxHeight: 600, quality: 85 }),
processImage(buffer, { maxWidth: 1200, maxHeight: 1200, quality: 90 }),
]);
return { thumbnail, medium, large };
}
// Profile-specific processing with square crop
export async function processProfileImage(buffer: Buffer): Promise<ProcessedImage> {
const outputBuffer = await sharp(buffer)
.rotate() // Auto-rotate from EXIF; sharp strips metadata by default on output
.resize(400, 400, {
fit: 'cover',
position: 'centre',
})
.webp({ quality: 85 })
.toBuffer();
return {
buffer: outputBuffer,
width: 400,
height: 400,
format: 'webp',
};
}
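The resize calls above rely on sharp's fit: 'inside' behavior. The underlying arithmetic is simple enough to sketch; this helper is illustrative, not part of sharp:

```typescript
// fit: 'inside' scales the image so both sides fit within the box while
// preserving aspect ratio; withoutEnlargement caps the scale factor at 1.
function fitInside(
  width: number,
  height: number,
  maxWidth: number,
  maxHeight: number
): { width: number; height: number } {
  const scale = Math.min(1, maxWidth / width, maxHeight / height);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```

sharp's actual rounding may differ by a pixel; the point is the aspect-ratio-preserving scale factor, which is why a 4000×3000 input fits a 1920×1080 box as 1440×1080 rather than being squashed.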
Implement the upload API route
Combine validation and processing in your API:
// app/api/images/upload/route.ts
import { validateImage } from '@/lib/image-validation';
import { generateImageVariants } from '@/lib/image-processing';
import { uploadToStorage } from '@/lib/storage';
import { db } from '@/lib/db'; // your database client (e.g. Prisma)
import { getServerSession } from 'next-auth';
import { v4 as uuidv4 } from 'uuid';
export async function POST(request: Request) {
const session = await getServerSession();
if (!session?.user) {
return Response.json({ error: 'Unauthorized' }, { status: 401 });
}
try {
const formData = await request.formData();
const file = formData.get('image') as File | null;
if (!file) {
return Response.json({ error: 'No image provided' }, { status: 400 });
}
const buffer = Buffer.from(await file.arrayBuffer());
// Validate the image
const validation = await validateImage(buffer);
if (!validation.valid) {
return Response.json({ error: validation.error }, { status: 400 });
}
// Process and generate variants
const variants = await generateImageVariants(buffer);
// Generate unique ID for this upload
const imageId = uuidv4();
const basePath = `images/${session.user.id}/${imageId}`;
// Upload all variants
const [thumbnailUrl, mediumUrl, largeUrl] = await Promise.all([
uploadToStorage(variants.thumbnail.buffer, `${basePath}/thumb.webp`, 'image/webp'),
uploadToStorage(variants.medium.buffer, `${basePath}/medium.webp`, 'image/webp'),
uploadToStorage(variants.large.buffer, `${basePath}/large.webp`, 'image/webp'),
]);
// Store metadata
const image = await db.image.create({
data: {
id: imageId,
userId: session.user.id,
originalName: file.name,
thumbnailUrl,
mediumUrl,
largeUrl,
width: variants.large.width,
height: variants.large.height,
},
});
return Response.json({
id: image.id,
urls: {
thumbnail: thumbnailUrl,
medium: mediumUrl,
large: largeUrl,
},
});
} catch (error) {
console.error('Image upload error:', error);
return Response.json({ error: 'Upload failed' }, { status: 500 });
}
}
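On the client side, the route above is called with a FormData body. This is a hedged sketch; the /api/images/upload path and the 'image' field name are assumptions that must match your handler:

```typescript
// Hypothetical client helper for the upload route above.
async function uploadImage(
  file: File | Blob
): Promise<{ id: string; urls: { thumbnail: string; medium: string; large: string } }> {
  const formData = new FormData();
  formData.append('image', file); // field name must match the server handler

  const res = await fetch('/api/images/upload', {
    method: 'POST',
    body: formData, // the multipart boundary header is set automatically
  });

  if (!res.ok) {
    const body = await res.json().catch(() => ({ error: 'Upload failed' }));
    throw new Error(body.error ?? 'Upload failed');
  }
  return res.json();
}
```

Do not set Content-Type manually when sending FormData; overriding it breaks the multipart boundary.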
Handle SVG safely (if needed)
SVGs require special handling because they can contain scripts:
// lib/svg-sanitization.ts
import DOMPurify from 'isomorphic-dompurify';
const SVG_ALLOWED_TAGS = [
'svg', 'circle', 'ellipse', 'line', 'path', 'polygon', 'polyline',
'rect', 'g', 'defs', 'use', 'symbol', 'text', 'tspan',
'linearGradient', 'radialGradient', 'stop', 'clipPath', 'mask',
];
const SVG_ALLOWED_ATTRS = [
'viewBox', 'width', 'height', 'fill', 'stroke', 'stroke-width',
'd', 'cx', 'cy', 'r', 'rx', 'ry', 'x', 'y', 'x1', 'y1', 'x2', 'y2',
'points', 'transform', 'opacity', 'id', 'class', 'clip-path', 'mask',
'offset', 'stop-color', 'stop-opacity', 'href', 'xlink:href',
];
export function sanitizeSvg(svgString: string): string | null {
// Quick rejection of obvious script content before sanitizing
if (/<script|javascript:|on\w+\s*=/i.test(svgString)) {
return null;
}
const clean = DOMPurify.sanitize(svgString, {
ALLOWED_TAGS: SVG_ALLOWED_TAGS,
ALLOWED_ATTR: SVG_ALLOWED_ATTRS,
});
// DOMPurify returns an empty string when nothing safe remains
return clean.length > 0 ? clean : null;
}
Recommendation: If possible, convert SVGs to raster images (PNG) on upload. This completely eliminates the XSS risk while preserving the visual content.
Security Checklist
- Validate image type using magic bytes, not file extension
- Check image dimensions before processing to prevent decompression bombs
- Strip ALL EXIF metadata to protect user privacy
- Reprocess images through sharp to generate clean files
- Generate multiple sizes for responsive delivery
- Convert SVGs to raster or sanitize thoroughly
- Use cloud storage with CDN for secure delivery
- Set proper Content-Type headers when serving images
- Implement rate limiting on upload endpoints
- Never serve user-uploaded images from your main domain (use CDN)
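Several checklist items concern how images are served, not just how they are stored. A sketch of the response headers worth setting; the helper is illustrative, and in practice you apply these via your CDN, reverse proxy, or framework:

```typescript
// Headers for serving user-uploaded images. X-Content-Type-Options: nosniff
// stops browsers from reinterpreting an image response as HTML or script.
function imageResponseHeaders(contentType: string): Record<string, string> {
  return {
    'Content-Type': contentType,
    'X-Content-Type-Options': 'nosniff',
    'Content-Disposition': 'inline',
    // Processed images never change, so cache them aggressively
    'Cache-Control': 'public, max-age=31536000, immutable',
  };
}
```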
How to Verify It Worked
Test your image upload security:
// Test script for image upload security
import sharp from 'sharp';
import { validateImage } from '@/lib/image-validation';
import { processImage } from '@/lib/image-processing';
import { sanitizeSvg } from '@/lib/svg-sanitization';
async function testImageSecurity() {
// Test 1: Valid JPEG
const validJpeg = await fetch('/test-images/valid.jpg').then(r => r.arrayBuffer());
const result1 = await validateImage(Buffer.from(validJpeg));
console.assert(result1.valid, 'Valid JPEG should pass');
// Test 2: PHP file renamed to .jpg
const phpContent = Buffer.from('<?php echo "hacked"; ?>');
const result2 = await validateImage(phpContent);
console.assert(!result2.valid, 'PHP file should be rejected');
// Test 3: Oversized dimensions (potential bomb)
// Create a test image with large dimensions metadata
const result3 = await validateImage(hugeImageBuffer);
console.assert(!result3.valid, 'Huge image should be rejected');
// Test 4: Verify EXIF is stripped
const imageWithExif = await fetch('/test-images/with-gps.jpg').then(r => r.arrayBuffer());
const processed = await processImage(Buffer.from(imageWithExif));
const metadata = await sharp(processed.buffer).metadata();
console.assert(!metadata.exif, 'EXIF should be stripped');
// Test 5: SVG with script tag
const maliciousSvg = '<svg><script>alert("xss")</script></svg>';
const sanitized = sanitizeSvg(maliciousSvg);
console.assert(sanitized === null || !sanitized.includes('script'), 'SVG script should be removed');
console.log('All tests passed!');
}
// Run in your test suite
testImageSecurity();
Pro Tip: After processing an image, use a metadata viewer such as exiftool to confirm everything was removed: exiftool processed-image.jpg
Common Errors and Troubleshooting
Error: sharp fails to process image
// Problem: Corrupted or unsupported image format
try {
const processed = await sharp(buffer).toBuffer();
} catch (error) {
// "Input file contains unsupported image format"
}
// Solution: Validate before processing
const validation = await validateImage(buffer);
if (!validation.valid) {
return { error: validation.error };
}
// Only then process the image
Error: Memory issues with large images
// Problem: Processing very large images exhausts memory
// Solution: Limit input size and use streaming
import sharp from 'sharp';
// Configure sharp to limit memory usage
sharp.cache({ memory: 50, files: 20, items: 100 });
sharp.concurrency(1); // Process one at a time
// Validate dimensions before processing
if (metadata.width > 4096 || metadata.height > 4096) {
return { error: 'Image too large to process' };
}
Error: EXIF rotation not applied
// Problem: Image appears rotated incorrectly
// Solution: call rotate() so the pixels themselves are rotated; sharp
// then strips the EXIF orientation tag (and all other metadata) by default
const processed = await sharp(buffer)
.rotate() // Auto-rotate based on EXIF orientation
.toBuffer();
Error: Animated GIFs become static
// Problem: sharp converts animated GIF to single frame
// Solution: opt in to animated processing with sharp's { animated: true }
import sharp from 'sharp';
const metadata = await sharp(buffer).metadata();
if (metadata.pages && metadata.pages > 1) {
// Keep the animation by reading all frames:
const animated = await sharp(buffer, { animated: true }).gif().toBuffer();
// Or reject animated GIFs entirely:
// return { error: 'Animated GIFs are not supported' };
}
Frequently Asked Questions
Why do I need to reprocess images? Can't I just strip EXIF?
Reprocessing does more than strip EXIF. It regenerates the image pixels, eliminating any malicious content embedded in the image structure (polyglot attacks, steganography). It also normalizes the format and optimizes file size.
Is it safe to accept SVG uploads?
SVGs are risky because they can contain JavaScript. If you must accept SVGs, either sanitize them thoroughly with a strict allowlist, or better yet, convert them to raster images (PNG) on upload.
What's a decompression bomb?
A decompression bomb is a small compressed file that expands to an enormous size when processed. For images, a 42KB PNG could decompress to 4.5GB of pixel data, crashing your server. Always check dimensions before processing.
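The arithmetic behind that figure is worth internalizing: decompressed size depends only on pixel dimensions, never on the compressed file size. A quick sketch:

```typescript
// Decompressed size = width × height × bytes per pixel (4 for RGBA).
// The size of the compressed file tells you nothing about this number.
function decompressedBytes(width: number, height: number, bytesPerPixel = 4): number {
  return width * height * bytesPerPixel;
}

// A 35,000 × 32,000 px image needs ~4.48 GB of memory to decode,
// no matter how small the PNG file itself is.
```

This is why the guide checks metadata.width and metadata.height, which sharp reads from the file header without decoding the pixel data.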
Should I serve images from my main domain?
No. Serve user-uploaded images from a separate domain or CDN. This prevents cookie theft if an attacker manages to get executable content served as an image (via MIME type confusion).
Why convert to WebP?
WebP offers better compression than JPEG/PNG (25-35% smaller) with comparable quality. It also provides a consistent output format, simplifying your storage and CDN configuration. All modern browsers support it.