[{"data":1,"prerenderedAt":410},["ShallowReactive",2],{"blog-how-to/database-backups":3},{"id":4,"title":5,"body":6,"category":391,"date":392,"dateModified":392,"description":393,"draft":394,"extension":395,"faq":396,"featured":394,"headerVariant":397,"image":396,"keywords":396,"meta":398,"navigation":399,"ogDescription":400,"ogTitle":396,"path":401,"readTime":396,"schemaOrg":402,"schemaType":403,"seo":404,"sitemap":405,"stem":406,"tags":407,"twitterCard":408,"__hash__":409},"blog/blog/how-to/database-backups.md","How to Set Up Secure Database Backups",{"type":7,"value":8,"toc":375},"minimark",[9,13,17,21,27,30,43,48,51,54,58,100,121,134,147,169,196,212,241,245,278,282,287,290,294,297,301,308,312,315,337,356],[10,11],"category-badge",{"category":12},"How-To Guide",[14,15,5],"h1",{"id":16},"how-to-set-up-secure-database-backups",[18,19,20],"p",{},"Protect your data against disasters, ransomware, and human error",[22,23,24],"tldr",{},[18,25,26],{},"TL;DR (30 minutes):\nUse your managed database's automatic backups (Supabase, Neon, RDS all include them). Enable point-in-time recovery. Store additional backups in a separate cloud account/region. Encrypt everything. Test restores monthly. Keep 30 days of daily backups minimum.",[18,28,29],{},"Prerequisites:",[31,32,33,37,40],"ul",{},[34,35,36],"li",{},"Database admin access",[34,38,39],{},"Storage location for backups (S3, GCS, etc.)",[34,41,42],{},"Basic command line skills",[44,45,47],"h2",{"id":46},"why-this-matters","Why This Matters",[18,49,50],{},"Backups are your last line of defense against data loss. Without them, a ransomware attack, accidental deletion, or infrastructure failure could end your business. In 2023, the average cost of data loss for small businesses exceeded $100,000.",[18,52,53],{},"The 3-2-1 rule: Keep 3 copies of data, on 2 different media types, with 1 copy off-site. 
Most managed databases handle this automatically, but you should verify.",[44,55,57],{"id":56},"step-by-step-guide","Step-by-Step Guide",[59,60,62,67],"step",{"number":61},"1",[63,64,66],"h3",{"id":65},"understand-backup-types","Understand backup types",[31,68,69,76,82,88,94],{},[34,70,71,75],{},[72,73,74],"strong",{},"Full backup:"," Complete copy of all data. Largest, but simplest to restore.",[34,77,78,81],{},[72,79,80],{},"Incremental backup:"," Only changes since last backup. Smaller, but requires chain to restore.",[34,83,84,87],{},[72,85,86],{},"Point-in-time recovery (PITR):"," Continuous backup allowing restore to any moment. Best protection.",[34,89,90,93],{},[72,91,92],{},"Logical backup:"," SQL dump that can be read/edited. Portable but slower.",[34,95,96,99],{},[72,97,98],{},"Physical backup:"," Raw data files. Faster but less portable.",[59,101,103,107,110],{"number":102},"2",[63,104,106],{"id":105},"managed-database-backups-recommended","Managed database backups (recommended)",[18,108,109],{},"Most managed databases include automatic backups:",[111,112,117],"pre",{"className":113,"code":115,"language":116},[114],"language-text","# Supabase\n# - Daily automatic backups included (Pro plan)\n# - Point-in-time recovery up to 7 days\n# - Access via Dashboard → Database → Backups\n\n# Neon\n# - Automatic branching acts as backups\n# - Point-in-time restore to any LSN\n# neon branches create --name backup-2024-01-15\n\n# PlanetScale\n# - Automatic daily backups\n# - Restore via Dashboard or CLI\n# pscale backup create mydb --name manual-backup\n\n# AWS RDS\n# - Automated backups enabled by default\n# - Configure retention period (up to 35 days)\naws rds modify-db-instance \\\n  --db-instance-identifier mydb \\\n  --backup-retention-period 30\n","text",[118,119,115],"code",{"__ignoreMap":120},"",[59,122,124,128],{"number":123},"3",[63,125,127],{"id":126},"postgresql-manual-backups","PostgreSQL manual 
backups",[111,129,132],{"className":130,"code":131,"language":116},[114],"# Logical backup with pg_dump\npg_dump -h localhost -U postgres -d mydb \\\n  --format=custom \\\n  --file=backup_$(date +%Y%m%d_%H%M%S).dump\n\n# Compressed backup\npg_dump -h localhost -U postgres -d mydb | gzip > backup.sql.gz\n\n# Backup specific tables\npg_dump -h localhost -U postgres -d mydb \\\n  -t users -t orders \\\n  --file=partial_backup.dump\n\n# For large databases, use parallel dump\npg_dump -h localhost -U postgres -d mydb \\\n  --format=directory \\\n  --jobs=4 \\\n  --file=backup_dir/\n",[118,133,131],{"__ignoreMap":120},[59,135,137,141],{"number":136},"4",[63,138,140],{"id":139},"encrypt-backups-before-storage","Encrypt backups before storage",[111,142,145],{"className":143,"code":144,"language":116},[114],"# Encrypt with GPG\npg_dump mydb | gzip | gpg --symmetric --cipher-algo AES256 \\\n  --output backup_$(date +%Y%m%d).sql.gz.gpg\n\n# Or use openssl\npg_dump mydb | gzip | openssl enc -aes-256-cbc -salt \\\n  -pass file:/path/to/keyfile \\\n  -out backup.sql.gz.enc\n\n# Decrypt when needed\ngpg --decrypt backup.sql.gz.gpg | gunzip | psql mydb\n\n# AWS S3 with server-side encryption\naws s3 cp backup.dump s3://my-backups/ \\\n  --sse aws:kms \\\n  --sse-kms-key-id alias/my-backup-key\n",[118,146,144],{"__ignoreMap":120},[59,148,150,154,160,163],{"number":149},"5",[63,151,153],{"id":152},"set-up-automated-backup-script","Set up automated backup script",[111,155,158],{"className":156,"code":157,"language":116},[114],"#!/bin/bash\n# backup.sh - Automated PostgreSQL backup\n\nset -e\n\n# Configuration\nDB_HOST=\"localhost\"\nDB_NAME=\"mydb\"\nDB_USER=\"postgres\"\n# DB_PASSWORD must be set in the environment before running\nS3_BUCKET=\"s3://my-company-backups/database\"\nRETENTION_DAYS=30\n\n# Generate filename with timestamp\nTIMESTAMP=$(date +%Y%m%d_%H%M%S)\nBACKUP_FILE=\"backup_${DB_NAME}_${TIMESTAMP}.dump.gz.gpg\"\n\n# Create backup\necho \"Starting backup...\"\nPGPASSWORD=$DB_PASSWORD pg_dump \\\n  -h $DB_HOST -U $DB_USER -d 
$DB_NAME \\\n  --format=custom | gzip | gpg --symmetric \\\n  --batch --passphrase-file /etc/backup-key \\\n  --output /tmp/$BACKUP_FILE\n\n# Upload to S3\necho \"Uploading to S3...\"\naws s3 cp /tmp/$BACKUP_FILE $S3_BUCKET/$BACKUP_FILE\n\n# Clean up local file\nrm /tmp/$BACKUP_FILE\n\n# Delete old backups from S3\necho \"Cleaning old backups...\"\naws s3 ls $S3_BUCKET/ | while read -r line; do\n  BACKUP_DATE=$(echo $line | awk '{print $1}')\n  BACKUP_NAME=$(echo $line | awk '{print $4}')\n  if [[ $(date -d \"$BACKUP_DATE\" +%s) -lt $(date -d \"-$RETENTION_DAYS days\" +%s) ]]; then\n    aws s3 rm \"$S3_BUCKET/$BACKUP_NAME\"\n  fi\ndone\n\necho \"Backup complete: $BACKUP_FILE\"\n",[118,159,157],{"__ignoreMap":120},[18,161,162],{},"Schedule with cron:",[111,164,167],{"className":165,"code":166,"language":116},[114],"# Run daily at 2 AM\n0 2 * * * /path/to/backup.sh >> /var/log/backup.log 2>&1\n",[118,168,166],{"__ignoreMap":120},[59,170,172,176,190],{"number":171},"6",[63,173,175],{"id":174},"store-backups-securely","Store backups securely",[31,177,178,181,184,187],{},[34,179,180],{},"Use a separate cloud account or region from production",[34,182,183],{},"Enable versioning on your S3 bucket",[34,185,186],{},"Use Object Lock for ransomware protection",[34,188,189],{},"Restrict access with IAM policies",[111,191,194],{"className":192,"code":193,"language":116},[114],"# S3 bucket policy for backup security\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Deny\",\n      \"Principal\": \"*\",\n      \"Action\": \"s3:DeleteObject\",\n      \"Resource\": \"arn:aws:s3:::my-backups/*\",\n      \"Condition\": {\n        \"NumericLessThan\": {\n          \"s3:object-lock-remaining-retention-days\": \"30\"\n        }\n      }\n    }\n  ]\n}\n",[118,195,193],{"__ignoreMap":120},[59,197,199,203,206],{"number":198},"7",[63,200,202],{"id":201},"test-your-restores","Test your restores",[18,204,205],{},"A backup you've never tested is not a 
backup:",[111,207,210],{"className":208,"code":209,"language":116},[114],"# Monthly restore test procedure\n# 1. Download recent backup\naws s3 cp s3://my-backups/latest.dump.gz.gpg /tmp/\n\n# 2. Decrypt\ngpg --decrypt /tmp/latest.dump.gz.gpg | gunzip > /tmp/restore.dump\n\n# 3. Restore to test database\ncreatedb test_restore\npg_restore -d test_restore /tmp/restore.dump\n\n# 4. Verify data integrity\npsql test_restore -c \"SELECT COUNT(*) FROM users;\"\npsql test_restore -c \"SELECT MAX(created_at) FROM orders;\"\n\n# 5. Clean up\ndropdb test_restore\nrm /tmp/restore.dump /tmp/latest.dump.gz.gpg\n",[118,211,209],{"__ignoreMap":120},[213,214,215,218],"warning-box",{},[18,216,217],{},"Backup Security Checklist:",[31,219,220,223,226,229,232,235,238],{},[34,221,222],{},"Encrypt all backups before storing",[34,224,225],{},"Store encryption keys separately from backups",[34,227,228],{},"Use separate credentials for backup access",[34,230,231],{},"Enable MFA delete on S3 buckets",[34,233,234],{},"Monitor backup job success/failure",[34,236,237],{},"Document your recovery procedure",[34,239,240],{},"Test restores at least monthly",[44,242,244],{"id":243},"how-to-verify-it-worked","How to Verify It Worked",[246,247,248,254,260,266,272],"ol",{},[34,249,250,253],{},[72,251,252],{},"Check backup exists:"," Verify the file was created and uploaded",[34,255,256,259],{},[72,257,258],{},"Check backup size:"," Should be reasonable for your data volume",[34,261,262,265],{},[72,263,264],{},"Test decryption:"," Ensure you can decrypt without errors",[34,267,268,271],{},[72,269,270],{},"Test restore:"," Actually restore to a test database",[34,273,274,277],{},[72,275,276],{},"Verify data:"," Check row counts and recent data exists",[44,279,281],{"id":280},"common-errors-troubleshooting","Common Errors & Troubleshooting",[283,284,286],"h4",{"id":285},"backup-file-is-empty-or-very-small","Backup file is empty or very small",[18,288,289],{},"Check database connection and permissions. 
Verify the database name is correct.",[283,291,293],{"id":292},"error-could-not-connect-to-database","Error: \"could not connect to database\"",[18,295,296],{},"Check hostname, port, credentials, and that the database is accepting connections.",[283,298,300],{"id":299},"restore-fails-with-role-does-not-exist","Restore fails with \"role does not exist\"",[18,302,303,304,307],{},"Create the roles first, or use ",[118,305,306],{},"--no-owner"," flag to skip ownership restoration.",[283,309,311],{"id":310},"out-of-disk-space-during-backup","Out of disk space during backup",[18,313,314],{},"Stream directly to S3 instead of local disk, or use incremental backups.",[316,317,318,325,331],"faq-section",{},[319,320,322],"faq-item",{"question":321},"How often should I back up?",[18,323,324],{},"Daily minimum. For active databases, use continuous PITR. Consider your Recovery Point Objective (RPO): how much data can you afford to lose? If the answer is \"less than a day,\" you need more frequent backups.",[319,326,328],{"question":327},"How long should I keep backups?",[18,329,330],{},"30 days minimum for operational recovery. Keep monthly backups for 1 year for compliance. Some regulations require 7+ years; check your requirements.",[319,332,334],{"question":333},"Should I back up to the same cloud provider?",[18,335,336],{},"Use different regions at minimum. For true disaster recovery, back up to a different cloud provider. 
If AWS has a major outage, you want backups accessible from elsewhere.",[18,338,339,342,347,348,347,352],{},[72,340,341],{},"Related guides:",[343,344,346],"a",{"href":345},"/blog/how-to/database-encryption","Database Encryption"," ·\n",[343,349,351],{"href":350},"/blog/how-to/postgresql-roles","PostgreSQL Roles",[343,353,355],{"href":354},"/blog/how-to/database-audit-logs","Database Audit Logs",[357,358,359,365,370],"related-articles",{},[360,361],"related-card",{"description":362,"href":363,"title":364},"Step-by-step guide to hiding API keys in your web app. Use environment variables, .gitignore, and platform secrets to ke","/blog/how-to/hide-api-keys","How to Hide API Keys - Secure Your Secrets",[360,366],{"description":367,"href":368,"title":369},"Step-by-step guide to securing API keys in your vibe-coded app. Learn environment variables, .gitignore, and platform-sp","/blog/how-to/how-to-hide-api-keys","How to Hide Your API Keys (The Right Way)",[360,371],{"description":372,"href":373,"title":374},"Complete guide to HSTS setup. Configure Strict-Transport-Security header, understand max-age, includeSubDomains, preload","/blog/how-to/hsts-setup","How to Set Up HSTS (HTTP Strict Transport Security)",{"title":120,"searchDepth":376,"depth":376,"links":377},2,[378,379,389,390],{"id":46,"depth":376,"text":47},{"id":56,"depth":376,"text":57,"children":380},[381,383,384,385,386,387,388],{"id":65,"depth":382,"text":66},3,{"id":105,"depth":382,"text":106},{"id":126,"depth":382,"text":127},{"id":139,"depth":382,"text":140},{"id":152,"depth":382,"text":153},{"id":174,"depth":382,"text":175},{"id":201,"depth":382,"text":202},{"id":243,"depth":376,"text":244},{"id":280,"depth":376,"text":281},"how-to","2026-01-12","Step-by-step guide to implementing secure database backups. 
Automated backups, encryption, retention policies, and disaster recovery testing.",false,"md",null,"yellow",{},true,"Implement secure, automated database backups with encryption and testing.","/blog/how-to/database-backups",null,"HowTo",{"title":5,"description":393},{"loc":401},"blog/how-to/database-backups",[],"summary_large_image","EsGWUwr7M7lIWSVlnz6Rb9atMy-Kwxicelc1su4KkKU",1775843928702]