How to Set Up Secure Database Backups

Protect your data against disasters, ransomware, and human error

TL;DR

In about 30 minutes: use your managed database's automatic backups (Supabase, Neon, and RDS all include them), enable point-in-time recovery, store additional backups in a separate cloud account or region, encrypt everything, test restores monthly, and keep at least 30 days of daily backups.

Prerequisites:

  • Database admin access
  • Storage location for backups (S3, GCS, etc.)
  • Basic command line skills

Why This Matters

Backups are your last line of defense against data loss. Without them, a ransomware attack, accidental deletion, or infrastructure failure could end your business. Industry surveys consistently put the average cost of a data-loss incident for small businesses in the six figures.

The 3-2-1 rule: Keep 3 copies of data, on 2 different media types, with 1 copy off-site. Most managed databases handle this automatically, but you should verify.

Step-by-Step Guide

1. Understand backup types

  • Full backup: Complete copy of all data. Largest, but simplest to restore.
  • Incremental backup: Only changes since last backup. Smaller, but requires chain to restore.
  • Point-in-time recovery (PITR): Continuous backup allowing restore to any moment. Best protection.
  • Logical backup: SQL dump that can be read/edited. Portable but slower.
  • Physical backup: Raw data files. Faster but less portable.
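
To make the distinction concrete, here is how these types map to standard PostgreSQL tooling (a sketch; the database name and paths are placeholders):

```shell
# Logical backup: readable SQL, portable across versions and machines
pg_dump mydb --format=plain --file=logical.sql

# Physical backup: raw cluster files via pg_basebackup
# (covers the whole cluster, not a single database)
pg_basebackup -D /var/backups/physical --format=tar --gzip

# PITR: a physical base backup plus continuous WAL archiving,
# enabled in postgresql.conf:
#   archive_mode = on
#   archive_command = 'cp %p /var/backups/wal/%f'
```
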
2. Use your managed database's automatic backups

Most managed databases include automatic backups:

# Supabase
# - Daily automatic backups included (Pro plan)
# - Point-in-time recovery up to 7 days
# - Access via Dashboard → Database → Backups

# Neon
# - Automatic branching acts as backups
# - Point-in-time restore to any LSN
# neon branches create --name backup-2024-01-15

# PlanetScale
# - Automatic daily backups
# - Restore via Dashboard or CLI
# pscale backup create mydb --name manual-backup

# AWS RDS
# - Automated backups enabled by default
# - Configure retention period (up to 35 days)
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --backup-retention-period 30
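
After changing the retention period, it is worth confirming the setting took effect. A hedged example; "mydb" is a placeholder identifier:

```shell
# Confirm retention period and backup window on the instance
aws rds describe-db-instances \
  --db-instance-identifier mydb \
  --query 'DBInstances[0].[BackupRetentionPeriod,PreferredBackupWindow]' \
  --output text
# A retention period of 0 means automated backups are disabled
```
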
3. PostgreSQL manual backups

# Logical backup with pg_dump
pg_dump -h localhost -U postgres -d mydb \
  --format=custom \
  --file=backup_$(date +%Y%m%d_%H%M%S).dump

# Compressed backup
pg_dump -h localhost -U postgres -d mydb | gzip > backup.sql.gz

# Backup specific tables
pg_dump -h localhost -U postgres -d mydb \
  -t users -t orders \
  --file=partial_backup.dump

# For large databases, use parallel dump
pg_dump -h localhost -U postgres -d mydb \
  --format=directory \
  --jobs=4 \
  --file=backup_dir/
4. Encrypt backups before storage

# Encrypt with GPG
pg_dump mydb | gzip | gpg --symmetric --cipher-algo AES256 \
  --output backup_$(date +%Y%m%d).sql.gz.gpg

# Or use openssl
pg_dump mydb | gzip | openssl enc -aes-256-cbc -salt \
  -pass file:/path/to/keyfile \
  -out backup.sql.gz.enc

# Decrypt when needed
gpg --decrypt backup.sql.gz.gpg | gunzip | psql mydb

# AWS S3 with server-side encryption
aws s3 cp backup.dump s3://my-backups/ \
  --sse aws:kms \
  --sse-kms-key-id alias/my-backup-key
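
The openssl example above assumes a keyfile already exists. One hedged way to create and safeguard it (the path and secret name are placeholders):

```shell
# Create a strong random key and lock down permissions
openssl rand -base64 32 > /etc/backup-key
chmod 600 /etc/backup-key

# Keep a second copy of the key AWAY from the backups themselves,
# e.g. in a secrets manager (secret name is a placeholder)
aws secretsmanager create-secret \
  --name backup-encryption-key \
  --secret-string file:///etc/backup-key
```

If the key lives only on the backup server and that server is lost, the backups are unrecoverable.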
5. Set up automated backup script

#!/bin/bash
# backup.sh - Automated PostgreSQL backup

set -euo pipefail

# Configuration
DB_HOST="localhost"
DB_NAME="mydb"
DB_USER="postgres"
# DB_PASSWORD should come from the environment or a .pgpass file,
# never be hardcoded here
S3_BUCKET="s3://my-company-backups/database"
RETENTION_DAYS=30

# Generate filename with timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${DB_NAME}_${TIMESTAMP}.dump.gz.gpg"

# Create backup
echo "Starting backup..."
PGPASSWORD="${DB_PASSWORD:?set DB_PASSWORD in the environment}" pg_dump \
  -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" \
  --format=custom | gzip | gpg --symmetric \
  --batch --passphrase-file /etc/backup-key \
  --output "/tmp/$BACKUP_FILE"

# Upload to S3
echo "Uploading to S3..."
aws s3 cp "/tmp/$BACKUP_FILE" "$S3_BUCKET/$BACKUP_FILE"

# Clean up local file
rm "/tmp/$BACKUP_FILE"

# Delete old backups from S3
echo "Cleaning old backups..."
aws s3 ls "$S3_BUCKET/" | while read -r line; do
  BACKUP_DATE=$(echo "$line" | awk '{print $1}')
  BACKUP_NAME=$(echo "$line" | awk '{print $4}')
  [ -z "$BACKUP_NAME" ] && continue
  # GNU date syntax; on macOS/BSD use: date -j -f %Y-%m-%d "$BACKUP_DATE" +%s
  if [[ $(date -d "$BACKUP_DATE" +%s) -lt $(date -d "-$RETENTION_DAYS days" +%s) ]]; then
    aws s3 rm "$S3_BUCKET/$BACKUP_NAME"
  fi
done

echo "Backup complete: $BACKUP_FILE"

Schedule with cron:

# Run daily at 2 AM
0 2 * * * /path/to/backup.sh >> /var/log/backup.log 2>&1
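
cron swallows failures silently, so it helps to schedule a small wrapper that surfaces non-zero exits. A sketch; the alert URL is a placeholder for whatever monitoring endpoint you use:

```shell
#!/bin/bash
# backup-wrapper.sh - run the backup and report failures
if /path/to/backup.sh >> /var/log/backup.log 2>&1; then
  STATUS="success"
else
  STATUS="failure"
  # Ping a monitoring endpoint (placeholder URL); -m caps the wait at 10s
  curl -fsS -m 10 "https://alerts.example.com/backup-failed" || true
fi
echo "$(date -Is) backup $STATUS" >> /var/log/backup.log
[ "$STATUS" = "success" ]
```
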
6. Store backups securely

  • Use a separate cloud account or region from production
  • Enable versioning on your S3 bucket
  • Use Object Lock for ransomware protection
  • Restrict access with IAM policies
# S3 bucket policy for backup security
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::my-backups/*",
      "Condition": {
        "NumericLessThan": {
          "s3:object-lock-remaining-retention-days": "30"
        }
      }
    }
  ]
}
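
The policy above assumes Object Lock is already enabled, which can only happen when the bucket is created. A hedged setup sketch; the bucket name and region are placeholders:

```shell
# Object Lock can only be turned on at bucket creation time
aws s3api create-bucket --bucket my-backups \
  --create-bucket-configuration LocationConstraint=us-west-2 \
  --object-lock-enabled-for-bucket

# Enforce a 30-day default retention; in COMPLIANCE mode even the
# root account cannot delete or overwrite objects early
aws s3api put-object-lock-configuration --bucket my-backups \
  --object-lock-configuration \
  '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```

GOVERNANCE mode is a softer alternative that privileged IAM users can override; COMPLIANCE is the stronger ransomware protection.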
7. Test your restores

A backup you've never tested is not a backup:

# Monthly restore test procedure
# 1. Download recent backup
aws s3 cp s3://my-backups/latest.dump.gz.gpg /tmp/

# 2. Decrypt
gpg --decrypt /tmp/latest.dump.gz.gpg | gunzip > /tmp/restore.dump

# 3. Restore to test database
createdb test_restore
pg_restore -d test_restore /tmp/restore.dump

# 4. Verify data integrity
psql test_restore -c "SELECT COUNT(*) FROM users;"
psql test_restore -c "SELECT MAX(created_at) FROM orders;"

# 5. Clean up
dropdb test_restore
rm /tmp/restore.dump /tmp/latest.dump.gz.gpg

Backup Security Checklist:

  • Encrypt all backups before storing
  • Store encryption keys separately from backups
  • Use separate credentials for backup access
  • Enable MFA delete on S3 buckets
  • Monitor backup job success/failure
  • Document your recovery procedure
  • Test restores at least monthly

How to Verify It Worked

  1. Check backup exists: Verify the file was created and uploaded
  2. Check backup size: Should be reasonable for your data volume
  3. Test decryption: Ensure you can decrypt without errors
  4. Test restore: Actually restore to a test database
  5. Verify data: Check row counts and recent data exists

Common Errors & Troubleshooting

Backup file is empty or very small

Check database connection and permissions. Verify the database name is correct.

Error: "could not connect to database"

Check hostname, port, credentials, and that the database is accepting connections.

Restore fails with "role does not exist"

Create the roles first, or use the --no-owner flag to skip ownership restoration.

Out of disk space during backup

Stream directly to S3 instead of local disk, or use incremental backups.
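
For the disk-space case, pg_dump can be streamed straight to S3 via stdin (`aws s3 cp` accepts `-` as the source). A sketch with placeholder connection details and bucket name:

```shell
# No local temp file: dump -> compress -> upload in one pipeline
pg_dump -h localhost -U postgres -d mydb --format=custom \
  | gzip \
  | aws s3 cp - "s3://my-backups/backup_$(date +%Y%m%d).dump.gz"
```

Combine this with `set -o pipefail` in a script so a failed dump doesn't silently upload a truncated file.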

How often should I back up?

Daily minimum. For active databases, use continuous PITR. Consider your Recovery Point Objective (RPO): how much data can you afford to lose? If the answer is "less than a day," you need more frequent backups.

How long should I keep backups?

30 days minimum for operational recovery. Keep monthly backups for 1 year for compliance. Some regulations require 7+ years - check your requirements.

Should I back up to the same cloud provider?

Use different regions at minimum. For true disaster recovery, backup to a different cloud provider. If AWS has a major outage, you want backups accessible from elsewhere.

Related guides: Database Encryption · PostgreSQL Roles · Database Audit Logs
