Backup & Recovery

Protecting your Likwid data.

What to Back Up

| Component           | Location                 | Priority |
| ------------------- | ------------------------ | -------- |
| PostgreSQL database | Database server          | Critical |
| Uploaded files      | /uploads (if configured) | High     |
| Configuration       | .env files               | High     |
| SSL certificates    | Reverse proxy            | Medium   |
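
The non-database files can be captured with a simple archive. A minimal sketch, assuming the env file lives at ~/likwid/compose/.env.production and uploads at ~/likwid/uploads (adjust the paths to your deployment):

```shell
# Sketch: bundle configuration and uploaded files next to the database dumps.
# The paths below are assumptions; adjust them to your deployment layout.
ts=$(date +%Y%m%d_%H%M%S)
tar -czf ~/likwid/backups/likwid_files_${ts}.tar.gz \
  -C ~/likwid compose/.env.production uploads
```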

Database Backup

Likwid's recommended backup mechanism is a logical PostgreSQL dump (via pg_dump).

Store backups under the deploy user, next to the repo:

mkdir -p ~/likwid/backups

Retention guidance:

  • Keep at least 7 daily backups.
  • For production instances, also keep at least 4 weekly backups.
  • Keep at least one offsite copy.

Production compose (compose/production.yml)

The production database container is named likwid-prod-db.

ts=$(date +%Y%m%d_%H%M%S)
podman exec -t likwid-prod-db pg_dump -U likwid -F c -d likwid_prod > ~/likwid/backups/likwid_prod_${ts}.dump

Demo compose (compose/demo.yml)

The demo database container is named likwid-demo-db.

ts=$(date +%Y%m%d_%H%M%S)
podman exec -t likwid-demo-db pg_dump -U likwid_demo -F c -d likwid_demo > ~/likwid/backups/likwid_demo_${ts}.dump

Notes

  • The -F c format is recommended because it is compact and supports pg_restore --clean.
  • If you are using a shell that does not handle binary stdout redirection well, write the dump inside the container and use podman cp.
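
The podman cp workaround from the last bullet might look like this for production (a sketch; the temporary path inside the container is an assumption):

```shell
# Avoid piping binary data through the host shell: dump to a file
# inside the container, copy it out, then remove the temporary file.
ts=$(date +%Y%m%d_%H%M%S)
podman exec -t likwid-prod-db \
  pg_dump -U likwid -F c -d likwid_prod -f "/tmp/likwid_prod_${ts}.dump"
podman cp "likwid-prod-db:/tmp/likwid_prod_${ts}.dump" ~/likwid/backups/
podman exec likwid-prod-db rm "/tmp/likwid_prod_${ts}.dump"
```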

Recovery

Restore into a fresh environment (containerized)

This procedure is designed to work for a brand new server (or a clean slate on the same server).

  1. Ensure you have backups of:

    • compose/.env.production (or compose/.env.demo)
    • Reverse proxy config
    • The database dump file (*.dump)
  2. If you are restoring over an existing instance, stop the stack.

    Production:

    cd ~/likwid
    podman compose --env-file compose/.env.production -f compose/production.yml down
    

    Demo:

    cd ~/likwid
    podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml down
    
  3. If you need an empty database, remove the database volume (destructive).

    Production (removes the likwid_prod_data volume):

    cd ~/likwid
    podman compose --env-file compose/.env.production -f compose/production.yml down -v
    

    Demo (removes the likwid_demo_data volume):

    cd ~/likwid
    podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml down -v
    
  4. Start only the database container so PostgreSQL can initialize a fresh, empty database before you restore into it.

    Production:

    cd ~/likwid
    podman compose --env-file compose/.env.production -f compose/production.yml up -d postgres
    

    Demo:

    cd ~/likwid
    podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml up -d postgres
    
  5. Restore from the dump:

    • Production restore:

      podman exec -i likwid-prod-db pg_restore -U likwid -d likwid_prod --clean --if-exists < /path/to/likwid_prod_YYYYMMDD_HHMMSS.dump
      
    • Demo restore:

      podman exec -i likwid-demo-db pg_restore -U likwid_demo -d likwid_demo --clean --if-exists < /path/to/likwid_demo_YYYYMMDD_HHMMSS.dump
      
  6. Verify the restore:

    podman exec -t likwid-prod-db psql -U likwid -d likwid_prod -c "SELECT now();"
    
  7. Start the full stack again (backend + frontend):

    Production:

    cd ~/likwid
    podman compose --env-file compose/.env.production -f compose/production.yml up -d
    

    Demo:

    cd ~/likwid
    podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml up -d
    

Restore notes

  • pg_restore --clean --if-exists drops existing objects before recreating them.
  • If the dump was taken from an older Likwid version, start the matching app version first so the schema and migrations line up, then upgrade normally.

Point-in-Time Recovery

For critical installations, configure PostgreSQL WAL archiving:

# postgresql.conf
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/archive/%f'
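
Note that WAL archiving alone is not sufficient for point-in-time recovery: you also need periodic base backups (e.g. via pg_basebackup) to replay the archived WAL onto. On the recovery side, a matching restore_command might look like this (a sketch assuming the archive path above; the target time is a placeholder):

```
# postgresql.conf (recovery instance, PostgreSQL 12+)
restore_command = 'cp /var/lib/postgresql/archive/%f %p'
recovery_target_time = '2024-01-01 12:00:00'   # optional: stop at a point in time
```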

Demo Instance Reset

The demo instance can be reset to initial state:

# Windows
.\scripts\demo-reset.ps1

# Linux
./scripts/demo-reset.sh

This reset is destructive: it removes all demo data by recreating the demo database volume. On startup, the backend then runs core migrations and demo seed migrations to restore the initial demo dataset. This is not a backup mechanism.

Disaster Recovery Plan

Preparation

  1. Document backup procedures
  2. Test restores regularly (monthly)
  3. Keep offsite backup copies
  4. Document recovery steps

Recovery Steps

  1. Provision new server if needed
  2. Install Likwid dependencies
  3. Restore database from backup
  4. Restore configuration files
  5. Start services
  6. Verify functionality
  7. Update DNS if server changed

Recovery Time Objective (RTO)

Target: 4 hours for full recovery

Recovery Point Objective (RPO)

Target: 24 hours of data loss maximum (with daily backups)

Testing Backups

Monthly backup test procedure:

  1. Create test database
  2. Restore backup to test database
  3. Run verification queries
  4. Document results
  5. Delete test database