# Backup & Recovery

Protecting your Likwid data.

## What to Backup
| Component | Location | Priority |
|---|---|---|
| PostgreSQL database | Database server | Critical |
| Uploaded files | /uploads (if configured) | High |
| Configuration | .env files | High |
| SSL certificates | Reverse proxy | Medium |
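Alongside the database dump, the non-database items above can be captured with a plain archive. A minimal sketch, assuming the repo lives at `~/likwid` with env files under `compose/` and uploads (if configured) under `uploads/` (the `backup_files` helper and the `SRC_DIR`/`OUT_DIR` variables are illustrative):

```bash
#!/usr/bin/env sh
# Sketch: archive config + uploads (paths are assumptions; adjust to your layout).
SRC_DIR="${SRC_DIR:-$HOME/likwid}"
OUT_DIR="${OUT_DIR:-$HOME/likwid/backups}"

backup_files() {
  ts=$(date +%Y%m%d_%H%M%S)
  mkdir -p "$OUT_DIR"
  # Archive only the file-based pieces from the table above; skip missing ones.
  tar -czf "$OUT_DIR/likwid_files_${ts}.tar.gz" -C "$SRC_DIR" \
    $(cd "$SRC_DIR" && ls -d compose/.env.* uploads 2>/dev/null)
  echo "$OUT_DIR/likwid_files_${ts}.tar.gz"
}
```

SSL certificates usually live with the reverse proxy and are easiest to re-issue, so they are left out here.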
## Database Backup

Likwid's recommended backup mechanism is a logical PostgreSQL dump (via `pg_dump`).

### Where backups live (recommended)

Store backups under the deploy user, next to the repo:

```bash
mkdir -p ~/likwid/backups
```
Retention guidance:
- Keep at least 7 daily backups.
- For production instances, also keep at least 4 weekly backups.
- Keep at least one offsite copy.
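The retention guidance above can be enforced with a small pruning helper run after each backup (for example from cron). A minimal sketch for the daily tier; the `prune_old_dumps` name and the `BACKUP_DIR`/`KEEP` variables are illustrative:

```bash
#!/usr/bin/env sh
# Sketch: keep the newest $KEEP *.dump files in $BACKUP_DIR, delete the rest.
BACKUP_DIR="${BACKUP_DIR:-$HOME/likwid/backups}"
KEEP="${KEEP:-7}"

prune_old_dumps() {
  # List dumps newest-first, skip the first $KEEP, remove the remainder.
  # Assumes generated filenames without spaces, as produced below.
  ls -1t "$BACKUP_DIR"/*.dump 2>/dev/null | tail -n +$((KEEP + 1)) | while read -r f; do
    rm -- "$f"
  done
}
```

Weekly and offsite copies should be moved elsewhere before this runs, since anything left in the directory beyond the newest `$KEEP` files is deleted.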
### Backup now (containerized, recommended)

**Production compose (`compose/production.yml`)**

The production database container is named `likwid-prod-db`:

```bash
ts=$(date +%Y%m%d_%H%M%S)
podman exec -t likwid-prod-db pg_dump -U likwid -F c -d likwid_prod > ~/likwid/backups/likwid_prod_${ts}.dump
```

**Demo compose (`compose/demo.yml`)**

The demo database container is named `likwid-demo-db`:

```bash
ts=$(date +%Y%m%d_%H%M%S)
podman exec -t likwid-demo-db pg_dump -U likwid_demo -F c -d likwid_demo > ~/likwid/backups/likwid_demo_${ts}.dump
```
### Notes

- The `-F c` (custom) format is recommended because it is compact and supports `pg_restore --clean`.
- If you are using a shell that does not handle binary stdout redirection well, write the dump inside the container and use `podman cp`.
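Because the dump crosses a pipe and a shell redirection, it is worth sanity-checking the result before relying on it: `pg_dump` custom-format (`-F c`) archives begin with the 5-byte magic `PGDMP`, so a truncated or text-mangled file is easy to spot. A minimal sketch (the function name is illustrative):

```bash
#!/usr/bin/env sh
# Sketch: check that a file looks like a pg_dump custom-format archive.
# Custom-format dumps start with the magic bytes "PGDMP".
is_custom_dump() {
  [ "$(head -c 5 "$1" 2>/dev/null)" = "PGDMP" ]
}
```

This only verifies the header, not the full archive; `pg_restore --list <file>` is a stronger check when PostgreSQL client tools are available.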
## Recovery

### Restore into a fresh environment (containerized)

This procedure is designed to work on a brand-new server (or a clean slate on the same server).

1. Ensure you have backups of:

   - `compose/.env.production` (or `compose/.env.demo`)
   - The reverse proxy config
   - The database dump file (`*.dump`)

2. If you are restoring over an existing instance, stop the stack.

   Production:

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.production -f compose/production.yml down
   ```

   Demo:

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml down
   ```
3. If you need an empty database, remove the database volume (destructive).

   Production (removes the `likwid_prod_data` volume):

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.production -f compose/production.yml down -v
   ```

   Demo (removes the `likwid_demo_data` volume):

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml down -v
   ```
4. Start only the database container so Postgres recreates the database.

   Production:

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.production -f compose/production.yml up -d postgres
   ```

   Demo:

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml up -d postgres
   ```
5. Restore from the dump.

   Production restore:

   ```bash
   podman exec -i likwid-prod-db pg_restore -U likwid -d likwid_prod --clean --if-exists < /path/to/likwid_prod_YYYYMMDD_HHMMSS.dump
   ```

   Demo restore:

   ```bash
   podman exec -i likwid-demo-db pg_restore -U likwid_demo -d likwid_demo --clean --if-exists < /path/to/likwid_demo_YYYYMMDD_HHMMSS.dump
   ```
6. Verify the restore:

   ```bash
   podman exec -t likwid-prod-db psql -U likwid -d likwid_prod -c "SELECT now();"
   ```
7. Start the full stack again (backend + frontend).

   Production:

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.production -f compose/production.yml up -d
   ```

   Demo:

   ```bash
   cd ~/likwid
   podman compose --env-file compose/.env.demo -f compose/demo.yml -f compose/demo.vps.override.yml up -d
   ```
### Restore notes

- `pg_restore --clean --if-exists` drops existing objects before recreating them.
- If you are restoring between different versions, run the matching app version first, then upgrade normally.
## Point-in-Time Recovery

For critical installations, configure PostgreSQL WAL archiving:

```ini
# postgresql.conf
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/archive/%f'
```
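Archiving alone only covers the backup side. To actually recover to a point in time you also need a recent base backup (e.g. from `pg_basebackup`) plus restore-side settings; on PostgreSQL 12+ these go in `postgresql.conf`, with an empty `recovery.signal` file in the data directory to trigger recovery. A hedged sketch (the target timestamp is purely illustrative):

```ini
# postgresql.conf (restore side) — sketch, PostgreSQL 12+
restore_command = 'cp /var/lib/postgresql/archive/%f %p'
recovery_target_time = '2024-01-01 12:00:00'   # example target; set your own
```

With `recovery.signal` present, Postgres replays archived WAL up to the target time and then pauses or promotes according to `recovery_target_action`.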
## Demo Instance Reset

The demo instance can be reset to its initial state:

```powershell
# Windows
.\scripts\demo-reset.ps1
```

```bash
# Linux
./scripts/demo-reset.sh
```

This is destructive: it removes all demo data by recreating the demo database volume. On startup, the backend runs the core migrations and the demo seed migrations to restore the initial demo dataset. This is not a backup mechanism.
## Disaster Recovery Plan

### Preparation

- Document backup procedures
- Test restores regularly (monthly)
- Keep offsite backup copies
- Document recovery steps

### Recovery Steps

1. Provision a new server if needed
2. Install Likwid dependencies
3. Restore the database from backup
4. Restore configuration files
5. Start services
6. Verify functionality
7. Update DNS if the server changed
### Recovery Time Objective (RTO)

Target: 4 hours for full recovery.

### Recovery Point Objective (RPO)

Target: at most 24 hours of data loss (with daily backups).
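The RPO is only met while backups keep succeeding, so it helps to check that the newest dump is fresh enough and alert otherwise. A minimal sketch, assuming GNU `stat` on Linux (the `newest_dump_fresh` name and the variables are illustrative):

```bash
#!/usr/bin/env sh
# Sketch: fail if the newest *.dump in $BACKUP_DIR is older than $MAX_AGE_HOURS.
BACKUP_DIR="${BACKUP_DIR:-$HOME/likwid/backups}"
MAX_AGE_HOURS="${MAX_AGE_HOURS:-24}"

newest_dump_fresh() {
  newest=$(ls -1t "$BACKUP_DIR"/*.dump 2>/dev/null | head -n 1)
  [ -n "$newest" ] || return 1                      # no backups at all
  age=$(( $(date +%s) - $(stat -c %Y "$newest") ))  # seconds since last dump
  [ "$age" -le $(( MAX_AGE_HOURS * 3600 )) ]
}
```

Run from cron and wire the failure exit code into whatever alerting you already have.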
## Testing Backups

Monthly backup test procedure:

1. Create a test database
2. Restore the backup into the test database
3. Run verification queries
4. Document the results
5. Delete the test database