Backups and recovery

Backing up control-plane data (for example MongoDB) and recovery expectations when self-hosting.

Written By Zoro

Self-hosted operators own backup scope for the control plane.

Application data that lives on Worker Nodes (app disks, database Services) is covered by the product's Backups feature (sidebar: Deployments and Operations) together with your node policies. For how the product scopes backups conceptually, see Backups under Core Concepts in the sidebar.

What to back up on the Compose host

| Asset | Why it matters |
| --- | --- |
| MongoDB volume (`mongo-data`) | Organisations, users, and metadata for the dashboard |
| `acme.json` | Let's Encrypt account and certificates (if you use file storage) |
| Traefik `dynamic/` | Generated routes, if you cannot rebuild them from config-generator |
| Environment secrets | Vault export or sealed secrets (not just the running container) |
| Redis (`redis-data`) | Mostly cache and queues; confirm whether your policy needs a point-in-time copy |

MongoDB backup pattern (operator)

Use any MongoDB backup tool your organisation standardises on (mongodump, Percona Backup for MongoDB, cloud disk snapshots of the volume, and so on). A minimal example while the Compose stack is up:

```shell
docker compose exec mongodb mongodump --out /tmp/dump
docker compose cp mongodb:/tmp/dump ./mongo-dump-$(date +%F)
```

Copy the dump off-host, and test a restore into a scratch instance at least quarterly.
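The dump-and-copy steps above can be wrapped in a small nightly script. This is a sketch under assumptions: the Compose service is named `mongodb` as above, `BACKUP_DIR` defaults to a local `./backups` directory, and the 14-copy retention window is a placeholder, not a recommended policy.

```shell
#!/bin/sh
# Nightly dump of the control-plane MongoDB, with simple local retention.
# The docker guard just makes the script a no-op on hosts without docker.

OUT_DIR="${BACKUP_DIR:-./backups}"
STAMP=$(date +%F)
mkdir -p "$OUT_DIR"

if command -v docker >/dev/null 2>&1; then
  # Stream a gzipped archive straight out of the container (no /tmp staging).
  docker compose exec -T mongodb mongodump --archive --gzip \
    > "$OUT_DIR/mongo-$STAMP.archive.gz"
fi

# Keep the newest 14 dumps (placeholder retention; align with your policy).
ls -1t "$OUT_DIR"/mongo-*.archive.gz 2>/dev/null | tail -n +15 | xargs -r rm --
```

Run it from the Compose project directory (compose resolves the project from the working directory), and ship `$OUT_DIR` off-host with your usual tooling.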

Recovery expectations

  • RPO and RTO are targets you define and own; dFlow does not replace enterprise backup software.
  • After restoring MongoDB, restart `payload-app` and verify login, Organisation list, and Worker Node records.
  • If Worker Nodes were deleted only in the database, you may need to re-onboard servers; keep infra Terraform / inventory separate from DB restores.
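Restoring the directory-style dump from the example above might look like the following sketch; the dated directory name is a placeholder, and `--drop` assumes you want the restored data to replace whatever is in the scratch instance.

```shell
# Copy a dated dump back into the container and restore it over existing data.
# (The guard makes the snippet a no-op where docker is unavailable.)
if command -v docker >/dev/null 2>&1; then
  docker compose cp ./mongo-dump-2024-01-01 mongodb:/tmp/restore
  docker compose exec mongodb mongorestore --drop /tmp/restore
  # Restart the dashboard so it picks up the restored state, then spot-check
  # login, the Organisation list, and Worker Node records.
  docker compose restart payload-app
fi
```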

Encryption and retention

Encrypt backups at rest, restrict access to restore roles, and align retention with legal hold requirements. Application teams may ask for guarantees; document what you actually snapshot.
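As one illustration of encrypting at rest before shipping off-host, the sketch below tars a dump directory and encrypts it with a symmetric key via OpenSSL. The `demo/` paths and key handling are stand-ins, not a recommendation for real key management (prefer your organisation's KMS, or an asymmetric tool such as age, where possible).

```shell
# Demo setup: stands in for a real dump directory and a protected key file.
mkdir -p demo/mongo-dump
echo 'placeholder' > demo/mongo-dump/users.bson
openssl rand -hex 32 > demo/backup.key

# Tar the dump and encrypt it; only the encrypted copy stays on disk.
tar -C demo -czf demo/mongo-dump.tar.gz mongo-dump
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in demo/mongo-dump.tar.gz -out demo/mongo-dump.tar.gz.enc \
  -pass file:demo/backup.key
rm demo/mongo-dump.tar.gz
```

A restore needs both the `.enc` file and the key, so store the key under your restore-role access controls, never alongside the backups themselves.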
