You lost the data because the backup script ran but the disk was full
You set up a PostgreSQL instance for your project six months ago. It works. You forgot about it. Today you need to restore a table, and the only backup is a manual dump you took during the initial install. Or worse, your backup script ran every night for months, but the disk filled up silently and the last successful dump is from three weeks ago. Fedora gives you the tools to automate this reliably, but the default setup requires a few explicit steps to keep the data safe and the schedule consistent.
What's actually happening
Database engines like PostgreSQL and MariaDB do not back themselves up by default. They write data to disk, but they do not create point-in-time snapshots you can restore later without a dedicated tool. You need a dump utility to export the schema and data into a portable format, and you need a scheduler to run that export at a predictable interval.
Think of the database as a live ledger. The dump tool is a clerk who photocopies the ledger at a specific moment. The systemd timer is the manager who tells the clerk when to work. If the manager is reliable, you have a chain of photocopies you can use to reconstruct the ledger if the original gets damaged. systemd timers are more robust than cron for this task. They integrate with the system state. If the machine sleeps, the timer catches up. If the service fails, the journal logs the error. The dump tool handles the serialization. It connects to the database, streams the contents, and writes them to a file.
Export PostgreSQL with pg_dump
PostgreSQL includes pg_dump, which exports a single database to a SQL script or a custom binary format. The SQL format is portable and readable. Compress the output immediately to save disk space.
Here's how to export a PostgreSQL database to a compressed file using the native dump tool.
# Connect as the postgres user and dump the target database
# Pipe the output to gzip to reduce storage usage immediately
# Use date formatting to create a unique filename for each run
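# Make sure the target directory exists first, for example with: sudo mkdir -p /var/backups/db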
pg_dump -U postgres mydb | gzip > /var/backups/db/mydb-$(date +%F).sql.gz
Compress the output. Uncompressed dumps waste disk space and slow down transfers.
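If you expect to restore individual tables later, the custom format mentioned above is worth knowing: it is compressed by default, and pg_restore can inspect or selectively restore its contents. A minimal sketch, with the filename as an example:
# Dump in the custom format, which is compressed by default
pg_dump -U postgres -Fc mydb > /var/backups/db/mydb-$(date +%F).dump
# List the contents of a custom-format dump without restoring anything
pg_restore --list /var/backups/db/mydb-$(date +%F).dump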
Export MariaDB with mysqldump
MariaDB requires credentials for the dump tool. Store them in a config file to avoid exposing passwords in the command line or script. The ~/.my.cnf file is the standard location for client credentials.
Here's how to configure secure credentials for the MariaDB client tools.
# ~/.my.cnf stores credentials securely for the current user
# The client section applies to mysqldump and mysql commands
# Permissions must be restrictive so other users cannot read the password
[client]
user = backup_user
password = secure_password_here
Here's how to run the dump using the config file for authentication.
# Read credentials from the config file instead of prompting
# The client tools read ~/.my.cnf automatically, but --defaults-file makes the path explicit
# Use $HOME instead of ~ here because the shell does not reliably expand ~ after --defaults-file=
mysqldump --defaults-file="$HOME/.my.cnf" mydb | gzip > /var/backups/db/mydb-$(date +%F).sql.gz
Lock down the config file permissions. Other users on the system can read world-readable files and steal your database password.
# Restrict access to the credentials file so only the owner can read it
chmod 600 ~/.my.cnf
Automate with a systemd timer
Manual commands fail when you forget to run them. A wrapper script handles the dump and manages retention. Place the script in /usr/local/bin so it is available system-wide.
Here's how to create a backup script, saved as /usr/local/bin/db-backup.sh, that exports the database and removes old files.
#!/bin/bash
# Exit immediately if any command fails, including failures inside pipelines,
# so a failed dump never silently produces a partial backup
set -euo pipefail
# Define the backup directory and retention period in days
BACKUP_DIR="/var/backups/db"
RETENTION_DAYS=14
# Ensure the backup directory exists and is readable only by its owner,
# since the dumps contain the full database contents
# The root user runs the service, so root owns the directory
mkdir -p "$BACKUP_DIR"
chmod 700 "$BACKUP_DIR"
# Run the database dump command
# Adjust the command based on your database engine
pg_dump -U postgres mydb | gzip > "$BACKUP_DIR/mydb-$(date +%F).sql.gz"
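# For MariaDB, the equivalent line would be the following (assuming credentials live in
# /root/.my.cnf, which the client reads automatically when the script runs as root):
# mysqldump mydb | gzip > "$BACKUP_DIR/mydb-$(date +%F).sql.gz"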
# Remove backups older than the retention period
# This keeps disk usage predictable and prevents filling the volume
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +$RETENTION_DAYS -delete
Make the script executable and test it manually before wiring it to systemd.
# Grant execute permission to the script
sudo chmod +x /usr/local/bin/db-backup.sh
# Run the script once to verify it completes without errors
sudo /usr/local/bin/db-backup.sh
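Confirm that a dated, non-empty dump file landed in the backup directory before moving on.
# List the backup directory and check the new file's size and timestamp
ls -lh /var/backups/db/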
Define a systemd service unit to execute the backup script. The service runs once and exits, which is the correct pattern for scheduled tasks. Edit configuration in /etc, never in /usr/lib.
Here's how to define the service unit for the backup script.
# /etc/systemd/system/db-backup.service
# Edit configuration in /etc, never in /usr/lib
# The oneshot type tells systemd the service runs a command and exits
[Unit]
Description=Automated database backup script
[Service]
Type=oneshot
# Run the script located in the local binary directory
ExecStart=/usr/local/bin/db-backup.sh
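# If PostgreSQL uses peer authentication for local connections (the Fedora default),
# run the unit as a matching system user, for example:
# User=postgres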
Create a timer unit to trigger the service on a schedule. The timer persists across reboots and catches up if the system was off.
Here's how to configure the timer to run the backup daily.
# /etc/systemd/system/db-backup.timer
# The persistent flag ensures the timer fires immediately if the system missed a schedule
[Unit]
Description=Run database backup daily
[Timer]
# Schedule the backup for 02:00 AM every day
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
# Attach the timer to the timers target so it starts on boot
WantedBy=timers.target
Reload the systemd manager configuration and enable the timer. Systemd ignores new files until you tell it to look.
# Reload systemd to recognize the new unit files
sudo systemctl daemon-reload
# Enable the timer to start on boot and start it now
sudo systemctl enable --now db-backup.timer
Verify the schedule and logs
Check the timer schedule and review the service logs to confirm the backup completed successfully. systemctl status shows recent log lines and unit state in one view, so check it after the first scheduled run.
Here's how to inspect the timer schedule and service logs.
# List the timer to see the next scheduled run and last trigger time
systemctl list-timers db-backup.timer
# Check the service status and recent log lines in one view
systemctl status db-backup.service
# Review the full log for the unit; the -u flag filters to the specific unit
journalctl -u db-backup.service
Run journalctl -xe if the service fails. The -x flag adds explanatory text and the -e flag jumps to the end of the log. Most sysadmins type journalctl -xeu <unit> from muscle memory.
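For this unit, that looks like the following.
# Show explanatory notes and jump to the end of the log for the backup service
journalctl -xeu db-backup.service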
Check the logs. A silent failure is worse than a loud error.
Common pitfalls and error messages
Backups fail in predictable ways. Knowing the error messages saves time when the restore is urgent.
The script fails with Permission denied if you forgot to make it executable.
Failed at step EXEC spawning /usr/local/bin/db-backup.sh: Permission denied
Fix this by running chmod +x on the script.
PostgreSQL rejects the connection if the socket path is wrong or the service is down.
pg_dump: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed
Check the database service status. Ensure pg_hba.conf allows the connection method you are using.
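On a default Fedora install the server runs as postgresql.service, so a status check looks like this.
# Confirm the PostgreSQL server is actually running
systemctl status postgresql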
The timer does not fire if the calendar syntax is invalid. systemd-analyze calendar validates the expression.
# Validate the calendar syntax used in the timer unit
systemd-analyze calendar "*-*-* 02:00:00"
The backup fills the disk if retention is too long or the database grows rapidly. Monitor disk usage with df -h. Add a check in the script to abort if free space drops below a threshold.
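A minimal sketch of such a guard, assuming the script already defines BACKUP_DIR and treating roughly 1 GB of free space as the floor:
# Abort early if the filesystem holding the backups is nearly full
# The 1048576 KB (1 GB) threshold is an example value, adjust it to your data size
FREE_KB=$(df --output=avail "$BACKUP_DIR" | tail -n 1)
if [ "$FREE_KB" -lt 1048576 ]; then
    echo "Less than 1 GB free in $BACKUP_DIR, aborting backup" >&2
    exit 1
fi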
Test the restore. A backup you cannot restore is just a file taking up space.
Test the restore
A backup file is useless until you verify it restores. Run a test restore into a temporary database. This catches corruption and permission issues before the real emergency.
Here's how to test a PostgreSQL restore to a temporary database.
# Create a temporary database for the test restore
createdb -U postgres mydb_test
# Restore the compressed dump to the test database
# The gunzip command decompresses the stream on the fly
gunzip < /var/backups/db/mydb-2024-05-20.sql.gz | psql -U postgres mydb_test
# List the tables and confirm they match the source database
psql -U postgres -d mydb_test -c "\dt"
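To spot-check actual data rather than table names, a row count on a table you know also works; mytable below is a placeholder for one of your tables.
# Count rows in a known table and compare against the source database
psql -U postgres -d mydb_test -c "SELECT count(*) FROM mytable;"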
Drop the test database after verification. Keep the environment clean.
# Remove the temporary database
dropdb -U postgres mydb_test
Decision matrix
Choose the backup strategy based on your recovery requirements and database size.
Use pg_dump and mysqldump when you need portable SQL dumps you can restore on a different server or version.
Use systemd timers when you want backups to integrate with the system state and catch up after sleep.
Use rsync or rclone when you need to push backups to a remote server or cloud storage immediately after creation.
Use pg_basebackup or xtrabackup when you require hot backups with minimal locking for large production databases.
Stick to manual dumps when the database is small and you only need occasional snapshots for development.
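For example, pushing the finished dumps to a remote host with rsync might look like the sketch below; the hostname and target path are placeholders.
# Mirror the local backup directory to a remote server after each run
rsync -az --delete /var/backups/db/ backup@backup.example.com:/srv/backups/db/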
Choose the tool that matches your recovery goal. Dump for portability, basebackup for speed.