How to Create and Restore Btrfs Snapshots on Fedora

Fedora uses Btrfs by default, and you can take instant snapshots of subvolumes with the btrfs command and restore them by swapping subvolumes at boot.

You broke the system and need to undo it

You edited /etc/fstab and the system dropped to an emergency shell. Or you installed a third-party kernel module and the graphics stack collapsed. Or you ran a script that deleted critical configuration files. You need to revert the filesystem to a working state without reinstalling. Btrfs snapshots let you roll back to a known-good state in seconds.

What's actually happening

Btrfs uses copy-on-write semantics. When you create a snapshot, the filesystem records the current block pointers. It does not duplicate the data, so the snapshot initially consumes almost no extra space. As you modify files, Btrfs writes the new data to new locations on disk while the snapshot keeps referencing the original blocks. The snapshot preserves the old state while the live volume continues to change.

Fedora organizes the filesystem into subvolumes. On a default Fedora install the root filesystem lives in a subvolume named root and your home directory in home; other distributions use @ and @home, which is the naming this guide follows, so substitute your system's actual subvolume names throughout. Subvolumes are separate namespaces with independent mount points, and you snapshot them separately. The top-level Btrfs volume contains all subvolumes and is identified internally by subvolid=5. This is the root of the Btrfs filesystem, not the root of the operating system. You must mount subvolid=5 to access all subvolumes for management operations.

Config files in /etc/ are yours to modify; files in /usr/lib/ ship with the package and are replaced on update. Edit /etc/, never /usr/lib/. Snapshots capture /etc/ drift and let you recover from a misconfiguration instantly.

Inspect the layout

Here's how to identify the block device and verify the subvolume structure before making changes.

# Capture the block device backing the root filesystem
# On Btrfs, findmnt can report a source like /dev/vda3[/root];
# cut strips the bracketed subvolume suffix to leave the bare device
DEVICE=$(findmnt -n -o SOURCE / | cut -d'[' -f1)

# List all subvolumes to verify the layout
# btrfs subvolume list takes a mounted path, not a raw device
sudo btrfs subvolume list /

The output lists subvolumes with their IDs. The root subvolume usually has ID 256 and the home subvolume ID 257. The top-level volume is always ID 5, although it does not appear in the listing. Note the IDs: they let you mount by subvolid= when mounting by name fails.
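If you later need a subvolume's numeric ID for a subvolid= mount option, you can pull it out of the list output with awk. A minimal sketch using canned sample output, since the real command needs root; the IDs shown are illustrative:

```shell
# Sample 'btrfs subvolume list' output (illustrative values);
# in real use: list=$(sudo btrfs subvolume list /)
list='ID 256 gen 41234 top level 5 path @
ID 257 gen 41234 top level 5 path @home'

# Print the ID of the @home subvolume: it is the second field
# of the line whose last field (the path) matches
echo "$list" | awk '$NF == "@home" {print $2}'
```

The printed ID can then be used as mount -o subvolid=257 if mounting by name ever fails.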

Create a snapshot

Here's how to mount the top-level volume and create read-only snapshots of the root and home subvolumes.

# Create a mount point for the top-level Btrfs volume
# This directory acts as the gateway to all subvolumes
sudo mkdir -p /mnt/btrfs-root

# Mount the top-level Btrfs volume using the discovered device
# subvolid=5 accesses the internal root of the Btrfs filesystem
sudo mount -o subvolid=5 "$DEVICE" /mnt/btrfs-root

# Create a read-only snapshot of the root subvolume
# The -r flag prevents accidental writes to the backup
sudo btrfs subvolume snapshot -r /mnt/btrfs-root/@ /mnt/btrfs-root/@root-backup-$(date +%F)

# Create a read-only snapshot of the home subvolume
# Home data is separate and should be backed up independently
sudo btrfs subvolume snapshot -r /mnt/btrfs-root/@home /mnt/btrfs-root/@home-backup-$(date +%F)

The -r flag creates a read-only snapshot. Use it for backups: a read-only snapshot cannot drift over time, while a writable one can be modified and becomes a liability rather than a backup. Omit -r only when you need a writable clone for testing.
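To keep the two snapshots as close together in time as possible, the two commands above can be driven from one loop with a shared timestamp. A dry-run sketch that prints the commands instead of running them (remove the echo to execute; assumes the top-level volume is mounted at /mnt/btrfs-root, and note the names come out as @-backup-… rather than @root-backup-…):

```shell
# Dry run: print a snapshot command for each subvolume with one
# shared timestamp, so both backups carry the same date
top=/mnt/btrfs-root
stamp=$(date +%F)
for sub in @ @home; do
  echo sudo btrfs subvolume snapshot -r "$top/$sub" "$top/$sub-backup-$stamp"
done
```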

Verify the snapshot

Here's how to confirm the snapshot exists and check space usage.

# List subvolumes to verify the snapshot was created
sudo btrfs subvolume list /mnt/btrfs-root

# Confirm the snapshot is read-only
# 'btrfs property get' reports ro=true for a read-only subvolume
sudo btrfs property get /mnt/btrfs-root/@root-backup-$(date +%F) ro

# Check filesystem usage to see shared and unshared space
# This shows how much space snapshots are actually consuming
sudo btrfs filesystem usage /mnt/btrfs-root

The btrfs filesystem usage output breaks allocations down by type and profile, such as Data,single and Metadata,DUP. Snapshots share unmodified extents with their source, so shared blocks are counted once no matter how many snapshots reference them. The Device unallocated line shows space not yet assigned to any block group, which is the closest thing to free space on the device.

A writable snapshot is a liability, not a backup: btrfs property get on the snapshot path should report ro=true.
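Reading the usage report from a script is occasionally handy, for example to refuse to snapshot when space is low. A sketch that pulls the unallocated figure out of the report, using canned sample output here because the real command needs root; the sizes are illustrative:

```shell
# Sample 'btrfs filesystem usage' header (illustrative sizes);
# in real use: report=$(sudo btrfs filesystem usage /mnt/btrfs-root)
report='Overall:
    Device size:                 476.94GiB
    Device allocated:            120.02GiB
    Device unallocated:          356.92GiB'

# Print the unallocated size, stripping the label and padding
echo "$report" | awk -F: '/Device unallocated/ {gsub(/ /, "", $2); print $2}'
```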

Restore a snapshot

Here's how to swap the broken root subvolume with a snapshot to recover the system.

If the system still boots to a terminal, you can restore from the running session. If the system is unbootable, boot from a Fedora Live USB and mount the volume. The commands are the same.

# Rename the current root subvolume to preserve it for debugging
# This moves the broken state out of the way without deleting it
sudo mv /mnt/btrfs-root/@ /mnt/btrfs-root/@root-broken

# Create a writable snapshot from the backup to become the new root
# Omit -r so the restored system can write logs and updates normally
# Substitute the name of the snapshot you want to restore
sudo btrfs subvolume snapshot /mnt/btrfs-root/@root-backup-2026-04-18 /mnt/btrfs-root/@

# Unmount the volume to apply changes
# The bootloader will mount the new @ subvolume on reboot
sudo umount /mnt/btrfs-root

After confirming the restore works, mount the top-level volume at /mnt/btrfs-root again and delete the old broken subvolume.

# Delete the broken subvolume to reclaim space
# This removes the metadata and any unique blocks owned only by the broken state
sudo btrfs subvolume delete /mnt/btrfs-root/@root-broken

Reboot before you debug. Half the time the symptom is gone.
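Because the swap is destructive, it is worth previewing the exact commands before running them for real. A dry-run sketch that prints each step of the restore; the backup name is a placeholder for whichever snapshot you are restoring:

```shell
# Dry run: print the restore sequence for review (remove the echo
# prefixes to execute; assumes the top level is at /mnt/btrfs-root)
top=/mnt/btrfs-root
backup=@root-backup-2026-04-18   # placeholder: substitute your snapshot
echo sudo mv "$top/@" "$top/@root-broken"
echo sudo btrfs subvolume snapshot "$top/$backup" "$top/@"
echo sudo umount "$top"
```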

Common pitfalls and errors

You may not be able to rename or delete a subvolume while it is in use. If the rename fails on a running system with a Device or resource busy error, unmount whatever holds it or operate from a Live USB, where nothing on the installed filesystem is mounted.

If you see ERROR: not a btrfs filesystem, you pointed the command at the wrong device or mount point. Verify the path with findmnt.

Snapshots are point-in-time. If you snapshot @ and then @home, there is a short window in which the two are inconsistent. For most users this is fine; for databases or other write-heavy services, stop or quiesce them first. Always snapshot both @ and @home together, and restore them from the same point in time: rolling back only the root leaves your home directory out of sync with the system libraries.

Automate with Snapper

Snapper automates snapshot creation and cleanup. It integrates with dnf to create snapshots before and after package updates.

Here's how to install Snapper and configure it to manage snapshots for the root filesystem.

# Install the snapper package and the DNF plugin
# The plugin triggers snapshots automatically during package transactions
sudo dnf install snapper python3-dnf-plugin-snapper

# Create a configuration profile for the root filesystem
# This sets up the default retention rules and snapshot locations
sudo snapper -c root create-config /
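The create-config step writes its settings to /etc/snapper/configs/root. The retention knobs worth reviewing look like this; the values below are illustrative, not Fedora's defaults:

```shell
# Excerpt from /etc/snapper/configs/root (values illustrative)
TIMELINE_CREATE="yes"        # take hourly timeline snapshots
TIMELINE_LIMIT_HOURLY="5"    # keep at most 5 hourly snapshots
TIMELINE_LIMIT_DAILY="7"     # keep at most 7 daily snapshots
NUMBER_LIMIT="10"            # cap on numbered (dnf-triggered) snapshots
```

Snapper's cleanup timer prunes old snapshots according to these limits, so tune them before letting automation run for months.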

Here's how to create a manual snapshot and undo changes using Snapper.

# Create a labeled snapshot for a specific action
# The description helps you identify the state later
sudo snapper -c root create --description "before-kernel-module-install"

# List snapshots to find the numeric ID of the target state
sudo snapper -c root list

# Undo changes between the current state and snapshot 1
# This restores files from snapshot 1 to the current filesystem
sudo snapper -c root undochange 1..0

If a restore fails, check the journal. journalctl -xe reads better than plain journalctl: the -x flag adds explanatory text from the message catalog and the -e flag jumps to the end. Scope it to a single service with journalctl -xeu <unit>.

Run snapper list after every major change. Automation works best when you verify the retention policy.

When to use this vs alternatives

Use manual btrfs subvolume snapshot when you need a one-off backup before a risky change and want zero overhead. Use snapper when you want automated snapshots before and after every package update with automatic cleanup of old snapshots. Use timeshift when you prefer a graphical interface and standardized retention policies for desktop users. Use an external backup tool like borg or restic when you need off-site protection against drive failure or ransomware. Stay on manual snapshots if you only perform occasional upgrades and prefer full control over retention.

Where to go next