You have spare drives and want a simple storage pool
You have two spare NVMe drives sitting in your case. You want to pool them together for a media library or VM storage without wrestling with LVM commands that make you sweat. You hear about Stratis. You install it, run a command, and suddenly your drives are gone from lsblk or the pool won't start. Or maybe you just want to know if Stratis is the right tool before you wipe your disks. Stratis is Fedora's answer to simple local storage pooling. It wraps device-mapper and XFS in a clean CLI so you can create pools, filesystems, and snapshots without touching pvcreate or vgcreate.
What's actually happening
Stratis sits between your block devices and your mount points. Think of it as a storage manager that handles the plumbing. You hand Stratis whole disks. Stratis formats them, groups them into a pool, and lets you carve out filesystems on demand. Under the hood, it uses device-mapper for the block layer and XFS for the filesystem layer. It does not merge data across disks like a RAID array. It aggregates capacity. If one disk fails, the filesystems on that disk are lost. Stratis is not a redundancy tool. It is a capacity and management tool.
The stratis command talks to a daemon called stratisd over D-Bus. The daemon manages the device-mapper tables and XFS mounts. This separation means the CLI is just a client: every command you type becomes a request to the daemon, which does the actual work. Stratis uses thin provisioning. This means filesystems do not pre-allocate space. They start small and grow as you write data. You can create multiple filesystems whose virtual sizes sum to more than the pool size. This is overcommitment. It is safe as long as the total used space stays below the pool capacity. If the pool fills up, all writes fail. Stratis does not automatically reclaim space from deleted files. You manage capacity by adding devices.
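Overcommitment only bites when real usage approaches capacity, so it helps to know your ratio. A minimal sketch with made-up numbers; substitute the sizes reported by stratis pool list and stratis filesystem list:

```shell
# Illustrative overcommit check. The sizes below are invented;
# read real values from `stratis pool list` and `stratis filesystem list`.
pool_gib=1000                 # total pool capacity in GiB
fs_sizes="400 400 400"        # virtual sizes of three thin filesystems

provisioned=0
for s in $fs_sizes; do
  provisioned=$((provisioned + s))
done

echo "provisioned ${provisioned} GiB on a ${pool_gib} GiB pool"
if [ "$provisioned" -gt "$pool_gib" ]; then
  echo "overcommitted: safe only while real usage stays under ${pool_gib} GiB"
fi
```

Overcommitting is normal with thin provisioning; the number that matters is real used space at the pool level.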
Stratis uses XFS for the filesystem layer. XFS is a high-performance journaling filesystem. It handles large files and parallel writes well. Stratis does not expose XFS tuning options directly. You get the defaults. This keeps the interface simple. If you need specific XFS mount options, you can add them to fstab, but Stratis manages the underlying device-mapper mapping.
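For example, a Stratis entry in /etc/fstab with noatime looks like any other XFS mount; only the UUID (a placeholder here) and the stratisd dependency are Stratis-specific:

```
# /etc/fstab -- placeholder UUID; find the real one with `stratis filesystem list`
UUID=YOUR-UUID-HERE  /mnt/mydata  xfs  defaults,noatime,x-systemd.requires=stratisd.service  0 0
```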
Install the daemon before you touch disks. A missing service breaks every subsequent command.
Install and start the daemon
Here's how to install the client and daemon, then start the service.
sudo dnf install stratis-cli stratisd -y
# stratis-cli provides the command you type. stratisd is the background daemon.
# Install both to ensure the CLI can talk to the storage engine.
sudo systemctl enable --now stratisd
# Enable ensures the daemon starts on boot. --now starts it immediately.
# Stratis requires the daemon to be running before any pool operations.
Verify the daemon is running.
sudo systemctl status stratisd
# Check the active state. Look for "active (running)" in green.
# If the service fails, check journalctl -u stratisd for dependency errors.
Run systemctl status stratisd before you create a pool. The CLI will fail with a connection error if the daemon is down.
Create a storage pool
A pool groups one or more block devices. The devices are consumed entirely by Stratis. Do not use devices that already contain data you want to keep. Stratis wipes the device headers and takes full ownership.
Here's how to consume block devices into a Stratis pool.
sudo stratis pool create mypool /dev/nvme1n1 /dev/sdb
# Create a pool named mypool using two block devices.
# Stratis wipes the device headers and takes full ownership.
# Any existing data on these devices is destroyed immediately.
sudo stratis pool list
# List all pools managed by stratisd.
# The output shows total size, used space, and device count.
The stratis pool create command will fail with Error: Device /dev/sdb is already in use if the device has a mount point, an active partition table, or an existing LVM signature. Stratis requires clean devices. Wipe the device with wipefs -a /dev/sdb only if you are certain you no longer need anything on it. Do not wipe a device that is currently mounted.
stratis pool list is your dashboard. Run this often. It shows health and capacity at a glance.
Verify the pool exists before creating filesystems. An empty pool does nothing.
Create a filesystem and mount it
Stratis filesystems are thin-provisioned. They do not immediately use all pool space. You create a filesystem inside a pool, then mount it like any other filesystem.
Here's how to create a thin-provisioned filesystem and mount it persistently.
sudo stratis filesystem create mypool myfs
# Create a filesystem named myfs inside the mypool pool.
# Stratis uses thin provisioning. The filesystem starts small and grows as you write data.
# It does not pre-allocate the full pool size.
sudo mkdir -p /mnt/mydata
sudo mount /dev/stratis/mypool/myfs /mnt/mydata
# Mount the filesystem using the device-mapper path.
# The path format is /dev/stratis/<pool>/<filesystem>.
# This mount is temporary and disappears after a reboot.
To make the mount persistent, you need the UUID. Device names change. UUIDs stay put.
sudo stratis filesystem list mypool
# Find the UUID of the filesystem.
# The UUID is stable across reboots and safer than device paths.
echo "UUID=YOUR-UUID-HERE /mnt/mydata xfs defaults,x-systemd.requires=stratisd.service 0 0" | sudo tee -a /etc/fstab
# Add the entry to fstab using the UUID.
# The x-systemd.requires option ensures stratisd starts before the mount attempt.
# Without this dependency, the boot may hang waiting for the filesystem.
Edit /etc/fstab carefully. A bad entry can drop you to emergency mode. Always test with sudo mount -a before rebooting.
Use UUIDs in fstab. Device names like /dev/sdb change. UUIDs stay put.
Take a snapshot
Snapshots capture the current state of a filesystem for backup or testing. They are space-efficient. They only store changes from the original filesystem.
Here's how to take a point-in-time snapshot of a filesystem.
sudo stratis filesystem snapshot mypool myfs myfs-snap-$(date +%F)
# Create a snapshot named myfs-snap-YYYY-MM-DD from the current state of myfs.
# Snapshots are space-efficient. They only store changes from the original.
# Use snapshots before risky operations or to preserve a known-good state.
sudo stratis filesystem list mypool
# List filesystems and snapshots.
# Snapshots appear alongside regular filesystems in the output.
# Use a clear naming scheme, such as a date suffix, so you can tell them apart.
Stratis keeps an internal JSON report of all state. You can dump this with stratis report. This is useful for debugging or backup scripts. The report contains the pool configuration and filesystem metadata. It does not contain the data.
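Because the report is plain JSON, any JSON tool can post-process it. A sketch assuming a schema with a top-level "pools" array of objects carrying "name" fields; the sample document below is illustrative, not real stratisd output, so check stratis report on your own system for the actual layout:

```shell
# Extract pool names from a report-style JSON document.
# The sample is illustrative; on a live system you would pipe
# `sudo stratis report` into the parser instead.
sample='{"pools": [{"name": "mypool", "uuid": "0000"}]}'
names=$(printf '%s' "$sample" | python3 -c '
import json, sys
for pool in json.load(sys.stdin).get("pools", []):
    print(pool["name"])
')
echo "$names"
```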
Snapshot before you upgrade or modify configs. Restoring is easier than recovering from a mistake.
Expand a pool and monitor usage
You can add devices to a pool at any time. Stratis integrates the device immediately. No rebuild or migration is needed.
Here's how to add capacity to a pool and monitor usage levels.
sudo stratis pool add-data mypool /dev/sdc
# Add a new block device to the pool.
# Stratis integrates the device immediately. No rebuild or migration is needed.
# The pool capacity increases by the size of the new device.
sudo stratis pool list
# Check the updated total size.
# The used column shows space allocated to filesystems and metadata.
df -h /mnt/mydata
# Check filesystem usage.
# df reports usage relative to the filesystem, not the pool.
# A filesystem can show 90% used while the pool has plenty of free space.
Stratis pools have a hard limit. If the pool reaches 100% used, all writes to all filesystems in that pool fail. Thin provisioning means the pool usage can grow faster than individual filesystem reports. Monitor the pool level, not just the mount point.
Watch the pool usage, not just the filesystem. A full pool kills every filesystem inside it.
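That tip is easy to automate. A minimal sketch with illustrative hard-coded sizes; a real cron job would fill used_gib and total_gib from stratis pool list or stratis report instead:

```shell
# Alert when pool usage crosses a threshold. The sizes are invented;
# on a live system, read them from `stratis pool list`.
used_gib=850
total_gib=1000
threshold=80

pct=$((used_gib * 100 / total_gib))
if [ "$pct" -ge "$threshold" ]; then
  echo "WARN: pool at ${pct}% used - add a device with 'stratis pool add-data'"
fi
```

Run a check like this from cron or a systemd timer so a filling pool warns you before writes start failing.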
Destroy a filesystem or pool
You can remove filesystems and pools. Stratis protects you from accidental data loss by enforcing order. You must unmount before destroying. You must remove all filesystems before destroying the pool.
Here's how to remove filesystems and pools safely.
sudo umount /mnt/mydata
# Unmount the filesystem before destroying it.
# Stratis will refuse to destroy a mounted filesystem.
sudo stratis filesystem destroy mypool myfs
# Remove the filesystem from the pool.
# All data on the filesystem is lost.
# Snapshots taken from this filesystem are independent filesystems and are
# not destroyed with it. Remove them separately if you no longer need them.
sudo stratis pool destroy mypool
# Destroy the pool and release the block devices.
# You can only destroy a pool after all filesystems are removed.
# The devices return to their raw state and are available for other use.
Unmount everything before destruction. Stratis protects you from accidental data loss, but only if you follow the order.
Troubleshooting and pitfalls
If stratis pool list returns empty but you know you created a pool, the daemon may have started before the devices were available, or the devices were unplugged. Pool state lives in metadata on the member devices themselves, so stratisd must see every device to assemble the pool. If a device is missing, the pool fails to start or comes up in a degraded state. Run stratis blockdev list to see which devices the daemon has found.
Stratis works with SELinux. If you see access denied errors, do not disable SELinux. Check the audit log.
Here's how to diagnose common Stratis failures.
sudo journalctl -u stratisd -n 50
# View recent logs from the Stratis daemon.
# Look for errors related to device access or XFS mounts.
sudo ausearch -m avc -ts recent
# Search for SELinux denials in the audit log.
# Stratis operations should be allowed by default.
# If you see denials, file a bug or check for custom contexts.
getenforce
# Verify SELinux is enforcing.
# Stratis is designed to work with SELinux enabled.
# Disabling SELinux is not a valid fix for Stratis issues.
journalctl -xeu stratisd is your best friend. The x flag adds explanatory text. The e flag jumps to the end. Most errors show up here before they break the CLI.
Run journalctl -xeu stratisd first. The daemon log tells you exactly why a command failed.
When to use Stratis vs alternatives
Use Stratis when you want simple local storage pooling with snapshots and thin provisioning on Fedora. Use LVM when you need fine-grained control over physical volumes or compatibility with legacy scripts. Use ZFS when you require built-in data integrity checksums and RAID-Z redundancy. Use Btrfs when you need subvolume snapshots integrated with the filesystem and transparent compression. Stay with standard partitions when you only have one disk and no pooling requirements.