How to Set Up and Manage LVM (Logical Volume Manager) on Fedora

Set up LVM on Fedora by creating physical volumes, volume groups, and logical volumes using standard lvm commands.

You just installed a second hard drive in your Fedora machine.

The old way of carving up disks into fixed partitions feels rigid. You want a single storage pool that you can expand or shrink without wiping data. You hear about LVM, but the three-layer naming scheme and the command-line tools look intimidating. You need a clear path from a raw block device to a mounted, resizable filesystem.

What's actually happening

LVM stands for Logical Volume Manager. It sits between your physical disks and your filesystems. Instead of formatting a partition directly, you tell LVM to treat the disk as a raw storage block. LVM pools those blocks together into a volume group. You then carve logical volumes out of that pool. The filesystem lives on the logical volume.

Think of it like a water system. Physical drives are the reservoirs. The volume group is the main pipe network that combines water from multiple reservoirs. Logical volumes are the taps you turn on to get water. You can move water between taps, add new reservoirs to the network, or change the pipe diameter without shutting down the whole house. The filesystem just sees a steady stream of storage. It does not care where the bytes actually live on the disk.

This abstraction gives you three practical benefits. You can resize volumes while the system runs. You can span multiple physical drives into one logical space. You can take block-level snapshots for backups. The trade-off is an extra layer of indirection. If you skip a step or run a command in the wrong order, LVM will refuse to proceed. That refusal is a safety feature. Read the error message before forcing anything.
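The online-resize benefit, for example, comes down to a single command once the stack exists. A sketch, using the vg_data and lv_store names created later in this guide:

```shell
sudo lvextend -r -L +5G /dev/vg_data/lv_store # Grows the volume by 5GB; -r resizes the filesystem in the same step, while mounted
```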

Fedora uses LVM by default during installation. Your root filesystem and swap space usually live on logical volumes inside a volume group named fedora. The installer does this so you can resize /home or /var later without repartitioning the disk. The same tools apply to your custom drives.
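You can inspect that default layout with the same query tools covered later. The group name can differ between releases, so treat fedora here as an assumption:

```shell
sudo lvs -o lv_name,lv_size,vg_name # On a default install, typically shows root and swap inside the fedora group
```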

The fix or how-to

Start by identifying your new drive. Run lsblk to see the device name. Assume it is /dev/sdb. Do not run LVM commands on a drive that already contains data you want to keep. pvcreate detects existing filesystem or partition-table signatures and prompts before wiping them, but once you confirm, the old layout is gone for good.
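A typical pre-flight check looks like this:

```shell
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT # A drive with no partitions and no mount point is safe to hand to LVM
```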

Here is how to initialize the physical drive for LVM use.

sudo pvcreate /dev/sdb # Erases existing partition tables and writes LVM metadata to the first few sectors
# This marks the drive as a Physical Volume. LVM now recognizes it as raw storage.

Next, create a volume group. This is the storage pool that will hold your logical volumes.

sudo vgcreate vg_data /dev/sdb # Creates a volume group named vg_data using the initialized PV
# You can add more drives later with vgextend. The group acts as a single addressable space.
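As a sketch of that growth path, assuming a hypothetical second data drive at /dev/sdc:

```shell
sudo pvcreate /dev/sdc # Initializes the new drive as a Physical Volume
sudo vgextend vg_data /dev/sdc # Adds it to the pool; existing logical volumes are untouched
```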

Now carve out a logical volume from that pool. Specify the size and a name.

sudo lvcreate -L 10G -n lv_store vg_data # Allocates 10 gigabytes from vg_data and names it lv_store
# The -L flag sets the exact size. Use -l 100%FREE to consume the entire remaining pool.

LVM has created a block device at /dev/vg_data/lv_store. It is currently empty. Format it with a filesystem.

sudo mkfs.ext4 /dev/vg_data/lv_store # Writes an ext4 superblock and journal to the logical volume
# ext4 is the default for most Fedora setups. Use xfs if you need heavy parallel writes.

Create a mount point and attach the filesystem.

sudo mkdir -p /mnt/data # Creates the directory tree if it does not exist
sudo mount /dev/vg_data/lv_store /mnt/data # Attaches the filesystem to the directory tree

Make the mount persistent across reboots. Edit /etc/fstab and add a line for the volume. Use the UUID instead of the device path to avoid issues if drive letters shift.

sudo blkid /dev/vg_data/lv_store # Prints the UUID and filesystem type for the volume
# Copy the UUID value. You will paste it into fstab in the next step.

Open /etc/fstab in your editor and append the following line.

UUID=your-actual-uuid-here  /mnt/data  ext4  defaults  0  2
# The first field is the UUID. The second is the mount point. The third is the filesystem type.
# defaults gives standard mount options. The 0 means no dump backups. The 2 means fsck runs after root.

Test the fstab entry before rebooting. A typo here will drop you into emergency mode.

sudo mount -a # Attempts to mount all filesystems listed in fstab
# If this returns silently, the syntax is correct. If it prints an error, fix fstab immediately.
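util-linux can also lint the file without mounting anything, which catches problems mount -a misses:

```shell
sudo findmnt --verify # Parses /etc/fstab and reports syntax errors, unknown filesystems, and unreachable sources
# Add --verbose for a per-entry report instead of a summary.
```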

LVM also supports snapshots: copy-on-write volumes that capture the state of a logical volume at a specific moment. Snapshots are read-write by default, and the space reserved for them fills as blocks on the original volume change. They are useful for testing package upgrades or database migrations.

Here is how to create a snapshot of your mounted volume.

sudo lvcreate -L 2G -s -n lv_store_snap /dev/vg_data/lv_store # Creates a 2GB snapshot of lv_store
# The -s flag tells LVM this is a snapshot. The size only needs to hold changed blocks.

Mount the snapshot to a different directory to inspect it.

sudo mkdir -p /mnt/data_snap
sudo mount -o ro /dev/vg_data/lv_store_snap /mnt/data_snap # Mounts read-only so you cannot accidentally write to the snapshot
# Snapshots degrade if they fill up. Monitor usage with lvs -o +snap_percent.
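When you are finished with the snapshot, unmount it and remove it so it stops reserving copy-on-write space:

```shell
sudo umount /mnt/data_snap # Detaches the snapshot filesystem
sudo lvremove /dev/vg_data/lv_store_snap # Frees the reserved space; lvremove asks for confirmation first
```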


Verify it worked

Run the LVM query tools to confirm the stack is intact. Each tool maps to one layer of the architecture.

sudo pvs # Lists all Physical Volumes and their total size
sudo vgs # Lists all Volume Groups and free space remaining in the pool
sudo lvs # Lists all Logical Volumes, their size, and current state

Check the filesystem layer separately.

sudo tune2fs -l /dev/vg_data/lv_store | grep -E "Block count|Free blocks" # Shows raw block allocation
# Compare the free blocks here with df output. LVM and the filesystem track space independently.
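The filesystem side of that comparison comes from df:

```shell
df -h /mnt/data # Shows size, used, and available space as the mounted filesystem reports it
```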

Verify the device mapper nodes are active. LVM relies on the kernel device-mapper subsystem to present logical volumes as block devices.

sudo dmsetup ls # Lists all active device-mapper targets
# You should see vg_data-lv_store and vg_data-lv_store_snap in the output.

If any of these checks fail, run journalctl -b and read the actual error before guessing. LVM and device-mapper both log to the journal.

Common pitfalls and what the error looks like

Resizing is where most mistakes happen. Growing a volume is safe. Shrinking is destructive if done incorrectly. You must shrink the filesystem first, then shrink the logical volume. Reverse the order and you will lose data.

If you try to shrink an ext4 volume without unmounting it, you will see this:

e2fsck: The filesystem is mounted.
e2fsck: Cannot continue, aborting.
resize2fs: The filesystem is mounted.

Unmount the drive, run a filesystem check, shrink the filesystem, then shrink the LVM volume.

sudo umount /mnt/data # Detaches the filesystem so blocks can be safely moved
sudo e2fsck -f /dev/vg_data/lv_store # Forces a full consistency check before resizing
sudo resize2fs /dev/vg_data/lv_store 5G # Shrinks the ext4 filesystem to 5 gigabytes
sudo lvreduce -L 5G /dev/vg_data/lv_store # Shrinks the logical volume to match; lvreduce -r -L 5G runs the resize2fs step for you
sudo mount /dev/vg_data/lv_store /mnt/data # Reattaches the resized volume

XFS cannot be shrunk. If you format a logical volume with XFS, you can only grow it. Plan your initial allocation carefully. If you need to shrink XFS, back up the data, delete the volume, recreate it at the smaller size, and restore the data.
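Growing XFS online is straightforward, though. A sketch, assuming lv_store had been formatted with XFS and mounted at /mnt/data:

```shell
sudo lvextend -L +5G /dev/vg_data/lv_store # Grows the logical volume first
sudo xfs_growfs /mnt/data # xfs_growfs takes the mount point, not the device, and runs while mounted
```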

Another common trap is forgetting to update /etc/fstab after renaming a volume group or logical volume. If the fstab entry references the old device path, the system will hang at boot waiting for a device that no longer exists. You will see a prompt like Give root password for maintenance or drop into emergency mode. Press Ctrl+D to continue booting, fix the stale path in /etc/fstab, and reboot. Entries that use the UUID, as recommended above, survive renames because the filesystem UUID does not change.

LVM also stores metadata backups in /etc/lvm/backup/ and /etc/lvm/archive/. If you accidentally run vgremove or pvremove on the wrong drive, you can often restore the metadata from these files with vgcfgrestore. Never delete them manually; the lvm2 package manages them automatically.
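The restore itself goes through vgcfgrestore. A sketch; the archive file name below is hypothetical, so pick a real one from the listing:

```shell
sudo vgcfgrestore -l vg_data # Lists available metadata archives with timestamps and the command that created each
sudo vgcfgrestore -f /etc/lvm/archive/vg_data_00001-1234567890.vg vg_data # Hypothetical file name; substitute one from the listing
```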

When a drive fails, LVM marks the volume group as partial. You will see vgchange refuse to activate volumes. Run vgreduce --removemissing vg_data to discard the dead drive and bring the remaining volumes online. Restore from backup immediately. LVM is not a RAID replacement.
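The recovery sequence looks like this:

```shell
sudo vgreduce --removemissing vg_data # Drops the dead drive from the group metadata
sudo vgchange -ay vg_data # Activates the surviving logical volumes
# Any logical volume that had extents on the missing drive cannot be activated. Restore it from backup.
```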


When to use this vs alternatives

Use LVM when you need to span multiple physical drives into a single resizable pool without filesystem limitations. Use standard partitions when you want maximum simplicity and will never resize the disk layout. Use btrfs when you need built-in compression, transparent snapshots, and copy-on-write semantics at the filesystem level. Use ZFS when you require enterprise-grade data integrity, self-healing checksums, and a software replacement for hardware RAID. Stay on plain ext4 partitions if your workload is static and you prefer zero abstraction layers.

Where to go next