How to Install Fedora on a RAID Array
You plugged in two drives, booted the Fedora installer, and expected a RAID button. There is no RAID button. The installer asks you to pick a disk, and picking one leaves the other empty. You want your data mirrored so a single drive failure does not wipe your system. The standard Anaconda installer does not create software RAID arrays automatically. You must build the array before the installer touches the disks.
This is not a limitation of Fedora. It is a design choice. Fedora trusts you to manage storage topology explicitly rather than hiding it behind a GUI checkbox. You create the RAID device in the live environment, then point Anaconda at that device. The result is a robust, redundant system that you control from the ground up.
Build the array first. Anaconda only fills what you give it.
What is actually happening
Anaconda is the installation program that writes files, configures the bootloader, and sets up the user environment. It expects a block device to write to. A software RAID array is a virtual block device assembled by the kernel's md driver and managed with mdadm. If the array does not exist when Anaconda runs, Anaconda cannot see it.
Think of the RAID array as a foundation and Anaconda as the construction crew. The crew cannot build a house until the foundation is poured. You must pour the foundation using mdadm in the terminal, then hand the foundation to Anaconda to build the system on top.
The boot process relies on the initramfs to assemble the array before the root filesystem mounts. If the initramfs lacks the mdadm configuration, the kernel will see the raw disks but not the RAID device. The system drops to an emergency shell. Fedora uses dracut to build the initramfs. The installer runs dracut automatically, but verifying the configuration prevents the emergency shell scenario.
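You can check an initramfs for md RAID support before trusting a reboot. A quick inspection with dracut's lsinitrd tool, run on the installed system; mdraid is the dracut module that wraps mdadm:
# List the dracut modules baked into the running kernel's initramfs.
# No output means the mdraid module is missing and the array will not assemble at boot.
lsinitrd | grep -i mdraid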
Configuration files under /etc/ belong to the administrator. Files under /usr/lib/ ship with the package and are overwritten on updates. Edit /etc/mdadm.conf. Never edit /usr/lib/mdadm.conf.
Create the partitions and RAID array
Boot from the Fedora installation media. Select "Troubleshooting" and then "Rescue a Fedora system", or press Ctrl+Alt+F2 to open a terminal from the installer menu. You need a root shell to manage disks.
You need partitions on the raw disks before mdadm can use them. The partition type must be set to Linux RAID so the kernel recognizes the device during boot.
# Open the first disk in parted. Replace sdb with your actual disk identifier.
sudo parted /dev/sdb
# Create a GPT label if the disk is empty. GPT is required for UEFI systems.
(parted) mklabel gpt
# Create a partition spanning the whole disk. Start at 1MiB for alignment.
(parted) mkpart primary 1MiB 100%
# Set the partition type to Linux RAID. This writes the correct GUID so the kernel recognizes the device.
(parted) set 1 raid on
(parted) quit
# Repeat the process for the second disk. Replace sdc with your second disk.
# The -s flag runs parted in script mode so it does not prompt.
sudo parted -s /dev/sdc mklabel gpt mkpart primary 1MiB 100% set 1 raid on
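Before creating the array, confirm that both partitions carry the raid flag. A quick check with parted:
# Print the partition tables. The Flags column should list 'raid' on partition 1.
sudo parted /dev/sdb print
sudo parted /dev/sdc print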
Create the mirror array with mdadm. The metadata version controls where the superblock lives. Version 1.2 places the superblock near the start of the partition, 4 KiB in, which can confuse older BIOS bootloaders that expect a filesystem at the front of the device. Version 1.0 places the superblock at the end, so the start of the device looks like a plain filesystem. That end placement is required if you put /boot on the RAID array and use a legacy BIOS. UEFI systems generally handle 1.2 fine. Choose the metadata version based on your bootloader, not your preference.
# Create a RAID 1 mirror using the two partitions.
# --metadata=1.0 puts the superblock at the end of the device, so a BIOS bootloader can read /boot as a plain filesystem.
# --homehost=any allows the array to assemble on different machines without warnings.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 --homehost=any /dev/sdb1 /dev/sdc1
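To confirm the superblock landed where you intended, examine a member partition. This is optional verification; mdadm --examine reports the metadata version and superblock offset:
# Inspect the RAID superblock on one member.
# 'Version : 1.0' confirms the metadata choice; with 1.0 the
# 'Super Offset' sits near the end of the device.
sudo mdadm --examine /dev/sdb1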
Verify the array is syncing before proceeding. The kernel reports RAID status in /proc/mdstat. If the array is not syncing, the installation will fail or the system will be unstable.
# Check the kernel RAID status. Look for 'sync' or 'resync' progress.
# A healthy mirror shows [UU]; a fresh array also shows resync progress. 'inactive' means creation failed.
cat /proc/mdstat
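Healthy output looks roughly like this. The block counts, percentage, and speed below are illustrative, not from your system:
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      976630336 blocks super 1.0 [2/2] [UU]
      [==>..................]  resync = 12.4% (121385344/976630336) finish=81.4min speed=174912K/sec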
Run cat /proc/mdstat first. Read the actual state before guessing.
Install Fedora on the RAID device
Return to the Anaconda installer. If you used the rescue shell, reboot into the installer. Select "Custom" partitioning. Do not select the individual disks. Select the RAID device /dev/md0.
Create your partitions on top of /dev/md0. A standard setup includes /boot, /, and swap. GRUB2 on Fedora can boot from RAID, but UEFI systems require an EFI System Partition, and UEFI firmware cannot read an md array. Either create a separate non-RAID ESP on each disk, or put the ESP on a RAID 1 array with metadata 1.0 so the firmware sees a plain FAT filesystem. Separate ESPs are the safer choice, especially for dual-boot scenarios.
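If you opt for separate ESPs, a minimal sketch looks like this. It assumes you reserve the first 512 MiB of each disk for the ESP instead of spanning the whole disk with one partition, so the RAID members become sdb2 and sdc2 rather than sdb1 and sdc1 in the earlier commands:
# Create an ESP at the start of each disk, then the RAID partition after it.
sudo parted -s /dev/sdb mklabel gpt mkpart ESP fat32 1MiB 513MiB set 1 esp on mkpart primary 513MiB 100% set 2 raid on
sudo parted -s /dev/sdc mklabel gpt mkpart ESP fat32 1MiB 513MiB set 1 esp on mkpart primary 513MiB 100% set 2 raid on
# Format both ESPs as FAT32.
sudo mkfs.vfat -F 32 /dev/sdb1
sudo mkfs.vfat -F 32 /dev/sdc1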
Format the root partition as XFS or Btrfs, create the swap partition, and assign the mount points. Proceed with the installation. Anaconda writes the filesystems to /dev/md0 and configures the bootloader to target the RAID device.
After installation, the system reboots. If the boot stops at a dracut emergency shell, the initramfs does not contain the RAID configuration. Regenerate the initramfs, from the installed system if it boots far enough, or from the rescue environment if it does not.
# Regenerate the initramfs to include mdadm configuration.
# This ensures the system can assemble the array before the root filesystem mounts.
sudo dracut --force
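If the installed system cannot reach a shell at all, run dracut from the rescue environment instead. A sketch, assuming rescue mode has mounted the installed system at /mnt/sysroot (recent Fedora releases; older ones use /mnt/sysimage):
# Enter the installed system from the rescue shell.
chroot /mnt/sysroot
# Rebuild the initramfs inside the chroot, then leave and reboot.
dracut --force
exit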
Reboot before you debug. Half the time the symptom is gone.
Verify the installation
Check that the array is active and the filesystems are mounted. The lsblk command shows the device hierarchy. The mdadm --detail command shows array health.
# List block devices to confirm /dev/md0 is present and mounted.
# The TYPE column shows 'raid1' for the array.
lsblk -o NAME,FSTYPE,TYPE,MOUNTPOINT
# Show detailed array status including sync progress and device health.
# Look for 'State : clean' to confirm the array is healthy.
sudo mdadm --detail /dev/md0
Ensure the mdadm configuration is saved. The installer usually writes /etc/mdadm.conf, but verifying prevents boot failures after kernel updates.
# Append the current array configuration to the mdadm config file.
# This ensures the array assembles automatically on boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
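Appending can leave duplicate ARRAY lines if the installer already wrote one. Check the file before rebuilding the initramfs:
# List the ARRAY lines; remove any duplicates by hand.
grep ^ARRAY /etc/mdadm.conf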
# Update the initramfs again to embed the new configuration.
sudo dracut --force
Save the config and rebuild the initramfs together. One without the other invites the emergency shell.
Common pitfalls and errors
Old filesystem and RAID signatures block array creation. If you reuse disks, mdadm detects the leftover metadata and either demands confirmation or refuses outright. The errors look like this.
mdadm: /dev/sdb1 appears to contain an ext2fs file system.
mdadm: /dev/sdb1 appears to be part of a raid array.
mdadm: cannot open /dev/sdb1: Device or resource busy
Wipe the signatures to remove the conflict. A "Device or resource busy" error means the kernel has already auto-assembled a stale array from the old metadata; stop that array first. This destroys the filesystem and RAID metadata. Back up before wiping.
# Stop any stale array the kernel auto-assembled.
# Replace md127 with the device shown in /proc/mdstat.
sudo mdadm --stop /dev/md127
# Remove old RAID superblocks from the partitions.
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1
# Wipe the remaining filesystem signatures.
sudo wipefs -a /dev/sdb1
sudo wipefs -a /dev/sdc1
SELinux denials show up in journalctl -t setroubleshoot with a one-line summary. Read those before disabling SELinux. RAID management tools do not trigger SELinux denials on local arrays. If you see SELinux errors, they are unrelated to the RAID setup.
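If you still suspect SELinux, the audit log settles it. A quick query with ausearch (part of the audit package) lists recent AVC denials; an empty result means SELinux is not your problem:
# List AVC denials from the last few minutes.
sudo ausearch -m AVC -ts recent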
If the boot menu is gone, GRUB rescue is your friend, not your enemy. The bootloader might have installed to the wrong disk, or to only one of them. On a legacy BIOS system, reinstall GRUB to both disks so either one can boot the machine. Note that grub2-install on a GPT disk needs a small bios_grub partition to embed into.
# Reinstall GRUB to both disks (legacy BIOS boot).
# This ensures the system can boot if either disk fails.
sudo grub2-install /dev/sdb
sudo grub2-install /dev/sdc
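On a UEFI system, Fedora does not use grub2-install at all; the firmware boots through the ESP. To confirm the firmware has a boot entry pointing at an ESP, list the entries with efibootmgr:
# Show UEFI boot entries and the devices they point at.
sudo efibootmgr -v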
Run journalctl -xe first. Read the actual error before guessing.
When to use this vs alternatives
Use software RAID when you need redundancy without hardware controllers and want full control over metadata versions. Use LVM on top of RAID when you need flexible resizing and snapshots alongside redundancy. Use Btrfs RAID when you want copy-on-write features and integrated checksumming, but verify your use case against the Btrfs RAID documentation. Use hardware RAID when your server provides a dedicated controller and you want the CPU offloaded. Use a single drive when redundancy is not required and simplicity is the priority.
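As a concrete example of the LVM-on-RAID option, the sketch below layers a volume group on the mirror. The volume group and logical volume names are hypothetical:
# Turn the mirror into an LVM physical volume.
sudo pvcreate /dev/md0
# Build a volume group and a root logical volume on top (hypothetical names).
sudo vgcreate fedora_vg /dev/md0
sudo lvcreate -L 50G -n root fedora_vg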
Pick the tool that matches your failure mode. RAID protects against disk death, not user error.