How to Enable GPU Passthrough (VFIO) on Fedora for a Windows VM

GPU passthrough on Fedora lets a Windows VM use your physical graphics card directly via the VFIO kernel driver and KVM/QEMU, delivering near-native gaming and GPU compute performance.

You upgraded to Fedora and the VM game runs at 10 FPS

You installed Fedora, set up a Windows VM, and launched a game. The frame rate is terrible, or worse, the host desktop freezes and the VM crashes with a black screen. You want the VM to use the dedicated GPU directly, not the host's software renderer. You need GPU passthrough. This guide walks you through binding a physical GPU to a KVM guest using VFIO so the Windows VM sees the card as bare metal.

What's actually happening

Normally, the host kernel drives your GPU. The display manager talks to the driver, which talks to the hardware. A VM gets a virtual GPU that emulates basic graphics. Passthrough changes the ownership model. You tell the host kernel to ignore the GPU and hand the physical device directly to the virtual machine.

The IOMMU (Input-Output Memory Management Unit) acts as a traffic cop. It ensures the VM can only access the memory and devices you explicitly assign. Without IOMMU, the VM could read or write anywhere in the host's RAM, which is a security and stability disaster. IOMMU isolates the device so the VM gets exclusive access without breaking the host.

Prerequisites and risks

Check your hardware first. You need a CPU and motherboard that support IOMMU. Intel calls this VT-d. AMD calls this AMD-Vi. Most modern desktop CPUs support this, but some budget boards disable it in the BIOS. Enable IOMMU in the BIOS settings before proceeding.
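Before changing BIOS settings, you can confirm the CPU advertises hardware virtualization, which KVM itself needs; IOMMU support is reported separately by virt-host-validate once the libvirt tools (installed below) are present:

```shell
# Count virtualization flags: vmx = Intel VT-x, svm = AMD-V
grep -cE 'vmx|svm' /proc/cpuinfo

# After installing libvirt, this also reports IOMMU status
sudo virt-host-validate | grep -i iommu
```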

You also need two graphics adapters. One stays with Fedora for your desktop session. The other goes to the VM. If you have a CPU with integrated graphics, use that for the host and pass through the discrete card. If you only have one GPU and no integrated graphics, you cannot do passthrough without a complex headless setup. Passing the only GPU makes your host display go black immediately. You must manage the host via SSH or a secondary display. Test your SSH access before rebooting with the binding active.
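If you plan to manage the host over SSH in a single-GPU or headless setup, confirm the daemon is enabled and reachable before you bind anything; the hostname below is a placeholder:

```shell
# Enable SSH on the Fedora host so you can recover from a black screen
sudo systemctl enable --now sshd

# From another machine, confirm you can log in (replace the address)
ssh user@fedora-host true && echo "SSH OK"
```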

Verify the virtualization stack is installed.

# Install the virtualization group and start libvirt
sudo dnf install @virtualization virt-manager
sudo systemctl enable --now libvirtd

Reboot if the system was just installed. A fresh boot ensures the kernel and modules are loaded cleanly.
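A quick sanity check that the stack came up after the reboot:

```shell
# Confirm libvirtd is running and the KVM kernel modules are loaded
systemctl is-active libvirtd
lsmod | grep -E '^kvm'
```

You should see `active` on the first line and `kvm_intel` or `kvm_amd` alongside `kvm` in the module list.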

Enable IOMMU in the bootloader

Fedora uses grubby to manage kernel arguments, which is safer than editing GRUB files by hand: grubby updates every installed kernel at once, keeping your boot entries consistent across updates. Avoid editing /etc/default/grub and running grub2-mkconfig on Fedora unless you have an unusual setup; grubby is the standard tool.

# Intel: Enable VT-d and set IOMMU to passthrough mode
sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"

# AMD: AMD-Vi is enabled by default on modern kernels, so passthrough mode alone usually suffices
# sudo grubby --update-kernel=ALL --args="iommu=pt"

# Reboot to apply the kernel parameters
sudo reboot

After rebooting, verify IOMMU is active.

# Check dmesg for IOMMU initialization messages
sudo dmesg | grep -iE 'DMAR|IOMMU|AMD-Vi' | head -10

On Intel, look for DMAR lines such as DMAR: IOMMU enabled. On AMD, look for AMD-Vi lines. If you see nothing, IOMMU is disabled in the BIOS or the kernel parameter did not apply. Check the BIOS settings and verify the arguments with grubby.

Reboot before you check. The kernel parameters only take effect after a fresh boot.
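To confirm the arguments actually landed on the boot entry, grubby can print the full kernel command line:

```shell
# Show the kernel arguments for the default boot entry
sudo grubby --info=DEFAULT | grep args
```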

Identify the GPU PCI IDs

Find the PCI IDs of the GPU you want to pass through. You need the vendor and device IDs. The GPU usually has a VGA function and an audio function. Both must be bound to VFIO, or the VM will fail to initialize the device.

# List VGA controllers and display the numeric PCI IDs
lspci -nn | grep -i vga

# List audio devices to find the GPU's audio function
lspci -nn | grep -i audio

Look for the bracketed numbers like [10de:2204]. 10de is the vendor ID. 2204 is the device ID. Note the IDs for the VGA controller and the matching audio device. The audio device usually shares the same vendor ID and has a device ID close to the VGA ID.
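If you want to script the lookup, the bracketed vendor:device pair is easy to extract from an lspci -nn line; the sample line below is illustrative:

```shell
# Pull the [vendor:device] ID out of an lspci -nn line
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [10de:2204] (rev a1)'
echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]'
# → [10de:2204]
```

The class code in brackets, like [0300], has no colon-separated pair, so only the real ID matches.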

Check the IOMMU groups. Devices in the same IOMMU group must be passed through together. If the GPU shares a group with a device you cannot pass, such as a USB controller or PCIe root port, passthrough will fail.

# Show IOMMU group assignments for all PCI devices
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#/sys/kernel/iommu_groups/}; echo "group ${g%%/*}: $(lspci -nns "${d##*/}")"
done

If the GPU is in a group with other devices, you must pass all devices in that group. Some motherboards have poor IOMMU grouping. The ACS override patch can split groups, but it is not in the stock Fedora kernel and it weakens the isolation IOMMU provides; verify the BIOS settings first.

Bind the GPU to VFIO

Bind the GPU to the vfio-pci driver. The host graphics driver will try to claim the GPU on boot. You must intercept that process and assign the device to vfio-pci before the graphics driver loads. Config files in /etc/modprobe.d/ persist across updates. Files in /usr/lib/modprobe.d/ ship with packages. Edit /etc/. Never edit /usr/lib/.

# Create a modprobe config to claim the GPU by PCI ID
# Replace 10de:2204 and 10de:1aef with your actual IDs
echo "options vfio-pci ids=10de:2204,10de:1aef" | sudo tee /etc/modprobe.d/vfio.conf

# Ensure the VFIO modules load early in the boot process
printf "vfio\nvfio_iommu_type1\nvfio_pci\n" | sudo tee /etc/modules-load.d/vfio.conf

# Rebuild the initramfs to include the new module configuration
sudo dracut --force

# Reboot to apply the driver binding
sudo reboot

dracut --force rebuilds the initial ramdisk. The initramfs contains the modules needed to mount the root filesystem and load early drivers. If you add a module to modules-load.d, you must rebuild the initramfs, or the module won't load early enough to beat the graphics driver.
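To confirm the rebuilt image actually carries the VFIO pieces, lsinitrd (shipped with dracut on Fedora) can list the contents of the current initramfs:

```shell
# List vfio-related files inside the current initramfs
sudo lsinitrd | grep -i vfio
```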

Verify the binding after reboot.

# Check which kernel driver is using the GPU
lspci -k | grep -A3 -E 'VGA|3D controller'

Look for Kernel driver in use: vfio-pci. If you see nvidia or amdgpu, the binding failed. Check /etc/modprobe.d/vfio.conf for typos. Rebuild the initramfs and reboot.

Check the driver binding before creating the VM. If the host still claims the GPU, the VM will crash the host on startup.
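You can also query by PCI ID instead of scanning the whole device list; replace the example IDs with your own:

```shell
# Show the driver in use for each GPU function (example IDs)
lspci -nnk -d 10de:2204
lspci -nnk -d 10de:1aef
```

Both functions must report vfio-pci, or the VM will fail to initialize the card.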

Add the GPU to the VM

Add the GPU to the VM. You can use virt-manager or virsh. The XML approach is more reliable for complex devices. Ensure the VM uses the Q35 chipset and UEFI (OVMF) firmware. The older i440FX chipset lacks the native PCIe support that modern GPU passthrough requires.
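A quick way to confirm the chipset and firmware before adding the device; windows-vm is the example domain name used throughout:

```shell
# Q35 appears in the machine attribute, the OVMF path in the loader line
virsh dumpxml windows-vm | grep -E 'machine=|loader'
```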

# Edit the VM configuration to add the host device
virsh edit windows-vm

# Add these blocks inside the <devices> section
# Replace bus 0x01 / slot 0x00 with your GPU's PCI address;
# function 0x0 is the VGA device, function 0x1 is its audio device
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>

The managed='yes' attribute tells libvirt to handle the driver binding automatically. When the guest starts, libvirt detaches the device from its current host driver and binds it to vfio-pci; when the guest stops, it reattaches the device to the host driver. This is safer than manual binding. With managed='no', you must script the bind and unbind yourself, which is error-prone.
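For reference, the manual equivalent of what managed='yes' automates can be driven with virsh; the node device name is derived from the PCI address:

```shell
# Detach the GPU from its host driver (what libvirt does at guest start)
virsh nodedev-detach pci_0000_01_00_0

# Reattach it to the host driver (what libvirt does at guest stop)
virsh nodedev-reattach pci_0000_01_00_0
```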

Start the VM and install the guest GPU drivers. Once the XML is saved and the guest powers on, it sees the GPU as native hardware. Check the guest device manager: the card should appear without error codes.

Common pitfalls and errors

Passthrough fails in specific ways. Know the symptoms.

Error: Unable to read from monitor: Connection reset by peer. The QEMU process died unexpectedly. Common causes are a failed device assignment or a host driver touching the GPU while the VM owned it. Check journalctl -xe on the host. Look for vfio-pci errors or kernel oops messages.

Black screen in VM. The GPU is bound, but the VM produces no output. Check the VM console. If the host initialized the card before handing it over, its VBIOS may be left in a dirty state; you may need to dump a clean VBIOS and inject it into the VM. Some cards require a secondary GPU for initialization. Dump the VBIOS using nvflash or amdvbflash on the host, then add a <rom file='...'/> element to the VM's <hostdev> block pointing to the file.
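An alternative to the vendor flash tools is reading the ROM through sysfs; the PCI address below is the example used earlier, and the card must not be in active use when you read it:

```shell
# Expose and dump the VBIOS via sysfs (adjust the PCI address)
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 | sudo tee rom
sudo sh -c 'cat rom > /tmp/vbios.rom'
echo 0 | sudo tee rom
```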

vfio-pci: failed to set iommu group. The IOMMU grouping is wrong. Some motherboards group devices incorrectly. List the groups under /sys/kernel/iommu_groups and check what shares a group with the GPU. If the GPU shares a group with a device you cannot pass through, passthrough will fail. You may need the ACS override patch or different BIOS settings.

Read the error message. journalctl -xe on the host shows the crash reason. virsh console windows-vm shows the guest boot output.

When to use passthrough

Use VFIO passthrough when you need near-native performance for gaming, CUDA workloads, or hardware encoding in a VM. Use a virtual GPU when you only need basic display output and want to share the host GPU across multiple VMs. Use a container when you need GPU access for a Linux application and can tolerate sharing the host kernel. Stay on the host OS when the workload requires direct driver access and zero virtualization overhead.

Where to go next