GPU Passthrough in KVM

GPU passthrough lets a KVM virtual machine claim a physical GPU as if it were native hardware, delivering near-bare-metal graphics performance inside the VM.

You installed Fedora, set up a Windows VM for gaming, and noticed the frame rates crawl.

The hypervisor is translating every graphics call through software rendering, and the bottleneck is obvious. You want the guest to talk directly to the silicon. GPU passthrough hands the physical card straight to the virtual machine, bypassing the host entirely. The host loses access to that card while the VM runs, so you need a secondary display adapter or integrated graphics to keep your desktop alive. A botched configuration can leave you staring at a black screen with no host display. Plan this on a test machine or keep a live USB handy before you start.

What is actually happening

Think of the IOMMU as a traffic cop for hardware devices. Normally, the host kernel claims every PCIe device it sees and routes all memory requests through itself. The IOMMU intercepts those requests and enforces strict boundaries. When you enable passthrough, you tell the traffic cop to hand the GPU memory addresses directly to the guest. The guest OS installs its own drivers, talks to the card at full speed, and the host stays out of the way.

The catch is that the host kernel must not load its own GPU driver first. If nvidia or amdgpu claims the device during boot, the passthrough fails. You have to intercept the device early in the boot sequence using the VFIO subsystem. VFIO stands for Virtual Function I/O. It is a framework that isolates hardware devices from the host and exposes them to virtual machines through a secure, mediated path. The vfio-pci driver acts as a placeholder. It claims the hardware, prevents the host from using it, and hands control to the virtualization stack.
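The group boundaries the IOMMU enforces are visible in sysfs. A minimal sketch to enumerate them (the `list_iommu_groups` helper and its base-directory argument are illustration conveniences, not part of any standard tool):

```shell
# list_iommu_groups: print every IOMMU group and the PCI devices inside it.
# Takes the sysfs base as an argument so the logic can be exercised anywhere;
# on a real host the default /sys/kernel/iommu_groups is what you want.
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    local group dev
    for group in "$base"/*/; do
        [ -d "$group" ] || continue
        echo "Group $(basename "$group"):"
        for dev in "$group"devices/*; do
            [ -e "$dev" ] || continue
            echo "  $(basename "$dev")"
        done
    done
}

list_iommu_groups
```

Every device printed under the same group number shares an isolation boundary, which matters again later when a GPU turns out to share a group with other hardware.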

Run journalctl -k after every boot change. Read the actual kernel messages before guessing.

Prepare the host kernel and bootloader

Fedora ships with the IOMMU disabled by default to avoid breaking systems that do not need it. You need to flip the switch in the bootloader. The grubby tool is the standard way to modify kernel parameters on Fedora. It updates the GRUB configuration and rebuilds the boot menu automatically. Do not edit /etc/default/grub manually. Manual edits drift across kernel updates and break the boot chain.

Run the appropriate command for your processor architecture. The iommu=pt flag enables pass-through mode, which reduces overhead by giving devices the host keeps for itself an identity-mapped DMA path instead of full IOMMU translation. Devices handed to a guest are still fully translated and isolated.

# Intel CPUs require the intel_iommu flag. The iommu=pt flag enables pass-through mode for better performance.
sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"

# AMD CPUs use the amd_iommu flag. The same pass-through mode applies here.
sudo grubby --update-kernel=ALL --args="amd_iommu=on iommu=pt"

Reboot the system immediately. The kernel parameters only take effect on a fresh boot. Verify the IOMMU is actually active by checking the kernel ring buffer. The initialization messages will confirm whether the hardware support is working.

# Check kernel logs for IOMMU initialization. The -k flag limits output to kernel messages.
journalctl -k | grep -iE "DMAR|IOMMU" | head -10

If you see lines mentioning DMAR: IOMMU enabled or AMD-Vi: Interrupt remapping enabled, the hardware is ready. Move to the next step. If the output is empty, your CPU or motherboard does not support IOMMU, or the BIOS setting is disabled. Check your firmware configuration before proceeding.
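Before digging through firmware menus, confirm the flag actually reached the running kernel. A sketch (the `has_iommu_flag` helper is illustrative, not a standard tool; it reads its argument rather than /proc/cmdline directly so the logic is easy to check):

```shell
# has_iommu_flag: report whether a kernel command line contains either IOMMU
# switch. Wraps the line in spaces so only whole parameters match.
has_iommu_flag() {
    case " $1 " in
        *" intel_iommu=on "*|*" amd_iommu=on "*) return 0 ;;
        *) return 1 ;;
    esac
}

# Check the kernel you are actually running right now.
if [ -r /proc/cmdline ] && has_iommu_flag "$(cat /proc/cmdline)"; then
    echo "IOMMU flag present on the running kernel"
else
    echo "IOMMU flag missing -- re-run grubby and reboot"
fi
```

An empty grep over the kernel log plus a missing flag here points at grubby or the reboot, not the firmware.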

Reboot before you debug. Half the time the symptom is gone.

Claim the GPU before the host driver grabs it

After the reboot, you need to find the exact PCI identifiers for your graphics card. The vendor and device IDs are required to tell the kernel which hardware to intercept.

# List PCIe devices with vendor and device IDs in hexadecimal format.
lspci -nn | grep -iE "vga|3d|display"

The output will show something like 01:00.0 VGA compatible controller [10de:2204] and 01:00.1 Audio device [10de:1aef]. Note both the graphics function and the audio function. Modern GPUs expose their HDMI and DisplayPort audio as a separate PCIe function. You must pass both through, or the guest will have no sound.
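The bracketed IDs can be scraped straight out of lspci rather than retyped. A sketch (the `extract_ids` helper and the `01:00` slot filter are assumptions for this example; adjust the filter to your card's address):

```shell
# extract_ids: turn lspci -nn output into the comma-separated ids= list that
# vfio-pci expects. The pattern matches only [vendor:device] pairs, so the
# bracketed class codes lspci also prints are ignored.
extract_ids() {
    grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]' | paste -sd, -
}

# Feed it just the GPU functions (both live on slot 01:00 in this example).
if command -v lspci >/dev/null; then
    lspci -nn | grep '^01:00\.' | extract_ids
fi
```

The output drops directly into the ids= option in the next step.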

Now you need to bind those IDs to the vfio-pci driver before the host graphics driver loads. Create a modprobe configuration file to load the driver with the correct IDs. Configuration files in /etc/ are user-modified. Files in /usr/lib/ ship with the package. Always edit /etc/ to survive package updates.

# Tell vfio-pci to claim the specific vendor and device IDs on boot.
echo "options vfio-pci ids=10de:2204,10de:1aef" | sudo tee /etc/modprobe.d/vfio-pci.conf

# Ensure the VFIO modules load early in the boot sequence.
printf "vfio\nvfio_iommu_type1\nvfio_pci\n" | sudo tee /etc/modules-load.d/vfio.conf

Fedora uses dracut to build the initial ramdisk. The initramfs contains the early-boot drivers and modules. If you do not rebuild it, the new modprobe rules stay outside the early boot environment until the next time the initramfs is regenerated, and the host GPU driver can still win the race. The initramfs is where the kernel hands control to early userspace before the root filesystem is mounted. Your VFIO rules must live there.

# Force dracut to rebuild the initramfs for the currently running kernel.
sudo dracut -f --kver $(uname -r)

Reboot again. The host will now boot with a degraded graphics stack if you passed through your primary display adapter. Connect your monitor to the secondary GPU or integrated graphics before powering on. Verify that vfio-pci successfully claimed the device.

# Check which kernel driver is currently bound to the GPU.
lspci -nnk -d 10de:2204 | grep -i "kernel driver"

The output should read Kernel driver in use: vfio-pci. If it still shows nvidia or amdgpu, the host driver loaded too fast. Check your initramfs rebuild and try again.
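If the host driver keeps winning across rebuilds, a dracut drop-in can pin the VFIO modules into every future initramfs. This is a sketch, not required on every setup; the drop-in file name is arbitrary, and force_drivers is dracut's mechanism for embedding modules unconditionally:

```shell
# Embed the VFIO modules in every initramfs dracut builds from now on,
# including the ones generated automatically by kernel updates.
echo 'force_drivers+=" vfio vfio_iommu_type1 vfio_pci "' | sudo tee /etc/dracut.conf.d/vfio.conf

# Rebuild once by hand so the change applies before the next kernel update.
sudo dracut -f --kver "$(uname -r)"
```

Because the drop-in lives in /etc/dracut.conf.d/, it survives package updates the same way the modprobe configuration does.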

Trust the package manager. Manual file edits drift, snapshots stay.

Attach the device to the virtual machine

Open virt-manager and edit your target virtual machine. Navigate to the hardware list, click the plus icon, and select PCI Host Device. You will see the VGA and Audio functions listed under the vfio-pci driver. Add both of them. Libvirt will automatically handle the XML configuration and manage the device state when the VM starts and stops.

If you prefer the command line, edit the domain XML directly. The managed='yes' attribute tells libvirt to detach the device from the host automatically when the VM boots and reattach it when the VM shuts down. This prevents manual driver reloading and reduces human error.

<!-- Add this block inside the <devices> section of the VM XML. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- Replace with your actual bus, slot, and function values. -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>

Save the configuration and start the virtual machine. Install the standard GPU drivers inside the guest operating system. Windows will detect the hardware automatically. Linux guests may require the proprietary driver stack depending on the card. Run journalctl -xeu libvirtd if the VM refuses to start. The daemon logs will tell you exactly which step failed.

Run journalctl first. Read the actual error before guessing.

Common pitfalls and error patterns

Passthrough fails in predictable ways. The most common error is a driver race condition during boot.

vfio-pci: probe of 0000:01:00.0 failed with error -16

Error -16 means EBUSY. Another driver already claimed the device. This happens when dracut was not rebuilt, or when the host GPU driver is blacklisted incorrectly. The vfio-pci options file handles the binding, but you may also need to blacklist the host driver in /etc/modprobe.d/blacklist.conf if it persists. Add blacklist nvidia and blacklist amdgpu to that file, then rebuild the initramfs again.
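A gentler alternative to a hard blacklist is a modprobe soft dependency, which makes the host driver wait until vfio-pci has loaded and claimed its IDs. A sketch, shown for the nvidia module (substitute amdgpu as needed; the target file name follows the earlier step):

```shell
# Load vfio-pci before the host GPU driver so the ID binding wins the race.
# softdep is standard modprobe.d syntax; append to the existing vfio config.
echo "softdep nvidia pre: vfio-pci" | sudo tee -a /etc/modprobe.d/vfio-pci.conf

# The rule must also reach the initramfs to matter during early boot.
sudo dracut -f --kver "$(uname -r)"
```

Unlike a blacklist, this keeps the driver loadable for a second card of the same vendor.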

IOMMU group conflicts are the next hurdle. The IOMMU groups hardware devices into isolated sets based on PCIe topology. If your GPU shares a group with a USB controller, a SATA port, or a network card, you must pass through the entire group. Libvirt will refuse to start the VM if you only select the GPU.

error: internal error: qemu unexpectedly closed the monitor: ... vfio: Cannot open group /dev/vfio/12: Permission denied

Check the group membership with ls /sys/kernel/iommu_groups/12/devices/. If other devices are listed, add them to the VM or use a kernel with the ACS override patch. Community COPR repositories for Fedora host kernels with the ACS override patch compiled in. The patch does not change the hardware; it relaxes the kernel's isolation checks so shared groups are treated as separate, which weakens the isolation guarantee, so treat it as a last resort.

A black screen inside the guest usually means the GPU is trying to load its VBIOS ROM and failing. Modern hypervisors sometimes struggle with ROM shadowing. Disable the ROM BAR in the XML configuration so the guest driver initializes the card without attempting to shadow the VBIOS at boot.

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- Disable ROM shadowing to prevent guest VBIOS load failures. -->
  <rom bar='off'/>
</hostdev>


Snapshot the system before the upgrade. Future-you will thank you.

When to use passthrough versus alternatives

Use GPU passthrough when you need native performance for gaming, video editing, or machine learning workloads inside a virtual machine. Use virtual GPU or GPU sharing when you need to run multiple VMs that all require light graphics acceleration simultaneously. Use software rendering or VirtIO-GPU when you are running headless servers or lightweight desktop sessions that do not demand hardware acceleration. Stay on standard VirtIO-GPU if you only need basic window management and terminal access.

Where to go next