How to Enable NVIDIA CUDA on Fedora for GPU Computing

Install NVIDIA drivers and the CUDA toolkit on Fedora using RPM Fusion and the official NVIDIA CUDA repository to enable GPU computing workloads.

You have the GPU, but the toolkit is missing

You installed Fedora, plugged in your NVIDIA GPU, and everything looked fine until you tried to compile a CUDA kernel or run a PyTorch model. The terminal spat out nvcc: command not found or the application crashed with CUDA error: no kernel image is available for execution on the device. You have the hardware. You have the OS. The missing link is the proprietary driver stack and the CUDA toolkit. Fedora doesn't ship these by default. You need to bring them in from RPM Fusion and NVIDIA's repositories.

What's actually happening

Fedora adheres to strict free software guidelines. Proprietary blobs like the NVIDIA kernel module and the CUDA toolkit cannot live in the official repositories. The ecosystem splits the work. RPM Fusion hosts the kernel module, which talks to the hardware. NVIDIA hosts the CUDA toolkit, which provides the compilers and libraries for your applications. You must bridge these two sources.

The kernel module loads into the running kernel. The toolkit sits in /usr/local/cuda and exposes headers and binaries to your build system. If the kernel module version doesn't match the running kernel, the GPU stays dark. If the toolkit is missing, your code won't compile.

The akmod package handles the version matching. It watches the kernel directory. When a new kernel appears, akmod triggers a build of the NVIDIA module against the new headers. This keeps the driver working across updates without manual intervention. Think of akmod as a background worker that ensures the driver never falls behind the kernel. Fedora's release cadence is six months, and each release goes EOL about a month after the second following release ships. Plan upgrades on that cycle to avoid breaking the driver chain.
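Once the driver is in place, you can see this matching for yourself. A minimal sketch, assuming akmods installs its builds under /usr/lib/modules/<kernel>/extra/ (the path is the usual location, not something this guide guarantees):

```shell
# Check whether an nvidia module exists for the kernel you are running.
kver=$(uname -r)
echo "running kernel: $kver"
if ls /usr/lib/modules/"$kver"/extra/nvidia* >/dev/null 2>&1; then
  echo "nvidia module present for this kernel"
else
  echo "no nvidia module for $kver (not installed yet, or akmods still building)"
fi
```

If the second line reports no module right after a kernel update, that usually just means the rebuild has not finished yet.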

The fix

Start by enabling RPM Fusion. This repository provides the akmod-nvidia package, which builds the kernel module against your current kernel.

sudo dnf install -y \
  https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
# WHY: Adds the free and nonfree RPM Fusion repos. The nonfree repo contains the NVIDIA driver.
# WHY: Uses rpm -E %fedora to auto-detect your release version. Hardcoding the version breaks on upgrades.

Install the akmod-nvidia package. This triggers a kernel module build in the background.

sudo dnf install -y akmod-nvidia
# WHY: Installs the automatic kernel module builder for NVIDIA.
# WHY: akmod watches for kernel updates and rebuilds the module automatically.
# WHY: Avoids the manual dkms setup that breaks on every kernel upgrade.

Wait for the module to compile. The build can take several minutes on slower hardware. Check the status of the build service.

sudo systemctl status akmods
# WHY: Checks if the module build service is running.
# WHY: The build can take several minutes on slower hardware. akmods is a oneshot service, so "inactive (dead)" after a successful run means it finished.
# WHY: If the service is failed, check journalctl -xe for compilation errors.

Reboot to load the new kernel module. The nvidia module cannot be loaded while the nouveau driver is active.

sudo systemctl reboot
# WHY: Loads the new kernel module into the running kernel.
# WHY: The nvidia module cannot be loaded while the nouveau driver is active.
# WHY: Rebooting ensures a clean state for the driver initialization.

Add NVIDIA's repository for the CUDA toolkit. This repo is specific to your Fedora version and architecture.

sudo dnf config-manager --add-repo \
  https://developer.download.nvidia.com/compute/cuda/repos/fedora$(rpm -E %fedora)/x86_64/cuda-fedora$(rpm -E %fedora).repo
# WHY: Adds the official NVIDIA CUDA repository for your Fedora release.
# WHY: The URL embeds the release number to ensure package compatibility.
# NOTE: On Fedora 41 and later (dnf5), the equivalent is: sudo dnf config-manager addrepo --from-repofile=<url>
sudo dnf clean all
# WHY: Clears the cache so dnf picks up the new repository metadata immediately.

Install the CUDA toolkit. This includes nvcc, the compiler, and the runtime libraries.

sudo dnf install -y cuda-toolkit
# WHY: Installs the full CUDA development toolkit.
# WHY: Pulls in build dependencies such as gcc and make.
# WHY: Unlike the "cuda" meta-package, cuda-toolkit does not drag in NVIDIA's own driver, which would conflict with the RPM Fusion akmod driver.
# WHY: The toolkit lands in /usr/local/cuda-<version>, with /usr/local/cuda as a symlink.

Configure your shell to find the CUDA binaries. The toolkit installs to /usr/local/cuda, which is not in the default PATH.

echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
# WHY: Adds the CUDA bin directory to your executable search path.
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}' >> ~/.bashrc
# WHY: Tells the dynamic linker where to find CUDA shared libraries at runtime.
# WHY: The ${VAR:+...} guard avoids a trailing colon when LD_LIBRARY_PATH is unset; a trailing colon adds the current directory to the search path.
source ~/.bashrc
# WHY: Applies the changes to the current shell session without logging out.
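A quick sanity check that the updated PATH actually resolves the compiler, sketched here with an illustrative hint message:

```shell
# Verify the shell can now find nvcc; print a hint if it cannot.
if command -v nvcc >/dev/null 2>&1; then
  echo "nvcc found at $(command -v nvcc)"
else
  echo "nvcc not on PATH; open a new shell or re-check ~/.bashrc"
fi
```

Running this in a fresh terminal catches the common case where only the current shell was updated.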

Reboot before you debug. Half the time the symptom is gone after the module loads cleanly.

Verify it worked

Confirm the driver and toolkit are operational. Run nvidia-smi first. If the driver is dead, the toolkit is useless.

nvidia-smi
# WHY: Queries the NVIDIA driver for GPU status and memory usage.
# WHY: If this fails, the kernel module is not loaded or Secure Boot is blocking it.
# WHY: Look for the GPU name and driver version in the output.
nvcc --version
# WHY: Prints the CUDA compiler version.
# WHY: Confirms the toolkit is installed and accessible in your PATH.
# WHY: The version should match the toolkit package installed via dnf.

Compile and run a sample to verify the GPU can execute kernels. Toolkits since CUDA 11.6 no longer ship the samples under /usr/local/cuda/samples; clone them from NVIDIA's cuda-samples repository on GitHub and check out the tag that matches your toolkit version.

git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery
make
# WHY: Compiles the sample with nvcc.
# WHY: Older sample tags build with the provided Makefile; the newest releases use CMake instead.
./deviceQuery
# WHY: Runs the binary to verify the GPU can execute CUDA kernels.
# WHY: Look for "Result = PASS" at the end of the output.

Run journalctl -xe before guessing. The log usually tells you exactly which module failed to insert.
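One way to narrow the log down, assuming the failure involves the driver rather than userspace, is to filter the current boot's kernel messages for the two drivers that matter here:

```shell
# Pull only the driver-related lines out of this boot's kernel log.
journalctl -k -b 0 --no-pager 2>/dev/null | grep -iE 'nvidia|nouveau' \
  || echo "no nvidia/nouveau messages in this boot's kernel log"
```

A line mentioning nouveau after you installed the proprietary driver is a strong sign the blacklist did not take effect.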

Common pitfalls and what the error looks like

Secure Boot requires signing the kernel module. The akmod-nvidia module is not signed by a key trusted by your UEFI firmware. You will see a boot failure or the module will refuse to load.

modprobe: ERROR: could not insert 'nvidia': Operation not permitted

The kernel rejects the module because it lacks a signature trusted by the firmware. Disable Secure Boot in UEFI settings for development machines. For production systems, follow the RPM Fusion Secure Boot guide to sign the module with a Machine Owner Key.
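Before disabling anything, confirm Secure Boot is actually the culprit. A minimal check, assuming the mokutil package is installed:

```shell
# Report the Secure Boot state; fall back to a hint on non-EFI systems.
mokutil --sb-state 2>/dev/null \
  || echo "mokutil unavailable or no EFI variables; check UEFI firmware settings instead"
```

"SecureBoot enabled" plus the modprobe error above points squarely at module signing.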

SELinux can also block CUDA applications if the file context is wrong. Check journalctl -t setroubleshoot for denials. Do not disable SELinux. Use audit2why to understand the denial and restorecon to restore the correct context. If you need to adjust configuration along the way, edit files under /etc/; files under /usr/lib/ ship with packages and get overwritten on updates.
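The checks above can be sketched as two commands, assuming the audit package is installed (restorecon runs with -n here, a dry run that reports mislabeled files without changing them):

```shell
# Surface recent SELinux denials, then dry-run a context restore on the CUDA tree.
ausearch -m AVC -ts recent 2>/dev/null \
  || echo "no recent AVC denials (or audit tools not installed)"
restorecon -Rnv /usr/local/cuda 2>/dev/null \
  || echo "restorecon unavailable or /usr/local/cuda missing"
```

Drop the -n only after reviewing what the dry run would change.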

If you updated the kernel but didn't reboot, akmod might still be building. Or akmod failed silently. Check the status of the build service.

sudo systemctl status akmods
# WHY: Checks if the module build service is running.
# WHY: The build can take several minutes on slower hardware. akmods is a oneshot service, so "inactive (dead)" after a successful run means it finished.
# WHY: If the service is failed, check journalctl -xe for compilation errors.

Trust the package manager. Manually copied files drift out of sync with updates; packaged installs stay consistent and can be rolled back.

When to use this vs alternatives

Use akmod-nvidia when you want the driver to rebuild automatically after every kernel update. Use kmod-nvidia when you are running a static kernel and never update the base system. Use the CUDA toolkit from NVIDIA's repo when you need the latest compiler and libraries for development. Use pre-built wheels from PyTorch or TensorFlow when you only need inference and don't want to manage the full toolkit. Use the cuda-toolkit package when you are compiling custom CUDA kernels or building applications from source. Stay on the driver provided by RPM Fusion for desktop stability. Switch to the NVIDIA repo driver only if you require a feature not yet backported.
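If you are unsure which source your current driver came from, the package names give it away: RPM Fusion ships akmod-nvidia and xorg-x11-drv-nvidia*, while NVIDIA's repo ships nvidia-driver*. A quick query:

```shell
# List installed nvidia packages to identify the driver's packaging source.
rpm -qa 2>/dev/null | grep -i nvidia \
  || echo "no nvidia packages in the rpm database (or not an RPM-based system)"
```

Seeing packages from both sources at once is a sign of a mixed install worth cleaning up before debugging anything else.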

Where to go next