The scenario
You boot your Fedora desktop and click Firefox. The window takes four seconds to appear. You open your IDE. Another three seconds. You are not running a slow machine. You have a modern CPU and plenty of RAM. The delay comes from disk I/O and the kernel loading shared libraries into memory on demand. You want those binaries resident before you click the icon. You want the system to learn your habits and prefetch the right files automatically. That is exactly what preload does.
How preload actually works
Linux does not keep every installed program in RAM. That would waste memory on applications you never run. Instead, the kernel loads shared libraries and executable segments into the page cache only when a process requests them. The first launch pays the I/O tax. Subsequent launches are faster because the files sit in RAM. preload removes the guesswork from this process. It runs as a background daemon, watches which binaries and shared objects you execute most frequently, and calculates a probability score for each file. When your system is idle, it reads those high-probability files into the page cache ahead of time.
Think of it like a stage manager before a play. Instead of waiting for the actor to ask for a prop, the manager studies the script, notes which items appear in the next scene, and places them in the wings. When the cue hits, the prop is already there. preload studies your usage patterns over a few days, builds a statistical model, and stages the files in memory. The learning period is intentional. The daemon needs real usage data to avoid filling RAM with libraries you never touch.
The daemon operates entirely within the kernel page cache. It does not modify binaries, does not change LD_PRELOAD, and does not interfere with systemd service ordering. It simply reads files into memory and lets the kernel handle eviction when active workloads need the space. Linux memory management is designed to treat unused RAM as wasted RAM. The page cache is reclaimed instantly when an application requests memory. You will not see out-of-memory conditions because of preload. You will see faster application launches.
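You can reproduce the effect by hand: any read populates the page cache, which is the same result preload achieves with readahead during idle time. A minimal sketch, using a throwaway dd-generated file as a stand-in for a binary:

```shell
# Warming the page cache manually: reading a file once stages it in RAM.
f=$(mktemp)                                   # stand-in for a binary
dd if=/dev/zero of="$f" bs=1M count=32 status=none
cat "$f" > /dev/null                          # read pulls it into the page cache
# Subsequent opens of "$f" are served from RAM until the kernel
# evicts the pages under memory pressure.
rm -f "$f"
```

preload does this automatically for the binaries and shared objects its model ranks highest, and the kernel reclaims the pages the moment anything else needs them.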
No reboot is needed after installation. The daemon starts with an empty model and begins tracking your actual usage patterns the moment the service is running.
Install and enable the daemon
Fedora does not ship preload in the base repositories. The project is maintained upstream and distributed through RPM Fusion. You need to enable the free repository first, then install the package. Run these commands as root or with sudo.
sudo dnf install -y \
https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
# Adds the RPM Fusion free repository to your dnf configuration
# $(rpm -E %fedora) resolves to your current release number automatically
sudo dnf install -y preload
# Pulls the daemon and its systemd unit from the newly enabled repo
The package drops a systemd service file into /usr/lib/systemd/system/preload.service. Never edit files in /usr/lib/. Package updates will overwrite them. If you need to override defaults, create a drop-in in /etc/systemd/system/preload.service.d/. For now, the stock unit is correct. Enable and start the service.
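If you ever do need an override, a drop-in is a small file under that directory. The IOSchedulingClass setting shown here is only an illustrative example, not something the stock unit requires:

```
# /etc/systemd/system/preload.service.d/override.conf
[Service]
IOSchedulingClass=idle
```

Run sudo systemctl daemon-reload after creating the file so systemd merges the drop-in with the packaged unit.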
sudo systemctl enable --now preload
# Creates the symlink in /etc/systemd/system so it starts on boot
# --now starts the daemon immediately without requiring a reboot
sudo systemctl status preload
# Always check status before assuming it is running
# Look for Active: active (running) and recent log lines
The daemon wakes up on a fixed cycle, set by the cycle parameter in /etc/preload.conf (twenty seconds in the upstream default). It scans /proc to find recently executed binaries, calculates their access frequency, and queues them for prefetching. You will not see immediate results. The learning phase takes two to four days of normal desktop usage. Let it run. Do not restart it repeatedly to force a learning cycle. The algorithm relies on consistent sampling intervals.
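The scan itself is conceptually simple. A rough shell approximation of what each pass gathers (the real scanner also walks /proc/PID/maps for shared libraries and keeps per-file statistics across runs):

```shell
# Approximate one scan pass: resolve the executable behind every visible
# process, then count how often each binary appears.
for p in /proc/[0-9]*; do
  readlink "$p/exe" 2>/dev/null   # fails silently for other users' processes
done | sort | uniq -c | sort -rn | head
```

Running this a few times during a normal session gives a feel for the frequency data the daemon's probability model is built on.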
Check the service state before you change anything else. Half the time the symptom is a misconfigured unit, not a broken daemon.
Tune the configuration
The main configuration file lives at /etc/preload.conf. This is the correct location for user modifications. The package ships defaults that work for most desktop workloads. You only need to adjust values if you have specific hardware constraints or an unusually heavy application profile. Open the file with your preferred editor.
sudo nano /etc/preload.conf
# Edit the user-facing configuration in /etc
# Changes here survive package updates and override package defaults
The file uses a simple key = value syntax. Blank lines and lines starting with # are ignored. Focus on these three parameters.
memtotal, memfree, and memcached together set the daemon's memory budget. Each is an integer percentage, and the ceiling is computed as memtotal percent of total RAM, plus memfree percent of currently free RAM, plus memcached percent of the existing page cache. The upstream defaults are -10, 50, and 0, which works out to half of free memory minus ten percent of total, so the budget shrinks automatically as the system gets busier. If you run memory-intensive background tasks like compilation or video encoding, lower memfree to guarantee breathing room for active processes.
processes, in the [system] section, caps how many parallel readahead processes the daemon spawns per cycle; the upstream default is 30. Raising it speeds up prefetching on fast disks at the cost of sharper I/O bursts.
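Put together, an edited file might look like the fragment below. The values mirror upstream preload's integer-percentage conventions and are illustrative, not recommendations; check the comments in the /etc/preload.conf your package ships, since packaged defaults can differ:

```
# /etc/preload.conf (excerpt; values are examples only)
[model]
cycle = 20          # seconds between scan/prefetch passes
memtotal = -10      # percentages combined into the cache budget
memfree = 50
memcached = 0

[system]
processes = 30      # parallel readahead workers per cycle
```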
Save the file and restart the daemon. Always restart after config changes. The daemon reads the file once at startup and does not watch for modifications.
sudo systemctl restart preload
# Reloads the configuration file and resets the internal sampling state
# Use restart, not reload, because preload does not implement a SIGHUP handler
Trust the package manager. Manual file edits drift, snapshots stay. Keep your /etc/preload.conf version-controlled if you manage multiple machines.
Verify it is learning
You cannot watch the page cache fill in real time. The kernel manages cache eviction transparently. You can verify preload is working by checking its state file and its journal output. The daemon writes a binary state file to /var/lib/preload/preload.state. This file contains the probability scores and access timestamps for every tracked binary. It grows as the daemon observes more launches.
ls -lh /var/lib/preload/preload.state
# Shows the current size of the learning database
# Expect gradual growth over the first three days of operation
A healthy state file will be several megabytes after a week of normal use. If it stays at zero or a few kilobytes, the daemon is not tracking processes correctly. Check the journal for errors. Use the -xe flags for better readability. The x flag adds explanatory context to priority messages, and the e flag jumps to the end of the log.
journalctl -xeu preload
# Filters logs to the preload unit only
# -x adds explanatory text, -e jumps to the most recent entries
# Look for periodic sleep and scan messages in the output
You will see a repeating pattern in the logs. The daemon wakes, scans /proc, updates its internal tables, reads files into the page cache, and goes back to sleep. If you see permission denied or cannot open /proc/, your SELinux policy might be blocking the daemon. Fedora ships with a correct SELinux policy for preload. Do not disable SELinux. Check the audit log instead.
sudo ausearch -m avc -ts recent | grep preload
# Searches the audit log for SELinux denials related to preload
# Run this only if journalctl shows permission errors
Run journalctl first. Read the actual error before guessing. Most permission issues come from running the daemon under a non-root user or from a custom systemd override that drops privileges incorrectly.
Watch for memory pressure and false alarms
Users often panic when they run free -h and see almost no free memory. This is expected behavior. Linux uses all available RAM for caching. The buff/cache column shows memory dedicated to page cache and kernel buffers. The available column shows what the kernel can actually hand to applications without swapping. preload increases the buff/cache number. It does not reduce the available number.
free -h
# Displays total, used, free, shared, buff/cache, and available memory
# Focus on the available column, not the free column
# The kernel will evict preload's cached files instantly if an app needs RAM
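If you want that number in a script, MemAvailable comes straight from /proc/meminfo, which is where free reads it:

```shell
# Print the kernel's estimate of memory available to new applications,
# converted from kilobytes to GiB.
awk '/^MemAvailable:/ {printf "%.1f GiB available\n", $2 / 1048576}' /proc/meminfo
```

This is the figure to watch before and after enabling preload; buff/cache will climb, but MemAvailable should hold steady.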
If your system starts swapping heavily, preload is not the cause. Swapping happens when active processes exceed physical RAM. The page cache is the first thing the kernel discards. You can confirm cache eviction by watching vmstat 1. Look at the si and so columns for swap activity. If they spike, you need more RAM or you need to close memory-hungry applications. preload will not fix a memory shortage. It only optimizes I/O for repeated launches.
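If vmstat is not at hand, the cumulative swap counters in /proc/vmstat tell the same story. pswpin and pswpout count pages swapped in and out since boot, so take two readings a minute apart and see whether they grow:

```shell
# Pages swapped in/out since boot; rerun after a minute and compare.
grep -E '^pswp(in|out) ' /proc/vmstat
```

Flat counters mean no swap pressure, and any launch slowness has a different cause than memory exhaustion.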
Some users report that preload slows down their system. This usually happens on machines with 4 GB of RAM or less, or on systems running continuous heavy workloads like database servers or build farms. The daemon's periodic wake introduces a small I/O spike. On a spinning hard drive, that spike can compete with active reads. On an NVMe drive, the spike is invisible. If you notice stutter when the daemon wakes, increase the cycle parameter in /etc/preload.conf to 120 or 180 seconds. The learning period will take longer, but the system will feel smoother.
Monitor available memory, not free. The kernel handles cache eviction automatically.
When to use preload
Use preload when you run a desktop workload with repeated application launches and have 8 GB of RAM or more. Use preload when your system uses a mechanical hard drive and seek latency dominates startup times. Use preload when you want a set-and-forget optimization that requires zero manual intervention after the learning period. Skip preload when you run a server or headless machine that does not launch interactive applications. Skip preload when your system has 4 GB of RAM or less and active workloads already consume most of the memory. Skip preload when you use an NVMe SSD and already experience sub-second application launches. And if your workflow only occasionally strays from a standard desktop pattern, the default configuration is the right place to stay.