How to Monitor Disk I/O Performance on Fedora (iotop, iostat)

Monitor Fedora disk I/O with iotop for per-process usage and iostat for device statistics.

The disk is humming, but the system is crawling

You rebooted your Fedora machine after a routine update. The desktop loads, but opening a folder takes ten seconds. The terminal hangs on a simple ls. The fans spin up, but top shows the CPU sitting at two percent. The system is not starving for processing power. It is waiting on the storage subsystem. You need to see which process is hammering the disk and whether the drive itself is choking.

What is actually happening under the hood

Disk I/O waits happen when the kernel queues read or write requests faster than the storage device can fulfill them. Modern NVMe drives hide latency well, but heavy database transactions, package manager operations, or runaway backup scripts will still push the queue depth past the hardware limit. When that happens, the kernel marks processes as D (uninterruptible sleep). They cannot be killed. They simply wait for the block layer to respond.
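
If you want to see that state directly, list the processes currently stuck in uninterruptible sleep. This is a minimal sketch using standard ps fields; the wchan column shows the kernel function each process is blocked in.

ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'
# -eo selects every process with a custom column list
# stat prints the process state; D means uninterruptible sleep
# wchan:32 widens the column showing the kernel wait channel
# the awk filter keeps the header row plus any D-state entries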

You need two different lenses to diagnose this. One lens shows which process is generating the requests. The other lens shows how the physical or virtual block device is handling the load. iotop gives you the process view. iostat gives you the device view. Using both together turns a vague slowdown into a precise bottleneck.

Install the monitoring stack

Fedora does not ship these monitoring utilities by default, to keep the base image lean. You pull them from the standard repositories: iotop lives in its own package, while iostat ships inside the sysstat collection, which also installs the sar utility for recording historical performance data.

Here is how to pull both utilities into your system in a single transaction.

sudo dnf install iotop sysstat -y
# -y skips the confirmation prompt for non-interactive installs
# sysstat brings iostat, sar, and mpstat into your PATH
# iotop provides per-process block I/O accounting
# dnf resolves dependencies and pulls from enabled repositories

Run dnf upgrade --refresh after installing new monitoring tools. It forces a metadata refresh and ensures you are working with the latest package versions.
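
Here is that refresh in command form, with the flag behavior spelled out.

sudo dnf upgrade --refresh
# --refresh marks the cached repository metadata as expired
# dnf then downloads fresh metadata before computing the upgrade
# run this before debugging so tool versions match current documentation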

Watch per-process activity with iotop

This tool reads the kernel's per-task I/O accounting and maps the traffic back to process IDs. It works like top, but tracks bytes read and written instead of CPU cycles. You must run it as root: the taskstats interface it depends on requires elevated privileges, and without them iotop exits with an error rather than showing per-process numbers.

Here is how to launch iotop in a focused mode that filters out idle processes.

sudo iotop -o
# -o shows only processes currently performing disk I/O
# root privileges are required for the kernel taskstats interface
# the interface updates every second by default
# press q to exit when you identify the offending process

The output columns tell a clear story. READ and WRITE show instantaneous throughput in bytes per second. SWAPIN shows the percentage of time the process spent swapping pages in. IO shows the percentage of time the process spent waiting on I/O. COMMAND shows the executable name. If you see a database daemon or a backup utility sitting at 90 percent IO, you have found your culprit. Press o to toggle the idle filter, a to switch between instantaneous rates and accumulated totals, and the arrow keys to change the sort column.
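
When you need a record to attach to a ticket rather than a live screen, batch mode prints the same data to stdout. The flags below come from the classic iotop option set; Fedora has shipped more than one implementation of the tool, so confirm them against iotop --help.

sudo iotop -b -o -n 10 -d 2 > io-snapshot.txt
# -b runs in batch mode, printing to stdout instead of the interactive view
# -o still filters to processes actually doing I/O
# -n 10 captures ten samples, -d 2 spaces them two seconds apart
# redirect the output to a file you can diff or share later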

Check the IO column first. High percentages mean the process is blocked on storage, not computation.

Track device-level throughput with iostat

While iotop tracks processes, iostat tracks the block devices themselves. It reads from /proc/diskstats and calculates throughput, latency, and queue depth. The extended mode breaks down metrics per device, and per partition when you add the -p flag. This is where you spot a failing drive, a misconfigured RAID array, or a storage controller hitting its limit.

Here is how to run iostat in extended monitoring mode with a one-second interval.

iostat -xz 1 5
# -x enables extended statistics like r_await, w_await, and %util
# -z suppresses devices with zero activity to reduce noise
# 1 sets the refresh interval to one second
# 5 limits the output to five iterations before exiting

The extended output contains several critical columns. %util shows the percentage of time the device was busy processing requests. Values near 100 percent on a spinning disk mean the queue is saturated; on NVMe and other devices that service many requests in parallel, %util can sit at 100 percent while the drive still has headroom, so read it alongside latency. r_await and w_await show the average time (in milliseconds) for read and write requests to complete, including time spent waiting in the queue. Older sysstat releases printed a combined await plus a svctm column; svctm was deprecated for years and current versions have dropped it. aqu-sz shows the average number of requests in the queue, and a sustained climb means requests arrive faster than the device retires them. r/s and w/s show reads and writes per second. rkB/s and wkB/s show throughput in kilobytes.
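
To zero in on a single suspect drive, point iostat at the device directly. nvme0n1 below is a placeholder; substitute the name lsblk reports.

iostat -xz -p nvme0n1 2 10
# -p limits the report to the named device and its partitions
# replace nvme0n1 with your device as listed by lsblk
# 2 10 samples every two seconds, ten times
# the first sample averages since boot, so skip it when reading trends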

Config files in /etc/ are for local changes. Files in /usr/lib/ ship with the package, and the package manager will overwrite them on the next update. Edit /etc/sysconfig/sysstat to change how long sar keeps its history; the collection interval itself lives in the sysstat-collect.timer systemd unit, so override that with systemctl edit rather than touching anything under /usr/lib/systemd/.
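
For example, to tighten the collection interval from the packaged ten minutes, create a drop-in override for the timer. This is a sketch, assuming the stock Fedora unit name; the empty OnCalendar= line is required to clear the packaged value before the new one is set.

sudo systemctl edit sysstat-collect.timer
# opens an editor on a drop-in file under /etc/systemd/system/
# add these lines to sample every two minutes instead of every ten:
#   [Timer]
#   OnCalendar=
#   OnCalendar=*:00/2
# systemd merges the drop-in over the packaged unit in /usr/lib/systemd/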

Watch r_await, w_await, and %util together. A saturated queue with rising latency means the storage subsystem cannot keep up.

Verify the bottleneck is gone

Cross-reference the two tools to confirm the fix. If iotop shows a process writing heavily and iostat shows %util at 100 percent with rising latency, you have confirmed the bottleneck. Kill or throttle the process and watch the metrics drop. If the numbers stay pinned even after the obvious offender is gone, reboot: a clean boot cycle clears processes stuck in uninterruptible sleep that no signal can reach.
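
If the process matters but can wait, throttling beats killing. One option is ionice, which lowers the I/O scheduling class of a running process; note it only takes effect on devices using a scheduler that honors priorities, such as bfq. The PID here is a placeholder.

sudo ionice -c 3 -p 12345
# -c 3 moves the process to the idle scheduling class
# idle-class I/O is served only when no other class has pending requests
# -p targets a running process by PID; replace 12345 with yours
# check the active scheduler with: cat /sys/block/<dev>/queue/scheduler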

Run journalctl -xe after the system stabilizes. The -x flag adds explanatory text from the message catalog and -e jumps to the end of the journal. Scoping it to a single unit with journalctl -xeu <unit> is worth committing to muscle memory. Look for storage-related errors or filesystem warnings that might explain why the queue backed up in the first place.
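
Kernel-level storage complaints are easier to find with a priority filter than by scrolling. A quick pass over the current boot looks like this.

journalctl -k -p err -b
# -k restricts output to kernel messages, like dmesg
# -p err drops anything below error priority
# -b limits the view to the current boot
# I/O errors, filesystem remounts, and controller resets land here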

Run iostat -xz 1 for thirty seconds after the fix. Stable r_await and w_await under five milliseconds means the drive is breathing again.

Common pitfalls and what the error looks like

Common mistakes include confusing %util with CPU usage, or assuming iowait in top means the disk is broken. iowait just means the CPU is idle while waiting for I/O to complete, and it is normal during dnf transactions. Another pitfall is running iotop without root. The per-process accounting comes from a privileged kernel interface, not from world-readable files like /proc/diskstats, so the tool refuses to start. The error prints Error: iotop needs to be run as root. Run it with sudo. Do not force it.

SELinux denials show up in journalctl -t setroubleshoot with a one-line summary. Read those before disabling SELinux. A misconfigured policy can silently block a backup daemon from writing to the journal, causing it to retry and hammer the disk.
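
To review recent denials without wading through the whole journal, filter on the setroubleshoot tag with a time window.

journalctl -t setroubleshoot --since "1 hour ago"
# -t filters by syslog identifier; setroubleshoot logs AVC summaries there
# --since accepts human-readable time expressions
# each entry points to a sealert command for the full analysis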

If you see [FAILED] Failed to start systemd-journald.service during boot, your storage configuration probably references a missing device or UUID, or the journal sits on a corrupted partition. Check /etc/fstab and verify mount points before rebooting into rescue mode.
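
You can sanity-check /etc/fstab without rebooting. findmnt, part of util-linux, has a verification mode for exactly this.

sudo findmnt --verify --verbose
# --verify parses /etc/fstab and checks each entry's usability
# it flags unknown filesystem types, missing devices, and bad UUIDs
# --verbose prints a per-entry success line, not just the failures
# fix what it reports before you trust the next boot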

Trust the package manager. Manual edits to packaged files drift out of sync with every update; changes kept in /etc/ survive.

When to use iotop versus iostat versus alternatives

Use iotop when you need to identify which application is generating disk requests. Use iostat when you need to measure device throughput, latency, or queue depth. Use sar -d when you need historical data from the past few hours. Use bpftrace or bcc tools when you need kernel-level tracing of specific system calls. Stay on iotop and iostat for daily desktop and server triage.
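
For the historical view, sar reads the daily data files the sysstat timers collect, so enable collection first. The time window below is an example.

sudo systemctl enable --now sysstat-collect.timer sysstat-summary.timer
sar -d -s 09:00:00 -e 10:00:00
# the timers populate the daily data files under /var/log/sa/
# -d reports block device activity from today's file
# -s and -e bound the report to a start and end time
# add -f /var/log/sa/saDD to read a previous day's file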

Where to go next