How to Monitor Memory Usage and Fix High Memory Consumption on Fedora

Use standard Linux tools like free, top, and smem to identify what is consuming memory on Fedora, then take targeted action to reclaim it.

You upgraded, opened your apps, and the system froze

You launch your browser, your IDE, and a few terminals. The mouse starts stuttering. The disk LED blinks furiously. You open the system monitor and see memory usage at 98%. Panic sets in. You think you need more RAM or a reboot. You probably don't. The kernel is likely caching disk data aggressively, or one process is leaking memory. This guide shows you how to distinguish between healthy cache and a real problem, find the culprit, and reclaim resources without guessing.

What's actually happening

Linux treats unused RAM as wasted RAM. The kernel grabs free memory and fills it with page cache, dentries, and inodes. This cache speeds up file access. When an application needs memory, the kernel instantly drops the cache and hands the RAM over. The "used" number you see in many tools includes this cache. It looks scary but it's not a problem.

The real metric is available. This number tells you how much memory is ready for new processes right now. It combines truly free memory with reclaimable cache and buffers. If available is high, your system is healthy. If available is low, something is consuming RAM that cannot be reclaimed. That's when you investigate.
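
You can read the numbers behind the available column straight from /proc/meminfo. A quick check:

grep -E 'MemTotal|MemAvailable' /proc/meminfo
# MemTotal is the physical RAM the kernel manages
# MemAvailable is the kernel's estimate of memory ready for new work
# This is the same figure free -h reports in its 'available' column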

Think of RAM like a desk. The cache is papers you've spread out for quick reference. You can clear the desk instantly when you need space for a new project. A memory leak is a stack of papers glued to the desk. You can't clear them without removing the stack entirely. Your job is to find the glued stacks.

Check overall memory

Run free -h to get a human-readable snapshot of your memory state. The output shows total, used, free, shared, buff/cache, and available. Focus on the available column. This value represents the actual headroom your system has. Ignore the free column. It only shows memory that is completely empty, which is rarely useful on a running system.

free -h
# -h formats numbers in human-readable units like GB and MB
# Check the 'available' column, not 'free'
# 'available' includes reclaimable cache, so it reflects real capacity
# If 'available' is above 10% of total, the system is usually fine

Trust the available column. Ignore the red bars in GUI monitors until available drops below 500MB.

Find the top memory consumers

Once you confirm low available memory, find the culprit. Processes share memory pages. A simple RSS (Resident Set Size) count can overestimate usage because it counts shared libraries multiple times. Use ps for a quick list, but install smem for accuracy. smem calculates USS (Unique Set Size), which shows memory that would actually be freed if you killed the process.

ps aux --sort=-%mem | head -20
# --sort=-%mem sorts processes by memory usage in descending order
# head -20 trims the output to the header row plus the top consumers
# This gives a quick overview but overcounts shared memory
# Use this for a fast check, not for precise accounting

For accurate per-process memory, smem is the standard tool. It handles shared memory correctly by calculating PSS (Proportional Set Size) and USS. The USS column shows the memory unique to each process. This is the number that matters when you decide what to kill.

sudo dnf install smem
# Install smem for accurate per-process memory accounting
# smem calculates PSS and USS to handle shared memory correctly
sudo smem -r -s uss | head -20
# -s uss sorts by Unique Set Size, the memory unique to each process
# -r reverses the sort so the largest consumers appear first
# Run with sudo to include processes owned by other users
# The USS column shows what you actually gain by killing a process

Kill by USS, not RSS. Shared libraries vanish from the count only when the last process exits. Killing a process with high RSS but low USS might not free much memory.
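
To see that gap for one application, smem can filter by process name and print all three metrics side by side. A quick sketch; the pattern is a placeholder for whatever process you are investigating:

smem -c "pid name rss pss uss" -P <pattern>
# -P filters processes whose command matches the pattern, e.g. -P firefox
# -c selects which columns to print
# A large gap between RSS and USS means most of the memory is shared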

Drop page cache

Sometimes available is low, but no single process is guilty. The kernel might be holding onto stale cache that isn't dropping fast enough. You can force the kernel to release page cache, dentries, and inodes. This is safe but temporary. The cache will refill as you use the system. Use this only for testing or immediate relief, not as a permanent fix.

sync
# Flushes file system buffers to disk
# Ensures no dirty data is lost before dropping cache
echo 3 | sudo tee /proc/sys/vm/drop_caches
# Writes 3 to drop_caches to clear pagecache, dentries, and inodes
# sudo tee is needed because /proc/sys files are root-only
# This is safe but the cache will repopulate quickly

Dropping cache buys seconds, not solutions. If memory fills up again instantly, a process is leaking.

Identify and fix a leaking process

If a process grows steadily over time, it has a memory leak. Restarting the service clears the leak. For systemd services, use systemctl. For standalone apps, send signals. Always try a graceful kill first. SIGTERM asks the process to clean up. SIGKILL forces termination and can leave corrupt state.
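
To confirm the leak before restarting anything, sample the suspect's resident memory over time. A minimal sketch; replace <PID> with the process ID you found earlier:

while true; do
    # Print a timestamp and the RSS of the suspect process in MB
    echo "$(date +%T) $(ps -o rss= -p <PID> | awk '{printf "%.1f MB", $1/1024}')"
    sleep 10
done
# A steadily climbing number under a constant workload is the signature of a leak
# Press Ctrl+C to stop sampling

Once the growth is confirmed, restart the service or kill the process.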

sudo systemctl restart <service-name>
# Restarts the service and clears leaked memory
# Replace <service-name> with the actual unit name, such as httpd.service
# Check status after restart to ensure the service came back up
# systemctl status <service-name> shows the current state and recent logs

For non-service processes, use kill. Send SIGTERM first. Wait a few seconds. If the process is frozen, escalate to SIGKILL.

kill <PID>
# Sends SIGTERM to the process, requesting a graceful shutdown
# The process can catch this signal and clean up resources
kill -9 <PID>
# Sends SIGKILL, which cannot be caught or ignored
# Use this only if the process is frozen and ignores SIGTERM
# SIGKILL may leave temporary files or locks behind

Restart the service before blaming the hardware. A leak is software, not silicon.

Verify the fix

Run free -h again. The available column should rise. Run smem -r -s uss to confirm the process memory dropped. If the service restarted, check systemctl status <service> to ensure it is active. A green active (running) state confirms the fix held. If the memory climbs back up within minutes, the leak is persistent. You may need to update the application or adjust its configuration.
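
To catch a slow relapse, watch the numbers for a few minutes instead of checking once:

watch -n 5 free -h
# Re-runs free -h every 5 seconds and updates the display in place
# 'available' holding steady is good; a downward slide means the leak survived
# Press Ctrl+C to exit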

Tune swappiness and zram

Fedora manages swap intelligently. The default swappiness is 60, which balances RAM and swap usage. Lowering swappiness makes the kernel prefer RAM over swap. This helps on systems with plenty of RAM but can hurt if you run out of memory. Fedora often enables zram, which compresses data in RAM before swapping. This is usually better than disk swap. Check your swap setup.
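
Before changing anything, read the current value:

cat /proc/sys/vm/swappiness
# Prints the current swappiness, 60 by default
sysctl vm.swappiness
# The same value via sysctl, printed as 'vm.swappiness = 60'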

sudo sysctl vm.swappiness=10
# Sets swappiness to 10, reducing the tendency to swap
# Lower values keep more data in RAM, higher values swap earlier
# This change is temporary and resets on reboot
# Test this value before making it permanent

To make the change permanent, edit a file in /etc/sysctl.d/. Never edit files in /usr/lib/sysctl.d/. Those files ship with packages and get overwritten on updates.

# /etc/sysctl.d/99-custom.conf
vm.swappiness=10
# Sets swappiness permanently across reboots
# Files in /etc/sysctl.d/ override defaults from /usr/lib/sysctl.d/
# Edit /etc files only, never modify files in /usr/lib/
# The 99- prefix ensures this file loads late in the sequence
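
The file takes effect on the next boot. To apply it immediately, reload the sysctl configuration:

sudo sysctl --system
# Reloads every sysctl configuration directory, including /etc/sysctl.d/
# Applies the new swappiness without a reboot
sysctl vm.swappiness
# Confirms the value took effect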

Fedora enables zram by default on many Workstation and Server configurations via systemd-zram-setup@.service. This service creates a compressed block device in RAM and formats it as swap. The compression ratio depends on your workload. Text and logs compress well. Encrypted data or already compressed media does not. Check if zram is active.

swapon --show
# Lists active swap devices and their sizes
# Look for a device named zram0 to confirm compressed swap is active
# zram reduces disk I/O by keeping swap data in compressed RAM
# If zram is present, it is already helping reduce pressure
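
For a closer look at how well zram is compressing, zramctl from util-linux reports raw and compressed sizes. The ratio depends entirely on your workload, so treat any figure as a rough estimate:

zramctl
# Lists each zram device with its algorithm and size
# DATA is what was written to swap, COMPR is what it occupies in RAM
# Compare the two columns to estimate your compression ratio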

Leave zram alone. It's faster than disk and safer than running out of memory.

Common pitfalls

Watch for these traps. free showing low free memory but high available is normal; don't panic. Killing kworker or kthreadd accomplishes nothing or causes instability, because they are kernel threads, not user processes. Disabling swap triggers the OOM killer the instant RAM fills. Keep swap or zram enabled. The OOM killer is a safety net.

When you run journalctl -xe, the x flag adds explanatory hints and the e flag jumps to the end. Most sysadmins type journalctl -xeu <unit> to filter logs for a specific service. This muscle memory saves time when hunting for OOM events. The OOM killer logs appear in the kernel ring buffer. Use journalctl -k to see kernel messages directly. This isolates memory events from application noise.
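
To pull OOM events out of the kernel log directly:

journalctl -k -b | grep -i "killed process"
# -k restricts output to kernel messages, -b to the current boot
# grep filters for the OOM killer's signature line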

If you see Out of memory: Killed process in the logs, the OOM killer saved the system. Read the entry to see what was killed.

Out of memory: Killed process 1234 (firefox) total-vm:8000000kB, anon-rss:6000000kB
# The kernel terminated firefox because memory was exhausted
# Check journalctl -k for the full context
# This error means you hit the hard limit, not a warning
# The kernel sacrificed firefox to keep the system alive

Read the OOM log. The kernel tells you exactly what it sacrificed to keep the system alive.

When to use this vs alternatives

Use free -h when you need a quick sanity check of available memory. Use smem when you need to identify the true memory cost of processes with shared libraries. Use drop_caches when you suspect stale cache is masking the real usage during testing. Use systemctl restart when a service has a known leak and needs a clean state. Use sysctl vm.swappiness when you want to tune the balance between RAM retention and swap usage. Use zram when you have limited disk I/O bandwidth and need fast swap performance.

Pick the tool that matches the symptom. Diagnosis comes before tuning.

Where to go next