You reboot after a dnf upgrade and the desktop fails to start.
You drop to a TTY, run journalctl, and watch thousands of lines scroll past. Scrolling through a raw dump is impossible. You need to isolate the exact service that crashed, find the timestamp of the failure, and ignore the routine noise. Filtering is not a convenience feature. It is the only way to read the journal without losing context.
How the journal actually stores data
The systemd journal is not a collection of plain text files. It is a binary database that indexes every log entry by unit, timestamp, priority, process ID, and hostname. When you run journalctl without flags, it reads the entire database and streams it to your terminal. Filtering works by querying those indexes instead of scanning line by line. This makes it fast, but it also means you must understand what the indexes contain before you can query them effectively.
Think of the journal like a structured spreadsheet. Each row is a log message. Each column is a metadata field. You can slice by column, sort by timestamp, and filter by severity. The binary format keeps disk usage predictable and allows systemd to rotate logs automatically. It also means you cannot use grep on the raw journal files. You must use journalctl to decode and filter the data.
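You can inspect those columns directly. journalctl can enumerate every field name the journal indexes, and every value recorded for a given field:

```shell
# List every metadata field name the journal knows about
journalctl -N

# List every value recorded for a given field, e.g. all unit names
journalctl -F _SYSTEMD_UNIT
```

Running these before you filter tells you exactly which fields and values are available to query.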
Filter by service and unit
Start with the unit flag. Every systemd service, socket, timer, or target has a canonical name. Use -u to restrict output to that unit. Add -xe to get explanatory hints and jump to the end of the log. This combination is the standard first step for debugging a broken service.
# Restrict output to the SSH daemon unit
journalctl -u sshd
# Add explanatory text and jump to the end of the log
journalctl -xeu sshd
# Follow new log lines in real time as the service runs
journalctl -fu sshd
Systemd normalizes unit names automatically. sshd and sshd.service resolve to the same unit. Custom scripts or legacy daemons might log under a different unit name. Verify the exact name with systemctl list-units --type=service before filtering.
Run journalctl -xeu <unit> first. Read the explanatory hints before guessing at the cause.
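As a quick sketch, here is how to confirm the canonical unit name before filtering (the ssh search pattern is just an illustration; substitute your own service):

```shell
# List active service units and search for the one you want
systemctl list-units --type=service | grep -i ssh

# Include inactive and disabled units in the search
systemctl list-unit-files --type=service | grep -i ssh
```

list-units shows only loaded units; list-unit-files also catches services that never started, which is often exactly the case you are debugging.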
Filter by time and boot cycle
Timestamps in the journal are flexible. You can use natural language, absolute dates, or boot cycle references. The journal tracks each boot as a separate session. This matters when a service crashes during startup but recovers later in the session.
# Show logs from the last hour using natural language
journalctl --since "1 hour ago"
# Show logs between two absolute timestamps
journalctl --since "2025-01-01 00:00:00" --until "2025-01-02 00:00:00"
# Show logs from the current boot cycle only
journalctl -b
# Show logs from the previous boot cycle
journalctl -b -1
The -b flag is critical for boot debugging. If a driver fails to load, the error appears in the previous boot log, not the current one. Combine -b -1 with -u to isolate the exact service that failed during startup. Systemd stores runtime logs in /run/log/journal/ and persistent logs in /var/log/journal/. If persistent logging is disabled, -b -1 will return nothing after a reboot.
Check the previous boot log before restarting. The error message disappears when the system comes back up.
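Persistent logging must be enabled for previous-boot logs to survive a reboot. Under the default Storage=auto setting, journald persists logs whenever /var/log/journal/ exists, so creating the directory is usually enough:

```shell
# Create the persistent journal directory (journald uses it when present)
sudo mkdir -p /var/log/journal

# Apply the expected ownership and ACLs to the new directory
sudo systemd-tmpfiles --create --prefix /var/log/journal

# Restart journald so it switches from runtime to persistent storage
sudo systemctl restart systemd-journald
```

Alternatively, set Storage=persistent explicitly in /etc/systemd/journald.conf to force persistence regardless of whether the directory pre-exists.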
Filter by priority and severity
Syslog priorities range from zero to seven. Zero is emergency. Seven is debug. The -p flag shows messages at the given level and every more severe (numerically lower) level. Specifying err shows errors, critical, alert, and emergency messages. It hides warnings, notices, info, and debug output.
# Show errors and everything more severe
journalctl -p err
# Show warnings and above, restricted to the last 24 hours
journalctl -p warning --since "24 hours ago"
# Show only critical and emergency messages
journalctl -p crit
The priority levels map directly to syslog standards: emerg (0, system unusable), alert (1, action must be taken immediately), crit (2, critical conditions), err (3, error conditions), warning (4, warning conditions), notice (5, normal but significant), info (6, informational), and debug (7, debug-level messages). Most production debugging stops at err or warning. Debug output floods the terminal and masks the actual failure.
Filter by priority before you filter by time. Severity narrows the dataset faster than timestamps do.
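The -p flag also accepts numeric levels and inclusive ranges, which helps when you want a slice of severities rather than everything above a threshold:

```shell
# Numeric form: 3 is err, so this matches journalctl -p err
journalctl -p 3

# Inclusive range: show only crit (2) and err (3), hiding emerg and alert
journalctl -p 2..3
```

Ranges are useful on noisy systems where a flood of warnings would otherwise bury the handful of genuine errors.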
Combine filters for precision
Filters compose naturally. Order does not matter, but grouping them logically improves readability. Combine unit, priority, and time to isolate a specific failure window.
# Show error-level logs from firewalld in the last day
journalctl -u firewalld -p err --since "24 hours ago"
# Show critical messages from the previous boot
journalctl -b -1 -p crit
# Follow debug output for a specific service
journalctl -fu NetworkManager -p debug
You can also filter by arbitrary fields. The journal indexes process IDs, user IDs, and custom metadata. Use _PID= to track a specific process. Use _UID= to track a specific user. Use _COMM= to track a specific executable name.
# Show logs from a specific process ID
journalctl _PID=1234
# Show logs from a specific executable name
journalctl _COMM=python3
# Show logs from a specific user ID
journalctl _UID=1000
Field filters bypass unit boundaries. They are useful when a daemon spawns child processes that log under different units. Combine field filters with -o cat to strip timestamps and read raw messages.
Test your filter with -n 50 first. Verify the output matches your expectations before scrolling through thousands of lines.
Verify and format the output
Once you have a working filter, adjust the output format to match your debugging needs. The default format includes timestamps, hostname, unit, and message. It is verbose but readable. Use -o to change the format.
# Output in pretty-printed JSON for parsing
journalctl -u sshd -o json-pretty
# Output raw messages only, stripping metadata
journalctl -u sshd -o cat
# Reverse order to see the newest entries first
journalctl -u sshd -r
# Limit output to the last 50 lines
journalctl -u sshd -n 50
JSON output is useful for piping into jq or log aggregation tools. Cat output is useful for reading long error messages without timestamp clutter. Reverse order is useful when you only care about the most recent failure. Line limits are useful for quick sanity checks.
Run journalctl --disk-usage before you format. Know how much space the journal is consuming before you decide to export or archive logs.
Common pitfalls and permission traps
The journal splits logs by user and privilege level. Regular users see only their own logs by default. System logs require root privileges or membership in a privileged group, so running journalctl without sudo silently omits most of the system's entries.
Hint: You are currently not seeing messages from other users and the system.
journalctl prints this hint when a non-root user reads the journal. Switch to root, prefix the command with sudo, or add the user to the systemd-journal group. Access is enforced through file permissions and ACLs on the journal files themselves. The default Fedora configuration restricts system logs to root and to members of the systemd-journal, adm, and wheel groups.
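Group membership is the usual durable fix for read access. A quick check and a one-line grant (the username alice is a placeholder):

```shell
# Check which groups the current user belongs to
id -nG

# Grant a user persistent read access to the system journal
sudo usermod -aG systemd-journal alice
```

The new group takes effect at the user's next login, not in the current shell session.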
Unit name mismatches are the second most common trap. The same service might be called httpd in one distribution's unit file and apache2 in another's. A custom script launched outside systemd may carry no unit field at all, so a -u filter misses it entirely. Use systemctl status <unit> to verify the exact unit name before filtering. The status command shows recent log lines and the current state in one view. Always check status before restarting.
Rotated logs are the third trap. Systemd compresses and purges old logs automatically. If --since points to a date before the oldest retained log, the command returns nothing. Check retention settings in /etc/systemd/journald.conf. Edit /etc/, never /usr/lib/. The /usr/lib/ directory ships with the package and gets overwritten on updates. The /etc/ directory is for user modifications.
# /etc/systemd/journald.conf
[Journal]
# Keep logs until they consume 200MB of disk space
SystemMaxUse=200M
# Delete journal entries older than one month
MaxRetentionSec=1month
After changing the configuration, run sudo systemctl restart systemd-journald. The daemon reads the new limits on restart. Old logs are not retroactively deleted. New logs follow the retention policy.
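Retention limits only apply going forward. To reclaim space immediately, journalctl can vacuum archived journal files directly:

```shell
# Remove archived journal files until total usage drops below 200MB
sudo journalctl --vacuum-size=200M

# Remove archived journal files older than 30 days
sudo journalctl --vacuum-time=30d
```

Vacuuming only touches archived files; the active journal file stays intact until journald rotates it.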
Run journalctl --disk-usage first. Read the actual retention numbers before guessing why old logs disappeared.
When to use which approach
Use journalctl -u when you know the service name and need to isolate its output from the rest of the system. Use journalctl -b -1 when debugging a crash that happened during a previous boot cycle. Use journalctl -p err when you only care about failures and want to ignore routine informational messages. Use journalctl --disk-usage when your root partition is filling up and you need to audit log retention before making changes. Use journalctl -o json-pretty when you are piping logs into a parser or aggregation pipeline. Use journalctl _PID= when a daemon spawns child processes that log under different units.
Pick the narrowest filter first. Expand only when the output is empty. Trust the indexes. Manual scrolling drifts, structured queries stay accurate.