You see a flood of failed SSH logins at 3 AM
Your terminal is quiet until a monitoring alert pings your phone. The message says your machine is receiving dozens of authentication failures per minute. You ssh into the box and see a wall of Failed password for invalid user messages. You need to know whether this is background internet noise or a targeted attack. You also need a reliable way to track who is trying to get in, where they are coming from, and whether any attempt actually succeeded.
Fedora ships with two primary tools for this job. The systemd journal captures runtime service logs in real time. The audit daemon sits closer to the kernel and records system calls and file access with tamper-resistant metadata. Both tools work together, but they serve different stages of an investigation.
What is actually happening under the hood
Linux authentication is a pipeline. When a user types a password, the request passes through Pluggable Authentication Modules. PAM checks the credentials against /etc/shadow, applies account lockout policies, and logs the result. The sshd service reports the result to systemd-journald, which stores it in a binary log format. If you need deeper visibility, the Linux Audit Framework records system calls such as openat and write at the kernel level, before the results ever reach the application layer, and PAM reports authentication outcomes to it through the kernel's audit interface.
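The PAM stack that sshd walks through is plain text and worth reading once. Viewing it requires no privileges and changes nothing:

```shell
# List the PAM modules sshd consults during authentication
cat /etc/pam.d/sshd
# Each line names a control flag and a module; pam_unix.so is the
# module that checks the password hash against /etc/shadow
```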
Think of journald as a live security camera feed. It is fast, searchable, and rotates automatically. Think of auditd as a forensic ledger. It records exactly which process touched which file, what the return code was, and whether SELinux allowed or denied the action. You do not need both tools running at the same time for basic monitoring. You need journald for daily operations and auditd when you require compliance tracking or deep incident response.
The journal lives in /var/log/journal/ on disk or in /run/log/journal/ in volatile memory. The audit daemon writes to /var/log/audit/audit.log. Both paths are managed by systemd and logrotate. Neither tool modifies the authentication process itself. They only observe it.
Monitor real-time login attempts with journalctl
The journal is already running on your system. You do not need to install anything to start watching authentication events. The sshd service writes every login attempt to the journal under its unit name. You can stream those entries live and filter for success or failure.
Here is how to watch the SSH daemon log stream in real time and isolate authentication results.
# Stream the journal for the sshd unit in real time
sudo journalctl -u sshd -f | grep -iE 'failed|accepted|invalid'
# -u targets the specific systemd unit
# -f follows new entries as they are written
# grep filters the output for authentication keywords
# -iE enables case-insensitive extended regex matching
This command gives you immediate visibility into active attempts. If you are troubleshooting a specific user account, add the _UID or _COMM field to narrow the scope. The journal stores entries in /var/log/journal/ by default. If you are running on a minimal server install, the journal might be in volatile memory and will disappear on reboot. Run sudo mkdir -p /var/log/journal followed by sudo journalctl --flush to persist logs to disk.
A quick convention aside: most sysadmins type journalctl -xeu sshd when debugging. The x flag adds explanatory help text from the message catalog, the e flag jumps to the end of the log, and the u flag filters by unit. It is faster than scrolling through raw output. Another standard pattern is journalctl --disk-usage. Run it weekly. The journal will silently consume your root partition if you leave the default limits untouched.
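Both conventions in one place, as you would type them:

```shell
# Open the end of the sshd log with catalog explanations attached
sudo journalctl -xeu sshd
# Report how much disk space the journal currently occupies
journalctl --disk-usage
```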
Here is how to search historical logs for a specific source IP address.
# Query the journal for connections from a specific IP
sudo journalctl -u sshd | grep '192.168.1.100'
# grep matches the IP against the full log line
# Add --since "yesterday" to limit the search window
# Add -n 50 to show only the last 50 matching entries
The journal does not emit structured JSON by default. It stores entries in a custom binary format, and sshd prints the remote address in plain text inside the message, which is why the grep approach works. If you need structured querying, install jq and pipe journalctl -o json into it. The binary format is faster for the system, but plain text grep is usually enough for login tracking.
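If you do want the structured route, the JSON form looks like this. The canned entry below is a stand-in so the jq filter is visible on its own; in production you would feed the filter from journalctl directly, as the comment shows.

```shell
# Live form, assuming a running journald:
#   sudo journalctl -u sshd -o json | jq -r 'select(.MESSAGE | test("Failed")) | .MESSAGE'
# Same filter demonstrated on a canned journal entry:
printf '%s\n' '{"MESSAGE":"Failed password for root from 198.51.100.9 port 40022 ssh2","_COMM":"sshd"}' |
  jq -r 'select(.MESSAGE | test("Failed")) | .MESSAGE'
```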
Set up persistent tracking with auditd
Real-time streaming is useful for active incidents. It does not help when you need to reconstruct events from yesterday or prove that a specific IP address triggered a lockout. The audit daemon fills that gap. It records system calls, file modifications, and authentication events with process IDs, user IDs, and timestamps.
Here is how to install the audit framework and create a persistent rule for tracking login attempts.
# Install the audit daemon and utilities
sudo dnf install audit -y
# Enable and start the service immediately
sudo systemctl enable --now auditd.service
# Create a persistent rule directory if it does not exist
sudo mkdir -p /etc/audit/rules.d/
# Write a rule to watch authentication log files for writes
echo "-w /var/log/secure -p wa -k login_attempts" | sudo tee /etc/audit/rules.d/99-login.rules
# Reload the audit rules from the persistent directory
sudo augenrules --load
The rule syntax follows a strict pattern. The -w flag specifies the file path. The -p flag defines the permissions to watch. wa means watch for writes and attribute changes. The -k flag assigns a custom key name. You will use that key to search the logs later. The augenrules command merges the text files in /etc/audit/rules.d/ into /etc/audit/audit.rules and loads them into the kernel. Always use augenrules instead of auditctl for persistent configuration. Runtime rules vanish on reboot. Persistent rules survive. One caveat: a -w watch requires the target file to exist, and on a Fedora install without rsyslog, /var/log/secure may be absent. Install rsyslog or point the watch at a path that exists.
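After loading, confirm the rule actually reached the kernel:

```shell
# List the rules the kernel is currently enforcing
sudo auditctl -l
# The output should include the watch you just defined:
# -w /var/log/secure -p wa -k login_attempts
```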
Here is how to query the audit logs using the key you just defined.
# Search audit logs for the custom key
sudo ausearch -k login_attempts
# -k filters by the rule key name
# Add -ts today to limit results to the current day
# Add -i to translate numeric UIDs and GIDs to names
The output will show the exact process that wrote to the log, the user context, and the syscall number. If you need to track failed logins specifically, you do not need a second rule: PAM emits USER_AUTH and USER_LOGIN audit records on its own, and sudo ausearch -m USER_LOGIN --success no pulls out the failures. For your custom watch rules, the audit framework does not filter by success or failure. It records every match, and you filter the results after the fact.
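Filtering after the fact usually means answering one question: which addresses are hammering the box hardest? The sample lines below stand in for real journal output; in production, feed the pipeline from sudo journalctl -u sshd | grep 'Failed password' instead of printf.

```shell
# Count failed-login attempts per source IP, busiest first.
# The printf lines are canned sshd messages for demonstration.
printf '%s\n' \
  'Failed password for invalid user admin from 203.0.113.7 port 51234 ssh2' \
  'Failed password for invalid user admin from 203.0.113.7 port 51240 ssh2' \
  'Failed password for root from 198.51.100.9 port 40022 ssh2' |
  awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)}' |
  sort | uniq -c | sort -rn
# The awk loop prints the word after "from", which sshd always
# uses to introduce the remote address
```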
Verify the monitoring pipeline
You need to confirm that both tools are capturing events correctly before you rely on them during an incident. Trigger a controlled test. Attempt a login with an incorrect password from a secondary terminal or a remote machine. Then check both log sources.
Here is how to verify that the journal captured the failed attempt.
# Check recent sshd entries for the test failure
sudo journalctl -u sshd --since "5 minutes ago" | grep -i failed
# --since limits the search window to avoid noise
# grep isolates the failure message for quick verification
Here is how to verify that the audit daemon recorded the file access.
# Query audit logs for the test window
sudo ausearch -k login_attempts -ts recent -i
# -ts recent limits results to roughly the last ten minutes
# -i translates IDs to readable names
# Look for the openat or write syscall in the output
If the journal shows the failure but auditd does not, check the audit service status. Run sudo systemctl status auditd.service. If the service is inactive, SELinux might be blocking it. Check journalctl -t setroubleshoot for denial messages. If SELinux is enforcing and blocking auditd, run sudo restorecon -Rv /etc/audit/ to fix file contexts. Do not disable SELinux to make auditd work. Fix the context instead.
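A quick health check of the audit side covers both the service and the kernel state:

```shell
# Confirm the service is running
sudo systemctl is-active auditd.service
# Ask the kernel for audit status: enabled flag, daemon PID,
# and the lost-event counter, which should stay at zero
sudo auditctl -s
```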
Common pitfalls and what the errors look like
Monitoring tools fail in predictable ways. Knowing the exact error messages saves hours of guessing.
The journal will return nothing if the unit name is misspelled. journalctl itself prints -- No entries --, while systemctl status reports Unit sshd.service could not be found. Double-check the service name with systemctl list-units --type=service | grep ssh.
The audit daemon rejects rules with invalid syntax. Running augenrules --load surfaces this as a message like There was an error in line N of /etc/audit/audit.rules. The audit framework does not guess your intent. It requires exact flag ordering. Check the man page for audit.rules if the loader complains.
Disk space exhaustion is the most common operational failure. The journal grows until it hits the configured limit. If the limit is too high, the root filesystem fills up and services crash. Run journalctl --disk-usage to check current size. Edit /etc/systemd/journald.conf and set SystemMaxUse=500M to cap growth. Restart the daemon with sudo systemctl restart systemd-journald.
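If you prefer drop-in files to editing the main config, the same cap can live in a small fragment under /etc/systemd/journald.conf.d/ (the filename here is arbitrary):

```
# /etc/systemd/journald.conf.d/50-size-cap.conf
[Journal]
SystemMaxUse=500M
```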
Audit logs rotate differently than the journal. The daemon rotates its own files by size, governed by /etc/audit/auditd.conf rather than logrotate. If you lose historical data sooner than expected, check the num_logs and max_log_file settings there. The defaults keep five files of 8 MB each. Raise num_logs if your compliance policy requires longer retention.
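The rotation knobs sit in /etc/audit/auditd.conf. The values shown here are the common stock defaults; verify them against your installed file:

```
# /etc/audit/auditd.conf (excerpt)
max_log_file = 8             # rotate once a log file reaches 8 MB
num_logs = 5                 # keep five rotated files
max_log_file_action = ROTATE # rotate rather than halt or suspend
```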
Config files in /etc/ are user-modified. Files in /usr/lib/ ship with the package. Edit /etc/. Never edit /usr/lib/. The same rule applies to audit configuration. Place your custom rules in /etc/audit/rules.d/. Let the package manager handle /etc/audit/audit.rules.
When to use journalctl versus auditd versus fail2ban
Use journalctl when you need immediate visibility into active service logs and want to filter by unit, priority, or time window. Use auditd when you require syscall-level tracking, compliance reporting, or tamper-resistant records of file access and authentication events. Use fail2ban when you want automated IP blocking based on repeated failures without manual intervention. Use sshd_config when you need to harden the daemon itself by disabling password authentication, changing the port, or restricting allowed users.
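Hardening via sshd_config also works well as a drop-in, assuming your Fedora install ships the stock Include /etc/ssh/sshd_config.d/*.conf directive. A minimal sketch; the account names are placeholders:

```
# /etc/ssh/sshd_config.d/50-hardening.conf
# Keys only; removes the password-guessing surface entirely
PasswordAuthentication no
# Refuse direct root logins
PermitRootLogin no
# Placeholder names; restrict logins to known accounts
AllowUsers deploy alice
# Validate before restarting: sudo sshd -t
```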
Check the logs before you block. Run journalctl -u sshd first. Read the actual error before guessing.