The scenario
You are managing three Fedora servers. One handles web traffic, one runs a database, and one processes background jobs. A disk fills up on the database server. The web server starts returning 500 errors. You SSH into each machine, run journalctl, and scroll through thousands of lines trying to find the root cause. The logs are scattered. The timeline is broken. You need a single place to see what happened, when it happened, and why it broke.
What the stack actually does
The ELK stack centralizes log data by routing it through three dedicated services. Think of it like a postal sorting facility. Logstash is the mail carrier that picks up logs from your servers, stamps them with timestamps, and routes them. Elasticsearch is the warehouse that indexes every piece of mail so you can search it instantly. Kibana is the front desk where you visualize the data and build dashboards. On Fedora, these three components run as standard systemd services. They talk to each other over localhost by default. You do not need to compile anything from source. The RPM packages ship with systemd unit files, dedicated service users, and sensible default configurations.
Run systemctl status before you restart anything. Checking the current state saves you from masking a deeper configuration error.
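In practice that check looks like this; --no-pager keeps the output from opening in a pager, which matters when you script the check:

```shell
systemctl status elasticsearch logstash kibana --no-pager
# Prints load state, active state, main PID, and recent journal lines per service
systemctl is-active elasticsearch logstash kibana
# Prints one word per service: active, inactive, or failed
```

The one-word form is handy in scripts because a nonzero exit code means at least one service is not active.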
Install and start the services
Recent Elasticsearch and Logstash packages bundle their own Java runtime, so you do not need to install a JDK separately. Installing the packages creates the elasticsearch and logstash system users and drops configuration files into /etc/. You will only edit files in /etc/. Files in /usr/lib/ ship with the package and will be overwritten on the next dnf upgrade.
Here is how to pull the packages and register them with systemd.
sudo dnf install -y elasticsearch logstash kibana
# Pulls the three core packages and their runtime dependencies
sudo systemctl enable --now elasticsearch logstash kibana
# Creates symlinks in /etc/systemd/system for boot persistence and starts them immediately
sudo systemctl restart logstash
# Reloads the pipeline after any configuration change
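The split between editable and package-owned files is easy to inspect with rpm; a quick sketch, assuming the packages installed above:

```shell
rpm -ql elasticsearch | grep '^/etc'
# Lists the configuration files the package owns under /etc/, which are safe to edit
rpm -ql elasticsearch | grep '^/usr/lib'
# Lists the files the next dnf upgrade will overwrite, so leave them alone
```

The same query works for logstash and kibana if you want to map out their file layouts before changing anything.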
Elasticsearch requires a dedicated heap allocation. The Fedora package ships with a sensible default in /etc/elasticsearch/jvm.options, but you should verify it matches your available RAM. Logstash starts a pipeline manager that watches /etc/logstash/conf.d/. Kibana binds to port 5601 on localhost by default. If you plan to access Kibana from another machine, you will need to adjust the network binding and open the firewall.
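To check or change the heap, recent Elasticsearch releases also read drop-in files from jvm.options.d, which survives package upgrades better than editing the shipped file directly. A minimal sketch, assuming the stock package paths; the file name heap.options is arbitrary, only the .options suffix matters:

```shell
# Show any explicit heap settings in the shipped defaults
# (they may be commented out in recent releases)
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options

# Override the heap in a drop-in file instead of editing the base file
printf -- '-Xms1g\n-Xmx1g\n' | sudo tee /etc/elasticsearch/jvm.options.d/heap.options

sudo systemctl restart elasticsearch
# The JVM only picks up heap changes on restart
```

Keep -Xms and -Xmx equal and well below half of physical RAM so the operating system page cache has room to work.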
Run journalctl -xeu elasticsearch.service if the service fails to start. The -x flag adds explanatory catalog text to each message and -e jumps to the end of the journal, so the failing configuration line is usually right on screen.
Configure the Logstash pipeline
Logstash processes data in three stages: input, filter, and output. The input stage defines where logs come from. The filter stage parses, enriches, or drops events. The output stage ships the structured data to Elasticsearch. You define this flow in a single configuration file. The pipeline manager reads the file, compiles it into a Ruby-based execution graph, and applies it to every incoming event.
Here is a minimal pipeline that reads system logs, extracts timestamps and severity levels, and forwards them to Elasticsearch.
input {
  file {
    path => "/var/log/messages"
    # Reads the traditional syslog file line by line without blocking the pipeline
    start_position => "beginning"
    # Processes historical entries on first run instead of only new lines
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    # Parses the raw text into named fields using a prebuilt syslog pattern
  }

  date {
    match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    # Replaces the default @timestamp with the actual log entry time
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # Points to the local Elasticsearch HTTP API for indexing
    index => "fedora-logs-%{+YYYY.MM.dd}"
    # Creates a daily index pattern for easier retention management
  }
}
Save the file as /etc/logstash/conf.d/logstash.conf. Restart Logstash to apply the changes. The pipeline manager validates the syntax before loading it. If the syntax is broken, Logstash will refuse to start and print a parse error to the journal.
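You can also run the same syntax check by hand before touching the service; a sketch, assuming the default install location under /usr/share/logstash:

```shell
sudo -u logstash /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf
# Parses the pipeline and exits: logs "Configuration OK" on success,
# or the line and column of the first syntax error on failure

sudo systemctl restart logstash
# Safe to restart once the check passes
```

Running the check as the logstash user also catches permission problems on the config file before they surface as a failed service.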
Edit only /etc/logstash/conf.d/. Never modify files under /usr/share/logstash/ or /usr/lib/. Package updates will overwrite manual edits and break your pipeline.
Verify the pipeline
You need to confirm three things: Logstash is parsing events, Elasticsearch is accepting them, and Kibana can query them. Start by checking the service states. Then query the Elasticsearch cluster health endpoint. Finally, open Kibana in a browser and create an index pattern.
Here is how to verify the cluster is ready to receive data.
curl -s http://localhost:9200/_cluster/health?pretty
# Queries the REST API and returns JSON formatted for human readability
curl -s http://localhost:9200/_cat/indices?v
# Lists all active indices and shows document counts and shard status
sudo journalctl -xeu logstash.service --since "10 minutes ago"
# Shows recent pipeline activity and catches silent parsing failures
A healthy cluster returns status: green or yellow. Green means all primary and replica shards are active. Yellow means primaries are active but replicas are unassigned, which is normal for a single-node setup. If the index pattern does not appear in Kibana, check the index setting in your Logstash config. The pattern must match exactly.
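Because the output block embeds the date in the index name, you can compute today's expected index in the shell and query it directly. A sketch, assuming the pipeline above and an unsecured Elasticsearch on localhost; note that Logstash evaluates %{+YYYY.MM.dd} in UTC:

```shell
# Match Logstash's UTC date stamping with date -u
TODAY_INDEX="fedora-logs-$(date -u +%Y.%m.%d)"
echo "$TODAY_INDEX"

# Count the documents indexed under today's index
curl -s "http://localhost:9200/${TODAY_INDEX}/_count?pretty"
```

If the count is zero but the index exists, the pipeline is running and the mismatch is in your Kibana index pattern, not in Logstash.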
Run curl against the health endpoint before you blame the network. Most "missing data" issues are just a mismatched index name.
Common pitfalls and error patterns
Logstash will refuse to start if the pipeline config contains a syntax error. The error message appears in the journal and stops the service from entering the active state.
[ERROR] 2024-05-12 14:23:01.105 [[main]-pipeline-manager] javapipeline - Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<LogStash::ConfigurationError: Expected one of #, {, }, at line 12, column 1 (byte 289) after filter {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
}
The missing closing brace on the filter block breaks the parser. Add the brace and restart the service. Elasticsearch will also fail to start if the JVM heap exceeds available RAM. You will see an OutOfMemoryError in /var/log/elasticsearch/elasticsearch.log. Lower the -Xms and -Xmx values in /etc/elasticsearch/jvm.options to match your hardware.
SELinux denials are another common blocker. If Logstash cannot read /var/log/messages, the audit log will show an avc: denied entry. Run ausearch -m avc -ts recent to find the denial. Apply the correct boolean or restorecon the file. Do not disable SELinux. The policies exist to prevent the pipeline from reading sensitive credentials or kernel ring buffers.
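The denial-to-fix workflow looks roughly like this; the module name logstash-local is an arbitrary label chosen for this example, not a standard policy name:

```shell
sudo ausearch -m avc -ts recent
# Shows recent AVC denials with the source process and target file context

sudo restorecon -v /var/log/messages
# Restores the expected SELinux label if the file was mislabeled

# As a last resort, generate and load a local policy module from the logged denials
sudo ausearch -m avc -ts recent | audit2allow -M logstash-local
sudo semodule -i logstash-local.pp
```

Review the generated .te file before loading the module so you allow only the specific access the pipeline needs.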
Firewall rules also trip up remote Kibana access. Kibana binds to 127.0.0.1 by default. Change server.host: "0.0.0.0" in /etc/kibana/kibana.yml, then open the port.
sudo firewall-cmd --permanent --add-port=5601/tcp
# Adds the rule to the persistent firewall configuration
sudo firewall-cmd --reload
# Applies the change to the running firewall without dropping active connections
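After the reload, confirm both the listener and the rule; a quick check, assuming the ss tool from iproute2 is installed:

```shell
sudo systemctl restart kibana
# Kibana only rereads kibana.yml on restart
sudo ss -tlnp | grep 5601
# Shows whether Kibana is listening on 0.0.0.0:5601 or still on 127.0.0.1
sudo firewall-cmd --list-ports
# Confirms 5601/tcp survived the reload
```

If the listener still shows 127.0.0.1, the server.host change did not take effect and the firewall rule alone will not help.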
Read the actual error before guessing. Half the time the journal tells you exactly which file or permission is missing.
When to use ELK versus alternatives
Use the ELK stack when you need full-text search across terabytes of logs and want to build custom dashboards with Kibana. Use traditional rsyslog forwarding when you only need to ship logs to a central file for compliance and do not care about real-time querying. Use Grafana Loki with Promtail when you want lightweight log aggregation that reuses your existing Prometheus infrastructure and keeps storage costs low. Use Cockpit's built-in journal viewer when you are managing a small fleet and only need to glance at recent boot or service errors from a browser. Stay on local journalctl with logrotate if you are running a single workstation and want zero external dependencies.
Trust the package manager. Manual edits to package-owned files drift out of sync with every upgrade; configuration kept in /etc/ stays put.