The scenario
You just deployed a new Fedora server or decided to monitor your desktop machine. The system runs fine until a disk fills up or a service crashes at 3 AM. You need a way to see what is happening before the outage becomes a panic. Prometheus collects the numbers. Grafana draws the charts. Together they turn raw metrics into actionable alerts.
How the pull model actually works
Monitoring on Fedora typically follows a pull architecture. Prometheus does not wait for your services to push data. It visits each target on a fixed schedule, requests metrics over HTTP, and stores the results in a local time-series database. Grafana does not collect data. It queries Prometheus and renders the results as dashboards. Think of Prometheus as a meter reader walking a route every fifteen seconds. Think of Grafana as the dashboard in your car that translates those readings into gauges and warning lights.
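Under the hood, a scrape is just an HTTP GET against a /metrics path, and nothing stops you from impersonating Prometheus with curl. Once the services from the install section below are running, you can fetch the plain-text exposition format yourself:
curl -s http://localhost:9090/metrics | head -n 6
# Prometheus exposes its own metrics in the same text format it scrapes.
# Each line is either a # HELP/# TYPE annotation or a metric sample.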
The separation of concerns matters. Prometheus handles storage, querying, and alerting rules. Grafana handles visualization and dashboard sharing. You can run Prometheus without Grafana. You can run Grafana with other data sources. Most Fedora users pair them because the integration is straightforward and both packages ship in the official repositories.
Install and launch the services
Fedora packages both tools in the default repositories. You do not need third-party repositories or container images to get started. Run the installation command and enable the systemd units.
sudo dnf install -y prometheus grafana
# -y skips the confirmation prompt. Both packages pull in their dependencies.
# prometheus provides the metrics collector and time-series database.
# grafana provides the web UI and dashboard engine.
Start the services and enable them to survive reboots. Always check the unit status immediately after enabling. The status command shows recent log lines and the current state in one view.
sudo systemctl enable --now prometheus
# enable creates the symlink for boot. --now starts the unit immediately.
# Prometheus binds to port 9090 by default.
sudo systemctl enable --now grafana-server
# The package names the unit grafana-server, not grafana.
# Grafana binds to port 3000 by default.
Open a browser and navigate to http://localhost:3000. The default credentials are admin for both username and password. Change the password on first login. The web interface will prompt you to do so.
Run systemctl status before you restart anything. Half the time the service is already healthy and you are chasing a ghost.
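The check takes one command, and status accepts several units at once:
systemctl status prometheus grafana-server
# Prints the active/failed state plus the most recent journal lines for each unit.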
Configure the scrape targets
Prometheus reads its configuration from /etc/prometheus/prometheus.yml. Never edit files in /usr/lib/prometheus/. Those files ship with the package and get overwritten on updates. Your modifications belong in /etc/.
The default configuration scrapes Prometheus itself. You will want to add your own targets. Create a new scrape job for the local machine or any service that exposes a /metrics endpoint.
# /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s
  # How often Prometheus visits each target. 15 seconds is the standard baseline.
  evaluation_interval: 15s
  # How often alerting rules are evaluated. Match scrape_interval for simplicity.
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
        # Prometheus always monitors itself. Keep this block intact.
  - job_name: "fedora-host"
    static_configs:
      - targets: ["localhost:9100"]
        # Node Exporter listens on 9100. Add this job once you install it (see below).
        # static_configs lists the endpoints Prometheus will pull from.
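The fedora-host job assumes Node Exporter is installed. The exact package and unit names have shifted between Fedora releases, so search rather than trusting a name from memory:
dnf search node_exporter
# Lists the candidate packages; pick the one for your release.
# Install it, enable its systemd unit, and confirm a listener on port 9100.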
Save the file and reload Prometheus. You do not need to restart the service. A reload applies the new configuration without dropping existing time-series data.
sudo systemctl reload prometheus
# SIGHUP triggers a config reload. The process stays alive.
# Metrics collection continues uninterrupted during the reload.
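To confirm a reload actually took effect, ask the HTTP API which configuration is live:
curl -s http://localhost:9090/api/v1/status/config | head -n 20
# Returns the currently loaded configuration inside a JSON envelope.
# If your new scrape job is missing from the output, the reload was rejected.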
If you are exposing Grafana or Prometheus to other machines on your network, open the firewall ports. Fedora ships with firewalld enabled by default. The runtime and persistent configurations must stay in sync.
sudo firewall-cmd --permanent --add-port=9090/tcp
# Permanent rules survive reboots. They do not take effect immediately.
sudo firewall-cmd --permanent --add-port=3000/tcp
# Grafana web UI requires port 3000.
sudo firewall-cmd --reload
# --reload applies the permanent rules to the active firewall.
# Always run --reload after editing persistent rules.
Reload the firewall after every rule change. Otherwise the runtime config and the persistent config diverge, and you will spend an hour wondering why your rules are not working.
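Comparing the two views catches that divergence in seconds:
sudo firewall-cmd --list-ports
# The runtime rules enforced right now.
sudo firewall-cmd --permanent --list-ports
# The persistent rules that load on the next reload or reboot.
# If the two lists differ, a --reload is missing.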
Wire Grafana to the metrics database
Grafana needs to know where Prometheus lives. Log into the Grafana web interface and navigate to Connections > Data Sources. Click Add data source and select Prometheus.
Set the URL to http://localhost:9090. Leave the access method as Server (default). Save and test. Grafana will query the Prometheus API and return a success message if the connection works.
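If you prefer configuration files over clicking through the UI, Grafana can provision the same data source from a YAML drop-in. A minimal sketch, assuming the stock package paths:
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090
    access: proxy
    # proxy is the file-based name for the Server access method.
    isDefault: true
Restart grafana-server after adding the file. Provisioned data sources show up in the UI but are read-only there.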
Add a dashboard next. The official Prometheus 2.0 Stats dashboard is available in the Grafana catalog. Search for it by ID or name, import it, and select your Prometheus data source. The panels will populate automatically.
Do not paste raw SQL into Grafana panels; a Prometheus data source speaks PromQL, not SQL, and the query language is strict. A missing label matcher or a typo in the metric name returns empty results without warning.
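A safe pair of starter queries, assuming only the default self-scrape job from earlier:
up
# Returns 1 for every healthy target and 0 for every failed one.
rate(prometheus_http_requests_total[5m])
# Per-second rate of HTTP requests to Prometheus, averaged over five minutes.
Try them in Grafana's Explore view or at http://localhost:9090/graph before committing them to a panel.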
Verify the data pipeline
Verification happens in three layers. Check the systemd units, check the HTTP endpoints, and check the stored metrics.
systemctl is-active prometheus grafana-server
# Returns active for both if the services are running.
# Quick sanity check before digging into logs.
Query the Prometheus API directly to confirm it is accepting scrapes.
curl -s http://localhost:9090/api/v1/targets | python3 -m json.tool
# -s suppresses the progress meter. python3 formats the JSON output.
# Look for "health": "up" in the response.
Check the journal for configuration errors or scrape failures. The -xe flags add explanatory text and jump to the end of the log.
journalctl -xeu prometheus
# -x adds explanatory context to log lines. -e jumps to the end.
# -u filters the output to a single unit.
If you see "scrape failed" or "context deadline exceeded" in the logs, your target is either down or unreachable. Verify that the target process is listening on the expected port:
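ss -tlnp | grep 9100
# -t TCP sockets, -l listening only, -n numeric ports, -p owning process.
# No output means nothing is listening on 9100 on this machine.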
Run journalctl -xe first. Read the actual error before guessing.
Common pitfalls and exact error strings
Configuration syntax errors are the most common blocker. Prometheus validates the YAML on reload. A missing space or a misaligned colon stops the reload and leaves the old configuration active.
level=error ts=2024-05-12T14:22:01.123Z caller=main.go:568 msg="Error loading config" err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: line 14: mapping values are not allowed in this context"
The error points to the exact line. Fix the indentation and reload. Prefer a reload over a restart here: a restart interrupts scraping and forces Prometheus to replay its write-ahead log from disk, which can take minutes on a large database.
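You can also catch these errors before they ever reach the running service. Upstream ships promtool alongside the server binary (the Fedora package includes it as well, as far as I can tell), and it validates a configuration file offline:
promtool check config /etc/prometheus/prometheus.yml
# Parses the file and reports the first error with a line number.
# Exits non-zero on failure, so it slots into scripts and pre-commit hooks.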
SELinux denials appear when a service tries to bind to a non-standard port or read a file outside its policy. Check the audit log before disabling SELinux.
type=AVC msg=audit(1715548921.000:112): avc: denied { name_bind } for pid=1234 comm="prometheus" src=9090 scontext=system_u:system_r:prometheus_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
The one-line summary in journalctl -t setroubleshoot tells you exactly what policy is blocking the action. Apply the correct boolean or port label instead of switching SELinux to permissive.
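Two commands from standard Fedora tooling narrow a denial down before you change anything. semanage lives in policycoreutils-python-utils if it is not already installed:
sudo ausearch -m avc -ts recent
# Lists recent AVC denials recorded in the audit log.
sudo semanage port -l | grep 9090
# Shows which SELinux port type currently covers the port in question.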
Grafana returns 401 Unauthorized if you keep trying the default credentials after the password has been changed. Reset the admin password via the CLI if you lock yourself out.
grafana-cli admin reset-admin-password newpassword
# Resets the admin password in the SQLite database.
# Requires root privileges. Run it only when the web UI is inaccessible.
Trust the package manager. Keep your changes in /etc/, where updates preserve them; edits under /usr/lib/ drift and get silently overwritten.
When to use this stack versus alternatives
Use Prometheus and Grafana when you need a reliable, queryable time-series database with built-in alerting and a mature visualization layer. Use Node Exporter alongside Prometheus when you want hardware-level metrics like CPU load, disk I/O, and network throughput. Use Telegraf with InfluxDB when you prefer a push-based architecture and want to ship metrics from dozens of lightweight agents. Use a simple cron job with mailx when you only need to check one metric and do not want to maintain a monitoring stack. Stick with the stock Fedora packages if you rarely need to deviate from the defaults.