How to Host Containers with Podman as Systemd Services on Fedora Server

Generate a systemd unit file with podman generate systemd and install it to run containers as persistent services.

The scenario

You just finished building a container image for a web app or a background worker. It runs perfectly when you type podman run in your terminal. Then you close the terminal. The container dies. You need it to survive reboots, restart automatically on crashes, and run without an active shell session. You try wrapping it in a cron job or a bash loop. Both approaches break when the system updates or when you need proper logging and dependency ordering. The correct path is handing the container over to systemd.

What systemd and Podman are actually doing

Systemd is the init system that manages every process on Fedora. It tracks state, handles dependencies, and restarts services when they fail. Podman is a daemonless container engine. When you run a container from a terminal, it is tied to your login session: close the shell and the container goes down with it. Systemd changes that relationship. It becomes the direct supervisor.

Think of it like handing off a running project to a dedicated operations manager. You set the initial parameters once. The manager watches its health, restarts it if it crashes, and ensures it starts after the services it depends on, like databases, and before the services that depend on it, like reverse proxies. The podman generate systemd command bridges these two worlds. It translates Podman runtime arguments into a native systemd unit file. Systemd then executes podman run or podman start with the exact same flags you originally used, but under systemd lifecycle control.

Systemd also handles cgroup resource limits, logging routing, and service ordering. The container stops being a detached process and becomes a first-class system service. You gain structured logging in the journal, automatic restart policies, and integration with host-level firewall and storage rules.

Generating and installing the unit file

Start with a running container that you know works. The generator reads the current configuration and writes a unit file that reproduces it. You need to decide whether systemd should create a fresh container on boot or resume an existing one.

Use the --new flag when you want systemd to run podman run every time the service starts. This guarantees a clean state but discards any runtime data unless you mounted persistent volumes. Omit --new (the default) when you want systemd to run podman start on an already created container. This preserves runtime state but requires the container to exist before the service starts.
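The difference shows up in the generated ExecStart line. The excerpts below are illustrative and trimmed; the real files carry the exact flags of your container:

ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --rm -d --replace --name my-app my-registry/my-app:latest
# Generated with --new: a fresh container from the image on every start
ExecStart=/usr/bin/podman start my-app
# Generated without --new: resume the container that already exists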

Here is how to generate a unit file for a fresh container on boot:

podman generate systemd --new --name my-app --files --restart-policy always
# --new tells systemd to run podman run instead of podman start
# --name targets the specific container you want to manage
# --files outputs to disk instead of printing to stdout
# --restart-policy maps to the systemd Restart= directive (always, on-failure, and so on)

The command creates a file named container-my-app.service in your current directory. Systemd expects unit files in /etc/systemd/system/ for administrator overrides. Files in /usr/lib/systemd/system/ ship with packages and get overwritten during updates. Always place custom units in /etc/.

Move the file and reload the systemd manager:

sudo mv container-my-app.service /etc/systemd/system/
# /etc/systemd/system/ is the correct location for custom admin units
sudo systemctl daemon-reload
# daemon-reload forces systemd to rescan unit directories and pick up the new file
sudo systemctl enable --now container-my-app
# enable adds it to the default target for boot persistence
# --now starts it immediately without requiring a separate start command

Systemd now owns the container lifecycle. You can stop, start, and check the status using standard systemctl commands. The container will survive reboots and restart automatically if the process exits with a non-zero code.
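The day-to-day lifecycle commands are the standard systemctl set:

sudo systemctl stop container-my-app
# Stops the container gracefully through podman stop
sudo systemctl restart container-my-app
# With --new, this tears down the old container and creates a fresh one
sudo systemctl disable --now container-my-app
# Removes the boot symlink and stops the service immediately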

Reboot once to confirm persistence. A service that only survives systemctl start is not finished.

Verifying the service is running

Do not assume the service started correctly just because the command returned without error. Check the actual state and recent logs in one step.

systemctl status container-my-app
# Shows active state, main PID, and the last ten log lines
journalctl -xeu container-my-app
# -x adds explanatory annotations, -e jumps to the end, -u filters by unit

Look for Active: active (running) in the status output. The journal output will show the exact podman run command systemd executed. If you see Restart=on-failure in the unit file, systemd will attempt to restart the container up to the configured limit before marking it as failed. Check the ExecStart line to verify all your original volume mounts and environment variables survived the translation.

Systemd routes container stdout and stderr directly into the journal. You no longer need podman logs. The journal provides timestamped, structured output that you can grep, filter by priority, or forward to a remote syslog server. Use journalctl -u container-my-app --since "10 minutes ago" to isolate recent activity.

Run journalctl first. Read the actual error before guessing.

Common pitfalls and error messages

The generator is reliable, but it does not guess your intent. You will run into three specific failure modes.

The first is a missing container in a unit generated without --new. Systemd will try to run podman start on a container that does not exist yet. The journal will show:

Error: no container with name or ID "my-app" found: no such container

Switch to --new or create the container manually before enabling the service.

The second is a rootless permission mismatch. If you generated the unit file as a regular user but installed it in /etc/systemd/system/, systemd runs the container as root, where your user's images, containers, and environment do not exist. Depending on the setup, the service fails with missing-container errors or with:

Error: cannot find XDG_RUNTIME_DIR

Keep rootless containers in the user systemd directory at ~/.config/systemd/user/ and manage them with systemctl --user enable --now container-my-app. Run loginctl enable-linger so the user services start at boot and keep running after you log out. Only use /etc/systemd/system/ for containers that actually need root privileges or host-level network access.
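For reference, here is the complete rootless flow, assuming a working container named my-app:

podman generate systemd --new --name my-app --files --restart-policy always
# No sudo anywhere; the unit belongs to your user
mkdir -p ~/.config/systemd/user
mv container-my-app.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-my-app
loginctl enable-linger "$USER"
# enable-linger lets the service start at boot and survive logout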

The third is a port or name conflict. If you previously ran the container manually and forgot to stop it, the service cannot bind the same ports or mount the same paths. You will see an error like:

Error: port is already allocated

Stop the manual container first. Systemd cannot share resources with a manually managed instance. Run podman stop my-app before enabling the service.
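Clearing a conflict looks like this:

sudo podman ps --all --filter name=my-app
# Confirm whether a manually started instance exists and whether it is running
sudo podman stop my-app && sudo podman rm my-app
# Free the name, ports, and mounts
sudo systemctl start container-my-app
# Hand the lifecycle back to systemd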

Always check journalctl -xeu container-my-app before guessing. The log line tells you exactly which flag failed.

Updating containers and handling image changes

Containers do not update themselves. When a new image version ships, the running container continues using the old layers. Systemd does not know about container registries. You must pull the new image, stop the service, remove the old container, and let systemd recreate it.

Here is the safe update sequence:

sudo podman pull my-registry/my-app:latest
# Fetch the new image layers without affecting the running container
sudo systemctl stop container-my-app
# Gracefully stop the service and wait for the container to exit
sudo podman rm my-app
# Remove the old container if it still exists; units generated with --new normally remove it on stop
sudo systemctl start container-my-app
# systemd runs the ExecStart line again and creates a fresh container

The --new flag makes this workflow clean. Systemd destroys the old container and builds a new one from the latest image. If you generated the unit without --new, systemd only runs podman start on the existing container, so you must remove and recreate the container from the new image yourself before restarting the service.
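If you want Podman to run this loop for you, podman auto-update pulls newer images and restarts the matching systemd units. It only acts on containers created from a fully qualified image name with the auto-update label, so the label below is required, and the registry path is illustrative:

sudo podman run -d --name my-app --label io.containers.autoupdate=registry my-registry/my-app:latest
# The label opts this container into registry-based auto-updates
sudo podman auto-update --dry-run
# Preview which units would be updated without touching them
sudo podman auto-update
# Pull newer images and restart the corresponding services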

Add Wants= and After= directives to the unit file if your container depends on a database or a message queue. Wants= pulls the dependency in at boot; After= makes systemd wait for it to start before starting your app. Edit the unit file in /etc/systemd/system/container-my-app.service, add both lines under [Unit], run sudo systemctl daemon-reload, and restart.
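The relevant lines in the [Unit] section end up looking like this, with postgresql.service standing in for whatever your app depends on:

[Unit]
Wants=postgresql.service
After=postgresql.service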

Run daemon-reload after every unit file edit. Systemd caches unit definitions and will happily run the stale version otherwise.

When to use systemd units versus other orchestration

Pick the right tool for the deployment scale.

Use a systemd unit when you are running a single container or a small group of tightly coupled services on one host. Use Podman pods when you need multiple containers to share the same network namespace and IPC resources without Docker compatibility layers. Use Kubernetes when you are managing dozens of nodes, need automatic load balancing, or require declarative scaling across a cluster. Use a simple bash wrapper or cron job only for temporary debugging sessions that do not need to survive a reboot.
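The pod option still ends in systemd units, because podman generate systemd accepts a pod name as well as a container name. A sketch with illustrative names:

sudo podman pod create --name my-pod -p 8080:80
sudo podman run -d --pod my-pod --name my-web nginx
sudo podman generate systemd --new --files --name my-pod
# Writes pod-my-pod.service plus one unit per container, wired together with ordering directives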

Systemd handles the host lifecycle. It integrates with firewall rules, disk quotas, and service dependencies. It is the native control plane for Fedora Server. Do not reach for external orchestrators until you actually need multi-host coordination.

Where to go next