How to Set Up Pods (Multi-Container Groups) with Podman on Fedora

Create a Podman pod with `podman pod create` and add containers using the `--pod` flag to share network and resources.

When separate containers get in your way

You are running a web application that depends on a database. You spin up an Nginx container and a Redis container. They refuse to talk to each other over localhost. You start mapping ports, configuring custom bridges, and passing environment variables with internal IP addresses. The setup works until one container restarts and gets a new IP. You are back to debugging network routes.

You want both processes to share the same loopback interface and inter-process communication stack, just like two services running side by side on a bare metal server. You also want to start, stop, and inspect them as a single unit. That is exactly what a Podman pod does.

What a pod actually does

A container isolates a single process with its own filesystem, network stack, and PID namespace. A pod wraps multiple containers in a shared infrastructure layer. All containers inside the same pod share the same network namespace and IPC namespace. They see 127.0.0.1 as the exact same loopback device. They can communicate over Unix domain sockets without extra configuration. The pod itself gets one IP address and one set of published ports. The containers inside just bind to 0.0.0.0 or 127.0.0.1 and the pod routes the traffic.
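A quick way to see the shared namespace in action is to compare namespace inodes from two containers in the same pod. This is a sketch assuming the alpine image is available; the pod and container names here are illustrative:

```shell
# Create a throwaway pod with two idle containers
podman pod create --name ns-demo
podman run -d --pod ns-demo --name tenant-a alpine sleep 300
podman run -d --pod ns-demo --name tenant-b alpine sleep 300

# Compare the network namespace each container's PID 1 lives in.
# Identical inode values mean one shared network stack.
podman exec tenant-a readlink /proc/1/ns/net
podman exec tenant-b readlink /proc/1/ns/net

# Clean up the demo pod and its containers
podman pod rm --force ns-demo
```

The same check against `/proc/1/ns/ipc` shows the shared IPC namespace.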

Think of a pod as a shared apartment. Each container is a tenant with their own private room. They share the front door, the plumbing, and the mail slot. If the building manager shuts off the water, every tenant feels it. If one tenant moves out, the apartment stays open until the last one leaves.

Podman implements this by creating a hidden infrastructure container. The infra container holds the network namespace open. It has no user process. It just keeps the shared stack alive until you explicitly tear it down. This design avoids the complexity of a background daemon. Fedora ships Podman as the default container engine because it runs rootless by default and integrates cleanly with systemd without requiring a persistent socket.

Check the infra container before you assume a pod is broken. The hidden process is intentional. Do not try to remove it manually.
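You can inspect the infra container directly. A sketch, assuming a pod named app-stack like the one created later in this guide:

```shell
# Print the ID of the hidden infra container for the pod
podman pod inspect app-stack --format '{{.InfraContainerID}}'

# List all containers grouped by pod; the infra container shows up
# alongside your application containers
podman ps -a --pod
```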

Creating and populating a pod

Podman handles pods without a background daemon. Every command runs directly in your terminal. You create the pod first, then attach containers to it. The --pod flag tells Podman to join the existing namespace instead of creating a new one.

Here is how to create a pod and launch two containers inside it.

podman pod create --name app-stack --publish 8080:80 --publish 6379:6379
# --name gives the pod a human-readable identifier for later commands
# --publish maps host ports to the pod's shared network namespace
# The pod starts in a created state until the first container runs

podman run -d --pod app-stack --name web-server nginx:latest
# -d detaches the container so it runs in the background
# --pod app-stack attaches this container to the existing pod namespace
# nginx:latest pulls the image if it is missing and starts the process

podman run -d --pod app-stack --name cache redis:latest
# Redis joins the same pod and shares the network stack
# It can now reach nginx on 127.0.0.1:80 without port mapping

The pod automatically transitions to a running state once the first container starts. You do not need to start the pod separately. Podman tracks the lifecycle of the pod and its containers together.

Run podman pod create before you attach containers. Attaching to a non-existent pod fails with an error saying no pod with that name or ID was found.

Managing the pod lifecycle

You will need to stop, restart, and inspect the group as a unit. Podman provides pod-level commands that operate on every attached container simultaneously.

Here is how to control the pod lifecycle and view its internal state.

podman pod stop app-stack
# Stops all containers in the pod gracefully
# Sends SIGTERM to each process and waits for the default timeout

podman pod start app-stack
# Restarts the infra container and every attached container
# Preserves the original command and environment variables

podman pod top app-stack
# Lists all running processes across every container in the pod
# Useful for verifying that background workers actually started

podman pod logs app-stack --tail 20
# Streams combined stdout and stderr from all containers
# --tail limits output so you do not flood your terminal

Podman does not automatically restart pods after a reboot. You must configure systemd service files or use podman generate systemd if you want persistence. The container engine itself is stateless across reboots.
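A minimal persistence setup for a rootless user might look like the following. This is a sketch: the generated unit file names follow Podman's pod-NAME and container-NAME convention, and newer Podman releases steer you toward Quadlet, though podman generate systemd still works:

```shell
# Generate unit files for the pod and each attached container
podman generate systemd --new --files --name app-stack

# Install them as user-level services
mkdir -p ~/.config/systemd/user
mv pod-app-stack.service container-web-server.service container-cache.service \
   ~/.config/systemd/user/

# Enable the pod service so it starts on boot
systemctl --user daemon-reload
systemctl --user enable --now pod-app-stack.service

# Keep user services running when you are not logged in
loginctl enable-linger "$USER"
```

The --new flag makes the units recreate containers from scratch on each start instead of reusing existing ones, which is what you want for reboot persistence.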

Generate systemd units before you reboot. Manual restart commands disappear when the session ends.

Verifying the shared network

You need to confirm that the containers actually share the loopback interface. Run a quick connectivity test from inside one container to the other.

Here is how to verify network isolation and shared routing.

podman exec web-server curl -s http://127.0.0.1:80
# exec runs a command inside the already-running container
# curl hits the local loopback address
# If the pod is working, you will see the Nginx welcome page
# Minimal images may not ship curl; if the binary is missing, run a
# helper container joined to the pod instead:
# podman run --rm --pod app-stack curlimages/curl -s http://127.0.0.1:80

podman exec cache redis-cli ping
# redis-cli connects to the default Redis port on localhost
# A PONG response confirms Redis is listening on the shared loopback

podman pod inspect app-stack --format '{{.State}}'
# --format pulls a single field from the JSON output
# Running confirms the pod infrastructure is active

podman pod ps
# Lists all active pods with their container count and port mappings
# Use this instead of podman ps when you want to see group status

Check the shared namespace before you add more containers. If the pod state shows Exited or Stopped, the network stack is down and new containers will fail to attach.

Common pitfalls and error messages

Podman pods behave differently from standalone containers, and Docker has no pod concept at all, so habits carried over from either side will trip you up. You will run into a few predictable friction points.

Port conflicts are the most common issue. If you publish port 8080 on the host and another service is already using it, Podman refuses to start the pod.

Error: ... bind: address already in use

The error appears when you run podman run or podman pod start. Check what is listening on the host with ss -tlnp | grep 8080. Stop the conflicting service or change the host port in the --publish flag.
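Resolving the conflict might look like this. Note that published ports are fixed at pod creation time, so changing them means recreating the pod; 9090 here is an arbitrary free host port:

```shell
# Find out what already owns port 8080 on the host
ss -tlnp | grep 8080

# Recreate the pod with a different host port
podman pod rm --force app-stack
podman pod create --name app-stack --publish 9090:80 --publish 6379:6379
```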

Another frequent mistake is trying to remove a pod while containers are still running inside it. Podman protects the namespace from accidental teardown.

Error: ... running or paused containers cannot be removed without force: container state improper

Stop the containers first, or use the --force flag if you are certain you want to tear everything down. The force flag kills the processes and removes the pod in one step.
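The two removal paths side by side:

```shell
# Graceful teardown: stop the containers, then remove the pod
podman pod stop app-stack
podman pod rm app-stack

# Destructive one-step teardown: kills every process and removes the pod
podman pod rm --force app-stack
```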

Rootless users sometimes hit namespace limits. Fedora enables rootless containers by default, but the kernel restricts how many user namespaces you can create. If you see user namespaces are not enabled or cannot create new namespace, check the kernel setting sysctl user.max_user_namespaces and ensure it is above zero. Persistent changes belong in /etc/sysctl.d/, not /etc/security/limits.conf, which governs per-process resource limits instead. The default Fedora installation usually handles this, but older or heavily customized kernels may require a manual bump.
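Checking and raising the limit looks like this; the value 15000 is an arbitrary example, not a recommended default:

```shell
# Check the current ceiling on user namespaces
sysctl user.max_user_namespaces

# Raise it persistently if it is zero (15000 is an example value)
echo 'user.max_user_namespaces = 15000' | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system
```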

Do not mix podman pod rm with podman rm. The first removes the pod infrastructure. The second removes individual containers. Running podman rm -f $(podman pod ps -q) does not do what it looks like: the IDs name pods, not containers, so podman rm refuses them. Always target the correct scope.

Read the exact error string before forcing a removal. Forced operations skip cleanup and leave dangling volumes.

Debugging network issues inside pods

When containers inside a pod cannot reach each other, the problem is usually a binding mismatch or a firewall rule. Containers inside a pod share the network stack, but they do not automatically bypass host-level firewalld rules.

Here is how to isolate routing problems without leaving the terminal.

podman exec web-server ip addr show
# Shows the shared interfaces and their addresses
# All containers in the pod report the exact same output
# The interface name varies with the network backend (eth0, tap0, ...),
# and minimal images may need iproute2 installed for the ip command

podman exec web-server ss -tlnp
# Lists listening sockets inside the container
# Verify that nginx is actually bound to 0.0.0.0 or 127.0.0.1

podman exec cache curl -v http://127.0.0.1:80
# -v prints the full TCP handshake and HTTP headers
# Connection refused means the target process is not listening
# Timeout means a firewall or routing rule is blocking the packet
# The redis image may not ship curl; if so, run the same check from
# any container in the pod that has it

Fedora ships with firewalld enabled by default. If you publish ports on the pod, firewalld handles the forwarding automatically. If you are testing with --network host, you must open ports manually with firewall-cmd --add-port=8080/tcp --permanent followed by firewall-cmd --reload. The runtime and persistent configurations must match.
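The host-networking case, spelled out; this requires root and an active firewalld:

```shell
# Open the port in the permanent configuration, then load it into runtime
sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload

# Confirm the runtime and permanent views agree
sudo firewall-cmd --list-ports
sudo firewall-cmd --list-ports --permanent
```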

Run journalctl -xeu firewalld to check the daemon itself if traffic drops silently, and enable firewall-cmd --set-log-denied=all so rejected packets show up in the kernel journal (journalctl -k), where the log lines identify the rule that dropped them.

When to use pods versus alternatives

Use a Podman pod when you are grouping tightly coupled services that need to communicate over localhost and share a single IP address. Use standalone containers when each service needs its own network interface, isolated port mappings, or independent restart cycles. Use systemd service files when you need host-level resource limits, automatic boot startup, and integration with the system logger. Stay on native packages when the application requires direct hardware access or desktop integration that containers cannot provide.

Pods shine for application stacks. They do not replace orchestration tools for cluster-wide deployments. If you are managing dozens of nodes, look at Kubernetes or Podman Desktop with Kubernetes compatibility. For a single server or a developer workstation, pods keep the networking simple and the commands predictable.

Trust the shared namespace. Manual IP routing drifts; pod networking stays consistent.

Where to go next