The default bridge is already working
You spin up two containers, expect them to talk to each other by name, and get a connection refused error. You check the network interfaces and see podman0 sitting there with a bunch of veth pairs attached. You assume you need to write a custom CNI plugin or edit a JSON config file. You do not.
Fedora ships Podman with Netavark as the default networking backend. Netavark replaced the older CNI stack to simplify rootless container networking and remove the need for setuid binaries. The default podman bridge network is created automatically when you run your first container. It assigns each container an IP from a private subnet, sets up NAT so they can reach the internet, and handles basic routing. Containers on this default bridge can reach the outside world, but they cannot resolve each other by container name. They only see each other by IP, and even then, firewall rules or subnet isolation might block direct traffic.
Think of the default bridge like a public coffee shop Wi-Fi. Every device gets internet access. Devices can see each other's MAC addresses, but the network does not provide a directory service. You need to know the exact IP to talk to someone, and the network does not guarantee isolation.
Podman with Netavark stores network definitions in /etc/containers/networks/ for root users and under $HOME/.local/share/containers/storage/networks/ for rootless users; the old /var/lib/cni/ path belongs to the retired CNI backend. Never edit files in /usr/lib/containers/ or /usr/share/containers/. Those directories ship with the package, and manual edits there get overwritten on the next dnf upgrade --refresh. User modifications belong in /etc/containers/ or the rootless home directory. Trust the package manager: packaged files stay consistent, hand-edited copies drift.
Run podman network ls to see what is actually active. The output shows the network name, driver, scope, and ID. If the podman network is missing, Podman will recreate it on the next podman run. You rarely need to touch it unless you are debugging a broken subnet.
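On a stock Fedora install the output looks roughly like this (the ID will differ per machine):
podman network ls
# NETWORK ID    NAME        DRIVER
# 2f259bab93aa  podman      bridge
# The podman name and the bridge driver are the signal; the ID is noise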
Check the default bridge first. Most networking issues are just missing DNS or wrong subnet assumptions.
How Netavark handles the plumbing
Netavark operates as a standalone binary that Podman calls during container lifecycle events. When you start a container, Podman passes a JSON network configuration to Netavark. Netavark reads the config, creates the virtual ethernet pair, attaches one end to the container's network namespace, and attaches the other end to the bridge interface. It then applies the firewall rules needed for NAT and port forwarding.
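You can watch this plumbing from the host side. A quick sketch, assuming at least one running container and the iproute2 tools:
ip link show type veth
# Each running container contributes one host-side veth endpoint
ip -brief link show master podman0
# Lists only the interfaces currently attached to the default bridge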
The shift to Netavark matters for rootless users. The older CNI plugins required elevated privileges to modify the host network stack. Rootless Podman instead pairs Netavark with a user-space network stack, slirp4netns by default on older releases and pasta on newer ones, which provides connectivity without touching the host firewall. This keeps your rootless containers isolated from the host network while still allowing outbound internet access.
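If you are not sure which backend your installation uses, ask Podman directly:
podman info --format '{{.Host.NetworkBackend}}'
# Prints netavark on current Fedora, cni on legacy installs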
Here's how to inspect the default bridge and see the actual subnet, gateway, and attached containers.
podman network inspect podman
# Returns JSON with IPAM config, gateway, and container list
# The Subnet field shows the private range (usually 10.88.0.0/16)
# The Gateway field shows the bridge IP that handles NAT
# The Containers array lists every container currently attached
# Read the Subnet before assuming containers can ping each other
Netavark also handles DNS resolution for containers on the same custom network. It delegates name queries to a companion server, aardvark-dns, which listens on the bridge gateway address. When a container sends a query for my-db, aardvark-dns checks the attached containers and returns the correct IP. This only works on user-defined networks, not on the default podman bridge.
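You can see this from inside a container attached to a custom network. A sketch, assuming the app-network setup from the next section and an image that ships cat:
podman exec web-server cat /etc/resolv.conf
# The nameserver line points at the bridge gateway, where aardvark-dns listens
# On the default podman bridge you see the host's resolvers instead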
If you are running dnf upgrade --refresh weekly, you will occasionally see Podman pull in Netavark updates. The binary is backward compatible with existing network configs. You do not need to recreate networks after a minor update.
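To correlate a networking change with a recent upgrade, check the installed versions:
rpm -q podman netavark aardvark-dns
# Prints the exact version of each networking component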
Run podman network inspect podman to see the actual subnet and gateway. Guessing the IP range wastes time.
Creating and using custom networks
Custom networks solve two problems. They provide automatic DNS resolution between containers. They isolate traffic so that a compromised container on one network cannot scan or attack containers on another.
Create a custom network with a descriptive name. Podman assigns a new subnet automatically, usually a /24 carved from the 10.89.0.0/16 pool. You can override the subnet if you need to match an existing infrastructure range, but the default works for nearly every setup.
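If you do need a specific range, pin it at creation time. A sketch; the name and subnet here are placeholders for whatever your infrastructure requires:
podman network create --subnet 192.168.55.0/24 --gateway 192.168.55.1 legacy-net
# Pins the subnet instead of letting Podman pick one
# The gateway must fall inside the subnet; omit it and Podman uses the first usable address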
Here's how to create a custom network and attach a container to it during startup.
podman network create app-network
# Creates a new bridge with a fresh subnet and internal DNS server
# The network persists across Podman restarts
# Use a name that describes the workload, not a generic label
podman run -d --name web-server --network app-network nginx
# Attaches the container to app-network instead of the default bridge
# The --network flag overrides the default podman bridge
# The container gets an IP from the app-network subnet
# DNS resolution for other containers on app-network is now active
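The verification steps later in this section assume a second container on the same network. A sketch to stand one up; the image and password are placeholders:
podman run -d --name database-container --network app-network -e POSTGRES_PASSWORD=example postgres:16
# Joins app-network, so web-server can resolve it by name
# Any image works for the DNS test; postgres is only an example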
You can also attach running containers to a network without restarting them. This is useful when you are debugging or adding a new service to an existing stack.
podman network connect app-network existing-container
# Adds the running container to the specified network
# Podman creates a new veth pair and updates the bridge
# The container keeps its original IP on the old network
# DNS entries are updated automatically for the new network
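The reverse operation exists too, which keeps debugging sessions tidy:
podman network disconnect app-network existing-container
# Detaches the container from the network and removes its veth pair
# Other network attachments on the container stay intact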
When you create custom networks, Netavark programs the firewall to allow traffic between containers on that bridge. If you have firewalld running, you do not need to open ports manually for container-to-container traffic; the bridge handles it internally. Published ports (-p) get their forwarding rules inserted automatically when the container starts. The gotcha runs the other way: reloading firewalld flushes those runtime rules, and published ports stop accepting external traffic until Podman reinstates them.
firewall-cmd --reload
# Applies persistent firewall rules to the runtime configuration
# This flushes the runtime rules Netavark added for published ports
podman network reload --all
# Re-creates the container firewall rules after a firewalld reload
# Run the pair together so published ports do not silently drop traffic
Name your networks after their purpose. frontend-net beats net1.
Verify the connection
Verification requires two steps. First, confirm the containers are on the same network. Second, test DNS resolution and actual connectivity.
Here's how to check network membership and test name resolution.
podman network inspect app-network
# Shows the Containers array with web-server and any other attached containers
# Verify the container ID matches your running instance
# The IP field shows the address assigned on this specific network
# Multiple networks per container will show separate IP entries
podman exec web-server ping -c 3 database-container
# Sends three ICMP packets to the target container by name
# DNS resolution happens automatically on user-defined networks
# A successful reply confirms routing and DNS are working
# Replace ping with curl if your containers do not have iputils installed
If you are testing web traffic, use curl or wget inside the container. ICMP might be blocked by the container's default policy, but TCP will still work.
podman exec web-server curl -s http://database-container:8080/health
# Tests actual HTTP connectivity over the bridge
# The -s flag suppresses progress output for cleaner logs
# A 200 OK response proves the full stack is reachable
# Connection refused means nothing is listening; a timeout usually points at a firewall drop or wrong address
Test connectivity before you scale. A broken network breaks the stack.
Common pitfalls and error messages
Container networking failures usually fall into three categories: wrong network attachment, DNS resolution failure, or rootless privilege restrictions.
The most common error appears when you try to reach a container by name without putting both containers on the same custom network.
Error: dial tcp: lookup database-container on 169.254.169.254:53: no such host
This error means the container's DNS resolver cannot find the target. The default podman bridge does not provide internal DNS. You must create a custom network and attach both containers to it.
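The fix is mechanical: create a user-defined network and attach both ends. A sketch, using web-server as a placeholder for the container that issued the failed lookup:
podman network create app-network
# Skip this step if the network already exists
podman network connect app-network web-server
podman network connect app-network database-container
# Both containers now share a DNS-enabled network; retry the lookup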
Another frequent issue involves rootless containers trying to bind to privileged ports.
Error: listen tcp 0.0.0.0:80: bind: permission denied
Rootless users cannot bind host ports below 1024. This is a kernel limit, not a Netavark one. Publish a high host port instead, or lower the threshold with the net.ipv4.ip_unprivileged_port_start sysctl if you understand the security implications.
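The usual workaround is to publish a high host port that forwards to the privileged container port; lowering the kernel threshold is the heavier alternative. Both are sketched here:
podman run -d --name web -p 8080:80 nginx
# Host port 8080 forwards to port 80 inside the container
# The container binds its own port 80 freely; the limit applies to host ports
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Lowers the privileged-port threshold system-wide; weigh the risk first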
If you see [FAILED] Failed to start NetworkManager.service during boot, your network configuration probably references a missing interface name. This is unrelated to Podman but often shows up when users try to fix container networking by editing host network files.
When debugging, check the journal first. Netavark logs its actions to the systemd journal.
journalctl -xeu podman
# The -x flag adds explanatory text to error lines
# The -e flag jumps to the end of the journal
# The -u flag filters for the podman unit
# Read the actual error before guessing at firewall rules
SELinux denials show up in journalctl -t setroubleshoot with a one-line summary. Read those before disabling SELinux. Container networking rarely triggers SELinux blocks unless you are mounting host network namespaces or using custom CNI plugins.
Read the error before disabling the firewall. SELinux and firewalld rarely block container-to-container traffic on the bridge.
When to use which network mode
Podman supports several network modes. Pick the one that matches your isolation requirements and performance needs.
Use bridge (the default) when you want automatic NAT, outbound internet access, and simple port publishing. Use host when you need maximum network performance and the container must bind directly to host interfaces. Use slirp4netns when you are running rootless containers and cannot modify the host firewall or routing tables. Use none when the container only needs loopback access and will communicate through mounted sockets or Unix domain sockets. Use a custom named bridge when multiple containers need to resolve each other by name and require traffic isolation from other workloads.
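For reference, each mode is just a --network value on podman run. A sketch with placeholder images; the host example needs root because nginx binds port 80:
podman run -d --network bridge nginx
# The default: NAT, outbound access, port publishing with -p
podman run -d --network host nginx
# Shares the host network namespace; fastest, least isolated
podman run -d --network slirp4netns nginx
# User-space networking for rootless setups
podman run -d --network none alpine sleep 3600
# Loopback only; communicate over mounted Unix sockets
podman run -d --network app-network nginx
# Custom bridge with name resolution and isolation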
Pick the mode that matches your security boundary. Isolation costs a little convenience.