How to Manage Container Storage and Volumes with Podman on Fedora

Create and manage persistent storage for Podman containers using named volumes to ensure data survives container restarts and removals.

You deleted the container and the data vanished

You ran podman rm -f my-db to clean up a test database. You spun up a fresh container to restart the experiment. The database is empty. All your test data is gone. You didn't back it up. The container filesystem is ephemeral by design. When the container is removed, the writable layer is discarded. Any file created inside the container lives only as long as the container exists. You need a mechanism to keep data alive across container lifecycles without manually copying files in and out of the rootfs.

What's actually happening

Podman uses an overlay filesystem for containers. The container image is a read-only template. When you run a container, Podman creates a thin writable layer on top of that image. Any file you create or modify lives in that top layer. This layer is tied to the container ID. Remove the container, and the layer is destroyed.
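You can watch the writable layer disappear with a quick experiment. This sketch uses the small alpine image and a throwaway container name, both arbitrary choices:

```shell
# Write a file into the container's writable layer (no volume mounted).
podman run --name scratch alpine sh -c "echo hello > /tmp/note.txt"

# Remove the container. The writable layer, and the file with it, are discarded.
podman rm scratch

# A fresh container starts from the read-only image template.
# This cat fails: the file from the previous container never existed here.
podman run --rm alpine cat /tmp/note.txt
```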

Volumes break this link. A volume is a directory on the host filesystem that Podman manages and mounts into the container. The data lives on the host, independent of the container. The container just sees a folder at the mount path. You can delete and recreate the container a hundred times, and the volume stays put. The data survives because it never lived inside the container's writable layer.

Fedora defaults to rootless containers for users. Your storage lives in ~/.local/share/containers/storage. Root containers use /var/lib/containers/storage. Never edit files directly in the storage directory. Podman tracks metadata there. Use podman volume commands to manage data. Direct file edits can corrupt the storage driver index.
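You can confirm where your storage root actually lives without poking at the filesystem, using podman info with a Go template:

```shell
# Print the storage root for the current user.
# Rootless: typically ~/.local/share/containers/storage
# Root:     typically /var/lib/containers/storage
podman info --format '{{.Store.GraphRoot}}'
```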

Create and mount a named volume

Here is how to create a named volume and mount it so your data survives container removal.

# Create a named volume. Podman allocates a directory in the storage driver.
# This command fails if the volume already exists, which catches typos early.
podman volume create my-app-data

# Run a container with the volume mounted at /data inside the container.
# The -v flag maps the host volume name to the container path.
podman run -d --name my-app -v my-app-data:/data nginx

# Verify the volume is attached and check its mount point on the host.
# The Mountpoint field shows where the data lives on the filesystem.
podman volume inspect my-app-data

The Mountpoint output points to a directory inside the storage driver. For rootless containers, this is usually under ~/.local/share/containers/storage/volumes/my-app-data/_data. You can access this path from the host to inspect or back up files, but prefer podman exec or podman cp for interacting with the data.
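For backups, Podman can serialize a volume to a tarball directly, which avoids touching the storage directory at all. A sketch, assuming Podman 3.2 or newer (when volume export/import landed) and the my-app-data volume from above:

```shell
# Export the volume contents to a tarball on the host.
podman volume export my-app-data --output my-app-data-backup.tar

# Restore the tarball into a volume later. The target volume must exist.
podman volume import my-app-data my-app-data-backup.tar

# Alternatively, copy individual files out of a running container.
podman cp my-app:/data/test.txt ./test.txt
```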

Explicit volume creation is a safety habit. Run podman volume create before podman run. If the volume already exists, the command returns an error. This catches typos immediately. If you rely on implicit creation, a typo silently creates a new empty volume and your data ends up in the wrong place.

Verify persistence

Here is how to confirm that data survives container recreation.

# Create a test file inside the running container.
# The file is written to the volume, not the container layer.
podman exec my-app sh -c "echo 'persistent data' > /data/test.txt"

# Stop and remove the container. The volume remains on the host.
podman rm -f my-app

# Start a new container with the same volume.
# The new container sees the existing data immediately.
podman run -d --name my-app-new -v my-app-data:/data nginx

# Check that the file survived the container recreation.
podman exec my-app-new cat /data/test.txt

The output should print persistent data. If the file is missing, check the mount path. A typo in the volume name during the second run command creates a new empty volume instead of reusing the old one.

Run podman volume inspect before you delete. Verify the mountpoint matches your expectation.
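A Go template keeps that pre-delete check to a single line instead of scanning the full JSON output:

```shell
# Print just the host mountpoint for the volume you are about to remove.
podman volume inspect --format '{{.Mountpoint}}' my-app-data
```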

Common pitfalls and errors

You mount a volume and the container crashes with a permission error, or a file that should have content reads as empty. SELinux is likely blocking access. Fedora enables SELinux by default, and containers run in a confined domain. Named volumes created by Podman are labeled correctly, but if you mount a host directory that isn't a Podman volume, you might hit a denial.

The error looks like this in the container logs:

open /data/file: permission denied

Or you see a denial in the journal:

type=AVC msg=audit(1698765432.123:456): avc:  denied  { read } for  pid=1234 comm="nginx" name="file" dev="sda1" ino=5678 scontext=system_u:system_r:container_t:s0:c123,c456 tcontext=unconfined_u:object_r:home_t:s0 tclass=file permissive=0

Check journalctl -t setroubleshoot for SELinux denials. The denial message tells you exactly which label is missing.
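Two ways to surface those denials on a stock Fedora install, assuming the audit daemon is running and setroubleshoot is installed:

```shell
# Raw AVC denials from the audit log (requires the audit package).
sudo ausearch -m avc -ts recent

# Human-readable explanations with suggested fixes from setroubleshoot.
journalctl -t setroubleshoot --since "1 hour ago"
```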

When mounting host paths, use the :z or :Z suffix to fix the label. The :z suffix relabels the content so multiple containers can share it. The :Z suffix relabels the content so only this container can access it. Use :z for shared data like configuration directories. Use :Z for private data like database files.

# Mount a host directory with shared SELinux context.
# The :z suffix relabels the content so multiple containers can share it.
# Use :Z if only this container should access the data.
podman run -d --name my-app -v /home/user/my-data:/data:z nginx

Another trap is implicit volume creation. If you run podman run -v my-data:/data nginx and my-data does not exist, Podman creates it. This is convenient but dangerous if you typo the name. You might create my-dta instead of my-data and wonder why data is missing.

# This creates a new volume named 'my-dta' because the typo is treated as a new name.
# You will not get an error. The data will go to the wrong place.
podman run -d --name my-app -v my-dta:/data nginx

Always create volumes explicitly with podman volume create before running the container. This fails fast if the name is wrong.

Fedora uses the overlay storage driver by default. On recent kernels, rootless containers use native overlay as well; older kernels fall back to fuse-overlayfs. The driver choice affects performance and features, but you rarely need to change it. Just know that podman info shows your current driver.
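Checking the active driver is a one-liner:

```shell
# Show the active storage driver (usually "overlay" on Fedora).
podman info --format '{{.Store.GraphDriverName}}'
```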

Run podman system prune periodically to reclaim space from dangling images and unused volumes. Volumes are not removed by default during a prune. You must pass --volumes to delete unused volumes. Be careful. This deletes data.
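The distinction matters in practice, so here is the pair side by side:

```shell
# Reclaim space from stopped containers and dangling images.
# Volumes are left untouched by default.
podman system prune

# Also delete unused volumes. Destructive: data in any volume
# not attached to a container is removed permanently.
podman system prune --volumes
```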

Manage volumes

Here is how to list, filter, and remove volumes.

# List all volumes to see what exists and how many containers use each one.
podman volume ls

# List only volumes not used by any container.
# This helps you find orphaned data before you clean up.
podman volume ls --filter dangling=true

# Remove a volume. This fails if a container is still using it.
# Use --force to remove the volume and any containers referencing it.
podman volume rm my-app-data

You can also inspect a volume without running a container. This is useful for debugging mount paths or checking driver options.

# Run a temporary container to inspect the volume contents without affecting the app.
# The --rm flag removes the debug container immediately after it exits.
podman run --rm -v my-app-data:/data alpine ls -la /data

Pick the mount type based on the data lifecycle. Named volumes for app data, bind mounts for dev workflows.

When to use volumes vs alternatives

Use named volumes when you want Podman to handle the directory creation and storage driver integration. Use bind mounts when you need the container to read or write files in a specific host directory like a code repository. Use tmpfs mounts when you need fast, RAM-backed storage for temporary files that do not need to persist. Use Quadlet units when you are deploying services and want systemd to manage the volume alongside the container.
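The first three options map to flags on podman run. A sketch using nginx and illustrative paths (./src and the size limit are placeholders, not recommendations):

```shell
# Named volume: Podman manages the backing directory.
podman run -d -v my-app-data:/data nginx

# Bind mount: expose a specific host directory, relabeled for this container.
podman run -d -v ./src:/app/src:Z nginx

# tmpfs: RAM-backed scratch space, discarded when the container stops.
podman run -d --tmpfs /scratch:size=64m nginx
```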

Where to go next