The scenario
You just switched from Docker to Podman on Fedora. You run podman pull nginx and it completes successfully. You run it again a week later and the terminal prints a message saying the image already exists. You try to clean up old images with podman rmi and it complains that containers are still using them. You are looking at a terminal full of image IDs and wondering why your disk usage keeps creeping up. You need a clear way to manage the local image cache without guessing.
What is actually happening under the hood
Podman does not run a background daemon. Every command you type spawns a short-lived process that talks directly to the storage driver. Fedora ships with rootless Podman enabled by default. That means all your images, containers, and volumes live inside your home directory, specifically under ~/.local/share/containers/storage. The storage driver uses an overlay filesystem to stack read-only layers. When you pull an image, Podman downloads each layer once and caches it. Subsequent pulls of the same image or related images reuse those layers instead of downloading them again. This saves bandwidth and disk space, but it also means old layers accumulate until you explicitly prune them.
Think of the storage directory like a shared library. Each image is a book made of stacked chapters. When you request a new book, the librarian checks the shelves first. If the chapters already exist, they just hand you a new cover and a table of contents. If you never return the books, the shelves fill up. Podman gives you the tools to audit and clean the shelves.
The overlay driver relies on kernel support. Fedora kernels ship with overlay enabled by default. If the driver cannot initialize, Podman falls back to vfs. The vfs driver works everywhere but duplicates every layer on disk, which consumes significantly more space. Check your kernel version and ensure the overlay module is loaded before troubleshooting storage issues.
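You can check overlay support without involving Podman at all: /proc/filesystems lists every filesystem type the running kernel can mount. A minimal sketch:

```shell
# /proc/filesystems lists the filesystem types the running kernel supports.
# "overlay" must appear there for Podman's overlay storage driver to work.
status=$(grep -qw overlay /proc/filesystems && echo yes || echo no)
echo "overlay supported: $status"
# If the answer is no, try loading the module with: sudo modprobe overlay
```

On stock Fedora kernels the answer should be yes; a "no" here explains a silent fallback to vfs before you dig any deeper.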
Run podman info before you guess. The storage driver and graph root path tell you exactly where the data lives.
Pulling images and managing the cache
Start by pulling an image from a registry. The default registry is Docker Hub, but Fedora maintains its own container registry at registry.fedoraproject.org. You can specify the fully qualified name or rely on the unqualified-search registries configured in /etc/containers/registries.conf. Edit /etc/containers/registries.conf to add custom mirrors or block specific registries. Never edit the vendor defaults under /usr/share/containers/, because package updates will overwrite them.
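A minimal registries.conf sketch in the v2 TOML format shows both knobs; the registry order and the mirror hostname (mirror.example.com) are placeholders to replace with your own:

```toml
# /etc/containers/registries.conf (v2 format) -- minimal sketch

# Registries searched, in order, when you pull a short name like "fedora".
unqualified-search-registries = ["registry.fedoraproject.org", "docker.io"]

# Route pulls for one registry through a mirror first.
# "mirror.example.com" is a hypothetical host; substitute your own.
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "mirror.example.com"
```

Podman tries the mirror first and falls back to the primary location if the mirror is unreachable.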
Here is how to pull a base image and a specific tagged version.
# Pull the default latest tag from Docker Hub or configured mirrors
podman pull fedora
# Pull a specific release from the official Fedora registry
podman pull registry.fedoraproject.org/fedora:41
# Pull an image without verifying TLS (only for trusted local registries)
podman pull --tls-verify=false localhost:5000/myapp:dev
Podman caches layers across images. If you pull fedora:40 and fedora:41, the base filesystem layers are shared. The podman images size column shows the uncompressed size of the image, not the actual disk footprint. The actual footprint is lower because of layer deduplication.
Always pull explicitly tagged images in production. The latest tag is mutable and can change without warning. Pin your version to avoid unexpected behavior after a registry update.
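The deduplication arithmetic is easy to sketch with hypothetical layer sizes; the 250 MB and 50 MB figures below are invented purely for illustration:

```shell
# Hypothetical sizes, in MB: fedora:40 and fedora:41 share one base layer.
base=250      # shared base layer
unique40=50   # layers unique to fedora:40
unique41=50   # layers unique to fedora:41

# What `podman images` would list: each image reports base + its own layers.
listed=$(( (base + unique40) + (base + unique41) ))
# What the storage directory actually holds: the shared base is stored once.
actual=$(( base + unique40 + unique41 ))
echo "listed: ${listed} MB, on disk: ${actual} MB"   # listed: 600 MB, on disk: 350 MB
```

The gap between the two numbers grows with every image that shares the same base, which is why summing the size column always overestimates real usage.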
Listing and filtering local images
Listing images shows you what is cached locally. The output includes the repository name, tag, image ID, creation timestamp, and size. The size column shows the uncompressed size of the image layers. Podman shares layers across images, so the total disk usage will be lower than the sum of the listed sizes.
Here is how to filter the local image list to find what you actually need.
# Show all locally cached images with their full details
podman images
# Filter by repository name to isolate Fedora base images
podman images fedora
# Show only tagged images by excluding dangling, untagged ones
podman images --filter dangling=false
# Show only untagged images to find candidates for cleanup
podman images --filter dangling=true
Dangling images are layers that lost their tag after a newer pull replaced it. They are usually safe to remove, as long as no container still references them. The --filter dangling=true flag isolates them quickly. You can combine filters with --format to output machine-readable data for scripting.
Run podman images --filter dangling=true before every cleanup. Identify the orphans before you delete anything.
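The --format flag mentioned above can drive cleanup scripts. A sketch that filters for dangling IDs, using sample output embedded as a string (the image IDs are made up, and the {{.Dangling}} template field is an assumption to confirm against podman-images(1) on your version):

```shell
# Sample output of: podman images --format "{{.ID}} {{.Dangling}}"
# (IDs are illustrative; real output has one "ID true/false" pair per line)
sample='a1b2c3d4e5f6 true
0f1e2d3c4b5a false
9988776655aa true'

# Keep only the IDs flagged as dangling -- the removal candidates.
dangling=$(printf '%s\n' "$sample" | awk '$2 == "true" { print $1 }')
echo "$dangling"
```

In a live script you would feed the IDs straight to removal, for example: podman images --filter dangling=true --format "{{.ID}}" | xargs -r podman rmi.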
Removing images and reclaiming space
Removing images requires the image ID or the name:tag combination. Podman refuses to delete an image if a container is currently using it, even if the container is stopped. You must remove the container first, or use the force flag. The force flag stops and removes any containers using the image before deleting the image layers.
Here is how to safely remove images and force removal when necessary.
# Remove an image by its exact name and tag
podman rmi fedora:latest
# Remove an image by its short or long ID
podman rmi a1b2c3d4e5f6
# Force removal of an image and any containers currently using it
podman rmi --force fedora:latest
If you want to clean up everything at once, use the system prune command. It removes all stopped containers, unused networks, and dangling images. Add the --all flag to remove all unused images, not just dangling ones.
Here is how to reclaim disk space by pruning unused artifacts.
# Remove stopped containers, unused networks, and dangling images
podman system prune
# Remove all unused images, not just dangling ones
podman system prune --all
Pruning is irreversible. Run podman images first to verify that you are not deleting base images your active containers depend on.
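Before pruning, it helps to know which images containers still reference; podman ps -a --format "{{.Image}}" prints one image name per container. A sketch that deduplicates such a listing, using sample output embedded as a string (the image names are illustrative):

```shell
# Sample output of: podman ps -a --format "{{.Image}}"
# (one line per container, running or stopped; names are made up)
sample='docker.io/library/nginx:latest
registry.fedoraproject.org/fedora:41
docker.io/library/nginx:latest'

# Deduplicate to get the set of images your containers depend on.
in_use=$(printf '%s\n' "$sample" | sort -u)
echo "$in_use"
```

Anything in this set survives a plain prune, but podman system prune --all combined with --force removal elsewhere can still take it out, so compare this list against podman images before a large cleanup.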
Verify the state
Run podman images again to confirm the removal. The output should no longer list the deleted image. Check your disk usage with du -sh ~/.local/share/containers/storage to see the actual space reclaimed. The overlay driver caches metadata, so the directory size might not drop instantly. Run podman system reset only as a last resort, because it wipes the entire local storage directory and requires you to pull images again.
Check the storage driver and graph root path to ensure Podman is writing to the expected location.
# Display storage configuration and verify the rootless graph root
podman info | grep -A 3 "graphRoot"
# Confirm the overlay driver is active and not falling back to vfs
podman info | grep -i "driver"
Verify the state after every removal. If the image ID still appears, a container, running or stopped, is holding a reference to it. Remove the container with podman rm first, then retry the removal.
Common pitfalls and exact error messages
You will hit permission errors if your user account lacks subordinate UID and GID ranges or if the storage directory has stale locks. Rootless Podman requires entries for your user in /etc/subuid and /etc/subgid; Fedora's user-creation tools add these automatically, but manually created accounts may be missing them. If you see a storage lock error, check for zombie Podman processes.
The podman rmi command will refuse to proceed and print the following error. Copy the exact wording into your search engine if you need to find community workarounds.
Error: unable to delete a1b2c3d4e5f6 (must be forced) - image is being used by running container
The error is intentional. Podman protects you from accidentally deleting an image that a running service depends on. Stop and remove the container with podman stop <CONTAINER_ID> followed by podman rm <CONTAINER_ID>, then remove the image.
If you mount a host directory into a container and see Permission denied errors, SELinux is blocking the access. Rootless Podman handles labels automatically for most cases, but host directory mounts require explicit context relabeling. Add the :z suffix to share the directory between multiple containers, or use :Z to keep it private to one container.
Here is how to mount a host directory with the correct SELinux context.
# Mount a host directory and share it across multiple containers
podman run -v /home/user/data:/data:z nginx
# Mount a host directory and restrict it to a single container
podman run -v /home/user/secrets:/secrets:Z nginx
As noted earlier, Podman silently falls back to the vfs driver if overlay cannot initialize, and vfs duplicates every layer on disk. If your storage usage looks far too high, confirm with podman info that the active driver is overlay rather than vfs, and check that the overlay kernel module is loaded.
Run journalctl -xeu podman.socket if you suspect a system-level socket issue. Rootless Podman uses a per-user socket instead: check its status with systemctl --user status podman.socket and read its logs with journalctl --user -xeu podman.socket. Restart the socket with systemctl --user restart podman.socket if it is stuck in a failed state.
Read the actual error before guessing. Most storage issues are caused by stale locks or missing kernel modules, not broken software.
When to use which approach
Use podman pull when you need to cache an image before running a container or building a new one. Use podman images --filter dangling=true when you want to identify orphaned layers that are safe to delete. Use podman rmi when you are removing a specific image by tag or ID. Use podman system prune --all when you are reclaiming disk space after a large development cycle. Use the :z mount suffix when multiple containers need read-write access to the same host directory. Use the :Z mount suffix when a single container needs exclusive access to a host directory. Stay on the default rootless configuration unless you are running system services that require host network namespaces.