The scenario
You just installed Fedora and want to test a Kubernetes deployment locally. You run kubectl apply -f deployment.yaml and get a connection refused error. You check the documentation and see mentions of podman play kube, but nothing actually starts. The cluster never materializes. You are stuck staring at a terminal that refuses to acknowledge a control plane.
What is actually happening
Podman is a daemonless container engine. It manages OCI containers and pods, but it does not include a Kubernetes control plane. Kubernetes requires etcd for distributed state, kube-apiserver for routing, kube-scheduler for placement, and kubelet for node management. Podman does not ship those components. When you try to run Kubernetes commands directly on the host, the API server is missing. The requests go nowhere.
Think of Podman as a shipping yard. It can load and unload containers, but it does not run the logistics network that coordinates thousands of ships across ports. Kubernetes is the logistics network. To run it, you need a dedicated environment that can host the control plane and worker nodes. On Fedora, that environment is a lightweight virtual machine created by podman machine. The VM provides the isolated kernel space, cgroup hierarchy, and network bridge that Kubernetes expects.
Podman on the host communicates with the VM over an encrypted SSH tunnel. When you run podman ps or podman run, the command is forwarded to the guest, executed by the local Podman instance inside the VM, and the results are streamed back. This architecture keeps your host system clean while giving you a full Linux kernel inside the guest. Kubernetes components expect a real kernel with cgroups v2, iptables, and network namespaces. The VM delivers exactly that.
Check the VM state before you debug container failures. Half the time the cluster is down because the machine itself stopped.
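A quick way to do that check is to look for the machine's row in podman machine list output. The helper below is a sketch: machine_is_running is my own function name, not a podman subcommand, and it parses list output piped in on stdin rather than calling podman itself, so the parsing logic can be exercised without a VM.

```shell
# Returns 0 if the named machine's row in `podman machine list` output
# contains a running marker ("Currently running" in the LAST UP column).
# Usage: podman machine list | machine_is_running k8s-test
machine_is_running() {
    local name="$1"
    # The default machine's name is suffixed with "*" in the table output,
    # so match the name followed by either an asterisk or a space.
    grep -E "^${name}[* ]" | grep -qi running
}
```

If the function returns nonzero, start the machine before debugging anything inside it.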
The fix: spinning up a VM and deploying Kubernetes
You need to provision the VM first, then install a lightweight Kubernetes distribution inside it. The standard workflow uses podman machine to create the environment, then drops in k3s or uses podman play kube for simple manifest testing.
Here is how to initialize and start the virtual machine that will host your cluster.
podman machine init --cpus 2 --memory 4096 --disk-size 20 k8s-test
# The machine name is a positional argument, not a flag; k8s-test gives the VM a predictable identifier for future commands
# --cpus and --memory allocate resources that Kubernetes components will actually consume
# --disk-size prevents the VM from running out of space during image pulls
podman machine start k8s-test
# Boots the VM and establishes the SSH tunnel that podman uses to manage containers inside it
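If you rerun these steps, podman machine init fails when the machine already exists. A small wrapper can make the setup idempotent. This is a sketch: ensure_machine is a local helper of my own, not a podman subcommand, and the resource sizes simply mirror the init flags above.

```shell
# Create the VM only if a machine with that name does not already exist,
# then start it. ensure_machine is a local helper, not a podman command.
ensure_machine() {
    local name="$1"
    # --format '{{.Name}}' prints one machine name per line; the default
    # machine may carry a trailing "*", so accept that too.
    if ! podman machine list --format '{{.Name}}' | grep -qxE "${name}\*?"; then
        podman machine init --cpus 2 --memory 4096 --disk-size 20 "$name"
    fi
    podman machine start "$name"
}
# Usage: ensure_machine k8s-test
```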
Once the VM is running, you need to connect to it and install a Kubernetes distribution. k3s is the standard choice for local testing because it bundles etcd, the API server, and the container runtime into a single binary. You can also use podman play kube if you only need to test a single deployment manifest without a full cluster.
Here is how to SSH into the VM and install k3s for a functional local cluster.
podman machine ssh k8s-test
# Opens a shell as the core user inside the running virtual machine (rootful mode gives you root instead)
curl -sfL https://get.k3s.io | sh -
# Downloads the official k3s installer and runs it with default settings; the script escalates with sudo where it needs root
# The installer automatically starts the control plane and configures systemd units
exit
# Returns you to the host Fedora terminal
You now have a running control plane inside the VM. The next step is to configure kubectl on your host to talk to it. The k3s installer generates a kubeconfig file at /etc/rancher/k3s/k3s.yaml inside the VM. You need to copy it out and point your local kubectl at it.
Here is how to extract the configuration and verify the connection.
mkdir -p ~/.kube
# Creates the config directory on the host if this is a fresh install
podman machine ssh k8s-test "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config
# Pulls the cluster credentials from the VM and saves them to the standard host path; sudo is required because k3s.yaml is owned by root
chmod 600 ~/.kube/config
# Restricts the credential file to your user; kubectl warns when a kubeconfig is group- or world-readable
kubectl cluster-info
# Queries the API server to confirm the control plane is reachable
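One caveat: k3s writes server: https://127.0.0.1:6443 into that file, which only works from the host if port 6443 actually reaches the VM. If kubectl cluster-info already succeeds, skip this. If your setup exposes the API server on a different forwarded port, a targeted rewrite fixes the endpoint. This is a sketch: rewrite_server is a local helper of my own, and 16443 below is an illustrative port, not a podman or k3s default.

```shell
# Point the copied kubeconfig at a different API endpoint.
# The 127.0.0.1:6443 pattern is what k3s writes by default.
rewrite_server() {
    local cfg="$1" new="$2"
    sed -i "s#server: https://127.0.0.1:6443#server: ${new}#" "$cfg"
}
# Example (16443 is an illustrative forwarded port, not a default):
# rewrite_server ~/.kube/config https://127.0.0.1:16443
```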
Convention aside: podman machine uses QEMU under the hood on Fedora. It does not require Docker inside the guest. The VM runs a minimal Fedora CoreOS image by default, which boots with systemd and aligns with upstream container practices. You do not need to install docker or containerd manually; k3s bundles its own containerd. The podman command on the host automatically routes container operations to the VM when the machine is active.
Always run systemctl status k3s inside the guest before restarting services. The status output shows recent log lines and the current state in one view.
Verify the cluster is alive
Run kubectl get nodes to see if the control plane registered itself as a worker. You should see a single node with a Ready status. If the node shows NotReady, check the VM logs. The control plane might be waiting for the container runtime to finish initializing.
kubectl get nodes
# Lists all registered nodes and their current health status
kubectl get pods --all-namespaces
# Shows system pods like coredns and metrics-server that k3s deploys automatically
If you see coredns and metrics-server in the kube-system namespace with Running status, the cluster is operational. You can now apply your deployment manifests. Run kubectl apply -f your-manifest.yaml and watch the pods schedule.
Watch the pod restart count. If it climbs past five, the container is crashing on startup. Check the logs before scaling.
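Counting restarts by eye gets tedious; a one-line awk filter can flag the offenders. flag_restarts is a local helper of my own that reads plain kubectl get pods output on stdin (RESTARTS is the fourth column in the non-namespaced view).

```shell
# Print pods whose restart count exceeds a threshold.
# Usage: kubectl get pods | flag_restarts 5
flag_restarts() {
    # NR > 1 skips the header row; $4+0 coerces RESTARTS to a number.
    awk -v max="$1" 'NR > 1 && $4+0 > max { print $1, "restarts:", $4 }'
}
```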
Common pitfalls and error messages
The most frequent failure point is insufficient memory. Kubernetes components are heavier than standard containers. The API server alone consumes roughly 200 megabytes at idle. If you allocated less than 2 gigabytes to the VM, the kernel will start OOM-killing processes. You will see this error when kubectl tries to reach the API server:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The API server is not listening because the kernel terminated it. Run journalctl -xeu k3s inside the VM to confirm. The x flag adds explanatory text and the e flag jumps to the end. Look for Out of memory: Killed process messages. Stop the VM, increase the memory allocation to at least 4096 megabytes, and restart.
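To make the OOM check mechanical, pipe journal output through a small counter. oom_kills is a local helper of my own; the pattern it matches is the kernel's standard "Out of memory: Killed process" message.

```shell
# Count kernel OOM-kill messages in journal output; exits nonzero when none.
# Usage: podman machine ssh k8s-test "sudo journalctl -k" | oom_kills
oom_kills() {
    grep -c 'Out of memory: Killed process'
}
```

A count above zero confirms the memory diagnosis; resize the VM before touching k3s itself.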
Another common issue involves SELinux on the host. Fedora enables SELinux in enforcing mode by default. podman machine creates storage directories under ~/.local/share/containers. If you manually move the kubeconfig file or mount volumes without proper labels, SELinux will block access. You will see Permission denied errors when kubectl tries to read the config. Run restorecon -v ~/.kube/config to fix the context. SELinux denials show up in journalctl -t setroubleshoot with a one-line summary. Read those before disabling SELinux.
Network conflicts also appear when you run multiple VMs. podman machine assigns a virtual network interface on the host. If you already run VirtualBox or libvirt networks on the same subnet, the DHCP server will clash. You will see this during boot:
[FAILED] Failed to start podman-machine-default.service.
Resolve the conflict on the other hypervisor's side: change the VirtualBox or libvirt network range, or stop the overlapping network before starting the machine. podman machine init exposes no subnet flag on Linux, so the clash cannot be fixed from the podman side. Config files in /etc/ are user-modified; files in /usr/lib/ ship with the package. Edit /etc/, never /usr/lib/.
If you see [FAILED] Failed to start k3s.service inside the VM, your storage setup might be the cause. k3s bundles containerd and uses the overlayfs snapshotter by default. If the VM filesystem does not support overlay mounts, the service will refuse to start. Check journalctl -u k3s inside the VM for overlayfs-related errors, then reinstall k3s with the native snapshotter as a fallback: curl -sfL https://get.k3s.io | sh -s - --snapshotter native.
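You can confirm overlay support before reinstalling anything. supports_overlay is a local helper of my own; it checks whether a /proc/filesystems listing (piped in on stdin, so it is testable outside the VM) advertises the overlay filesystem.

```shell
# Returns 0 if the kernel's filesystem list includes overlay.
# Usage inside the VM: supports_overlay < /proc/filesystems
supports_overlay() {
    grep -qw overlay
}
```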
Run journalctl -xe first. Read the actual error before guessing.
When to use this approach versus alternatives
Use podman machine with k3s when you need a full Kubernetes control plane for local development and testing. Use podman play kube when you only want to validate a single deployment manifest without spinning up etcd or the API server. Use kind when you are already running Docker and want multi-node cluster simulation. Use minikube when you need built-in addons like ingress and metrics server without manual configuration. Stay on the upstream podman machine workflow if you want to avoid Docker dependencies and keep your Fedora system aligned with Red Hat container standards.
Trust the package manager. Manual file edits drift; packaged installs stay reproducible.