Create a VM from the command line

Use the virt-install command-line tool to create and configure KVM virtual machines on Fedora without a graphical interface.

You need a fresh Fedora environment to test a new service, but you refuse to risk your host system. You open the terminal, type virt-install, and stare at a wall of options. You need a reliable way to spin up a virtual machine from the command line, manage it without a GUI, and understand exactly what each flag does before you hit enter.

What's actually happening

KVM is the kernel module that uses your CPU's hardware virtualization extensions to run guests at near-native speed. QEMU emulates the surrounding hardware: disks, network cards, and USB controllers. libvirt sits on top of both and gives you a consistent API to talk to them. virt-install is not the hypervisor. It is a provisioning wrapper that translates your command-line flags into an XML domain definition, then hands that definition to libvirt. Think of virt-install as a form-filler: it builds the blueprint, and libvirt is the construction crew that actually boots the machine. Once the VM is defined and powered on, virt-install steps away. You use virsh to talk to the running system.
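
To inspect the blueprint before anything is built, you can ask virt-install to print the generated XML instead of defining a domain. A minimal sketch; the name xml-preview is a throwaway, and PXE boot with no disk is used only to satisfy virt-install's required install method:

virt-install \
  --name xml-preview \
  --memory 1024 \
  --disk none \
  --pxe \
  --os-variant fedora40 \
  --print-xml
# Prints the libvirt domain XML these flags would produce,
# without defining or booting anything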

Check the libvirt logs before you guess. Run journalctl -xeu libvirtd to see exactly why a domain failed to start.

Provision the host environment

Fedora ships with KVM and libvirt, but the CLI tools require explicit installation. You also need to grant your user account socket access so you can manage domains without prefixing every command with sudo.

Here's how to install the required packages and start the daemon.

sudo dnf install -y virt-install libvirt qemu-kvm
# libvirt provides the daemon, XML parser, and CLI tools
# qemu-kvm supplies the actual hardware emulation backend
# virt-install is the provisioning wrapper you will type
sudo systemctl enable --now libvirtd
# Start the daemon immediately and register it for boot

Here's how to grant your account permission to talk to the libvirt socket.

sudo usermod -aG libvirt $(whoami)
# Append your username to the libvirt group
# Group membership grants read/write access to /var/run/libvirt
newgrp libvirt
# Start a subshell with the new group membership active
# Without this step, the group won't apply until you log out and back in

Run id to verify the group appears in your output. Future-you will thank you.
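
If you want a check you can script, something like this works:

id -nG | grep -qw libvirt && echo "libvirt group active"
# Succeeds only when the current shell actually carries the libvirt group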

Create a headless VM

Headless provisioning is the standard for servers, CI runners, and automated test environments. You disable graphical output and route the guest kernel console directly to your terminal. This saves host RAM and removes the dependency on VNC or SPICE clients.

Here's how to provision a Fedora VM that boots directly into a serial console. Shell comments cannot sit between backslash-continued lines, so the per-flag notes come before the command.

# --name        unique identifier for libvirt tracking and virsh commands
# --memory      allocate 2 GB of host memory to the guest domain
# --vcpus       allocate two virtual CPUs to the guest
# --disk        create a 20 GB sparse qcow2 disk in the default storage pool
# --os-variant  apply Fedora 40 hardware defaults and virtio optimizations
# --cdrom       attach the installation media as a virtual optical drive
# --network     attach to the built-in NAT bridge for outbound internet access
# --graphics    disable VNC/SPICE to force serial console output
# --console     redirect guest kernel and init output to your local terminal
virt-install \
  --name fedora-test-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/fedora-test.qcow2,size=20,format=qcow2 \
  --os-variant fedora40 \
  --cdrom /path/to/Fedora-Server-dvd-x86_64-40.iso \
  --network network=default \
  --graphics none \
  --console pty,target_type=serial

The --os-variant flag is not cosmetic. It tells virt-install which device models to choose for the guest: virtio disk and network controllers, the right input devices, and sane machine defaults for that release. Omitting it forces a fall back to generic PC hardware, which loses the virtio paravirtual drivers and slows down disk and network I/O.
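
To find the exact variant string for your release, query the osinfo database (this assumes the libosinfo tools are installed; recent virt-install builds also accept virt-install --osinfo list):

osinfo-query os | grep -i fedora
# Lists the short IDs (fedora39, fedora40, ...) that --os-variant accepts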

Keep your ISOs in a dedicated directory and verify their checksums against the published values before provisioning. Stale or corrupted installation media causes silent package failures during guest setup.

Import an existing cloud image

Cloud images are pre-built, minimal disk images that skip the interactive installer. They expect configuration via cloud-init on first boot. Importing them is faster than running a full installation, but you must provide a data source or the guest will hang waiting for credentials.

Here's how to import a pre-built .qcow2 image and attach it to a new domain.

# --import          skip the installation phase and boot the existing disk
# --disk            point to the pre-built image instead of creating a new one
# --noautoconsole   return to your shell instead of attaching to the console
virt-install \
  --name fedora-cloud-vm \
  --memory 2048 \
  --vcpus 2 \
  --import \
  --disk path=/path/to/Fedora-Cloud-Base.qcow2,format=qcow2 \
  --os-variant fedora40 \
  --network network=default \
  --graphics none \
  --console pty,target_type=serial \
  --noautoconsole

Cloud images require a user-data ISO or a metadata service to set the root password, SSH keys, and hostname. Without it, the guest will boot to a locked console or fail to configure networking. Mount a cloud-init ISO as a second disk if you need to seed credentials manually.
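
One way to build that seed is the NoCloud data source: a tiny ISO labeled cidata carrying user-data and meta-data files. A minimal sketch, assuming genisoimage is installed (cloud-localds from cloud-utils does the same job) and that your image uses the stock fedora user; substitute your own public key:

cat > user-data <<'EOF'
#cloud-config
users:
  - name: fedora
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...replace-with-your-public-key
EOF
printf 'instance-id: fedora-cloud-vm\nlocal-hostname: fedora-cloud-vm\n' > meta-data
# NoCloud insists on a meta-data file, even a near-empty one
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
# The volume label must be exactly "cidata" for cloud-init to find the seed

Attach the result with a second --disk path=seed.iso,device=cdrom argument on the virt-install command above.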

Never edit files in /usr/lib/libvirt/. Those ship with the package. Don't hand-edit the domain XML under /etc/libvirt/qemu/ either; libvirt overwrites it without warning. Use virsh edit <name> for quick changes, or export the XML with virsh dumpxml, modify the copy, and load it back with virsh define.
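
The round trip looks like this:

virsh dumpxml fedora-test-vm > /tmp/fedora-test-vm.xml
# Export the live definition to a file you can edit safely
virsh define /tmp/fedora-test-vm.xml
# Validate the modified XML and store it as the new definition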

Verify it worked

Libvirt tracks every domain it knows about, regardless of power state. You need to confirm the domain is defined, check its lifecycle state, and attach to the console.

Here's how to inspect the domain list and connect to the running guest.

virsh list --all
# Display every defined domain regardless of power state
virsh start fedora-test-vm
# Power on the domain if it is currently shut off
virsh console fedora-test-vm
# Attach your terminal to the guest serial port

The --all flag is mandatory. Without it, virsh list only shows running domains, which hides your newly provisioned but powered-off VM. Type ~. to detach from the serial console without killing the guest process.

Run virsh dominfo <name> to see vCPU count, memory allocation, and the domain UUID. Prefer virsh over hand-editing libvirt's files: manual edits drift out of sync with what the daemon actually runs.

Common pitfalls and what the error looks like

Provisioning fails silently more often than it crashes. You will see specific error strings that point directly to the misconfiguration.

The virt-install command will refuse to attach and print Error: Requested operation is not valid: unable to connect to domain console. This happens when the domain has no serial console device defined, so there is nothing for your terminal to bind to. Add --graphics none and --console pty,target_type=serial to force text output.
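
You can confirm whether the device exists before reprovisioning:

virsh dumpxml fedora-test-vm | grep -A2 "<console"
# Prints the console stanza if one is defined; empty output means
# the domain has no serial console device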

You will see Error: Disk path '/var/lib/libvirt/images/...' already exists when libvirt detects a leftover file from a failed run. libvirt refuses to overwrite existing storage to prevent data loss. Delete the stale file or change the --disk path parameter.
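
Clean up either through libvirt's storage API or directly on disk (this assumes the stock pool name default):

virsh vol-delete fedora-test.qcow2 --pool default
# Remove the stale volume through the storage pool
sudo rm -f /var/lib/libvirt/images/fedora-test.qcow2
# Or delete the file directly if it was never registered as a volume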

If the guest boots but has no network, the default NAT network is probably inactive on the host. Run virsh net-info default to check its state and DHCP range. Start it with virsh net-start default, and run virsh net-autostart default to make it persistent across host reboots. If you need direct host-to-guest access, switch to a bridged network instead of NAT.
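
The recovery sequence, assuming the stock network name default:

virsh net-info default
# Shows whether the NAT network is active and set to autostart
virsh net-start default
# Bring the network up immediately
virsh net-autostart default
# Register it to start with the libvirt daemon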

SELinux denials show up in journalctl -t setroubleshoot with a one-line summary. Read those before disabling SELinux. If you manually moved a disk image into /var/lib/libvirt/images/, run restorecon -Rv /var/lib/libvirt/images/ to fix the context.

Restart the daemon with sudo systemctl restart libvirtd before you debug persistent libvirt socket errors. Half the time the symptom is gone; reboot the host only as a last resort.

When to use this vs alternatives

Use virt-install when you need a reproducible, scriptable VM provisioning step in a CI pipeline or automation workflow. Use virt-manager when you prefer a graphical interface to tweak CPU pinning or add USB passthrough devices. Use QEMU directly when you need full control of the emulated hardware without libvirt's abstraction layer. Use libguestfs tools when you need to modify disk images offline without booting a guest. Stay on virt-install if you only need standard disk, memory, and network configuration.

Where to go next