When your VM feels sluggish
You spin up a Fedora VM for a database or a development environment. The host machine has plenty of cores and RAM, but the guest stutters under load. Disk I/O waits spike. The terminal cursor lags. You assume the guest OS is misconfigured, but the bottleneck is usually the hypervisor boundary. The default libvirt settings prioritize safety and compatibility over raw throughput. Adjusting the CPU model, vCPU allocation, and memory handling removes the artificial ceiling.
What the hypervisor is actually doing
KVM turns your Linux kernel into a type-1 hypervisor. QEMU emulates the hardware layer. Libvirt manages the lifecycle and exposes a consistent API. When you allocate resources, you are drawing a fence around what the guest can touch. The host scheduler decides when the guest actually runs. If you overcommit vCPUs, the host scheduler thrashes as it tries to share physical cores across too many virtual threads. If you set memory statically without ballooning, idle guest RAM sits unused while the host starves. The goal is to match the guest footprint to the actual workload, not the theoretical maximum.
Think of vCPUs like lanes on a highway. Adding more lanes does not help if the traffic light at the intersection is stuck on red. The host CPU scheduler is that traffic light. Memory ballooning works like a retractable awning. The hypervisor expands the guest memory when the workload needs it, and pulls it back when the guest is idle. Both mechanisms require explicit configuration to behave predictably.
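A quick way to spot overcommit, using the same my-vm domain name as the later examples, is to compare what the host physically has against what the guest has been promised.
# Host side: physical sockets, cores, threads, and total memory
virsh nodeinfo
# Guest side: maximum and current vCPU counts for this domain
virsh vcpucount my-vm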
Setting CPU and memory correctly
The graphical virt-manager interface handles basic allocation well. Open the VM details, navigate to CPUs and Memory, and adjust the sliders. The GUI writes the changes to the libvirt XML behind the scenes. For scripted workflows, headless servers, or precise control, use virsh. The command line exposes the distinction between live changes, current session changes, and persistent configuration changes.
Here is how to inspect the current allocation and adjust vCPUs without breaking the domain definition.
# Show current CPU and memory config
virsh dominfo my-vm
# Set the maximum allowed vCPUs for the domain definition
# --config writes to the persistent XML so the change survives reboots
virsh setvcpus my-vm 4 --config --maximum
# Set the active vCPU count that applies on the next boot
# Use --live instead to hotplug vCPUs into a running guest, which needs guest OS support
virsh setvcpus my-vm 4 --config
Memory configuration follows the same pattern. Libvirt measures memory in kibibytes, so convert from GiB before typing the value. The --config flag modifies the persistent definition, and the new sizes take effect only after the domain is shut down and started again. Live memory changes are possible but require the balloon driver and careful tuning.
# Set maximum memory to 8 GiB in the persistent configuration
# 8 * 1024 * 1024 = 8388608 KiB
virsh setmaxmem my-vm 8388608 --config
# Set current memory to 8 GiB in the persistent configuration
# This value applies when the domain starts
virsh setmem my-vm 8388608 --config
After editing, start the VM. The host kernel will allocate the requested resources and hand control to QEMU.
# Boot the domain with the new configuration
virsh start my-vm
Run virsh edit my-vm instead of touching files in /etc/libvirt/qemu/ directly. Libvirt validates the XML schema before applying changes. Direct file edits bypass validation and often cause libvirtd to reject the domain on next boot.
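Two commands cover most of that XML work; the backup filename below is just an example.
# Open the persistent definition in $EDITOR; libvirt validates the XML on save
virsh edit my-vm
# Dump the current definition for diffing or backup before making changes
virsh dumpxml my-vm > my-vm-backup.xml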
Pinning vCPUs and managing memory
Default scheduling works fine for general desktop use. Low-latency workloads, databases, and real-time applications need deterministic CPU placement. CPU pinning binds specific guest vCPUs to specific host logical cores. This prevents the host scheduler from migrating the guest thread across cores, which leaves the CPU caches cold and introduces micro-stutters.
Here is how to pin vCPUs to dedicated host cores.
# Pin guest vCPU 0 to host logical core 2
# This reduces context switching and keeps cache hot
virsh vcpupin my-vm 0 2
# Pin guest vCPU 1 to host logical core 3
# Isolate these cores from host services if possible
virsh vcpupin my-vm 1 3
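These calls affect the running domain. Adding --config, as in the sketch below, writes the same pinning into the persistent XML so it survives a reboot.
# Persist the pinning across reboots by writing it to the domain XML
virsh vcpupin my-vm 0 2 --config
virsh vcpupin my-vm 1 3 --config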
Memory ballooning lets the hypervisor reclaim idle guest memory dynamically. Fedora guest images include the virtio-balloon driver by default. The host can request the guest to release unused pages back to the hypervisor pool. This keeps the host stable when multiple VMs compete for RAM.
Here is how to verify the balloon device is present and check memory statistics.
# Query live memory statistics from the guest
# The output lists counters such as actual (the current balloon size), swap_in, swap_out, and rss
virsh dommemstat my-vm
# Check that the balloon driver is loaded in the guest kernel
# Run this inside the VM to confirm the virtio_balloon module is present
lsmod | grep virtio_balloon
The guest agent bridges the communication gap between host and guest. It reports accurate memory pressure, coordinates clean snapshots, and synchronizes time. Install it inside the guest and enable the service.
# Install the agent package inside the Fedora guest
# Provides the virtio-serial channel for host communication
sudo dnf install qemu-guest-agent
# Enable and start the service immediately
# The host polls this service for memory and disk stats
sudo systemctl enable --now qemu-guest-agent
Check journalctl -xeu libvirtd if the host complains about missing channels. The agent must match the virtio-serial port defined in the domain XML.
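One way to confirm the channel works end to end, assuming the default guest agent channel name, is to ping the agent from the host.
# Confirm the agent channel is defined in the domain XML
virsh dumpxml my-vm | grep -A 2 guest_agent
# Ping the agent through QEMU; an empty return object means it is alive
virsh qemu-agent-command my-vm '{"execute":"guest-ping"}'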
Verify the configuration
Assumptions break systems. Verify every change before declaring the workload stable. The host reports what it allocated. The guest reports what it sees. Both must align.
# Confirm vCPU count and memory from the host perspective
# Look for vcpus, maxMemory, and memory fields
virsh dominfo my-vm
# Verify pinning assignments match your intent
# Output shows vCPU index and allowed host CPU list
virsh vcpupin my-vm
# Check live memory pressure and balloon activity
# An actual value below the configured maximum means the host has reclaimed guest RAM
virsh dommemstat my-vm
Inside the guest, run lscpu and free -h. The reported CPU count must match the --config value. The available memory must reflect the balloon driver's current state. If the numbers diverge, the guest kernel has not recognized the hotplug event or the balloon driver is stuck. Reboot the guest to synchronize the state.
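The guest-side checks, run inside the VM, look like this.
# CPUs visible to the guest kernel
lscpu | grep '^CPU(s):'
# Memory currently available to the guest after ballooning
free -h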
Common pitfalls and what the error looks like
Hotplug changes fail when the guest lacks the required drivers or when the domain is not running. Libvirt enforces strict state transitions.
error: Requested operation is not valid: domain is not running
This error appears when you attempt a --live change on a shut-down VM. Add --config to modify the persistent definition instead. The change will apply on the next boot.
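A concrete example of the failure and the fix, assuming the domain is shut off:
# Fails with the error above because the domain is not running
virsh setvcpus my-vm 4 --live
# Works: writes the persistent definition, applies on the next boot
virsh setvcpus my-vm 4 --config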
CPU model mismatches break migration and sometimes cause boot hangs. The host-passthrough model exposes every host CPU feature. It works perfectly until you try to migrate the VM to a different physical machine.
error: XML error: Invalid CPU mode
This error triggers when the XML references a CPU model that the host does not support, or when host-passthrough conflicts with explicit feature flags. Switch to a named model such as qemu64 for definitions that need to stay portable across hosts.
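Before editing the XML, you can ask libvirt which models are available; the architecture argument below assumes an x86_64 host and the grep is only a rough filter.
# List the named CPU models libvirt knows for this architecture
virsh cpu-models x86_64
# Show what the host itself can run, including the host-model baseline
virsh domcapabilities | grep -A 10 '<cpu'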
SELinux blocks custom disk paths or non-standard virtio-serial ports. The domain starts, but the guest agent channel fails. Check the audit log before disabling enforcement.
# View SELinux denials related to libvirt and qemu
# The one-line summary usually points to the exact blocked operation
journalctl -t setroubleshoot | grep libvirt
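If setroubleshoot is not installed, the raw denials are still in the audit log; ausearch ships in Fedora's audit package.
# Raw AVC denials from the last few minutes, filtered for virt-related entries
sudo ausearch -m avc -ts recent | grep -i virt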
Edit /etc/libvirt/qemu/ configurations through virsh edit. Never modify files in /usr/lib/libvirt/. Those ship with the package and get overwritten on every dnf upgrade. Manual edits drift and break silently.
Which configuration path fits your workload
Use host-passthrough when the VM stays on one physical machine and you need every instruction set extension for compilation or cryptography. Use a named model such as qemu64 when you plan to migrate VMs across different hardware generations. Use static memory allocation when the guest runs a fixed-footprint service and you want predictable performance without balloon latency. Use memory ballooning when you run multiple VMs on a single host and need to reclaim idle RAM automatically. Use CPU pinning when the workload requires sub-millisecond latency and you can isolate host cores from background services. Use the default scheduler when you are running general desktop VMs or occasional development guests.
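To check which CPU mode a domain already uses before changing course, dump the definition and look at the cpu element; the grep below is a rough filter and may also show the cputune block.
# Show the CPU mode and model in the persistent definition
virsh dumpxml my-vm | grep -A 5 '<cpu'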
Run virsh dominfo before you guess. Read the actual allocation before tweaking.