How to Set Up Cgroups v2 Resource Limits on Fedora

Fedora defaults to cgroups v2, letting you use systemd slice units or direct cgroupfs writes to apply per-process CPU, memory, and I/O limits.

You launched a backup script and your laptop became a space heater

The disk is thrashing, the mouse lags, and top shows a single process consuming 90% of your CPU. You kill the process, but the system feels sluggish for minutes afterward. Or worse, a runaway service fills your RAM, triggers the OOM killer, and drops your database connection. You need a way to cage these processes before they take down the whole machine.

Fedora provides cgroups v2 to solve this. You can enforce strict limits on CPU, memory, and I/O for any process or group of processes. This keeps your system responsive even when applications misbehave.

How cgroups v2 manages resources

Cgroups v2 is the kernel's resource manager. Think of it like a landlord managing an apartment building. The building has a total amount of electricity, water, and bandwidth. Without rules, one tenant can run a crypto miner and drain the power for everyone else. Cgroups let you assign quotas to specific tenants. If a tenant tries to use more than their share, the kernel throttles them or kills them, depending on the rule.

Fedora has used cgroups v2 as the default since Fedora 31. The old v1 hierarchy is gone. You interact with cgroups through systemd, which acts as the property manager enforcing the landlord's rules. The cgroup tree mirrors the systemd unit hierarchy. At the root sits -.slice. Below it, init.scope holds PID 1, system.slice holds system services, user.slice holds user sessions, and machine.slice holds containers and virtual machines. Every service falls under a slice. Limits set on a slice apply to all of its children. This inheritance means you can set a global budget for all user sessions and then carve out sub-limits for individual apps.

Understand the tree structure. Limits propagate downward. A limit on a parent slice caps the total usage of all children combined.
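To see where any process sits in this tree, read its cgroup entry from /proc. Under cgroups v2 the file contains a single line; the path after the second colon is relative to /sys/fs/cgroup.

```shell
# Show which cgroup the current shell belongs to.
# Under cgroups v2 this prints a single line of the form 0::/<path>,
# where <path> is relative to /sys/fs/cgroup.
cat /proc/self/cgroup
```

On a Fedora desktop session this typically resolves to a path under user.slice, matching the hierarchy described above.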

Verify cgroups v2 is active

First, confirm your system is actually using v2. Fedora enables this by default, but custom kernels or chroot environments might differ.

Check the mount point type. The output must show cgroup2.

# Check the mount point type. cgroup2 means v2 is active.
mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)

If you see cgroup instead of cgroup2, you are on v1. This guide assumes v2. Fedora Workstation and Server ship with v2 enabled.

Run mount | grep cgroup before configuring limits. v1 and v2 use different property names and sysfs paths.

Limit a one-off command

Use systemd-run for temporary jobs. This creates a transient scope that disappears when the command finishes.

Here's how to run a script with a hard memory limit and CPU cap.

# Run a script with a hard memory limit and CPU cap.
# The --scope flag wraps the command in a temporary cgroup.
systemd-run --scope \
  -p CPUQuota=50% \
  -p MemoryMax=512M \
  ./heavy-task.sh
# --scope creates a transient scope unit for this command only
# CPUQuota=50% caps usage at half of one core
# MemoryMax=512M triggers an OOM kill if usage exceeds 512 megabytes

Use systemd-run for ad-hoc tasks. It cleans up automatically when the command exits.
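Under the hood, CPUQuota maps to the cgroup's cpu.max file, which holds a quota and a period in microseconds. A minimal sketch of the conversion, assuming systemd's default 100ms period:

```shell
# Sketch: how a CPUQuota percentage maps to the cpu.max file.
# systemd writes "<quota> <period>" in microseconds; the default
# period is 100000 (100ms).
quota_pct=50
period_us=100000
quota_us=$(( quota_pct * period_us / 100 ))
echo "cpu.max: $quota_us $period_us"
```

For CPUQuota=50% this yields "50000 100000": the process may run at most 50ms out of every 100ms window.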

Apply limits to a service

For persistent services, override the unit configuration with a drop-in. Never edit the files in /usr/lib/systemd/ directly: they ship with the package, and updates overwrite any changes you make there. Files in /etc/systemd/system/ belong to you and take precedence.

Open the drop-in editor for the service. This creates a file in /etc/systemd/system/<service>.service.d/.

# Open the drop-in editor for the nginx service.
# This creates a file in /etc/systemd/system/nginx.service.d/.
sudo systemctl edit nginx
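systemctl edit opens an interactive editor, which is awkward in scripts. A non-interactive alternative is to write the drop-in file yourself. The sketch below uses a scratch directory so it is runnable anywhere; on a real system the directory is /etc/systemd/system/nginx.service.d/, the commands need sudo, and the file name limits.conf is arbitrary (any name ending in .conf works).

```shell
# Non-interactive sketch: create the drop-in file directly.
# Scratch directory stands in for /etc/systemd/system/nginx.service.d/.
dropin_dir="$(mktemp -d)/nginx.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/limits.conf" <<'EOF'
[Service]
CPUQuota=25%
MemoryMax=256M
EOF
```

After writing the real file, run sudo systemctl daemon-reload so systemd picks it up, just as with the interactive editor.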

Add the resource properties in the [Service] section.

[Service]
# Limit CPU usage to a quarter of one core.
CPUQuota=25%
# Hard memory limit. The kernel kills the process if exceeded.
MemoryMax=256M
# Disable swap usage for this service.
MemorySwapMax=0
# Lower I/O priority relative to other services.
IOWeight=50

Reload systemd and restart the service to apply the changes.

# Reload systemd to pick up the new drop-in configuration.
sudo systemctl daemon-reload
# Restart the service to apply limits to the running process.
sudo systemctl restart nginx

Always run daemon-reload after editing unit files. Systemd caches unit data in memory.

Group services with slices

Group related services into a custom slice to share a pool of resources. This is useful for multi-component applications where you want to limit the total footprint of the app, not just individual daemons.

Create a slice unit file in /etc/systemd/system/.

# Create a slice unit file for a group of related services.
# Place this in /etc/systemd/system/myapp.slice.
sudo nano /etc/systemd/system/myapp.slice

Define the resource budget for the slice.

[Unit]
# Human-readable description for systemctl status.
Description=My Application Slice

[Slice]
# Allow up to two full cores for all services in this slice.
CPUQuota=200%
# Total memory budget for the entire group.
MemoryMax=2G

Assign services to the slice in their unit files.

# Add this to the [Service] section of individual units.
Slice=myapp.slice

Slices inherit from their parent. A slice inside system.slice shares the system pool unless you set explicit limits.
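Nesting is encoded in the slice name: systemd places foo-bar.slice inside foo.slice. A sketch of a hypothetical child slice (the name myapp-web.slice and its limit are illustrative, not from the original setup):

```ini
# Hypothetical child slice. The dash in the name nests it
# under myapp.slice automatically.
# Save as /etc/systemd/system/myapp-web.slice.
[Unit]
Description=Web tier of My Application

[Slice]
# Applies within the 2G budget of the parent myapp.slice.
MemoryMax=1G
```

Services assigned Slice=myapp-web.slice are then capped at 1G among themselves and still count against the parent's 2G total.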

Soft limits versus hard limits

MemoryMax is a hard wall. The kernel kills the process if it crosses this line. MemoryHigh is a speed bump. When a process hits MemoryHigh, the kernel aggressively reclaims memory from that cgroup. It slows the process down but doesn't kill it. This is useful for services that spike occasionally.

Here's how to configure both limits for a service that needs breathing room.

[Service]
# Throttle when usage exceeds 200M.
MemoryHigh=200M
# Kill if usage exceeds 256M.
MemoryMax=256M
# MemoryHigh triggers reclaim pressure without killing the process
# MemoryMax is the absolute ceiling that triggers OOM kill

Set MemoryHigh slightly below MemoryMax. The gap gives the kernel room to throttle before it has to kill.
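You can tell whether MemoryHigh throttling is actually happening by reading the cgroup's memory.events file, which counts how often each limit was breached. The sketch below parses sample contents; the field names are fixed by the kernel, but the counter values are made up for illustration.

```shell
# Sketch: parse a memory.events file to check for MemoryHigh throttling.
# On a real system, read it from e.g.
# /sys/fs/cgroup/system.slice/nginx.service/memory.events
events='low 0
high 42
max 3
oom 1
oom_kill 1'
# A nonzero "high" counter means MemoryHigh reclaim has kicked in.
high_count=$(printf '%s\n' "$events" | awk '$1 == "high" { print $2 }')
echo "MemoryHigh events: $high_count"
```

A climbing "high" counter with a stable "oom_kill" of zero is the healthy pattern: the service is being slowed, not killed.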

Verify limits are enforced

Check the cgroup tree and resource usage. systemd-cgls shows the hierarchy of slices, scopes, and their processes; systemd-cgtop shows live per-cgroup consumption.

View the cgroup tree.

# View the cgroup tree of units and their processes.
# For live CPU, memory, and I/O usage per cgroup, run systemd-cgtop.
systemd-cgls
└─user.slice
  └─user-1000.slice
    └─session-1.scope
      ├─ 1234 bash
      └─ 5678 ./heavy-task.sh

Read the limits directly from the cgroup filesystem. The values are in bytes.

# Read the memory limit directly from the cgroup filesystem.
cat /sys/fs/cgroup/system.slice/nginx.service/memory.max
268435456

The output 268435456 corresponds to 256M. Systemd interprets the M suffix as binary, so 256M is 256 × 1024 × 1024 bytes.
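The conversion is easy to verify with shell arithmetic, since systemd's M suffix is binary (MiB):

```shell
# 256M in systemd units is 256 binary megabytes (MiB).
limit_bytes=$(( 256 * 1024 * 1024 ))
echo "$limit_bytes"   # 268435456
```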

Run journalctl -xe if a service crashes immediately. The kernel logs OOM kills with a clear message.

Common pitfalls

Services can fail silently or crash instantly when limits are misconfigured. Check these patterns.

The service dies instantly after restart. The memory limit is too low. The kernel triggers an OOM kill the moment the service allocates its initial buffers. Increase MemoryMax or check for memory leaks.

The error Failed to set property MemoryMax: Invalid argument appears. This usually means a typo in the unit file or a missing controller. Verify the syntax with systemd-analyze.

# Check the unit file for syntax errors before reloading.
# This catches typos that daemon-reload might accept silently.
systemd-analyze verify nginx.service

The error Failed to start nginx.service: Unit nginx.service has a bad unit file setting shows up. This indicates a malformed property or unsupported value. Check the journal for details.

Failed to start nginx.service: Unit nginx.service has a bad unit file setting.
See system logs and 'systemctl status nginx.service' for details.

CPUQuota does not seem to work. CPUQuota is an absolute limit. If you set CPUQuota=50%, the process gets half of one core. If you want relative sharing, use CPUWeight. CPUWeight ranges from 1 to 10000. The default is 100. A weight of 200 gets twice the CPU time of a weight of 100 when the CPU is contended.
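The arithmetic behind relative weights is simple: under contention, each cgroup's share is its weight divided by the sum of its siblings' weights. A sketch with two hypothetical services:

```shell
# Sketch: CPUWeight is relative, not absolute.
# Under contention, each cgroup's share is its weight divided by
# the sum of the competing weights.
w_a=200   # e.g. CPUWeight=200 on the important service
w_b=100   # the default weight on a sibling service
total=$(( w_a + w_b ))
share_a=$(( 100 * w_a / total ))   # percent of contended CPU for A
share_b=$(( 100 * w_b / total ))
echo "A gets ${share_a}% of contended CPU, B gets ${share_b}%"
```

When the CPU is idle, neither service is limited at all; weights only matter when demand exceeds supply, which is exactly how they differ from CPUQuota.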

Verify unit syntax before restarting. systemd-analyze verify catches typos that daemon-reload might miss.

Choose the right tool

Use systemd-run --scope when you need to limit a one-off command or script without creating a permanent unit file.

Use drop-in overrides in /etc/systemd/system/ when you are tuning an existing service installed by a package.

Use custom slices when multiple services belong to the same application and should share a resource pool.

Use MemoryHigh instead of MemoryMax when you want to throttle a service under pressure rather than kill it.

Use CPUWeight when you want to prioritize one service over another based on relative importance.

Use IOWeight when disk I/O is the bottleneck and you need to starve background tasks to protect interactive workloads.

Where to go next