You added a new iSCSI target and your Fedora machine needs to see it
You provisioned a LUN on your storage array and now your Fedora system must access it as a local disk. Or perhaps you migrated a virtual machine disk to remote storage and the boot process hangs because the root filesystem isn't available. The terminal shows a missing block device, or systemd complains about a mount dependency that never resolves. You need to configure the initiator, log in to the target, and ensure the connection survives a reboot without creating a dependency loop that bricks the system.
What iSCSI actually does
iSCSI encapsulates SCSI commands inside TCP/IP packets. This allows a remote storage volume to appear as a local block device to the operating system. Fedora runs the initiator software that speaks the iSCSI protocol. The remote server runs the target. When the initiator logs in, the kernel creates a virtual block device. To the OS, /dev/sdb looks identical to a SATA drive plugged into the motherboard.
The critical difference is the dependency chain. A local disk is available as soon as the kernel probes the hardware. An iSCSI disk requires the network to be up, the iSCSI daemon to be running, and the login session to be established. If the network drops, the disk vanishes, and any filesystem mounted on it hangs or returns I/O errors to every process that touches it. Managing this sequence is the core challenge of iSCSI administration.
The iscsiadm utility manages the initiator database and communicates with the iscsid daemon. Discovery writes target information to the database. Login instructs the kernel to create a session. The separation between the database and the active session means you can script discovery once and rely on the stored configuration for future boots.
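You can see the two layers directly once the tools from the next section are installed. Here is how to inspect the database and the active sessions side by side.
sudo iscsiadm --mode node
# List the node database: every target ever discovered, logged in or not.
sudo iscsiadm --mode session
# List active sessions: only the targets the kernel is connected to right now.
# Before any login, this reports that no active sessions exist.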
Install and configure the initiator
The iSCSI tools are not installed by default on minimal Fedora images. You need the iscsi-initiator-utils package. This package provides iscsiadm, the iscsid daemon, and the systemd units required for boot integration.
Here is how to install the package and enable the daemon.
sudo dnf install iscsi-initiator-utils
# Install the initiator tools and the daemon that manages connections.
# The package includes iscsiadm for CLI management and iscsid for session handling.
sudo systemctl enable --now iscsid
# Start the daemon immediately and ensure it runs on boot.
# iscsid handles login sessions, multipath coordination, and recovery.
The iscsid daemon runs in the background and maintains the state of all iSCSI sessions. iscsiadm sends commands to the daemon over a local Unix socket. You rarely need to restart iscsid unless you change global configuration in /etc/iscsi/iscsid.conf. Configuration files in /etc/ are yours to modify; files under /usr/lib/ ship with the package and are replaced on upgrades. Edit /etc/. Never edit /usr/lib/.
Enable the daemon before you attempt discovery. If the daemon isn't running, iscsiadm commands may fail or behave unpredictably.
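Targets often restrict access by initiator name. Fedora generates a unique IQN for the machine when the package is installed; check it before configuring ACLs on the array.
cat /etc/iscsi/initiatorname.iscsi
# Show the initiator IQN generated for this machine.
# Register this name in the target's ACL if the array restricts access.
# If you edit this file, restart iscsid so the daemon picks up the new name.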
Discover and log in to the target
Discovery finds available targets on the network. The standard method is sendtargets, which queries the target server for a list of IQNs. An IQN (iSCSI Qualified Name) uniquely identifies a target. The format is iqn.yyyy-mm.reversed.domain:label, where the date is a year and month during which the naming authority owned the domain, not when the target was created. For example, iqn.2024-01.com.example:storage is a target named under example.com's January 2024 namespace.
Here is how to discover targets on a specific portal.
sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.1.100
# Query the target server at the specified IP for available IQNs.
# sendtargets is the standard discovery method for most storage arrays.
# Replace the IP with your actual target address.
The output lists the discovered targets. Each line shows the portal and the IQN.
192.168.1.100:3260,1 iqn.2024-01.com.example:storage.lun1
The number after the comma is the TPGT (Target Portal Group Tag). Most arrays use TPGT 1. You usually don't need to specify the TPGT unless the target has multiple configurations for the same IQN.
Once discovered, log in to the target. This creates the session and allocates the block device.
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage.lun1 --portal 192.168.1.100:3260 --login
# Establish a session with the discovered target.
# The kernel allocates a new block device upon success.
# The portal includes the port, usually 3260 for iSCSI.
Verify the connection immediately. Check the block device list and kernel messages.
lsblk
# List block devices to confirm the new disk appeared.
# Look for a device with the size matching your target LUN.
sudo dmesg | tail -20
# Check kernel messages for SCSI device registration.
# You should see lines indicating a new disk was detected.
If lsblk shows a new device, the login succeeded. The device name might be /dev/sdb, /dev/sdc, or something else depending on existing hardware. The name can change between reboots. Always use the UUID for persistent mounting.
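If several disks are attached and the size alone is ambiguous, the transport column identifies which device arrived over iSCSI.
lsblk --output NAME,SIZE,TRAN
# Show the transport type alongside each device.
# The iSCSI LUN reports iscsi in the TRAN column; local SATA disks report sata.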
Make the connection persistent
By default, iSCSI sessions are manual. The connection drops after a reboot. You must configure the node startup mode to automatic for the initiator to log in during the boot process.
Here is how to set automatic login for the target.
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage.lun1 --portal 192.168.1.100:3260 --op update --name node.startup --value automatic
# Configure the initiator to log in automatically on boot.
# Without this, the connection drops after a reboot.
# The configuration is stored in /var/lib/iscsi/nodes/.
The iscsiadm command updates the node database in /var/lib/iscsi/nodes/. This directory contains the persistent configuration for each target. The iscsid daemon reads this database at boot and initiates logins for nodes marked as automatic.
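Verify the change by printing the stored record and filtering for the startup setting.
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage.lun1 --portal 192.168.1.100:3260 | grep node.startup
# Print the stored node record and filter for the startup mode.
# The output should read node.startup = automatic.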
Ensure the iscsi service is enabled. This service triggers the login process for all automatic nodes.
sudo systemctl enable --now iscsi
# Enable the iscsi service to trigger logins on boot.
# iscsi.service depends on iscsid and network-online.target.
# It iterates the node database and logs in to automatic targets.
Reboot to test the dependency chain. If the system boots and lsblk shows the device, the persistence configuration is correct.
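After the reboot, confirm the session itself, not just the block device.
sudo iscsiadm --mode session
# List active sessions.
# You should see one line with the portal, TPGT, and IQN of your target.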
Mount the storage safely
Once the block device is available, format it and mount it. This guide uses mkfs.xfs. XFS is the default filesystem on Fedora Server and handles large files well.
Here is how to format and mount the device.
sudo mkfs.xfs /dev/sdb
# Create an XFS filesystem on the new block device.
# Only run this on empty storage to avoid data loss.
sudo mkdir -p /mnt/iscsi-storage
# Create a directory to serve as the mount point.
sudo mount /dev/sdb /mnt/iscsi-storage
# Mount the filesystem immediately for testing.
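Confirm the filesystem is mounted with the expected capacity.
df -h /mnt/iscsi-storage
# Show the mounted filesystem, its size, and available space.
# The size should roughly match the LUN you provisioned on the array.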
Add an entry to /etc/fstab for persistent mounting. Use the UUID instead of the device name. Device names can change. The UUID is stable.
Here is how to find the UUID and create the fstab entry.
sudo blkid /dev/sdb
# Display the UUID and filesystem type for the device.
# Copy the UUID value for the fstab entry.
Add the following line to /etc/fstab. Replace the UUID with the value from blkid.
UUID=1234-5678-90ab /mnt/iscsi-storage xfs _netdev,x-systemd.requires=iscsi.service 0 0
The _netdev option marks the mount as a network filesystem. Systemd orders it after the network is up and unmounts it before the network goes down at shutdown. This is essential for any network-based filesystem. The x-systemd.requires=iscsi.service option creates a hard dependency on the iSCSI service. Systemd will start iscsi.service, wait for it to activate, and only then attempt the mount. This prevents the race condition where the mount happens before the login completes.
Mount with _netdev and x-systemd.requires=iscsi.service. If you skip these options, systemd races the network and your boot hangs.
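Test the fstab entry without rebooting. Unmount the manual test mount, reload systemd so it regenerates mount units from fstab, then mount by path.
sudo umount /mnt/iscsi-storage
# Remove the manual test mount so the fstab entry can take over.
sudo systemctl daemon-reload
# Regenerate systemd mount units from the edited fstab.
sudo mount /mnt/iscsi-storage
# Mount using the fstab entry; an error here means the entry is wrong.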
Common errors and recovery
iSCSI setups fail in predictable ways. The error messages point directly to the cause.
If discovery returns nothing, check the IP address and firewall. The target server must allow traffic on port 3260. Verify connectivity with ping or nc.
nc -zv 192.168.1.100 3260
# Test TCP connectivity to the iSCSI port.
# A successful connection confirms the network path is open.
If the login fails with iscsiadm: Could not login to [iface: default, target: ...], verify the IQN spelling. iSCSI names are case-sensitive and must match exactly. Check for trailing spaces or typos.
If you see iscsiadm: Could not login to [iface: default, target: ...]. Multiple TPGT, please specify the tpgt, the target has multiple portals or configurations. You need to specify the TPGT in the login command. This usually happens when the target exports the same IQN over multiple interfaces.
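One possible cause is stale records left over from earlier discoveries. Assuming the duplicates are leftovers rather than deliberate multihoming, delete the stored records and rediscover.
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage.lun1 --op delete
# Remove all stored records for this IQN from the node database.
sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.1.100
# Rediscover so the database matches the target's current portal layout.
# Re-apply node.startup after rediscovery; the delete wiped that setting.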
If the system hangs at boot, the mount is likely missing _netdev or the iSCSI dependency. Systemd tries to mount before the network is ready. The iSCSI daemon hasn't logged in yet. The disk doesn't exist. The mount unit waits forever. Boot from a rescue environment, edit /etc/fstab, and add the missing options.
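If you would rather boot to a usable system even when the target is unreachable, soften the dependency with nofail and a device timeout. Both are standard systemd mount options; the trade-off is that the mount may be silently absent after boot.
UUID=1234-5678-90ab /mnt/iscsi-storage xfs _netdev,nofail,x-systemd.device-timeout=30s 0 0
# nofail lets the boot continue if the device never appears.
# x-systemd.device-timeout caps how long systemd waits for the disk.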
Check the journal for detailed diagnostics. The iscsid daemon logs the exact reason for login failures.
journalctl -xeu iscsi.service
# View logs for the iscsi service with explanatory text.
# The -x flag adds explanatory help texts from the message catalog where available.
# The -e flag jumps to the end of the journal.
Run journalctl -xeu iscsi.service first. Read the actual error before guessing.
Choose the right storage protocol
Storage protocols serve different use cases. Select the one that matches your requirements.
Use iSCSI when you need block-level access to remote storage for VMs or databases. Use NFS when you need file-level sharing across multiple clients without managing block devices. Use Ceph when you require distributed storage with built-in replication and high availability. Use local NVMe when latency is the primary constraint and the data must stay on the host. Use multipathd alongside iSCSI when your storage array provides multiple network paths for redundancy; it is a companion to the protocol, not a protocol itself.