You launch a compiled tool and it vanishes instantly
You run a desktop application or a custom script and the terminal prints Segmentation fault (core dumped). The process exits with code 139. You know something crashed, but you have no idea where. The application left behind a memory snapshot, but finding it and reading it requires the right tools. This is how you locate the dump, attach a debugger, and translate raw memory addresses into a readable stack trace.
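Exit code 139 is not arbitrary: shells report death-by-signal as 128 plus the signal number, and SIGSEGV is signal 11. You can decode any status above 128 from the shell:

```shell
status=139
echo $((status - 128))     # signal number: prints 11
kill -l $((status - 128))  # signal name for 11: prints SEGV
```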
What is actually happening
When a program violates memory rules, the Linux kernel sends it a SIGSEGV signal and the process terminates immediately. If core dumps are enabled, systemd-coredump catches the raw memory state and compresses it into a binary file. That file is a black box recorder: it contains every variable, pointer, and instruction pointer at the exact moment of failure. Reading it raw is impractical. You need a debugger to map those memory addresses back to source code lines and function names.
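You can watch this convention in action without a real memory bug by having a throwaway process send itself SIGSEGV. This only simulates the kernel's signal delivery, not an actual memory violation:

```shell
sh -c 'kill -s SEGV $$'    # child shell raises SIGSEGV on itself and dies
echo "exit status: $?"     # prints: exit status: 139 (128 + 11)
```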
Fedora ships with systemd-coredump enabled by default. It stores compressed dumps in /var/lib/systemd/coredump/ and indexes them by PID, timestamp, and executable path. The coredumpctl command queries that index. The gdb command interprets the binary data. Together they turn a cryptic exit code into a precise file path and line number.
Check that core dumps are actually being captured before you start. A misconfigured core pattern or a Storage=none override can silently discard every crash.
cat /proc/sys/kernel/core_pattern
# WHY: Shows where the kernel routes core dumps. On Fedora it should begin with |/usr/lib/systemd/systemd-coredump.
# WHY: If it names a plain file pattern instead, systemd-coredump never sees the crash and coredumpctl stays empty.
# WHY: Storage itself is controlled by Storage= in /etc/systemd/coredump.conf (external, journal, or none). Storage=none means no crash data will ever be saved.
How to capture and read the crash dump
List the available dumps to find the PID of the crashed process. The output shows a timestamp, signal, core size, PID, and executable name. Match the executable name to the application that failed.
coredumpctl list
# WHY: Queries the systemd-coredump journal index and prints every saved crash dump.
# WHY: The PID column is your entry point. You will use it to launch the debugger.
# WHY: The SIZE column shows compressed dump size. Large sizes indicate heavy memory usage or leaked allocations.
Once you identify the correct PID, launch GDB directly from the coredump manager. This bypasses manual file paths and handles decompression automatically.
coredumpctl debug <PID>
# WHY: Extracts the compressed dump from the journal index and feeds it to GDB.
# WHY: Automatically loads the matching executable and sets the architecture correctly.
# WHY: Drops you into a GDB session with the crash state already loaded.
Inside GDB, you are looking at a frozen execution state. Print the call stack to see which function triggered the crash.
bt
# WHY: Stands for backtrace. Prints the call stack from the crash point up to the entry function.
# WHY: Frame 0 is where the violation occurred. Higher frames show the calling chain.
# WHY: If you see ?? or unknown, the binary lacks debug symbols. Install debuginfo packages first.
Navigate to the crashing frame to inspect local variables and memory state.
frame 0
# WHY: Selects the topmost stack frame for inspection.
# WHY: GDB will now show the exact source line or assembly instruction that faulted.
# WHY: Use this to verify whether a pointer is null or an array is out of bounds.
info locals
# WHY: Dumps all local variables in the current frame.
# WHY: Shows uninitialized memory as garbage values or null pointers.
# WHY: Helps you spot which variable caused the segmentation fault.
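Beyond info locals, a few more GDB commands are useful at this point. The names below (ptr, buf) are placeholders; substitute the locals your own frame actually shows:

```gdb
print ptr        # show the value of a suspect pointer; 0x0 confirms a null dereference
print *ptr       # dereference it; GDB answers "Cannot access memory" if it is invalid
print buf[0]@8   # print the first 8 elements of an array via GDB's artificial-array syntax
up               # move one frame toward the caller to see where the bad value came from
```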
Exit the debugger cleanly when you have the information you need.
quit
# WHY: Closes the GDB session and returns you to the shell.
# WHY: Does not modify the original core dump or system state.
# WHY: Safe to run at any point during post-mortem analysis.
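The interactive session above can also be scripted for repeated analysis. A minimal sketch, assuming a hypothetical command file /tmp/bt.gdb: extract the dump first with coredumpctl dump <PID> --output=/tmp/crash.core, then run gdb -batch -x /tmp/bt.gdb /path/to/binary /tmp/crash.core. The command file itself:

```gdb
# /tmp/bt.gdb — batch post-mortem script (hypothetical path)
bt full          # backtrace with local variables for every frame
info registers   # CPU register state at the moment of the fault
quit
```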
Verify the backtrace
A successful backtrace shows numbered frames with function names, file paths, and line numbers. Frame 0 should point to the exact location of the memory violation. If you see readable paths like /usr/lib64/libfoo.so or /home/user/project/src/main.c, the debug symbols are loaded correctly.
If the output shows ?? or unknown, you are missing debuginfo packages. Fedora splits debug symbols into separate packages to save disk space. Install them with dnf debuginfo-install.
sudo dnf debuginfo-install <package-name>
# WHY: Downloads the matching -debuginfo package from the Fedora debug repository.
# WHY: Places symbol tables in /usr/lib/debug/.build-id/ for GDB to locate automatically.
# WHY: The debuginfo-install plugin temporarily enables the matching *-debuginfo repositories, so you rarely need to enable them by hand.
Re-run the coredumpctl debug <PID> command after installing the symbols. The backtrace should now resolve to readable source locations. Compare the line number against your code or the upstream source tree. The fix usually lives one or two frames up the stack, where the bad pointer was created or passed.
Common pitfalls and exact error messages
Empty dump list. You run coredumpctl list and get no output. This usually means core dump capture is disabled for the service or the process ran in a restricted namespace. Check the unit file for LimitCORE=0 and /etc/systemd/coredump.conf for Storage=none. Remove the override or restore the default. Restart the service and reproduce the crash.
coredumpctl list
# No output returned
Missing symbol tables. GDB prints No symbol table is loaded. Use the "file" command. This happens when the binary was compiled without debug info and you have not installed the matching debuginfo package. Run dnf debuginfo-install for the exact package that owns the executable. Use rpm -qf /path/to/binary to find the package name.
Permission denied. You see Permission denied when trying to read dumps. Reading dumps that belong to other users requires root privileges or polkit authorization, so run the command with sudo or ensure your user is in the wheel group. A different error, coredumpctl: command not found, simply means the systemd coredump tooling is not installed in that environment. Container environments often strip coredump access entirely. Run the crash outside the container or configure the runtime to forward dumps.
coredumpctl debug 12345
# Permission denied
Truncated backtrace. GDB shows only a few frames before stopping. This usually means the stack was corrupted or the crash happened in a signal handler. Use info registers to inspect the instruction pointer and stack pointer. Compare them against the expected memory layout. Stack corruption often points to buffer overflows or use-after-free bugs.
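To follow the corruption hint above, inspect the raw registers and stack memory from inside the GDB session. A sketch for x86-64; register names differ on other architectures:

```gdb
info registers rip rsp   # instruction pointer and stack pointer on x86-64
x/16gx $rsp              # hex-dump 16 giant (8-byte) words starting at the stack pointer
info frame               # where GDB believes the current frame's boundaries are
```

Return addresses on the stack that point into writable data rather than code are a classic sign of an overflowed buffer.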
When to use this versus other debugging tools
Use coredumpctl when the crash already happened and you need to inspect the saved memory state. Use gdb --args ./program when you want to attach a debugger before the crash occurs and set breakpoints manually. Use journalctl -xeu <service> when the application runs as a background service and you need boot-time or runtime logs. Use strace -p <PID> when you suspect a system call failure rather than a memory violation. Stay on coredumpctl for post-mortem analysis of user-space applications.
Read the actual error before guessing.