# Proxmox: Removing a Ghost Node from the Web UI
This is a follow-on to *Proxmox Cluster Going Sluggish? Your Offline Node Has a Stale Config*. That post covers nodes that are misbehaving but still real. This one covers nodes that don't exist at all.
After sorting out a stale corosync config on a returning node, I noticed the web UI was still showing an extra node — PVE9 — on every host in the cluster. It wasn't causing any problems, just sitting there looking wrong. Here's how to get rid of it.
## What a Ghost Node Is
A ghost node is a stale directory in /etc/pve/nodes/ with no corresponding corosync membership. The Proxmox web UI reads from the shared cluster filesystem, not from corosync directly — so a leftover directory shows up as a node even if the machine is long gone. It can appear after a node was removed uncleanly, rebuilt under a different name, or never properly decommissioned.
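That mismatch — directories in `/etc/pve/nodes/` with no corosync membership — can be checked mechanically. A minimal sketch; the `find_ghosts` helper and its two-list interface are mine, not a Proxmox tool:

```shell
# Hypothetical helper: given a space-separated list of corosync member
# names and a list of directory names under /etc/pve/nodes/, print the
# directories that have no matching member — the ghost candidates.
find_ghosts() {
  members="$1"; dirs="$2"
  for d in $dirs; do
    case " $members " in
      *" $d "*) ;;        # present in corosync: a real node
      *) echo "$d" ;;     # no membership: candidate ghost
    esac
  done
}

# On a live cluster you would feed it real data, roughly:
#   find_ghosts "<names from pvecm nodes>" "$(ls /etc/pve/nodes/)"
# (check your pvecm output format before scripting the first argument)
```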
## Check That It's Actually a Ghost
First confirm the node isn't just offline — check corosync:
```
pvecm status
```

Output (membership section):

```
Membership information
----------------------
    Nodeid      Votes Name
0x00000001          3 10.140.3.10
0x00000002          1 10.140.3.80
0x00000003          1 10.140.3.70
0x00000004          1 10.140.3.82
0x00000006          1 10.140.3.20
```
If it's not in this list, it's a ghost. Confirm the directory exists:
```
grep pve9 /etc/pve/corosync.conf   # nothing
ls /etc/pve/nodes/
# pve1  pve2  pve7  pve8  pve9  xenon
```
## Check for VMs Before Deleting
The node directory may contain VM or container configs:
```
ls /etc/pve/nodes/pve9/qemu-server/
ls /etc/pve/nodes/pve9/lxc/
```
If there are configs, check them before doing anything:
```
cat /etc/pve/nodes/pve9/qemu-server/102.conf
```
If the node is truly gone, any disks listed as `local-lvm:vm-XXX-disk-Y` were on that node's local storage and are already inaccessible. You won't be able to recover them. Make sure you're happy with that before proceeding.
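If you want a quick inventory of what's being written off, you can filter the configs down to disk lines on local storage. A sketch, assuming default Proxmox disk keys and storage names; the `local_disks` helper is mine:

```shell
# Filter VM config lines (from stdin) down to disk entries that point
# at local storage. The disk key names and the local-* storage IDs
# follow Proxmox defaults; adjust the patterns for your storage setup.
local_disks() {
  grep -E '^(scsi|virtio|ide|sata|efidisk|tpmstate)[0-9]+:' |
    grep -E 'local(-lvm|-zfs)?:'
}

# Usage against the ghost node's configs:
#   cat /etc/pve/nodes/pve9/qemu-server/*.conf | local_disks
```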
## Remove It
Try the proper route first:
```
pvecm delnode pve9
```

If the node was never in corosync you'll get:

```
Node/IP: pve9 is not a known host of the cluster.
```
In that case, remove the directory directly:
```
rm -rf /etc/pve/nodes/pve9
```
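If you'd rather keep a copy of those configs than rely on your earlier judgment, archive the directory before deleting it. A sketch — the helper name and the backup path are placeholders, not anything Proxmox provides:

```shell
# Archive a node directory, then remove it, so the VM/CT configs
# survive even if you change your mind later. The ${var:?} expansions
# abort the script rather than ever run rm -rf on an empty path.
backup_and_remove() {
  base="$1"; node="$2"; dest="$3"
  tar -C "$base" -czf "$dest" "$node" &&
    rm -rf "${base:?}/${node:?}"
}

# On a real cluster member, something like:
#   backup_and_remove /etc/pve/nodes pve9 /root/pve9-node-backup.tar.gz
```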
The deletion replicates across the cluster filesystem immediately. Verify:
```
ls /etc/pve/nodes/
# pve1  pve2  pve7  pve8  xenon
```
Reload the web UI — the ghost node is gone.
## Quick Reference
| Command | Purpose |
|---|---|
| `pvecm status` | Confirm node isn't in corosync |
| `ls /etc/pve/nodes/` | List all node directories |
| `ls /etc/pve/nodes/<name>/qemu-server/` | Check for VM configs |
| `ls /etc/pve/nodes/<name>/lxc/` | Check for container configs |
| `pvecm delnode <name>` | Proper removal (works if node was in corosync) |
| `rm -rf /etc/pve/nodes/<name>` | Manual removal for true ghost nodes |