Longhorn Node Disk Space Not Updating on UI After Expansion
Article Number: 000021868
Environment
SUSE Enterprise Storage - Longhorn
Situation
After expanding the disk space on a Kubernetes node, Longhorn fails to recognize and update the new disk capacity. The node.longhorn.io object for the affected node does not reflect the expanded size, even though the disk expansion was successful at the operating system level.
Upon reviewing the longhorn-manager pod logs, the following recurring error is observed:
2025-05-24T10:27:50.565235147+02:00 time="2025-05-24T08:27:50Z" level=error msg="Dropping Longhorn node out of the queue" func=controller.handleReconcileErrorLogging file="utils.go:79" LonghornNode=longhorn-system/node01 controller=longhorn-node error="failed to sync node for longhorn-system/node01: no node name provided to check node down or deleted" node=node01
This error indicates that the node.spec.name field is missing from the respective node.longhorn.io object, preventing the Longhorn node controller from properly syncing the node's status, including its disk information.
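To confirm the symptom, the error can be pulled from the manager pods directly. The following is a sketch that assumes the default app=longhorn-manager pod label used by standard Longhorn installs; adjust the selector if your deployment labels differ.

```shell
# Tail recent longhorn-manager logs and filter for the node-sync error.
# Label selector and --tail value are illustrative defaults.
kubectl logs -n longhorn-system -l app=longhorn-manager --tail=200 \
  | grep "no node name provided"
```

If the affected node's name appears in the matching lines, it is a candidate for the fix described under Resolution.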
Cause
The Longhorn node-controller includes a disk monitor that periodically queries the node's disk statistics and updates the node.status.diskStatus field of the node.longhorn.io object. However, if the node.spec.name field is absent from the node.longhorn.io definition, the controller cannot correctly identify and process the node, leading to sync failures and the inability to reflect updated disk sizes.
Resolution
To resolve this issue, you must manually add the node.spec.name field to the affected node.longhorn.io object.
- Identify the Longhorn Node Object(s): First, list all node.longhorn.io objects in the longhorn-system namespace to identify the affected node(s).
kubectl get node.longhorn.io -n longhorn-system
Example Output:
NAME READY ALLOWSCHEDULING SCHEDULABLE AGE
node01 True true True 46m
node02 True true True 46m
node03 True true True 46m
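To quickly spot which objects are missing the spec.name field, a custom-columns query can be used. This is a sketch; the column names are arbitrary labels, and objects missing the field will show <none> in the SPEC-NAME column.

```shell
# Compare each object's metadata.name against its spec.name.
# A "<none>" under SPEC-NAME indicates the field is absent.
kubectl get node.longhorn.io -n longhorn-system \
  -o custom-columns=NAME:.metadata.name,SPEC-NAME:.spec.name
```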
- Back Up the Node Object: Save a copy of the node.longhorn.io object you intend to modify. Replace <NAME> with the actual name of your Longhorn node (e.g., node01).
kubectl get node.longhorn.io <NAME> -n longhorn-system -o yaml > <NAME>_longhorn_node_backup.yaml
- Edit the Node Object: Open the node.longhorn.io resource for the problematic node. Replace <NAME> with the actual name of your Longhorn node.
kubectl edit node.longhorn.io <NAME> -n longhorn-system
- Add the name Field: Locate the spec section within the YAML. Add or modify the name field under spec so that it matches the metadata.name of the object.
Example:
The change is marked with the comment # This is the change made:
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  finalizers:
  - longhorn.io
  generation: 1
  name: node01
  namespace: longhorn-system
  resourceVersion: "12713"
  uid: 3c11b55d-e0633-47a2f-a525-a2f98e96df66
spec:
  allowScheduling: true
  disks:
    default-disk-697367c0672dc2af:
      allowScheduling: true
      diskDriver: ""
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/longhorn/
      storageReserved: 150265232486
      tags: []
  evictionRequested: false
  instanceManagerCPURequest: 0
  name: node01 # This is the change made
  tags: []
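As a non-interactive alternative to kubectl edit, the same field can be set with a merge patch. This is a sketch assuming the affected node is node01; substitute your node's name in both places.

```shell
# Set spec.name on the node.longhorn.io object without opening an editor.
# "node01" is an example; it must match the object's metadata.name.
kubectl patch node.longhorn.io node01 -n longhorn-system \
  --type merge -p '{"spec":{"name":"node01"}}'
```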
- Save and Verify: Save the file and close the editor to apply the change to the node.longhorn.io object. Once the node.spec.name field is correctly added, the Longhorn manager should automatically begin syncing the node. You should observe that the disk status on the node.longhorn.io object is updated to reflect the expanded disk space.
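After a short interval, the refreshed disk status can be inspected directly. The command below is a sketch assuming node01; it prints the node.status.diskStatus field that the disk monitor updates, per the Cause section above.

```shell
# Dump the disk status reported by the Longhorn node controller.
# Look for the updated capacity figures for each disk entry.
kubectl get node.longhorn.io node01 -n longhorn-system \
  -o jsonpath='{.status.diskStatus}'
```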