
Longhorn Node Disk Space Not Updating on UI After Expansion

Article Number: 000021868

Environment

SUSE Enterprise Storage - Longhorn

Situation

After expanding the disk space on a Kubernetes node, Longhorn fails to recognize and update the new disk capacity. The node.longhorn.io object for the affected node does not reflect the expanded size, even though the disk expansion was successful at the operating system level.

Upon reviewing the longhorn-manager pod logs, the following recurring error is observed:

2025-05-24T10:27:50.565235147+02:00 time="2025-05-24T08:27:50Z" level=error msg="Dropping Longhorn node out of the queue" func=controller.handleReconcileErrorLogging file="utils.go:79" LonghornNode=longhorn-system/node01 controller=longhorn-node error="failed to sync node for longhorn-system/node01: no node name provided to check node down or deleted" node=node01
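If needed, the manager logs can be reviewed with a command along the following lines. This is only a sketch: app=longhorn-manager is the pod label used by default Longhorn deployments and may differ in a customized installation.

kubectl logs -n longhorn-system -l app=longhorn-manager --tail=100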

This error indicates that the node.spec.name field is missing from the respective node.longhorn.io object, preventing the Longhorn node controller from properly syncing the node's status, including disk information.
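Whether the field is actually missing can be checked directly on the object; an empty result indicates that spec.name is not set (node01 below is only a placeholder for the affected node name):

kubectl get node.longhorn.io node01 -n longhorn-system -o jsonpath='{.spec.name}'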

Cause

The Longhorn node-controller includes a disk monitor that periodically queries the node's disk statistics and updates the node.status.diskStatus field of the node.longhorn.io object. However, if the node.spec.name field is absent from the node.longhorn.io definition, the controller cannot correctly identify and process the node, leading to sync failures and the inability to reflect updated disk sizes.
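For reference, the disk statistics maintained by the monitor can be inspected on the object itself. The following is a sketch that assumes node01 as the node name; storageMaximum within diskStatus is the value expected to grow after an expansion:

kubectl get node.longhorn.io node01 -n longhorn-system -o jsonpath='{.status.diskStatus}'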

Resolution

To resolve this issue, you must manually add the node.spec.name field to the affected node.longhorn.io object.

1. Identify the Longhorn Node Object(s): First, list all node.longhorn.io objects in the longhorn-system namespace to identify the affected node(s). A quick check for objects missing spec.name is sketched after the example output below.
kubectl get node.longhorn.io -n longhorn-system

Example Output:

NAME      READY   ALLOWSCHEDULING   SCHEDULABLE   AGE
node01    True    true              True          46m
node02    True    true              True          46m
node03    True    true              True          46m
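To spot objects that are missing spec.name, a custom-columns query such as the one below can help; a blank SPECNAME column points to an affected node (this is an illustrative command, not part of the standard procedure):

kubectl get node.longhorn.io -n longhorn-system -o custom-columns=NAME:.metadata.name,SPECNAME:.spec.name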
2. Back Up the Affected Node Object (Optional but Recommended): Before making any changes, it's good practice to take a YAML backup of the specific node.longhorn.io object you intend to modify. Replace <NAME> with the actual name of your Longhorn node (e.g., node01).

kubectl get node.longhorn.io <NAME> -n longhorn-system -o yaml > <NAME>_longhorn_node_backup.yaml
3. Edit the Longhorn Node Object: Edit the node.longhorn.io resource for the problematic node. Replace <NAME> with the actual name of your Longhorn node.

kubectl edit node.longhorn.io <NAME> -n longhorn-system
4. Add node.spec.name: Locate the spec section within the YAML. Add or modify the name field under spec to match the metadata.name of the object. A non-interactive alternative using kubectl patch is sketched after the example below.

Example:

The change is highlighted by the comment # This is the change made

apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  finalizers:
  - longhorn.io
  generation: 1
  name: node01
  namespace: longhorn-system
  resourceVersion: "12713"
  uid: 3c11b55d-e063-47a2-a525-a2f98e96df66
spec:
  allowScheduling: true
  disks:
    default-disk-697367c0672dc2af:
      allowScheduling: true
      diskDriver: ""
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/longhorn/
      storageReserved: 150265232486
      tags: []
  evictionRequested: false
  instanceManagerCPURequest: 0
  name: node01 # This is the change made
  tags: []
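As an alternative to editing the object interactively, the same change can be applied with kubectl patch. This is only a sketch; replace <NAME> with the actual name of your Longhorn node in both places:

kubectl patch node.longhorn.io <NAME> -n longhorn-system --type merge -p '{"spec":{"name":"<NAME>"}}'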
5. Save and Verify: Save the changes to the node.longhorn.io object. Once the node.spec.name field is correctly added, the Longhorn manager should automatically begin syncing the node. You should observe that the disk status on the node.longhorn.io object is updated to reflect the expanded disk space; a quick verification check is sketched below.
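One way to verify the updated capacity is to check the disk status fields on the object. The command below is only an example; it assumes the storageMaximum and storageAvailable fields under status.diskStatus, and <NAME> as the node name:

kubectl get node.longhorn.io <NAME> -n longhorn-system -o yaml | grep -E 'storageMaximum|storageAvailable'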