
How to remove the worker role from a node managed by Rancher

This document (000021557) is provided subject to the disclaimer at the end of this document.

Environment

Rancher v2.8.x and above. This is applicable to RKE2 and K3s clusters provisioned by Rancher.

Situation

  • A node pool may have been accidentally created with all roles selected and the cluster provisioned, when the intention was to select only the Control Plane and etcd roles in the node pool.
  • In such scenarios, perform the steps in the Resolution below to remove the worker role.

Resolution

STEP 1:

  • Use the "Edit Config" option to edit the cluster via Cluster Management in the Rancher UI.
  • Find the machine pool you would like to change.
  • Uncheck the Worker role and save the cluster config. This removes the worker role from the cluster.
  • However, if "workerRole: true" is still present for that machine pool when you perform "Edit YAML" on the cluster via Cluster Management, set it to "workerRole: false" manually, as in the excerpt below.
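
For reference, the relevant section of the cluster YAML looks roughly like the following. This is a minimal sketch: the pool name "pool1" and the quantity are placeholders, and a real cluster object contains additional fields such as the machine config reference.

apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  rkeConfig:
    machinePools:
      - name: pool1            # placeholder pool name
        quantity: 3            # placeholder node count
        etcdRole: true
        controlPlaneRole: true
        workerRole: false      # set to false to remove the worker role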

STEP 2:

  • Although the machine pools were updated in STEP 1, the worker role still appears when you run the "kubectl get nodes" command.
  • This is because the label "node-role.kubernetes.io/worker=true" is still present on the node.
  • Remove it with the below command (see the example after this list):
kubectl label node <nodeName> node-role.kubernetes.io/worker-
  • After that, the "kubectl get nodes" command no longer shows the worker role on the node.
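
For example, on a hypothetical node named "cp-node-1" (substitute your own node name), the label removal and a quick check would look like this. The trailing "-" on the label key tells kubectl to delete the label rather than set it.

# remove the worker role label; the "-" suffix deletes the label
kubectl label node cp-node-1 node-role.kubernetes.io/worker-

# verify: the ROLES column for this node should no longer include "worker"
kubectl get node cp-node-1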

STEP 3:

  • When only the Control Plane and etcd roles are selected, the cluster does not allow regular pods to be deployed on those nodes; only the control plane components run there.
  • This is enforced by taints that Rancher adds to the node when the Control Plane and etcd roles are selected.
  • These taints must be applied manually on the nodes where the worker role was removed, because they are not applied while the worker role is selected.
  • To apply the taints, run the below command:
kubectl taint node <nodeName> node-role.kubernetes.io/etcd:NoExecute node-role.kubernetes.io/control-plane:NoSchedule
  • This prevents the scheduler from placing regular pods on the Control Plane and etcd nodes. You can verify the taints as shown below.
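
To confirm the taints are in place (again using the hypothetical node name "cp-node-1"), you can inspect the node spec directly:

# list the taints currently set on the node
kubectl get node cp-node-1 -o jsonpath='{.spec.taints}'

Note that the NoExecute effect also evicts any regular pods still running on the node that do not tolerate the taint, so expect existing workloads there to be rescheduled onto other nodes.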

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.