How to configure Calico in BGP mode on RKE2
Article Number: 000022040
Environment
- A Rancher-provisioned or standalone RKE2 cluster
- Calico is configured as the CNI using the supplied rke2-calico chart
Procedure
When creating an RKE2 cluster with Calico as the CNI, the default configuration uses VXLAN mode. You can follow the steps below to switch it to BGP mode; this is best done during cluster creation.
Note: making large changes to Calico, such as migrating to BGP or configuring peering, after the cluster has been created may result in a temporary loss of pod network connectivity. It is highly recommended to make such changes during cluster creation or during a maintenance window.
Additionally, these steps assume no other settings change; in particular, the CIDR block should remain the same throughout the lifecycle of the cluster.
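Before making any changes, you can confirm the current encapsulation mode, assuming kubectl access to the cluster; on a default VXLAN install the pool will typically show a vxlanMode of Always:
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml | grep -iE "vxlanMode|ipipMode"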
Set the Calico mode to BGP - A Rancher-provisioned RKE2 cluster
When creating the cluster in Rancher, in the Add-on: Calico section under Cluster Configuration, modify the values according to the configuration below.
On an existing cluster, instead go to Cluster Management, select the downstream RKE2 cluster you want to modify, then choose Edit Config → Add-on: Calico.
...
installation:
  calicoNetwork:
    bgp: Enabled
    ipPools:
    # Modify the CIDR based on the current CIDR in use
    - cidr: 10.42.0.0/16
      encapsulation: None
...
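After the cluster finishes updating, you can confirm the change reached the Calico operator; the following should print Enabled, assuming kubectl access to the downstream cluster:
kubectl get installations.operator.tigera.io default -o jsonpath='{.spec.calicoNetwork.bgp}{"\n"}'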
Set the Calico mode to BGP - A standalone RKE2 cluster
You can configure it by creating a HelmChartConfig file on every RKE2 server node in the cluster:
cat > /var/lib/rancher/rke2/server/manifests/rke2-calico-config.yaml <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-calico
  namespace: kube-system
spec:
  valuesContent: |-
    installation:
      calicoNetwork:
        bgp: Enabled
        ipPools:
        # Modify the CIDR based on the current CIDR in use
        - cidr: 10.42.0.0/16
          encapsulation: None
EOF
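The RKE2 helm-controller should pick up the file and re-run the chart install. As a quick check, assuming kubectl access to the cluster:
kubectl -n kube-system get helmchartconfig rke2-calico
kubectl -n kube-system get job helm-install-rke2-calico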
On an existing cluster, this can be configured with a file on each server node (as above), or by creating a HelmChartConfig object in the cluster (below). It is important to use one approach or the other, creating either a HelmChartConfig file or object, not both:
cat <<EOF | kubectl apply -f -
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-calico
  namespace: kube-system
spec:
  valuesContent: |-
    installation:
      calicoNetwork:
        bgp: Enabled
        ipPools:
        # Modify the CIDR based on the current CIDR in use
        - cidr: 10.42.0.0/16
          encapsulation: None
EOF
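Once the chart has reconciled, you can verify that the operator picked up the new settings; the output should show bgp set to Enabled and an IP pool with encapsulation None:
kubectl get installations.operator.tigera.io default -o jsonpath='{.spec.calicoNetwork}{"\n"}'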
Additional steps for existing clusters
Recreate the default IPv4 IPPool
If switching the mode to BGP on an existing cluster, the tigera-operator pod may show the following logs:
{"level":"error","ts":"2025-09-09T04:29:36Z","logger":"controller_ippool","msg":"Unable to modify IP pools while Calico API server is unavailable","Request.Namespace":"","Request.Name":"periodic-5m0s-reconcile-event","reason":"ResourceNotReady","stacktrace":"github.com/tigera/operator/pkg/controller/status.(*statusManager).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/status/status.go:356\ngithub.com/tigera/operator/pkg/controller/ippool.(*Reconciler).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/ippool/pool_controller.go:325\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.3/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.3/pkg/internal/controller/controller.go:314\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.3/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.3/pkg/internal/controller/controller.go:226"}
To resolve this, you need to manually recreate the default-ipv4-ippool, first backing it up and then deleting it so the operator can recreate it:
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -oyaml > default-ipv4-ippool-backup.yaml
kubectl delete ippools.crd.projectcalico.org default-ipv4-ippool
After deletion, the tigera-operator pod will recreate it, and the newly created default-ipv4-ippool will no longer have any VXLAN or IPIP-related configuration. It should look like:
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  labels:
    app.kubernetes.io/managed-by: tigera-operator
  name: default-ipv4-ippool
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 26
  cidr: 10.42.0.0/16
  natOutgoing: true
  nodeSelector: all()
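You can confirm the recreated pool no longer enables encapsulation; the following should print nothing (or only Never values):
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml | grep -iE "vxlanMode|ipipMode"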
Recreate the calico-node pods to ensure that each pod uses the new configuration
kubectl -n calico-system delete pod -l k8s-app=calico-node
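Then wait for the DaemonSet to become ready again before continuing:
kubectl -n calico-system rollout status daemonset/calico-node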
Once all pods are running, check whether any tunnel interfaces still exist on each node. If residual tunnel interfaces remain, you can try manually deleting them or rebooting the host:
ip link show | grep -E "tunl0|vxlan"
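As a sketch of the manual cleanup, run the following on each affected host (interface names can vary, and tunl0 is a kernel fallback device that often cannot be deleted, only brought down):
ip link delete vxlan.calico
ip link set tunl0 down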
Check BGP sessions are established
The calicoctl CLI can be used to verify that BGP sessions between nodes in the cluster have been established. You can download calicoctl from GitHub: https://github.com/projectcalico/calico
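A minimal sketch for fetching it, assuming a Linux amd64 node and Calico v3.28.x (adjust the version and architecture to match your cluster):
curl -L https://github.com/projectcalico/calico/releases/download/v3.28.2/calicoctl-linux-amd64 -o /usr/local/bin/calicoctl
chmod +x /usr/local/bin/calicoctl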
root@test-0:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+-------------+
| 172.16.16.143 | node-to-node mesh | up | 04:38:28 | Established |
| 172.16.16.148 | node-to-node mesh | up | 04:38:29 | Established |
| 172.16.16.149 | node-to-node mesh | up | 04:38:29 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Optional: Integrate with a BGP router
Note that integrating Calico’s BGP mode with other peers is optional and relies on BGP support from the underlying physical network and devices. It is recommended to check with the maintainers of your network infrastructure if unsure.
Create a BGPConfiguration and a BGPPeer:
cat <<EOF | kubectl apply -f -
apiVersion: crd.projectcalico.org/v1
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  # BGP AS Number used by Calico
  asNumber: 64512
---
apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: peer-to-bgp-router
spec:
  # IP and AS Number of the BGP Router
  asNumber: 64512
  peerIP: 172.16.16.140
EOF
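You can confirm both objects were created with:
kubectl get bgpconfigurations.crd.projectcalico.org,bgppeers.crd.projectcalico.org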
After creation, check the session status using calicoctl; when the peer with type global shows Established, the session to the router has been successfully set up:
root@test-0:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+-------------+
| 172.16.16.143 | node-to-node mesh | up | 04:38:28 | Established |
| 172.16.16.148 | node-to-node mesh | up | 04:38:29 | Established |
| 172.16.16.149 | node-to-node mesh | up | 04:38:29 | Established |
| 172.16.16.140 | global | up | 04:59:01 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Check pod-to-pod networking across nodes
To test whether pod-to-pod networking across nodes is functioning after the switch, Rancher's overlay network test can be performed. Alternatively, the manual example below uses curl requests from one pod to other pods running a web server; successful responses indicate connectivity.
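The example below assumes a simple nginx Deployment spread across the nodes, which can be created with something like:
kubectl create deployment nginx --image=nginx --replicas=4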
root@test-0:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-f69bc9b4f-7h56l 1/1 Running 0 12s 10.42.22.91 test-0 <none> <none>
nginx-f69bc9b4f-hsb82 1/1 Running 0 19s 10.42.179.17 test-4 <none> <none>
nginx-f69bc9b4f-lb2ml 1/1 Running 0 12s 10.42.5.197 test-1 <none> <none>
nginx-f69bc9b4f-xrtht 1/1 Running 0 12s 10.42.139.194 test-3 <none> <none>
root@test-0:~# kubectl exec -it nginx-f69bc9b4f-7h56l -- curl -I 10.42.179.17
HTTP/1.1 200 OK
Server: nginx/1.27.2
Date: Tue, 09 Sep 2025 04:46:23 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Connection: keep-alive
ETag: "66fd630f-267"
Accept-Ranges: bytes
root@test-0:~# kubectl exec -it nginx-f69bc9b4f-7h56l -- curl -I 10.42.5.197
HTTP/1.1 200 OK
Server: nginx/1.27.2
Date: Tue, 09 Sep 2025 04:46:26 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Connection: keep-alive
ETag: "66fd630f-267"
Accept-Ranges: bytes
root@test-0:~# kubectl exec -it nginx-f69bc9b4f-7h56l -- curl -I 10.42.139.194
HTTP/1.1 200 OK
Server: nginx/1.27.2
Date: Tue, 09 Sep 2025 04:46:29 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Connection: keep-alive
ETag: "66fd630f-267"
Accept-Ranges: bytes
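As a final check, the routing table on each node should now show routes to other nodes' pod CIDR blocks learned over BGP (proto bird) with no tunnel device, for example:
ip route | grep bird
# e.g. 10.42.179.0/26 via 172.16.16.148 dev eth0 proto bird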