How to run multiple ingress controllers
This document (000020160) is provided subject to the disclaimer at the end of this document.
Environment
Rancher, Kubernetes
Situation
Why use multiple ingress controllers?
When a cluster has a large number of ingresses and related workloads, a single ingress controller can become a bottleneck for both throughput and reliability. In these scenarios it is recommended to shard ingresses across multiple ingress controllers.
Requirements
- A Kubernetes cluster created by Rancher v2.x or RKE
- A Linux cluster (Windows is currently not supported)
- Helm installed and configured
Overview
At a high level, the process for sharding ingresses is to deploy one or more extra ingress controllers and logically separate your ingresses so that load is split evenly between the controllers. This separation is handled through annotations on the ingresses. When an nginx-ingress-controller pod starts up with an ingressClass set, it will only satisfy ingresses annotated with that same ingressClass. This allows you to run as many ingress controllers as needed.
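Under the hood, the chart passes the class to the controller binary via its --ingress-class flag. A minimal sketch of the relevant container spec that the chart generates (the image tag and class name here are illustrative):
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0   # example tag
    args:
      - /nginx-ingress-controller
      - --ingress-class=ingress-nginx-2   # only ingresses annotated with this class are satisfied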
Creating extra nginx-ingress-controller charts
It is recommended to use the community nginx-ingress helm chart to install the extra ingress-controllers with NodePort services.
This deployment method allows you to run multiple ingress controllers on a single node, since the controllers do not compete for host ports. You will need to route traffic to the correct ingress controller NodePorts through an external load balancer.
Deploy a second default backend and ingress-controller from the nginx-ingress helm chart with the following values:
- controller.ingressClass - a unique name for the ingress class, such as ingress-nginx-2
- controller.service.type=NodePort
- controller.service.nodePorts.http - the NodePort between 30000-32767 you want to expose for http traffic. Optional; if not defined, one is randomly assigned
- controller.service.nodePorts.https - the NodePort between 30000-32767 you want to expose for https traffic. Optional; if not defined, one is randomly assigned
- controller.kind=DaemonSet
For more configuration options, see the chart readme.
An example DaemonSet install would be:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install nginx-ingress-second -n ingress-nginx stable/nginx-ingress --set controller.ingressClass="ingress-class-2" --set controller.service.type=NodePort --set controller.kind=DaemonSet
This will create an ingress-nginx DaemonSet and service. This ingress controller will handle any ingress annotated with kubernetes.io/ingress.class: ingress-class-2.
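Equivalently, the same settings can be kept in a values file instead of --set flags (a sketch; the file name and NodePort numbers are illustrative, and the ports are optional):
# nginx-ingress-values.yaml
controller:
  ingressClass: ingress-class-2
  kind: DaemonSet
  service:
    type: NodePort
    nodePorts:
      http: 30155
      https: 30636
helm install nginx-ingress-second -n ingress-nginx stable/nginx-ingress -f nginx-ingress-values.yaml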
Sharding Ingresses
It is recommended to shard (split) your ingresses in a way that evenly splits load and configuration size between ingress controllers.
Sharding in this way means changing DNS and ingress hosts so that traffic for ingresses is sent to the correct ingress controllers, typically through an external load balancer.
The process for sharding ingresses is to tag each ingress with the ingressClass of the ingress controller you want it routed through. For example (the host and backend in the spec below are illustrative):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-1-ingress
  annotations:
    kubernetes.io/ingress.class: "ingress-class-2"
spec:
  rules:
    - host: app-1.example.com
      http:
        paths:
          - backend:
              serviceName: app-1-service
              servicePort: 80
Once annotated with an ingressClass, these ingresses are handled only by the ingress controller configured with that ingressClass.
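An existing ingress can also be retagged without editing its manifest (the ingress name here is from the example above):
kubectl annotate ingress app-1-ingress kubernetes.io/ingress.class=ingress-class-2 --overwrite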
In the default configuration, the Rancher-provided nginx-ingress-controller will only handle ingresses that either have the default ingress.class annotation of nginx or have no ingress.class annotation at all.
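To audit which class each ingress carries, you can list the annotation across all namespaces (the custom-columns expression escapes the dots inside the annotation key):
kubectl get ingress --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.metadata.annotations.kubernetes\.io/ingress\.class'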
Next steps
From here it is a matter of ensuring that the traffic for each ingress is routed to the correct NodePort on the nodes the DaemonSet runs on.
If you did not specify a NodePort when deploying the chart, you can determine the port that was assigned by describing the service that was created:
$ kubectl describe svc -n ingress-nginx nginx-ingress-second
Name:                     nginx-ingress-second-controller
Namespace:                ingress-nginx
Labels:                   app=nginx-ingress
                          chart=nginx-ingress-1.35.0
                          component=controller
                          heritage=Helm
                          release=nginx-ingress-second
Annotations:              field.cattle.io/publicEndpoints:
                            [{"addresses":["13.210.157.241"],"port":30155,"protocol":"TCP","serviceName":"ingress-nginx:nginx-ingress-second-controller","allNodes":tr...
Selector:                 app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress-second
Type:                     NodePort
IP:                       10.43.139.23
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30155/TCP
Endpoints:                <none>
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30636/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
In this example, the service is exposed on every node on port 30155 for http and port 30636 for https.
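As a final step, point your external load balancer at those ports. A minimal sketch for an external NGINX load balancer (the node IPs and hostname are illustrative, and the 30155 port is the example value above; substitute your own):
# Forward HTTP traffic for hosts sharded to ingress-class-2
# to the second controller's NodePort on the cluster nodes.
upstream ingress_class_2_http {
    server 10.0.0.10:30155;   # nodes running the second DaemonSet
    server 10.0.0.11:30155;
}
server {
    listen 80;
    server_name app-1.example.com;
    location / {
        proxy_pass http://ingress_class_2_http;
        proxy_set_header Host $host;   # preserve the Host header for ingress routing
    }
}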
Status
Top Issue
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.