LoadBalancer targets too many nodes #506
Comments
For sure 👍 Sounds like a good thing to have, especially for larger clusters. I think the general guidance I gave in #373 (comment) still applies, but feel free to open a PR and ping me for additional questions.

Awesome. Thanks!
@apricote quick question about that. There you wrote:
Why not use the selector that already comes with the LoadBalancer Service object?

```yaml
selector:
  app.kubernetes.io/component: controller
  app.kubernetes.io/instance: ingress-nginx
  app.kubernetes.io/name: ingress-nginx
```

Are there situations where this selector leads to incorrect backends?
The Service Selector selects the Pods that the Service targets. The new annotation will select the Nodes that we add as targets for the Load Balancer. When you create a Service …
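To make the distinction concrete, here is a minimal sketch of how the two mechanisms would sit side by side in a Service manifest. The annotation key `load-balancer.hetzner.cloud/node-selector` and its value are assumptions for illustration based on the proposal discussed here, not a confirmed part of the HCCM API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Assumed/illustrative annotation key: would pick the Nodes (by Node label)
    # that HCCM registers as Load Balancer targets.
    load-balancer.hetzner.cloud/node-selector: "node-role.kubernetes.io/ingress=true"
spec:
  type: LoadBalancer
  # The Service selector matches Pod labels and only decides which Pods receive traffic.
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
```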
Right, my thinking was to have the operator resolve the nodes those pods run on and use them and the node ports as LB backends. That way ingress traffic would only end up on nodes that have ingress pods currently scheduled on them. But given the freedom people have in their K8s networking setups, I guess this makes too many assumptions, and having a dedicated label for the operator provides more flexibility.
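For reference, resolving the Nodes that the ingress Pods currently run on is possible with plain kubectl; a minimal sketch, assuming the ingress-nginx labels quoted above:

```sh
# Print the distinct Nodes that currently host ingress-nginx controller Pods.
kubectl -n ingress-nginx get pods \
  -l app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u
```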
That is outside of the scope of HCCM. We only implement the interfaces that …
Hi there, I feel like I'm doing something very obvious wrong, but I can't seem to figure out what it is, and I'm unsure if I messed up the config or if the cloud controller is having some snafu.

I deployed ingress-nginx as a Deployment with 3 replicas. The LoadBalancer Service object is shown in the following `kubectl -n ingress-nginx get service/ingress-nginx-controller -o yaml` output. Sorry for the long object, I didn't want to leave anything out, but the interesting sections are likely `annotations` and `spec.selector`:

When I check their status, everything looks good:
And if I query that selector from the LoadBalancer spec above it returns exactly the pods I would expect it to return:
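A quick cross-check from the Service's side is to list its EndpointSlices, which show the backends Kubernetes has actually selected; a sketch, assuming the Service name `ingress-nginx-controller` used above:

```sh
# EndpointSlices carry the Pod endpoints that back the Service.
kubectl -n ingress-nginx get endpointslices \
  -l kubernetes.io/service-name=ingress-nginx-controller -o wide
```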
However, when I check the created load balancer, it targets all the nodes of the K8s cluster, not just the three where those nginx pods are running:

So the setup kinda works, but this will become a problem once the cluster has more than the 25 nodes the Load Balancer allows as targets.
When I check the `hcloud-cloud-controller-manager` logs, I see that for some reason it creates the LB with all the nodes of the cluster:

I feel like I'm doing something very obvious wrong. What do I have to do so that the LoadBalancer only targets the nodes of the pods that the Service selector returns?
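For completeness, a sketch for pulling those logs, assuming the default deployment name and namespace from the official HCCM manifests:

```sh
# Follow the hcloud-cloud-controller-manager logs to see which Nodes it registers as LB targets.
kubectl -n kube-system logs deployment/hcloud-cloud-controller-manager --tail=100 -f
```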