Ingress controller dropping websocket connections when performing backend reload #2461
To my knowledge, this is how NGINX behaves on reloads. That said, you can enable dynamic mode using |
Or use the |
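The dynamic mode mentioned above is enabled with a controller flag; a minimal sketch of the relevant fragment of the controller Deployment, assuming an ingress-nginx version where the flag exists (it was feature-gated around the 0.14 era):

```yaml
# Fragment of the ingress-nginx controller Deployment spec (illustrative).
# With --enable-dynamic-configuration, endpoint-only changes are applied
# in-process via Lua instead of a full nginx reload, so upstream churn
# no longer drops long-lived connections such as WebSockets.
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
    args:
      - /nginx-ingress-controller
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
      - --enable-dynamic-configuration
```

Note that changes to the NGINX configuration itself (new server blocks, annotations, TLS) still require a reload; dynamic mode only avoids reloads for endpoint changes.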
Thanks for the reply @JordanP and @ElvinEfendi, |
OK, can you tell us what you are changing? Maybe you are deploying a new version of your app?
This is not an issue with NGINX in particular; you will face the same issue with other load balancers. The issue here is that you need to drain the connections before replacing the old pods (once Kubernetes removes the pod from ready, we cannot do anything about it). Please check #322 (comment). Also, please adjust the value of worker-shutdown-timeout |
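worker-shutdown-timeout is set through the controller's ConfigMap; a minimal sketch (the ConfigMap name, namespace, and value are illustrative and must match your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # must match the --configmap flag on the controller
  namespace: ingress-nginx
data:
  # How long old worker processes keep serving in-flight connections
  # after a reload before being terminated. A larger value gives
  # WebSocket clients more time, at the cost of old workers lingering.
  worker-shutdown-timeout: "300s"
```

This does not prevent the eventual disconnect; it only postpones it, so clients still need reconnect logic.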
That is what happens now. |
@aledbf actually, because the ingress controller is a shared, cluster-wide service (lots of pods from different namespaces are reverse-proxied through it), every change event it picks up causes a reload and therefore disconnects our WebSockets. |
You may want to read http://danielfm.me/posts/painless-nginx-ingress.html and especially the "Ingress Classes To The Rescue" section. |
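The "Ingress Classes" approach splits traffic across separate controller deployments, so churn in one class does not reload the controller serving another. A sketch of the Ingress side (the class name `websockets` and all resource names are made up for illustration; the matching controller is started with `--ingress-class=websockets`):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ws
  annotations:
    # Only the controller started with --ingress-class=websockets
    # picks this Ingress up; controllers with other classes ignore it,
    # so their reloads no longer affect these connections.
    kubernetes.io/ingress.class: "websockets"
spec:
  rules:
    - host: ws.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: echo-ws
              servicePort: 80
```

Dedicating a small controller deployment to WebSocket-heavy workloads isolates them from reloads caused by unrelated namespaces.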
For future reference ... setting |
just want to note that |
increases client_body_buffer_size and worker_shutdown_timeout to handle websockets disconnection and reconnection more gracefully. See: kubernetes/ingress-nginx#2461
* Ingress-Nginx: Upgrade [Trello](https://trello.com/c/M1snktNZ). Relates to [PR](/~https://github.com/ministryofjustice/analytics-platform-helm-charts/pull/197). Increased ssl-session and proxy-read timeouts to prevent `nginx` from closing websocket connections to frontend tools. Added a default SSL cert argument. We are beginning to use [cert-manager](/~https://github.com/jetstack/cert-manager), which stores the resulting TLS certs in secrets. Kubernetes secrets are namespace-scoped, so without a default TLS cert provided to nginx-ingress, we would need to ensure that the secret existed in every namespace containing an ingress object that needed to use it. This also introduces an awkward explicit dependency: we have to ensure the certificate exists before we deploy nginx-ingress, and then ensure the Kubernetes secret's name matches what we have provided in the config. So, in effect, the nginx-ingress chart has a dependency on the cert-manager-resources chart.
* Update nginx-ingress config for alpha and dev: increases client_body_buffer_size and worker_shutdown_timeout to handle websockets disconnection and reconnection more gracefully. See: kubernetes/ingress-nginx#2461
* Run 3 nginx-ingress replicas in alpha env
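The settings described in the commit above map onto the controller's ConfigMap; a hedged sketch, with names, namespaces, and values chosen purely for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-read-timeout: "3600"       # seconds; keeps idle WebSocket proxies open longer
  ssl-session-timeout: "10m"       # how long TLS session parameters are reusable
  client-body-buffer-size: "64k"   # larger buffer for request bodies
  worker-shutdown-timeout: "300s"  # grace period for old workers after a reload
```

The default certificate is supplied as a controller argument, e.g. `--default-ssl-certificate=<namespace>/<secret>`, which is what lets a single cert-manager-issued secret serve ingresses in every namespace.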
NGINX Ingress controller version: 0.12.0 and 0.14.0
Kubernetes version (use `kubectl version`): v1.8.0
Environment: AWS with ELB
Kernel (use `uname -a`): 4.4.0-119-generic x86_64 GNU/Linux
What happened: Ingress controller dropping websocket connections when performing backend reload
What you expected to happen: WebSockets should remain connected to the target server
How to reproduce it (as minimally and precisely as possible):
Load a simple websocket service into k8s and create an ingress rule for it
Open a websocket client and connect (for example, https://websocket.org/echo.html)
Cause the ingress controller to perform a backend reload (for example, add or delete an ingress rule)
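The steps above can be sketched with a minimal manifest for the first step (the image-backed Service and all names are illustrative; any WebSocket echo server works):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ws-echo
spec:
  selector:
    app: ws-echo        # assumes a Deployment labeled app: ws-echo exists
  ports:
    - port: 80
      targetPort: 8080  # port the echo server listens on inside the pod
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ws-echo
spec:
  rules:
    - host: ws-echo.example.com
      http:
        paths:
          - backend:
              serviceName: ws-echo
              servicePort: 80
```

Connect a client to `ws://ws-echo.example.com/`, then apply any unrelated Ingress change; the resulting reload drops the open socket under the default configuration.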
Anything else we need to know:
I verified this with versions 0.12 and 0.14.