
slice bounds out of range - Frequent Ingress Controller restarts #12870

Closed
naveen2112 opened this issue Feb 19, 2025 · 3 comments
Labels
needs-kind, needs-priority, needs-triage

Comments

@naveen2112

Bug Description:
Our Kubernetes cluster has been running for the past 8 months, handling 10,000 to 30,000 requests per second, primarily for a chat module (WebSocket). There has been no sudden change in load, but the Ingress Controller has started restarting frequently with the following error:

panic: runtime error: slice bounds out of range [:65283] with capacity 14918

This issue is affecting production stability, and we are looking for guidance on the possible cause and resolution.
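
For context on what the numbers in the message mean: this panic shape is produced when Go code slices a buffer past its capacity, with the requested upper bound and the buffer capacity reported in the message. The following is a minimal, hypothetical sketch (not the controller's actual code) that reproduces the same message shape:

```go
package main

func main() {
	// Buffer with capacity 14918, as in the logged panic.
	buf := make([]byte, 0, 14918)

	// A length value larger than the capacity, e.g. parsed from a frame header.
	n := 65283

	// Slicing past the capacity panics with:
	// "slice bounds out of range [:65283] with capacity 14918"
	_ = buf[:n]
}
```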

Environment Details:
Ingress Controller Version:

NGINX Ingress Controller
nginx version: nginx/1.25.3
Repository: /~https://github.com/kubernetes/ingress-nginx
Build: 4fb5aac
Release: v1.10.1

Kubernetes Version:
v1.30.1

Steps to Reproduce:
The cluster runs with a stable load (10K–30K requests/sec).
No significant changes were made to the ingress configuration or deployment.
The Ingress Controller pod restarts frequently with a panic error.

Actual Behavior:
The Ingress Controller crashes frequently with a runtime panic caused by an out-of-bounds slice access.

Relevant Logs:
panic: runtime error: slice bounds out of range [:65283] with capacity 14918
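
If the panic originates in a buffering or frame-handling path, a bounds check of the following shape would turn the crash into a recoverable error. This is a hypothetical sketch only; the function name and signature are illustrative and not taken from ingress-nginx:

```go
package main

import "fmt"

// frameSlice is a hypothetical helper: it validates an untrusted length
// against the buffered data before slicing, returning an error instead of
// letting buf[:n] panic when n exceeds the buffer.
func frameSlice(buf []byte, n int) ([]byte, error) {
	if n < 0 || n > len(buf) {
		return nil, fmt.Errorf("frame length %d exceeds buffered %d bytes", n, len(buf))
	}
	return buf[:n], nil
}

func main() {
	buf := make([]byte, 14918)
	if _, err := frameSlice(buf, 65283); err != nil {
		fmt.Println("rejected:", err)
	}
}
```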

Troubleshooting Steps Taken:
Checked cluster resource utilization (CPU/memory) – no anomalies.
No recent configuration changes or version upgrades.
No sudden spike in traffic beyond normal range.
Observed that the issue occurs randomly without external triggers.

@k8s-ci-robot added the needs-triage label on Feb 19, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-kind and needs-priority labels on Feb 19, 2025
@longwuyuan
Contributor

/close

duplicate of #12869

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

/close

duplicate of #12869

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
