support k8s < v1.19 & watch-ingress-without-class #12794
Conversation
/ok-to-test
kvm2 driver with docker runtime
Times for minikube (PR 12794) ingress: 31.2s 32.2s 32.8s 31.8s 31.7s
Times for minikube start: 49.5s 48.3s 46.5s 46.8s 45.9s

docker driver with docker runtime
Times for minikube start: 22.2s 21.7s 21.2s 21.1s 22.5s
Times for minikube ingress: 34.9s 32.4s 27.9s 31.9s 35.0s

docker driver with containerd runtime
Times for minikube start: 31.3s 43.7s 43.6s 48.1s 43.8s
Times for minikube ingress: 33.4s 37.0s 36.9s 33.4s 28.9s

These are the flake rates of all failed tests.
Too many tests failed - see the test logs for more details. To see the flake rates of all tests by environment, click here.
Force-pushed from a1f3898 to 76c1e79.
kvm2 driver with docker runtime
Times for minikube start: 51.0s 47.9s 48.5s 47.4s 46.4s
Times for minikube ingress: 32.8s 32.3s 32.9s 34.3s 32.3s

docker driver with docker runtime
Times for minikube ingress: 34.9s 35.0s 36.9s 27.9s 35.4s
Times for minikube start: 22.9s 21.6s 22.0s 21.1s 21.9s

docker driver with containerd runtime
Times for minikube (PR 12794) start: 45.1s 43.8s 36.9s 43.4s 43.6s
Times for minikube ingress: 37.4s 37.4s 33.9s 32.9s 19.9s

These are the flake rates of all failed tests.
To see the flake rates of all tests by environment, click here.
	// /~https://github.com/kubernetes/ingress-nginx/blob/0a2ec01eb4ec0e1b29c4b96eb838a2e7bfe0e9f6/deploy/static/provider/kind/deploy.yaml#L328
	"IngressController": "ingress-nginx/controller:v0.49.3@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324",
	// issues: /~https://github.com/kubernetes/ingress-nginx/issues/7418 and /~https://github.com/jet/kube-webhook-certgen/issues/30
	"KubeWebhookCertgenCreate": "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7",
	"KubeWebhookCertgenPatch":  "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7",
}
if cc.CustomAddonImages == nil {
I could be wrong, but @prezha does this mean the custom addon image won't work for ingress? Why are we removing this?
@medyagh in my first attempt to maintain backward compatibility (pr #12325), i used cc.CustomAddonImages to pass compatible images, but only in case those were not already set by the user. that would not work because the logic in assets.SelectAndPersistImages overrides it - it expects to read custom images from the viper/flags (not from the cc):
minikube/pkg/minikube/assets/addons.go
Lines 735 to 747 in 418a9a1
newImages := parseMapString(viper.GetString(config.AddonImages))
for name, image := range newImages {
	if image == "" {
		out.WarningT("Ignoring empty custom image {{.name}}", out.V{"name": name})
		delete(newImages, name)
		continue
	}
	if _, ok := addonDefaultImages[name]; !ok {
		out.WarningT("Ignoring unknown custom image {{.name}}", out.V{"name": name})
	}
}
// Use newly configured custom images.
images = overrideDefaults(addonDefaultImages, newImages)
hence, here i've amended addons.supportLegacyIngress to instead replace the default images with compatible ones if an older k8s is used, ie, without using or affecting the cc in any way - so the custom images a user could have provided via flags should continue to work as before
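for context, a minimal sketch of what that replacement can look like (the helper and field names here are assumptions based on the surrounding discussion, not the exact minikube code):

package addons

import (
	"github.com/blang/semver/v4"

	"k8s.io/minikube/pkg/minikube/assets"
	"k8s.io/minikube/pkg/minikube/config"
	"k8s.io/minikube/pkg/util"
)

// supportLegacyIngress swaps the addon's *default* images for the last
// releases that still support the networking.k8s.io/v1beta1 ingress API
// when the cluster runs Kubernetes < v1.19; custom images passed via
// flags are handled later by assets.SelectAndPersistImages, so they
// keep overriding whatever defaults are set here.
func supportLegacyIngress(addon *assets.Addon, cc config.ClusterConfig) error {
	v, err := util.ParseKubernetesVersion(cc.KubernetesConfig.KubernetesVersion)
	if err != nil {
		return err
	}
	if v.GTE(semver.MustParse("1.19.0")) {
		return nil // current defaults already work on v1.19+
	}
	if addon.Name() == "ingress" {
		addon.Images = map[string]string{
			// last ingress-nginx release that supports k8s < v1.19
			"IngressController": "ingress-nginx/controller:v0.49.3",
			// last kube-webhook-certgen release compatible with that controller
			"KubeWebhookCertgenCreate": "docker.io/jettech/kube-webhook-certgen:v1.5.1",
			"KubeWebhookCertgenPatch":  "docker.io/jettech/kube-webhook-certgen:v1.5.1",
		}
	}
	return nil
}

because only the defaults change, a user-supplied image (eg via the addons images flag) still wins when assets.SelectAndPersistImages merges it over these defaults.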
on a side note, i think the current logic in assets.SelectAndPersistImages mentioned above could be problematic: it could override the custom images that were defined in the first run with the default ones in subsequent runs, if those runs do not repeat the flag providing the initial custom images; the same could happen for custom registries. but if that's an issue at all, it would be another story :)
@prezha Anything in your PR description that follows the pattern fixes #<issue> will automatically close the referenced issue when this PR is merged.
@spowelljr thanks for pointing that out! to the best of my knowledge (ie, based on the issues' descriptions and my attempts to replicate them), i left the issues that i think should be fixed with this pr, removed two "improves/fixes" issues that might be more related to network issues in general, and reworded two "potentially helps/fixes" issues (ie, #11987 and #12189) that can be a subject of other issues as well and therefore should remain open until it's confirmed that they can be closed too
@prezha I think this change looks great! Could you add a new test, outside of the existing addons_test, that starts a new cluster with an "old" k8s version then enables ingress? The code to verify ingress can be shared between the two tests. |
thanks @sharifelgamal i've also added support for ingress-dns and a new test that now covers both addons (for which i had to adapt integration.validateIngressAddon to make it reusable); and since i've also removed the annotation and ingressClassName from the nginx-ingress-v1.yaml testdata, the no-annotation behaviour is tested as well
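a rough sketch of the shape such a test can take (helper names like UniqueProfileName, Target, StartArgs, Run, Minutes, and Cleanup follow the conventions of minikube's test/integration package; this illustrates the approach rather than the exact test added here):

package integration

import (
	"context"
	"os/exec"
	"testing"
)

// TestIngressAddonLegacy starts a cluster on a pre-v1.19 Kubernetes
// release and then enables the ingress addon, reusing the shared
// validation logic from the regular addons test.
func TestIngressAddonLegacy(t *testing.T) {
	profile := UniqueProfileName("ingress-addon-legacy")
	ctx, cancel := context.WithTimeout(context.Background(), Minutes(20))
	defer Cleanup(t, profile, cancel)

	// start with an "old" Kubernetes that only has the v1beta1 ingress API
	args := append([]string{"start", "-p", profile, "--kubernetes-version=v1.18.20"}, StartArgs()...)
	rr, err := Run(t, exec.CommandContext(ctx, Target(), args...))
	if err != nil {
		t.Fatalf("failed to start minikube with args %q: %v", rr.Command(), err)
	}

	// shared with the regular addons test, per the discussion above
	validateIngressAddon(ctx, t, profile)
}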
Force-pushed from b6aca23 to ea3aa4f.
kvm2 driver with docker runtime
Times for minikube start: 49.9s 46.2s 47.8s 46.9s 46.0s
Times for minikube (PR 12794) ingress: 32.8s 30.8s 31.4s 31.2s 31.4s

docker driver with docker runtime
Times for minikube start: 23.2s 21.6s 21.5s 22.2s 20.8s
Times for minikube ingress: 34.4s 35.5s 36.9s 34.9s 35.5s

docker driver with containerd runtime
Times for minikube start: 30.5s 42.5s 26.0s 43.8s 30.1s
Times for minikube ingress: 29.4s 23.9s 47.4s 33.9s 29.4s

These are the flake rates of all failed tests.
To see the flake rates of all tests by environment, click here.
Thanks for doing this!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: prezha, sharifelgamal

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Approvers can indicate their approval by writing /approve in a comment.
in combination with pr #12702, this should restore backward compatibility between the ingress & ingress-dns addons and older k8s versions (<1.19), and also fix the
--watch-ingress-without-class
flag that was introduced in the ingress v1 api (and is not recognised in v1beta1); a hedged sketch of this version gate follows the fixes list below

fixes #12793
fixes #12636
fixes #12536
fixes #12511
fixes #12152
improves #11987
improves #12189
etc. (potentially improves/helps/fixes some other issues as well)
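as an illustration of that version gate (the package and function name here are assumptions, not minikube's exact code): the --watch-ingress-without-class controller flag only exists for the ingress v1 api, so clusters below v1.19 get the legacy images and must not get the flag:

package addons

import "github.com/blang/semver/v4"

// watchIngressWithoutClassSupported reports whether the cluster's
// Kubernetes version is new enough (>= v1.19) for the ingress v1 API,
// and therefore for the controller's --watch-ingress-without-class flag.
func watchIngressWithoutClassSupported(kubernetesVersion string) bool {
	// ParseTolerant accepts a leading "v", as in "v1.18.20"
	v, err := semver.ParseTolerant(kubernetesVersion)
	if err != nil {
		return false
	}
	return v.GTE(semver.MustParse("1.19.0"))
}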
btw, should it be needed, i believe a similar approach could be used for other addons as well - ie, during runtime, if a user tries to enable an addon with an older k8s version, replace the default addon image[s] with older/compatible ones (instead of "simulating" user-provided custom images - as i previously tried; that did not work, as it conflicts with the expected behaviour in assets.SelectAndPersistImages and would break)