Enabling 'ingress' returned an error #12152

Closed
azamaschikov opened this issue Aug 6, 2021 · 5 comments · Fixed by #12794
Labels
addon/ingress kind/support Categorizes issue or PR as a support question. triage/duplicate Indicates an issue is a duplicate of other open issue.

Comments

@azamaschikov

azamaschikov commented Aug 6, 2021

Hello! I'm trying to set up Ansible AWX on Minikube in a VM, and when I enable the ingress addon I get the following error:

Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]

Full output of minikube start command:

[minikube@CentOS-7-template root]$ minikube start  --cni=flannel --install-addons=true --addons=ingress --kubernetes-version=stable --memory=6g --cpus=4
😄  minikube v1.22.0 on Centos 7.8.2003 (amd64)
✨  Automatically selected the docker driver
❗  Your cgroup does not allow setting memory.
    ▪ More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.21.2 preload ...
    > preloaded-images-k8s-v11-v1...: 502.14 MiB / 502.14 MiB  100.00% 61.58 Mi
    > gcr.io/k8s-minikube/kicbase...: 361.09 MiB / 361.09 MiB  100.00% 23.08 Mi
🔥  Creating docker container (CPUs=4, Memory=6144MB) ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Flannel (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎  Verifying ingress addon...
❗  Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Full output of minikube logs command:

[minikube_logs.txt](/~https://github.com/kubernetes/minikube/files/6947415/minikube_logs.txt)

Full output of failed command with -alsologtostderr option:

[minikube_addons.log](/~https://github.com/kubernetes/minikube/files/6947417/minikube_addons.log)

Full output of minikube kubectl -- get nodes

NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   38m   v1.21.2

Full output of minikube kubectl -- get pods -A

minikube@CentOS-7-template ~]$ minikube kubectl -- get pods -A
NAMESPACE       NAME                                        READY   STATUS                  RESTARTS   AGE
ingress-nginx   ingress-nginx-admission-create-mp4qh        0/1     ContainerCreating       0          38m
ingress-nginx   ingress-nginx-admission-patch-gz48r         0/1     ContainerCreating       0          38m
ingress-nginx   ingress-nginx-controller-59b45fb494-hvpcd   0/1     ContainerCreating       0          38m
kube-system     coredns-558bd4d5db-882sd                    0/1     ContainerCreating       0          38m
kube-system     etcd-minikube                               1/1     Running                 0          38m
kube-system     kube-apiserver-minikube                     1/1     Running                 0          38m
kube-system     kube-controller-manager-minikube            1/1     Running                 0          38m
kube-system     kube-flannel-ds-amd64-t4qww                 0/1     Init:ImagePullBackOff   0          38m
kube-system     kube-proxy-gz7mr                            1/1     Running                 0          38m
kube-system     kube-scheduler-minikube                     1/1     Running                 0          38m
kube-system     storage-provisioner                         1/1     Running                 0          38m

My environment is as follows:

  1. Minikube version:
minikube@CentOS-7-template root]$ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
  2. kubectl version:
[minikube@CentOS-7-template root]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
  3. Docker version:
[minikube@CentOS-7-template root]$ docker version
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:49 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:54:13 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  4. Operating system and kernel version:
[root@CentOS-7-template ~]# cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"


[root@CentOS-7-template ~]# uname -a
Linux CentOS-7-template 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

What did I try before opening this issue?

  • Deploying on a different operating system (Ubuntu 20 and now CentOS 7);
  • Deleting the Minikube cluster and deploying it again:
minikube delete
minikube start
minikube addons enable ingress
  • Disabling and re-enabling the ingress addon (after Minikube is deployed):
minikube addons disable ingress
minikube addons enable ingress
  • Installing older versions of kubectl and minikube (roughly as sketched below).
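For reference, a minimal sketch of how an older release can be pinned; the version numbers below are just examples, not the exact ones I tried:

# example: pin an older minikube release (version is a placeholder)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.20.0/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/minikube

# example: pin a matching older kubectl release (version is a placeholder)
curl -LO https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
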
@spowelljr
Copy link
Member

Hi @azamaschikov, thanks for reporting your issue with minikube!

This ingress addon error has recently become a frequent issue among users. We have a main issue tracking this problem, #10544, so please follow along there for updates. The current workaround is running minikube delete --all --purge. However, to help with debugging, would you mind first running minikube delete --all and checking whether that fixes it before running with --purge? That will help us know whether the issue is isolated to the config. Thanks!
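
Roughly, the sequence I'm suggesting (the start/enable steps in between are implied, nothing new here):

minikube delete --all
minikube start
minikube addons enable ingress
# and only if ingress still fails to enable:
minikube delete --all --purge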

@spowelljr spowelljr added addon/ingress kind/support Categorizes issue or PR as a support question. triage/duplicate Indicates an issue is a duplicate of other open issue. labels Aug 6, 2021
@azamaschikov
Author

azamaschikov commented Aug 7, 2021

Hi, @spowelljr. Thank you for your reply!

I ran the delete command:

[minikube@CentOS-7-template ~]$ minikube delete --all
🔥  Deleting "minikube" in docker ...
🔥  Removing /home/minikube/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles

Then I ran minikube start again:

[minikube@CentOS-7-template ~]$ minikube start
😄  minikube v1.22.0 on Centos 7.8.2003 (amd64)
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Then I tried to enable the ingress addon and got the same error:

[minikube@CentOS-7-template root]$ minikube addons enable ingress --alsologtostderr
I0807 12:55:58.073450  228946 out.go:286] Setting OutFile to fd 1 ...
I0807 12:55:58.073678  228946 out.go:338] isatty.IsTerminal(1) = true
I0807 12:55:58.073691  228946 out.go:299] Setting ErrFile to fd 2...
I0807 12:55:58.073698  228946 out.go:338] isatty.IsTerminal(2) = true
I0807 12:55:58.073842  228946 root.go:312] Updating PATH: /home/minikube/.minikube/bin
I0807 12:55:58.074545  228946 addons.go:59] Setting ingress=true in profile "minikube"
I0807 12:55:58.074570  228946 addons.go:135] Setting addon ingress=true in "minikube"
I0807 12:55:58.074836  228946 host.go:66] Checking if "minikube" exists ...
I0807 12:55:58.075568  228946 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0807 12:55:58.126846  228946 out.go:165]     ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0807 12:55:58.129045  228946 out.go:165]     ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0807 12:55:58.132623  228946 out.go:165]     ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
I0807 12:55:58.132690  228946 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
I0807 12:55:58.132734  228946 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
I0807 12:55:58.132822  228946 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 12:55:58.173454  228946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49187 SSHKeyPath:/home/minikube/.minikube/machines/minikube/id_rsa Username:docker}
I0807 12:55:58.278938  228946 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
I0807 12:55:58.278973  228946 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
I0807 12:55:58.297980  228946 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
I0807 12:55:58.298006  228946 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
I0807 12:55:58.315351  228946 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
I0807 12:55:58.831736  228946 addons.go:313] Verifying addon ingress=true in "minikube"
I0807 12:55:58.837525  228946 out.go:165] 🔎  Verifying ingress addon...
🔎  Verifying ingress addon...
I0807 12:55:58.841327  228946 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0807 12:55:58.861370  228946 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0807 12:55:58.861402  228946 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
....
....
....
I0807 13:01:58.868391  228946 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0807 13:01:58.868442  228946 kapi.go:108] duration metric: took 6m0.02711548s to wait for app.kubernetes.io/name=ingress-nginx ...
I0807 13:01:58.872799  228946 out.go:165] 

W0807 13:01:58.872931  228946 out.go:230] ❌  Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
❌  Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
W0807 13:01:58.872949  228946 out.go:230] 

W0807 13:01:58.874180  228946 out.go:230] ╭─────────────────────────────────────────────────────────────────────────────╮
│                                                                             │
│    😿  If the above advice does not help, please let us know:               │
│    👉  /~https://github.com/kubernetes/minikube/issues/new/choose             │
│                                                                             │
│    Please attach the following file to the GitHub issue:                    │
│    - /tmp/minikube_addons_657d376187cd72746604141ceddc839ee4e6f05e_0.log    │
│                                                                             │
╰─────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────╮
│                                                                             │
│    😿  If the above advice does not help, please let us know:               │
│    👉  /~https://github.com/kubernetes/minikube/issues/new/choose             │
│                                                                             │
│    Please attach the following file to the GitHub issue:                    │
│    - /tmp/minikube_addons_657d376187cd72746604141ceddc839ee4e6f05e_0.log    │
│                                                                             │
╰─────────────────────────────────────────────────────────────────────────────╯
I0807 13:01:58.876957  228946 out.go:165] 

With --all --purge it's the same:

[minikube@CentOS-7-template ~]$ minikube delete --all --purge
🔥  Deleting "minikube" in docker ...
🔥  Removing /home/minikube/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
💀  Successfully purged minikube directory located at - [/home/minikube/.minikube]

[minikube@CentOS-7-template ~]$ minikube start
😄  minikube v1.22.0 on Centos 7.8.2003 (amd64)
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.21.2 preload ...
    > preloaded-images-k8s-v11-v1...: 502.14 MiB / 502.14 MiB  100.00% 132.09 M
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


[minikube@CentOS-7-template ~]$ minikube addons enable ingress --alsologtostderr
I0807 13:10:42.349441  238568 out.go:286] Setting OutFile to fd 1 ...
I0807 13:10:42.349717  238568 out.go:338] isatty.IsTerminal(1) = true
I0807 13:10:42.349732  238568 out.go:299] Setting ErrFile to fd 2...
I0807 13:10:42.349738  238568 out.go:338] isatty.IsTerminal(2) = true
I0807 13:10:42.349888  238568 root.go:312] Updating PATH: /home/minikube/.minikube/bin
W0807 13:10:42.350086  238568 root.go:291] Error reading config file at /home/minikube/.minikube/config/config.json: open /home/minikube/.minikube/config/config.json: no such file or directory
I0807 13:10:42.350600  238568 addons.go:59] Setting ingress=true in profile "minikube"
I0807 13:10:42.350631  238568 addons.go:135] Setting addon ingress=true in "minikube"
I0807 13:10:42.351453  238568 host.go:66] Checking if "minikube" exists ...
I0807 13:10:42.352827  238568 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0807 13:10:42.402503  238568 out.go:165]     ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
I0807 13:10:42.405050  238568 out.go:165]     ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0807 13:10:42.408666  238568 out.go:165]     ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
I0807 13:10:42.408732  238568 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
I0807 13:10:42.408772  238568 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
I0807 13:10:42.408885  238568 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 13:10:42.450686  238568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49192 SSHKeyPath:/home/minikube/.minikube/machines/minikube/id_rsa Username:docker}
I0807 13:10:42.557574  238568 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
I0807 13:10:42.557609  238568 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
I0807 13:10:42.577687  238568 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
I0807 13:10:42.577716  238568 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
I0807 13:10:42.595601  238568 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
I0807 13:10:43.115173  238568 addons.go:313] Verifying addon ingress=true in "minikube"
I0807 13:10:43.119871  238568 out.go:165] 🔎  Verifying ingress addon...
🔎  Verifying ingress addon...
I0807 13:10:43.123526  238568 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0807 13:10:43.144884  238568 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0807 13:10:43.144908  238568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0807 13:10:43.650235  238568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0807 13:10:44.153197  238568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]

...
...
...
I0807 13:16:43.153119  238568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0807 13:16:43.153155  238568 kapi.go:108] duration metric: took 6m0.029654209s to wait for app.kubernetes.io/name=ingress-nginx ...
I0807 13:16:43.157826  238568 out.go:165] 

W0807 13:16:43.158008  238568 out.go:230] ❌  Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
❌  Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
W0807 13:16:43.158037  238568 out.go:230] 

W0807 13:16:43.159128  238568 out.go:230] ╭─────────────────────────────────────────────────────────────────────────────╮
│                                                                             │
│    😿  If the above advice does not help, please let us know:               │
│    👉  /~https://github.com/kubernetes/minikube/issues/new/choose             │
│                                                                             │
│    Please attach the following file to the GitHub issue:                    │
│    - /tmp/minikube_addons_657d376187cd72746604141ceddc839ee4e6f05e_0.log    │
│                                                                             │
╰─────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────╮
│                                                                             │
│    😿  If the above advice does not help, please let us know:               │
│    👉  /~https://github.com/kubernetes/minikube/issues/new/choose             │
│                                                                             │
│    Please attach the following file to the GitHub issue:                    │
│    - /tmp/minikube_addons_657d376187cd72746604141ceddc839ee4e6f05e_0.log    │
│                                                                             │
╰─────────────────────────────────────────────────────────────────────────────╯
I0807 13:16:43.163153  238568 out.go:165] 

Could my network settings be causing the problem (I have some strange settings in my Proxmox virtual machine)? Ingress works fine on my other virtual machine (VMmanager KVM).

[minikube@CentOS-7-template ~]$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether aa:fa:fe:15:0e:d0 brd ff:ff:ff:ff:ff:ff
    inet 172.31.112.220/20 brd 172.31.127.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet MY.WHITE.IP.ADDR/32 brd MY.WHITE.IP.ADDR scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a8fa:feff:fe15:ed0/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f7:46:3c:1f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f7ff:fe46:3c1f/64 scope link 
       valid_lft forever preferred_lft forever
120: veth261a8dd@if119: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 8e:8b:22:0f:09:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::8c8b:22ff:fe0f:93e/64 scope link 
       valid_lft forever preferred_lft forever
142: br-9c3e419cac87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:8e:27:d4:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-9c3e419cac87
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8eff:fe27:d4a7/64 scope link 
       valid_lft forever preferred_lft forever
148: vethd9f9739@if147: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-9c3e419cac87 state UP group default 
    link/ether d2:1d:9b:d8:5d:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::d01d:9bff:fed8:5df6/64 scope link 
       valid_lft forever preferred_lft forever

[minikube@CentOS-7-template ~]$ ip route show
default via 172.31.112.1 dev eth0 src MY.WHITE.IP.ADDR 
169.254.0.0/16 dev eth0 scope link metric 1002 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.112.0/20 dev eth0 proto kernel scope link src 172.31.112.220 
192.168.49.0/24 dev br-9c3e419cac87 proto kernel scope link src 192.168.49.1

All my logs are attached below (from both the --all and --all --purge runs).

minikube_logs_with_all-purge.log
minikube_addons_with_all-purge.log
minikube_logs_with_all.log
minikube_addons_with_all.log

@spowelljr
Member

@azamaschikov It's odd that it still isn't working after running minikube delete --all --purge, as that clears out the config that's causing the issue for most users. So this has to be a different issue and, as you pointed out, it could be related to some of those strange network settings.

@prezha
Contributor

prezha commented Oct 27, 2021

This should be fixed by PRs #12702 and #12794.

example:

❯ minikube start  -p issue-12152 --cni=flannel --install-addons=true --addons=ingress --kubernetes-version=stable --memory=6g --cpus=4
😄  [issue-12152] minikube v1.23.2 on Opensuse-Tumbleweed
✨  Automatically selected the kvm2 driver
👍  Starting control plane node issue-12152 in cluster issue-12152
🔥  Creating kvm2 VM (CPUs=4, Memory=6144MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
❌  Unable to load cached images: loading cached images: stat /home/prezha/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0: no such file or directory
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm: 43.71 MiB / 43.71 MiB [-------------] 100.00% 12.04 MiB p/s 3.8s
    > kubectl: 44.73 MiB / 44.73 MiB [-------------] 100.00% 10.27 MiB p/s 4.6s
    > kubelet: 146.25 MiB / 146.25 MiB [-----------] 100.00% 15.60 MiB p/s 9.6s
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Flannel (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.4
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "issue-12152" cluster and "default" namespace by default

❯ minikube -p issue-12152 kubectl -- get pods -A
NAMESPACE       NAME                                        READY   STATUS      RESTARTS     AGE
ingress-nginx   ingress-nginx-admission-create--1-jz785     0/1     Completed   0            35s
ingress-nginx   ingress-nginx-admission-patch--1-qnwjr      0/1     Completed   1            35s
ingress-nginx   ingress-nginx-controller-5f66978484-mxpkv   1/1     Running     0            35s
kube-system     coredns-78fcd69978-w955g                    1/1     Running     0            35s
kube-system     etcd-issue-12152                            1/1     Running     0            48s
kube-system     kube-apiserver-issue-12152                  1/1     Running     0            48s
kube-system     kube-controller-manager-issue-12152         1/1     Running     0            48s
kube-system     kube-flannel-ds-amd64-gn7sp                 1/1     Running     0            35s
kube-system     kube-proxy-xd7lw                            1/1     Running     0            35s
kube-system     kube-scheduler-issue-12152                  1/1     Running     0            48s
kube-system     storage-provisioner                         1/1     Running     1 (4s ago)   47s

@tunt102

tunt102 commented Aug 5, 2022

I had exactly the same issue, and I resolved it by installing minikube as a non-root user.
I'm posting this for new members who hit the same problem.
Here is everything I did:

# Add a new user (run as root)
adduser dev
passwd dev   # e.g. password@7

# Give the dev user sudo rights (CentOS only)
usermod -aG wheel dev

# Log in as the newly created user
su - dev

# Add the user to the docker group
sudo groupadd docker
sudo usermod -aG docker $USER
# Re-login or restart the server so the group change takes effect

# Install minikube as the dev user
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv ./minikube /usr/local/bin/minikube

# Start minikube with the Docker driver
minikube start --driver=docker
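
After that, enabling the addon worked for me. A quick way to verify (assuming kubectl points at the minikube cluster):

minikube addons enable ingress
kubectl get pods -n ingress-nginx   # the controller pod should reach Running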
