minikube dashboard #8119
Comments
Hi @vasanthakumars98, I don’t yet have a clear way to replicate this issue. Do you mind adding some additional details? Here is additional information that would be helpful:
Thank you for sharing your experience!
hi @sharifelgamal, here it is:
NAMESPACE NAME READY STATUS RESTARTS AGE
operating-system: Ubuntu 18
hi @9kranti
🤔 Verifying dashboard health ...
🤔 Verifying dashboard health ...
Try once with
hi @9kranti
Let's try this,
hi @9kranti
You can find the list of hypervisors in Kubernetes's official documentation.
hi @9kranti
🙄 minikube v1.9.2 on Ubuntu 18.04
❗ 'virtualbox' driver reported an issue: /usr/bin/VBoxManage list hostinfo failed:
💡 Suggestion: Install the latest version of VirtualBox
🛑 The "virtualbox" driver should not be used with root privileges.
hi @9kranti
Normally you don't log in to your system as root, but use sudo where needed instead. |
Whenever a `kubectl apply` fails while enabling an addon, it is retried with exponential backoff. The command (type `*exec.Cmd`) that this retry function runs is created outside the function, which means it is reused on every retry. This is a problem because the `exec.Cmd` documentation (https://godoc.org/github.com/pkg/exec#Cmd) states that "... Cmd cannot be reused after calling its Run or Start methods."

This retry is a common case: for example, when a CRD and a resource of that CRD are present in the same YAML file of an addon, there is a race condition where the resource is created before its CRD exists in the cluster, and subsequent retries resolve the race. I've noticed this in the dashboard and the ambassador addons.

Due to the above-mentioned bug, minikube throws errors like `exec: already started` on every retry, the retry is never successful, the manifests are never deployed, and addon creation errors out.

Fix kubernetes#8138
Fix kubernetes#8119
Fix a few CI errors in kubernetes#8372
Related to kubernetes#8138 kubernetes#8119 kubernetes#8372
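For illustration, here is a minimal Go sketch of the fixed pattern, assuming a hypothetical `applyWithRetry` helper (this is not minikube's actual retry code): the `*exec.Cmd` is constructed inside the retry loop, so every attempt gets a fresh command and the "exec: already started" error cannot occur.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry is a hypothetical sketch of the fix described above:
// build a fresh *exec.Cmd on every attempt instead of reusing one that
// was created outside the retry loop (a Cmd cannot be reused after its
// Run or Start methods have been called).
func applyWithRetry(kubectl string, args []string, attempts int) error {
	backoff := time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Creating the command here, inside the loop, is the key point.
		// A single exec.Command(...) created before the loop would fail
		// on the second attempt with "exec: already started".
		cmd := exec.Command(kubectl, args...)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between attempts
	}
	return lastErr
}

func main() {
	// Hypothetical usage: apply one addon manifest, retrying to ride out
	// the CRD-before-resource race described in the PR text above.
	args := []string{"apply", "-f", "/etc/kubernetes/addons/dashboard-ns.yaml"}
	if err := applyWithRetry("kubectl", args, 5); err != nil {
		fmt.Println(err)
	}
}
```

The essential design choice is simply that `exec.Command` is called inside the loop, so each retry runs a brand-new `*exec.Cmd` rather than restarting one that has already been started.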
Unable to enable dashboard: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: exec: already started
stdout:
stderr:
]
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 /~https://github.com/kubernetes/minikube/issues/new/choose