*
* ==> Audit <==
*
|---------|----------------------------------------------|----------|------------------|---------|----------------------|----------------------|
| Command |                     Args                     | Profile  |       User       | Version |      Start Time      |       End Time       |
|---------|----------------------------------------------|----------|------------------|---------|----------------------|----------------------|
| start   |                                              | minikube | Andriy_Dmytrenko | v1.26.0 | 12 Jul 22 15:16 EEST |                      |
| stop    |                                              | minikube | Andriy_Dmytrenko | v1.26.0 | 12 Jul 22 15:31 EEST | 12 Jul 22 15:31 EEST |
| delete  |                                              | minikube | Andriy_Dmytrenko | v1.26.0 | 12 Jul 22 15:31 EEST | 12 Jul 22 15:31 EEST |
| profile | list                                         | minikube | Andriy_Dmytrenko | v1.26.0 | 13 Jul 22 16:31 EEST |                      |
| profile | list                                         | minikube | Andriy_Dmytrenko | v1.26.0 | 13 Jul 22 22:50 EEST |                      |
| start   | --driver hyperkit -n 3                       | minikube | Andriy_Dmytrenko | v1.26.0 | 13 Jul 22 22:50 EEST |                      |
| delete  |                                              | minikube | Andriy_Dmytrenko | v1.26.0 | 13 Jul 22 23:10 EEST | 13 Jul 22 23:10 EEST |
| start   | --driver hyperkit -n 3                       | minikube | Andriy_Dmytrenko | v1.26.0 | 13 Jul 22 23:10 EEST |                      |
|         | --extra-config=kubelet.cgroup-driver=systemd |          |                  |         |                      |                      |
|---------|----------------------------------------------|----------|------------------|---------|----------------------|----------------------|

*
* ==> Last Start <==
*
Log file created at: 2022/07/13 23:10:42
Running on machine: EPUAKYIW09F0
Binary: Built with gc go1.18.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0713 23:10:42.412928 12921 out.go:296] Setting OutFile to fd 1 ...
I0713 23:10:42.413204 12921 out.go:348] isatty.IsTerminal(1) = true
I0713 23:10:42.413207 12921 out.go:309] Setting ErrFile to fd 2...
I0713 23:10:42.413212 12921 out.go:348] isatty.IsTerminal(2) = true
I0713 23:10:42.413587 12921 root.go:329] Updating PATH: /Users/Andriy_Dmytrenko/.minikube/bin
I0713 23:10:42.413993 12921 out.go:303] Setting JSON to false
I0713 23:10:42.591074 12921 start.go:115] hostinfo: {"hostname":"EPUAKYIW09F0","uptime":125902,"bootTime":1657617140,"procs":630,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c0120d4c-8136-5f14-a445-0918cc7e6847"}
W0713 23:10:42.591197 12921 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0713 23:10:42.614047 12921 out.go:177] 😄 minikube v1.26.0 on Darwin 12.4
I0713 23:10:42.637251 12921 notify.go:193] Checking for updates...
I0713 23:10:42.639358 12921 driver.go:360] Setting default libvirt URI to qemu:///system
I0713 23:10:42.678087 12921 out.go:177] ✨ Using the hyperkit driver based on user configuration
I0713 23:10:42.721804 12921 start.go:284] selected driver: hyperkit
I0713 23:10:42.721834 12921 start.go:805] validating driver "hyperkit" against
I0713 23:10:42.721861 12921 start.go:816] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0713 23:10:42.721997 12921 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0713 23:10:42.722211 12921 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/Andriy_Dmytrenko/.minikube/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
I0713 23:10:42.733331 12921 install.go:137] /Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit version is 1.26.0
I0713 23:10:42.791141 12921 install.go:79] stdout: /Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit
I0713 23:10:42.791158 12921 install.go:81] /Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit looks good
I0713 23:10:42.791566 12921 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0713 23:10:42.792001 12921 start_flags.go:377] Using suggested 2200MB memory alloc based on sys=16384MB, container=0MB
I0713 23:10:42.792091 12921 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0713 23:10:42.792107 12921 cni.go:95] Creating CNI manager for ""
I0713 23:10:42.792112 12921 cni.go:156] 0 nodes found, recommending kindnet
I0713 23:10:42.792124 12921 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
I0713 23:10:42.792132 12921 start_flags.go:310] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0713 23:10:42.792254 12921 iso.go:128] acquiring lock: {Name:mkaf882b919975fe737f77c119e8719ae4deb7c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0713 23:10:42.813890 12921 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0713 23:10:42.835515 12921 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
I0713 23:10:42.835611 12921 preload.go:148] Found local preload: /Users/Andriy_Dmytrenko/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
I0713 23:10:42.836731 12921 cache.go:57] Caching tarball of preloaded images
I0713 23:10:42.837060 12921 preload.go:174] Found /Users/Andriy_Dmytrenko/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0713 23:10:42.837093 12921 cache.go:60] Finished verifying existence of preloaded tar for v1.24.1 on docker
I0713 23:10:42.837681 12921 profile.go:148] Saving config to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/config.json ...
I0713 23:10:42.837721 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/config.json: {Name:mkdb0aaf43136ebdd20e97ebbca165fb91e9ed98 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0713 23:10:42.838282 12921 cache.go:208] Successfully downloaded all kic artifacts
I0713 23:10:42.838337 12921 start.go:352] acquiring machines lock for minikube: {Name:mke291f73cf88ee987e4d32c07dca4b0ca7ecffb Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0713 23:10:42.838460 12921 start.go:356] acquired machines lock for "minikube" in 109.059µs
I0713 23:10:42.838489 12921 start.go:91] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0713 23:10:42.838608 12921 start.go:131] createHost starting for "" (driver="hyperkit")
I0713 23:10:42.882208 12921 out.go:204] 🔥 Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0713 23:10:42.883934 12921 main.go:134] libmachine: Found binary path at /Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit
I0713 23:10:42.884056 12921 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0713 23:10:42.897980 12921 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:62903
I0713 23:10:42.898555 12921 main.go:134] libmachine: () Calling .GetVersion
I0713 23:10:42.899117 12921 main.go:134] libmachine: Using API Version 1
I0713 23:10:42.899128 12921 main.go:134] libmachine: () Calling .SetConfigRaw
I0713 23:10:42.899475 12921 main.go:134] libmachine: () Calling .GetMachineName
I0713 23:10:42.899621 12921 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0713 23:10:42.899721 12921 main.go:134] libmachine: (minikube) Calling .DriverName
I0713 23:10:42.899917 12921 start.go:165] libmachine.API.Create for "minikube" (driver="hyperkit")
I0713 23:10:42.899961 12921 client.go:168] LocalClient.Create starting
I0713 23:10:42.900015 12921 main.go:134] libmachine: Reading certificate data from /Users/Andriy_Dmytrenko/.minikube/certs/ca.pem
I0713 23:10:42.900426 12921 main.go:134] libmachine: Decoding PEM data...
I0713 23:10:42.900450 12921 main.go:134] libmachine: Parsing certificate...
I0713 23:10:42.901067 12921 main.go:134] libmachine: Reading certificate data from /Users/Andriy_Dmytrenko/.minikube/certs/cert.pem
I0713 23:10:42.901369 12921 main.go:134] libmachine: Decoding PEM data...
I0713 23:10:42.901380 12921 main.go:134] libmachine: Parsing certificate...
I0713 23:10:42.901399 12921 main.go:134] libmachine: Running pre-create checks...
I0713 23:10:42.901410 12921 main.go:134] libmachine: (minikube) Calling .PreCreateCheck
I0713 23:10:42.901555 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:42.901792 12921 main.go:134] libmachine: (minikube) Calling .GetConfigRaw
I0713 23:10:42.902463 12921 main.go:134] libmachine: Creating machine...
I0713 23:10:42.902470 12921 main.go:134] libmachine: (minikube) Calling .Create
I0713 23:10:42.902592 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:42.902767 12921 main.go:134] libmachine: (minikube) DBG | I0713 23:10:42.902578 12928 common.go:107] Making disk image using store path: /Users/Andriy_Dmytrenko/.minikube
I0713 23:10:42.902833 12921 main.go:134] libmachine: (minikube) Downloading /Users/Andriy_Dmytrenko/.minikube/cache/boot2docker.iso from file:///Users/Andriy_Dmytrenko/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
I0713 23:10:43.153723 12921 main.go:134] libmachine: (minikube) DBG | I0713 23:10:43.153629 12928 common.go:114] Creating ssh key: /Users/Andriy_Dmytrenko/.minikube/machines/minikube/id_rsa...
I0713 23:10:43.309682 12921 main.go:134] libmachine: (minikube) DBG | I0713 23:10:43.309566 12928 common.go:120] Creating raw disk image: /Users/Andriy_Dmytrenko/.minikube/machines/minikube/minikube.rawdisk...
I0713 23:10:43.309693 12921 main.go:134] libmachine: (minikube) DBG | Writing magic tar header
I0713 23:10:43.309702 12921 main.go:134] libmachine: (minikube) DBG | Writing SSH key tar header
I0713 23:10:43.310450 12921 main.go:134] libmachine: (minikube) DBG | I0713 23:10:43.310398 12928 common.go:134] Fixing permissions on /Users/Andriy_Dmytrenko/.minikube/machines/minikube ...
I0713 23:10:43.506407 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:43.506421 12921 main.go:134] libmachine: (minikube) DBG | clean start, hyperkit pid file doesn't exist: /Users/Andriy_Dmytrenko/.minikube/machines/minikube/hyperkit.pid
I0713 23:10:43.506489 12921 main.go:134] libmachine: (minikube) DBG | Using UUID df81102e-02e7-11ed-a7c4-acde48001122
I0713 23:10:43.709682 12921 main.go:134] libmachine: (minikube) DBG | Generated MAC ce:1:38:fe:c2:96
I0713 23:10:43.709717 12921 main.go:134] libmachine: (minikube) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
I0713 23:10:43.709813 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/Andriy_Dmytrenko/.minikube/machines/minikube", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"df81102e-02e7-11ed-a7c4-acde48001122", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001fe090)}, ISOImages:[]string{"/Users/Andriy_Dmytrenko/.minikube/machines/minikube/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/Andriy_Dmytrenko/.minikube/machines/minikube/bzimage", Initrd:"/Users/Andriy_Dmytrenko/.minikube/machines/minikube/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0713 23:10:43.709854 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/Andriy_Dmytrenko/.minikube/machines/minikube", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"df81102e-02e7-11ed-a7c4-acde48001122", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001fe090)}, ISOImages:[]string{"/Users/Andriy_Dmytrenko/.minikube/machines/minikube/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/Andriy_Dmytrenko/.minikube/machines/minikube/bzimage", Initrd:"/Users/Andriy_Dmytrenko/.minikube/machines/minikube/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0713 23:10:43.709922 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/Andriy_Dmytrenko/.minikube/machines/minikube/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "df81102e-02e7-11ed-a7c4-acde48001122", "-s", "2:0,virtio-blk,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/minikube.rawdisk", "-s", "3,ahci-cd,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/Andriy_Dmytrenko/.minikube/machines/minikube/tty,log=/Users/Andriy_Dmytrenko/.minikube/machines/minikube/console-ring", "-f", "kexec,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/bzimage,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube"}
I0713 23:10:43.709976 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/Andriy_Dmytrenko/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U df81102e-02e7-11ed-a7c4-acde48001122 -s 2:0,virtio-blk,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/minikube.rawdisk -s 3,ahci-cd,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/Andriy_Dmytrenko/.minikube/machines/minikube/tty,log=/Users/Andriy_Dmytrenko/.minikube/machines/minikube/console-ring -f kexec,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/bzimage,/Users/Andriy_Dmytrenko/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube"
I0713 23:10:43.710003 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0713 23:10:43.712457 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 DEBUG: hyperkit: Pid is 12931
I0713 23:10:43.713417 12921 main.go:134] libmachine: (minikube) DBG | Attempt 0
I0713 23:10:43.713439 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:43.713528 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:43.715592 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:43.715743 12921 main.go:134] libmachine: (minikube) DBG | Found 2 entries in /var/db/dhcpd_leases!
I0713 23:10:43.715771 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:72:96:91:1d:e3 ID:1,4e:72:96:91:1d:e3 Lease:0x62cf26b2}
I0713 23:10:43.715783 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:1e:7c:1a:a2:38:5c ID:1,1e:7c:1a:a2:38:5c Lease:0x62cd698e}
I0713 23:10:43.724779 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0713 23:10:43.822488 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: /Users/Andriy_Dmytrenko/.minikube/machines/minikube/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0713 23:10:43.824122 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 20 unspecified don't care: bit is 0
I0713 23:10:43.824142 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0713 23:10:43.824153 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0713 23:10:43.824165 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0713 23:10:43.824216 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0713 23:10:44.245307 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0713 23:10:44.245320 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0713 23:10:44.247217 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 20 unspecified don't care: bit is 0
I0713 23:10:44.247233 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0713 23:10:44.247253 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0713 23:10:44.247267 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0713 23:10:44.247314 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0713 23:10:44.248097 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0713 23:10:44.248117 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0713 23:10:45.716223 12921 main.go:134] libmachine: (minikube) DBG | Attempt 1
I0713 23:10:45.716241 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:45.716424 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:45.718779 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:45.718867 12921 main.go:134] libmachine: (minikube) DBG | Found 2 entries in /var/db/dhcpd_leases!
I0713 23:10:45.718878 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:72:96:91:1d:e3 ID:1,4e:72:96:91:1d:e3 Lease:0x62cf26b2}
I0713 23:10:45.718888 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:1e:7c:1a:a2:38:5c ID:1,1e:7c:1a:a2:38:5c Lease:0x62cd698e}
I0713 23:10:47.719113 12921 main.go:134] libmachine: (minikube) DBG | Attempt 2
I0713 23:10:47.719126 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:47.719229 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:47.720387 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:47.720444 12921 main.go:134] libmachine: (minikube) DBG | Found 2 entries in /var/db/dhcpd_leases!
I0713 23:10:47.720451 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:72:96:91:1d:e3 ID:1,4e:72:96:91:1d:e3 Lease:0x62cf26b2}
I0713 23:10:47.720465 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:1e:7c:1a:a2:38:5c ID:1,1e:7c:1a:a2:38:5c Lease:0x62cd698e}
I0713 23:10:49.720739 12921 main.go:134] libmachine: (minikube) DBG | Attempt 3
I0713 23:10:49.720750 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:49.720884 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:49.722993 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:49.723061 12921 main.go:134] libmachine: (minikube) DBG | Found 2 entries in /var/db/dhcpd_leases!
I0713 23:10:49.723111 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:72:96:91:1d:e3 ID:1,4e:72:96:91:1d:e3 Lease:0x62cf26b2}
I0713 23:10:49.723144 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:1e:7c:1a:a2:38:5c ID:1,1e:7c:1a:a2:38:5c Lease:0x62cd698e}
I0713 23:10:50.708228 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
I0713 23:10:50.708337 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
I0713 23:10:50.708349 12921 main.go:134] libmachine: (minikube) DBG | 2022/07/13 23:10:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
I0713 23:10:51.723278 12921 main.go:134] libmachine: (minikube) DBG | Attempt 4
I0713 23:10:51.723291 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:51.723416 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:51.724475 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:51.724524 12921 main.go:134] libmachine: (minikube) DBG | Found 2 entries in /var/db/dhcpd_leases!
I0713 23:10:51.724532 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:72:96:91:1d:e3 ID:1,4e:72:96:91:1d:e3 Lease:0x62cf26b2}
I0713 23:10:51.724541 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:1e:7c:1a:a2:38:5c ID:1,1e:7c:1a:a2:38:5c Lease:0x62cd698e}
I0713 23:10:53.724860 12921 main.go:134] libmachine: (minikube) DBG | Attempt 5
I0713 23:10:53.724876 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:53.724987 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:53.726634 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:53.726690 12921 main.go:134] libmachine: (minikube) DBG | Found 2 entries in /var/db/dhcpd_leases!
I0713 23:10:53.726708 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:72:96:91:1d:e3 ID:1,4e:72:96:91:1d:e3 Lease:0x62cf26b2}
I0713 23:10:53.726718 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:1e:7c:1a:a2:38:5c ID:1,1e:7c:1a:a2:38:5c Lease:0x62cd698e}
I0713 23:10:55.727204 12921 main.go:134] libmachine: (minikube) DBG | Attempt 6
I0713 23:10:55.727216 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:55.727316 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:55.728358 12921 main.go:134] libmachine: (minikube) DBG | Searching for ce:1:38:fe:c2:96 in /var/db/dhcpd_leases ...
I0713 23:10:55.728502 12921 main.go:134] libmachine: (minikube) DBG | Found 3 entries in /var/db/dhcpd_leases!
I0713 23:10:55.728511 12921 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:1:38:fe:c2:96 ID:1,ce:1:38:fe:c2:96 Lease:0x62d0784e}
I0713 23:10:55.728518 12921 main.go:134] libmachine: (minikube) DBG | Found match: ce:1:38:fe:c2:96
I0713 23:10:55.728522 12921 main.go:134] libmachine: (minikube) DBG | IP: 192.168.64.4
I0713 23:10:55.728600 12921 main.go:134] libmachine: (minikube) Calling .GetConfigRaw
I0713 23:10:55.729283 12921 main.go:134] libmachine: (minikube) Calling .DriverName
I0713 23:10:55.729398 12921 main.go:134] libmachine: (minikube) Calling .DriverName
I0713 23:10:55.729485 12921 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
I0713 23:10:55.729493 12921 main.go:134] libmachine: (minikube) Calling .GetState
I0713 23:10:55.729572 12921 main.go:134] libmachine: (minikube) DBG | exe=/Users/Andriy_Dmytrenko/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0713 23:10:55.729635 12921 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 12931
I0713 23:10:55.731062 12921 main.go:134] libmachine: Detecting operating system of created instance...
I0713 23:10:55.731069 12921 main.go:134] libmachine: Waiting for SSH to be available...
I0713 23:10:55.731074 12921 main.go:134] libmachine: Getting to WaitForSSH function...
I0713 23:10:55.731079 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:55.731191 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:55.731265 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:55.731340 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:55.731408 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:55.731864 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:55.732095 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:55.732104 12921 main.go:134] libmachine: About to run SSH command:
exit 0
I0713 23:10:55.760318 12921 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I0713 23:10:58.833008 12921 main.go:134] libmachine: SSH cmd err, output: :
I0713 23:10:58.833020 12921 main.go:134] libmachine: Detecting the provisioner...
I0713 23:10:58.833027 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:58.833189 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:58.833294 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:58.833374 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:58.833458 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:58.833613 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:58.833792 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:58.833797 12921 main.go:134] libmachine: About to run SSH command:
cat /etc/os-release
I0713 23:10:58.902461 12921 main.go:134] libmachine: SSH cmd err, output: : NAME=Buildroot
VERSION=2021.02.12-1-g14f2929-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0713 23:10:58.902536 12921 main.go:134] libmachine: found compatible host: buildroot
I0713 23:10:58.902541 12921 main.go:134] libmachine: Provisioning with buildroot...
I0713 23:10:58.902546 12921 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0713 23:10:58.902705 12921 buildroot.go:166] provisioning hostname "minikube"
I0713 23:10:58.902713 12921 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0713 23:10:58.902822 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:58.902927 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:58.903012 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:58.903107 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:58.903276 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:58.903436 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:58.903657 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:58.903668 12921 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0713 23:10:58.993059 12921 main.go:134] libmachine: SSH cmd err, output: : minikube
I0713 23:10:58.993112 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:58.993315 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:58.993448 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:58.993561 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:58.993695 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:58.993982 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:58.994229 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:58.994238 12921 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
	else
		echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
	fi
fi
I0713 23:10:59.064191 12921 main.go:134] libmachine: SSH cmd err, output: :
I0713 23:10:59.064208 12921 buildroot.go:172] set auth options {CertDir:/Users/Andriy_Dmytrenko/.minikube CaCertPath:/Users/Andriy_Dmytrenko/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/Andriy_Dmytrenko/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/Andriy_Dmytrenko/.minikube/machines/server.pem ServerKeyPath:/Users/Andriy_Dmytrenko/.minikube/machines/server-key.pem ClientKeyPath:/Users/Andriy_Dmytrenko/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/Andriy_Dmytrenko/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/Andriy_Dmytrenko/.minikube}
I0713 23:10:59.064223 12921 buildroot.go:174] setting up certificates
I0713 23:10:59.064233 12921 provision.go:83] configureAuth start
I0713 23:10:59.064239 12921 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0713 23:10:59.064392 12921 main.go:134] libmachine: (minikube) Calling .GetIP
I0713 23:10:59.064498 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:59.064602 12921 provision.go:138] copyHostCerts
I0713 23:10:59.064707 12921 exec_runner.go:144] found /Users/Andriy_Dmytrenko/.minikube/ca.pem, removing ...
I0713 23:10:59.065476 12921 exec_runner.go:207] rm: /Users/Andriy_Dmytrenko/.minikube/ca.pem
I0713 23:10:59.065645 12921 exec_runner.go:151] cp: /Users/Andriy_Dmytrenko/.minikube/certs/ca.pem --> /Users/Andriy_Dmytrenko/.minikube/ca.pem (1103 bytes)
I0713 23:10:59.065930 12921 exec_runner.go:144] found /Users/Andriy_Dmytrenko/.minikube/cert.pem, removing ...
I0713 23:10:59.065934 12921 exec_runner.go:207] rm: /Users/Andriy_Dmytrenko/.minikube/cert.pem
I0713 23:10:59.066025 12921 exec_runner.go:151] cp: /Users/Andriy_Dmytrenko/.minikube/certs/cert.pem --> /Users/Andriy_Dmytrenko/.minikube/cert.pem (1147 bytes)
I0713 23:10:59.066238 12921 exec_runner.go:144] found /Users/Andriy_Dmytrenko/.minikube/key.pem, removing ...
I0713 23:10:59.066241 12921 exec_runner.go:207] rm: /Users/Andriy_Dmytrenko/.minikube/key.pem
I0713 23:10:59.066324 12921 exec_runner.go:151] cp: /Users/Andriy_Dmytrenko/.minikube/certs/key.pem --> /Users/Andriy_Dmytrenko/.minikube/key.pem (1675 bytes)
I0713 23:10:59.066788 12921 provision.go:112] generating server cert: /Users/Andriy_Dmytrenko/.minikube/machines/server.pem ca-key=/Users/Andriy_Dmytrenko/.minikube/certs/ca.pem private-key=/Users/Andriy_Dmytrenko/.minikube/certs/ca-key.pem org=Andriy_Dmytrenko.minikube san=[192.168.64.4 192.168.64.4 localhost 127.0.0.1 minikube minikube]
I0713 23:10:59.110709 12921 provision.go:172] copyRemoteCerts
I0713 23:10:59.111088 12921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0713 23:10:59.111109 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:59.111283 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:59.111371 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.111443 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:59.111521 12921 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/Andriy_Dmytrenko/.minikube/machines/minikube/id_rsa Username:docker}
I0713 23:10:59.153437 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1103 bytes)
I0713 23:10:59.181964 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I0713 23:10:59.210383 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0713 23:10:59.236754 12921 provision.go:86] duration metric: configureAuth took 172.482245ms
I0713 23:10:59.236763 12921 buildroot.go:189] setting minikube options for container-runtime
I0713 23:10:59.236944 12921 config.go:178] Loaded profile config "minikube": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.1
I0713 23:10:59.236976 12921 main.go:134] libmachine: (minikube) Calling .DriverName
I0713 23:10:59.237201 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:59.237316 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:59.237428 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.237558 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.237670 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:59.237832 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:59.238047 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:59.238053 12921 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0713 23:10:59.306801 12921 main.go:134] libmachine: SSH cmd err, output: : tmpfs
I0713 23:10:59.306807 12921 buildroot.go:70] root file system type: tmpfs
I0713 23:10:59.306946 12921 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0713 23:10:59.306960 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:59.307087 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:59.307188 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.307277 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.307358 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:59.307473 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:59.307621 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:59.307667 12921 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0713 23:10:59.387056 12921 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0713 23:10:59.387075 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:10:59.387205 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:10:59.387302 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.387442 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:10:59.387592 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:10:59.387738 12921 main.go:134] libmachine: Using SSH client type: native
I0713 23:10:59.387926 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 }
I0713 23:10:59.387936 12921 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0713 23:11:00.095362 12921 main.go:134] libmachine: SSH cmd err, output: : diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0713 23:11:00.095372 12921 main.go:134] libmachine: Checking connection to Docker...
I0713 23:11:00.095386 12921 main.go:134] libmachine: (minikube) Calling .GetURL
I0713 23:11:00.095552 12921 main.go:134] libmachine: Docker is up and running!
I0713 23:11:00.095557 12921 main.go:134] libmachine: Reticulating splines...
I0713 23:11:00.095560 12921 client.go:171] LocalClient.Create took 17.195107437s
I0713 23:11:00.095569 12921 start.go:173] duration metric: libmachine.API.Create for "minikube" took 17.195165653s
I0713 23:11:00.095580 12921 start.go:306] post-start starting for "minikube" (driver="hyperkit")
I0713 23:11:00.095582 12921 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0713 23:11:00.095596 12921 main.go:134] libmachine: (minikube) Calling .DriverName
I0713 23:11:00.095753 12921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0713 23:11:00.095770 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0713 23:11:00.095872 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0713 23:11:00.095974 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0713 23:11:00.096047 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0713 23:11:00.096132 12921 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/Andriy_Dmytrenko/.minikube/machines/minikube/id_rsa Username:docker}
I0713 23:11:00.148298 12921 ssh_runner.go:195] Run: cat /etc/os-release
I0713 23:11:00.152891 12921 info.go:137] Remote host: Buildroot 2021.02.12
I0713 23:11:00.152904 12921 filesync.go:126] Scanning /Users/Andriy_Dmytrenko/.minikube/addons for local assets ...
I0713 23:11:00.153093 12921 filesync.go:126] Scanning /Users/Andriy_Dmytrenko/.minikube/files for local assets ...
I0713 23:11:00.153191 12921 start.go:309] post-start completed in 57.60595ms
I0713 23:11:00.153216 12921 main.go:134] libmachine: (minikube) Calling .GetConfigRaw
I0713 23:11:00.153970 12921 main.go:134] libmachine: (minikube) Calling .GetIP
I0713 23:11:00.154191 12921 profile.go:148] Saving config to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/config.json ...
I0713 23:11:00.154691 12921 start.go:134] duration metric: createHost completed in 17.31557828s I0713 23:11:00.154722 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname I0713 23:11:00.154931 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort I0713 23:11:00.155043 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath I0713 23:11:00.155120 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath I0713 23:11:00.155221 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername I0713 23:11:00.155416 12921 main.go:134] libmachine: Using SSH client type: native I0713 23:11:00.155636 12921 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x43da180] 0x43dd1e0 [] 0s} 192.168.64.4 22 } I0713 23:11:00.155649 12921 main.go:134] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING) I0713 23:11:00.225546 12921 main.go:134] libmachine: SSH cmd err, output: : 1657743060.515878889 I0713 23:11:00.225557 12921 fix.go:207] guest clock: 1657743060.515878889 I0713 23:11:00.225580 12921 fix.go:220] Guest: 2022-07-13 23:11:00.515878889 +0300 EEST Remote: 2022-07-13 23:11:00.154706 +0300 EEST m=+17.802284190 (delta=361.172889ms) I0713 23:11:00.225628 12921 fix.go:191] guest clock delta is within tolerance: 361.172889ms I0713 23:11:00.225650 12921 start.go:81] releasing machines lock for "minikube", held for 17.386690835s I0713 23:11:00.225701 12921 main.go:134] libmachine: (minikube) Calling .DriverName I0713 23:11:00.225872 12921 main.go:134] libmachine: (minikube) Calling .GetIP I0713 23:11:00.225969 12921 main.go:134] libmachine: (minikube) Calling .DriverName I0713 23:11:00.226046 12921 main.go:134] libmachine: (minikube) Calling .DriverName I0713 23:11:00.226138 12921 main.go:134] libmachine: (minikube) Calling .DriverName I0713 23:11:00.226519 12921 main.go:134] libmachine: (minikube) Calling .DriverName I0713 23:11:00.226620 12921 main.go:134] libmachine: (minikube) Calling .DriverName I0713 23:11:00.226807 12921 ssh_runner.go:195] Run: systemctl --version I0713 23:11:00.226820 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname I0713 23:11:00.226903 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort I0713 23:11:00.226980 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath I0713 23:11:00.227088 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername I0713 23:11:00.227176 12921 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/Andriy_Dmytrenko/.minikube/machines/minikube/id_rsa Username:docker} I0713 23:11:00.227472 12921 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0713 23:11:00.227816 12921 main.go:134] libmachine: (minikube) Calling .GetSSHHostname I0713 23:11:00.227940 12921 main.go:134] libmachine: (minikube) Calling .GetSSHPort I0713 23:11:00.228029 12921 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath I0713 23:11:00.228122 12921 main.go:134] libmachine: (minikube) Calling .GetSSHUsername I0713 23:11:00.228213 12921 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/Andriy_Dmytrenko/.minikube/machines/minikube/id_rsa Username:docker} I0713 23:11:00.267626 12921 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker I0713 23:11:00.267709 12921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} W0713 23:11:00.281958 12921 start.go:731] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 6 stdout: stderr: curl: (6) 
Could not resolve host: k8s.gcr.io W0713 23:11:00.282065 12921 out.go:239] โ— This VM is having trouble accessing https://k8s.gcr.io W0713 23:11:00.282096 12921 out.go:239] ๐Ÿ’ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0713 23:11:00.300515 12921 docker.go:602] Got preloaded images: I0713 23:11:00.300524 12921 docker.go:608] k8s.gcr.io/kube-apiserver:v1.24.1 wasn't preloaded I0713 23:11:00.300587 12921 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0713 23:11:00.312629 12921 ssh_runner.go:195] Run: which lz4 I0713 23:11:00.317325 12921 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4 I0713 23:11:00.321929 12921 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1 stdout: stderr: stat: cannot statx '/preloaded.tar.lz4': No such file or directory I0713 23:11:00.321958 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425543115 bytes) I0713 23:11:02.538233 12921 docker.go:567] Took 2.220907 seconds to copy over tarball I0713 23:11:02.538292 12921 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 I0713 23:11:08.361292 12921 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.821954269s) I0713 23:11:08.361309 12921 ssh_runner.go:146] rm: /preloaded.tar.lz4 I0713 23:11:08.411675 12921 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0713 23:11:08.425175 12921 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes) I0713 23:11:08.447348 12921 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0713 23:11:08.578438 12921 ssh_runner.go:195] Run: sudo systemctl restart docker I0713 23:11:10.484335 12921 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.905822469s) I0713 23:11:10.485189 12921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0713 23:11:10.495375 12921 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes) I0713 23:11:10.520115 12921 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0713 23:11:10.650339 12921 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0713 23:11:10.764490 12921 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0713 23:11:10.780264 12921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0713 23:11:10.799910 12921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0713 23:11:10.814866 12921 ssh_runner.go:195] Run: sudo systemctl stop -f crio I0713 23:11:10.861793 12921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0713 23:11:10.875852 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0713 23:11:10.896114 12921 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0713 23:11:11.030976 12921 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0713 23:11:11.170174 12921 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0713 23:11:11.305082 12921 ssh_runner.go:195] Run: sudo systemctl restart docker I0713 
23:11:12.845809 12921 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.540660101s) I0713 23:11:12.845898 12921 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0713 23:11:13.053696 12921 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0713 23:11:13.216888 12921 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket I0713 23:11:13.232700 12921 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock I0713 23:11:13.232806 12921 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0713 23:11:13.238642 12921 start.go:468] Will wait 60s for crictl version I0713 23:11:13.238702 12921 ssh_runner.go:195] Run: sudo crictl version I0713 23:11:13.290620 12921 start.go:477] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.16 RuntimeApiVersion: 1.41.0 I0713 23:11:13.290739 12921 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0713 23:11:13.344322 12921 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0713 23:11:13.450083 12921 out.go:204] ๐Ÿณ Preparing Kubernetes v1.24.1 on Docker 20.10.16 ... I0713 23:11:13.450832 12921 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts I0713 23:11:13.460213 12921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0713 23:11:13.511163 12921 out.go:177] โ–ช kubelet.cgroup-driver=systemd I0713 23:11:13.534667 12921 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker I0713 23:11:13.534913 12921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0713 23:11:13.571459 12921 docker.go:602] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.24.1 k8s.gcr.io/kube-scheduler:v1.24.1 k8s.gcr.io/kube-controller-manager:v1.24.1 k8s.gcr.io/kube-proxy:v1.24.1 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0713 23:11:13.571476 12921 docker.go:533] Images already preloaded, skipping extraction I0713 23:11:13.571856 12921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0713 23:11:13.609486 12921 docker.go:602] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.24.1 k8s.gcr.io/kube-proxy:v1.24.1 k8s.gcr.io/kube-scheduler:v1.24.1 k8s.gcr.io/kube-controller-manager:v1.24.1 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0713 23:11:13.610133 12921 cache_images.go:84] Images are preloaded, skipping loading I0713 23:11:13.610384 12921 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0713 23:11:13.650137 12921 cni.go:95] Creating CNI manager for "" I0713 23:11:13.650144 12921 cni.go:156] 1 nodes found, recommending kindnet I0713 23:11:13.650694 12921 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0713 23:11:13.650711 12921 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.4 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer 
ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.4 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0713 23:11:13.651073 12921 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.64.4
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.64.4
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.64.4"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0713 23:11:13.652047 12921 kubeadm.go:961] kubelet
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.4 --runtime-request-timeout=15m

[Install]
config: {KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP:
CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0713 23:11:13.652113 12921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1 I0713 23:11:13.663184 12921 binaries.go:44] Found k8s binaries, skipping transfer I0713 23:11:13.663244 12921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0713 23:11:13.673357 12921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes) I0713 23:11:13.692290 12921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0713 23:11:13.711966 12921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes) I0713 23:11:13.730587 12921 ssh_runner.go:195] Run: grep 192.168.64.4 control-plane.minikube.internal$ /etc/hosts I0713 23:11:13.734809 12921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.4 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0713 23:11:13.749322 12921 certs.go:54] Setting up /Users/Andriy_Dmytrenko/.minikube/profiles/minikube for IP: 192.168.64.4 I0713 23:11:13.749894 12921 certs.go:182] skipping minikubeCA CA generation: /Users/Andriy_Dmytrenko/.minikube/ca.key I0713 23:11:13.750273 12921 certs.go:182] skipping proxyClientCA CA generation: /Users/Andriy_Dmytrenko/.minikube/proxy-client-ca.key I0713 23:11:13.751191 12921 certs.go:302] generating minikube-user signed cert: /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/client.key I0713 23:11:13.751425 12921 crypto.go:68] Generating cert /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/client.crt with IP's: [] I0713 23:11:13.967789 12921 crypto.go:156] Writing cert to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/client.crt ... I0713 23:11:13.967803 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/client.crt: {Name:mk7642b806a36c693936e71d99764db4fd048a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0713 23:11:13.968141 12921 crypto.go:164] Writing key to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/client.key ... I0713 23:11:13.968147 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/client.key: {Name:mkd96482ea043235d5787a8c7c15164290c99b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0713 23:11:13.968359 12921 certs.go:302] generating minikube signed cert: /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.key.9be86875 I0713 23:11:13.968375 12921 crypto.go:68] Generating cert /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.crt.9be86875 with IP's: [192.168.64.4 10.96.0.1 127.0.0.1 10.0.0.1] I0713 23:11:14.098942 12921 crypto.go:156] Writing cert to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.crt.9be86875 ... I0713 23:11:14.098953 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.crt.9be86875: {Name:mke44b138a956fabdfe429181a65c486e7ace487 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0713 23:11:14.100375 12921 crypto.go:164] Writing key to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.key.9be86875 ... 
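
Note: the /bin/bash one-liners above are how minikube pins host.minikube.internal and control-plane.minikube.internal inside the VM's /etc/hosts before running kubeadm. A quick way to double-check the pinned entries from the host, assuming the default "minikube" profile and the hyperkit addresses seen in this log:

  # print the pinned entries from inside the VM
  minikube ssh -- grep minikube.internal /etc/hosts
  # expected (addresses depend on the hyperkit network):
  #   192.168.64.1    host.minikube.internal
  #   192.168.64.4    control-plane.minikube.internal
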
I0713 23:11:14.100386 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.key.9be86875: {Name:mkc272ac9f2976f02d7c1916cbb02c3d015c9166 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0713 23:11:14.101486 12921 certs.go:320] copying /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.crt.9be86875 -> /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.crt I0713 23:11:14.102285 12921 certs.go:324] copying /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.key.9be86875 -> /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.key I0713 23:11:14.102677 12921 certs.go:302] generating aggregator signed cert: /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.key I0713 23:11:14.102779 12921 crypto.go:68] Generating cert /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0713 23:11:14.493278 12921 crypto.go:156] Writing cert to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.crt ... I0713 23:11:14.493290 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.crt: {Name:mkdf9084918d2cfb388530fb70bfe77a1bab666b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0713 23:11:14.493612 12921 crypto.go:164] Writing key to /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.key ... I0713 23:11:14.493617 12921 lock.go:35] WriteFile acquiring /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.key: {Name:mkca7741cbf7c011913bd7e8b468e3f1b0aa48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0713 23:11:14.494110 12921 certs.go:388] found cert: /Users/Andriy_Dmytrenko/.minikube/certs/Users/Andriy_Dmytrenko/.minikube/certs/ca-key.pem (1675 bytes) I0713 23:11:14.494344 12921 certs.go:388] found cert: /Users/Andriy_Dmytrenko/.minikube/certs/Users/Andriy_Dmytrenko/.minikube/certs/ca.pem (1103 bytes) I0713 23:11:14.494399 12921 certs.go:388] found cert: /Users/Andriy_Dmytrenko/.minikube/certs/Users/Andriy_Dmytrenko/.minikube/certs/cert.pem (1147 bytes) I0713 23:11:14.494441 12921 certs.go:388] found cert: /Users/Andriy_Dmytrenko/.minikube/certs/Users/Andriy_Dmytrenko/.minikube/certs/key.pem (1675 bytes) I0713 23:11:14.498256 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0713 23:11:14.524423 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0713 23:11:14.548863 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0713 23:11:14.573719 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0713 23:11:14.597741 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0713 23:11:14.623826 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0713 23:11:14.650845 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0713 23:11:14.681441 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0713 
23:11:14.723991 12921 ssh_runner.go:362] scp /Users/Andriy_Dmytrenko/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0713 23:11:14.763092 12921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (740 bytes) I0713 23:11:14.786406 12921 ssh_runner.go:195] Run: openssl version I0713 23:11:14.792604 12921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0713 23:11:14.806423 12921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0713 23:11:14.811640 12921 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 12 12:17 /usr/share/ca-certificates/minikubeCA.pem I0713 23:11:14.811724 12921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0713 23:11:14.818406 12921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0713 23:11:14.829951 12921 kubeadm.go:395] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} I0713 23:11:14.830382 12921 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0713 23:11:14.858432 12921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0713 23:11:14.869419 12921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0713 
23:11:14.879354 12921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0713 23:11:14.889887 12921 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0713 23:11:14.890871 12921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0713 23:11:15.759247 12921 out.go:204] ▪ Generating certificates and keys ... I0713 23:11:18.570988 12921 out.go:204] ▪ Booting up control plane ... W0713 23:15:18.649459 12921 out.go:239] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.64.4 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.64.4 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0713 20:11:15.373305 1299 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher I0713 23:15:18.650171 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force" I0713 23:15:19.512974 12921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0713 23:15:19.529557 12921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0713 23:15:19.539485 12921 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0713 23:15:19.539504 12921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0713 23:15:19.929422 12921 out.go:204] ▪ Generating certificates and keys ... I0713 23:15:21.228767 12921 out.go:204] ▪ Booting up control plane ...
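
Note: the checks kubeadm recommends in the failure message above can all be run inside the VM over minikube ssh. A sketch, using the commands and the cri-dockerd socket path exactly as they appear in this log (CONTAINERID is a placeholder for an ID returned by crictl ps):

  minikube ssh    # opens a shell inside the VM; run the rest there
  systemctl status kubelet
  sudo journalctl -xeu kubelet | tail -n 100
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID

In this run the container listing comes back empty (see the crictl probes below), which points at the kubelet itself rather than a crashed control-plane container.
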
I0713 23:19:21.479457 12921 kubeadm.go:397] StartCluster complete in 8m6.635717536s I0713 23:19:21.481480 12921 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]} I0713 23:19:21.481919 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0713 23:19:21.523617 12921 cri.go:87] found id: "" I0713 23:19:21.523821 12921 logs.go:274] 0 containers: [] W0713 23:19:21.523827 12921 logs.go:276] No container was found matching "kube-apiserver" I0713 23:19:21.523831 12921 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]} I0713 23:19:21.523888 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd I0713 23:19:21.554280 12921 cri.go:87] found id: "" I0713 23:19:21.554296 12921 logs.go:274] 0 containers: [] W0713 23:19:21.554305 12921 logs.go:276] No container was found matching "etcd" I0713 23:19:21.554312 12921 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]} I0713 23:19:21.554380 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns I0713 23:19:21.594432 12921 cri.go:87] found id: "" I0713 23:19:21.594440 12921 logs.go:274] 0 containers: [] W0713 23:19:21.594444 12921 logs.go:276] No container was found matching "coredns" I0713 23:19:21.594448 12921 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]} I0713 23:19:21.594508 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0713 23:19:21.626803 12921 cri.go:87] found id: "" I0713 23:19:21.626812 12921 logs.go:274] 0 containers: [] W0713 23:19:21.626815 12921 logs.go:276] No container was found matching "kube-scheduler" I0713 23:19:21.626819 12921 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]} I0713 23:19:21.626881 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy I0713 23:19:21.658247 12921 cri.go:87] found id: "" I0713 23:19:21.658271 12921 logs.go:274] 0 containers: [] W0713 23:19:21.658294 12921 logs.go:276] No container was found matching "kube-proxy" I0713 23:19:21.658298 12921 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]} I0713 23:19:21.658370 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0713 23:19:21.689730 12921 cri.go:87] found id: "" I0713 23:19:21.689738 12921 logs.go:274] 0 containers: [] W0713 23:19:21.689742 12921 logs.go:276] No container was found matching "kubernetes-dashboard" I0713 23:19:21.689746 12921 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]} I0713 23:19:21.689804 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0713 23:19:21.719500 12921 cri.go:87] found id: "" I0713 23:19:21.719512 12921 logs.go:274] 0 containers: [] W0713 23:19:21.719517 12921 logs.go:276] No container was found matching "storage-provisioner" I0713 23:19:21.719520 12921 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]} I0713 23:19:21.719583 12921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0713 23:19:21.752673 12921 cri.go:87] found id: "" I0713 23:19:21.752681 12921 logs.go:274] 0 containers: [] W0713 23:19:21.752685 12921 logs.go:276] No container was found matching "kube-controller-manager" I0713 23:19:21.752690 12921 logs.go:123] Gathering logs for kubelet ... 
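
Note: the empty `found id: ""` results above can be reproduced by hand; minikube asks cri-dockerd for each control-plane container by name. A minimal loop from the host, assuming the default profile from this log:

  # probe each expected control-plane container by name, as minikube does
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
    echo "== $c =="
    minikube ssh -- sudo crictl ps -a --quiet --name=$c
  done

Empty output for every component means no static pod container was ever created, consistent with the kubelet failing before the manifests in /etc/kubernetes/manifests were acted on.
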
I0713 23:19:21.752697 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0713 23:19:21.812470 12921 logs.go:123] Gathering logs for dmesg ... I0713 23:19:21.812482 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0713 23:19:21.826551 12921 logs.go:123] Gathering logs for describe nodes ... I0713 23:19:21.826563 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0713 23:19:22.014570 12921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0713 23:19:22.014579 12921 logs.go:123] Gathering logs for Docker ... I0713 23:19:22.014586 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0713 23:19:22.078709 12921 logs.go:123] Gathering logs for container status ... I0713 23:19:22.078720 12921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" W0713 23:19:22.119244 12921 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment 
file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID' stderr: W0713 20:15:20.056842 2693 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0713 23:19:22.119260 12921 out.go:239] W0713 23:19:22.119421 12921 out.go:239] 💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 (stdout and stderr identical to the "Error starting cluster" output logged at 23:19:22.119244 above) W0713 23:19:22.119517 12921 out.go:239] W0713 23:19:22.120887 12921 out.go:239]
╭─────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                         │
│    😿 If the above advice does not help, please let us know:                            │
│    👉 /~https://github.com/kubernetes/minikube/issues/new/choose                          │
│                                                                                         │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.  │
│                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────╯
I0713 23:19:22.205248 12921 out.go:177] W0713 23:19:22.248179 12921 out.go:239] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1 (stdout and stderr identical to the "Error starting cluster" output above) W0713 23:19:22.248429 12921 out.go:239] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start W0713 23:19:22.248534 12921 out.go:239] 🍿 Related issue: /~https://github.com/kubernetes/minikube/issues/4172 I0713 23:19:22.312198 12921 out.go:177] * * ==> Docker <== * -- Journal begins at Wed 2022-07-13 20:10:54 UTC, ends at Wed 2022-07-13 20:40:27 UTC.
-- Jul 13 20:22:25 minikube dockerd[1019]: time="2022-07-13T20:22:25.660815463Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:41868->192.168.64.1:53: read: connection refused" Jul 13 20:22:25 minikube dockerd[1019]: time="2022-07-13T20:22:25.661314069Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:41868->192.168.64.1:53: read: connection refused" Jul 13 20:22:25 minikube dockerd[1019]: time="2022-07-13T20:22:25.663437531Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:41868->192.168.64.1:53: read: connection refused" Jul 13 20:22:26 minikube dockerd[1019]: time="2022-07-13T20:22:26.655218561Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:37532->192.168.64.1:53: read: connection refused" Jul 13 20:22:26 minikube dockerd[1019]: time="2022-07-13T20:22:26.655292184Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:37532->192.168.64.1:53: read: connection refused" Jul 13 20:22:26 minikube dockerd[1019]: time="2022-07-13T20:22:26.656749279Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:37532->192.168.64.1:53: read: connection refused" Jul 13 20:22:31 minikube dockerd[1019]: time="2022-07-13T20:22:31.650836317Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:41938->192.168.64.1:53: read: connection refused" Jul 13 20:22:31 minikube dockerd[1019]: time="2022-07-13T20:22:31.650906943Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:41938->192.168.64.1:53: read: connection refused" Jul 13 20:22:31 minikube dockerd[1019]: time="2022-07-13T20:22:31.652457669Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:41938->192.168.64.1:53: read: connection refused" Jul 13 20:22:33 minikube dockerd[1019]: time="2022-07-13T20:22:33.650101612Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:59898->192.168.64.1:53: read: connection refused" Jul 13 20:22:33 minikube dockerd[1019]: time="2022-07-13T20:22:33.650145966Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:59898->192.168.64.1:53: read: connection refused" Jul 13 20:22:33 minikube dockerd[1019]: time="2022-07-13T20:22:33.651607925Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:59898->192.168.64.1:53: read: connection refused" Jul 13 20:22:40 minikube dockerd[1019]: 
time="2022-07-13T20:22:40.655711299Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:33832->192.168.64.1:53: read: connection refused" Jul 13 20:22:40 minikube dockerd[1019]: time="2022-07-13T20:22:40.655754819Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:33832->192.168.64.1:53: read: connection refused" Jul 13 20:22:40 minikube dockerd[1019]: time="2022-07-13T20:22:40.657605078Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:33832->192.168.64.1:53: read: connection refused" Jul 13 20:22:40 minikube dockerd[1019]: time="2022-07-13T20:22:40.658549888Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:56774->192.168.64.1:53: read: connection refused" Jul 13 20:22:40 minikube dockerd[1019]: time="2022-07-13T20:22:40.658589865Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:56774->192.168.64.1:53: read: connection refused" Jul 13 20:22:40 minikube dockerd[1019]: time="2022-07-13T20:22:40.659717690Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:56774->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.721409590Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:40296->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.721453590Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:40296->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.768518290Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:40296->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.801825490Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:36890->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.802698890Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:36890->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.825383490Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:36890->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.887772990Z" level=warning msg="Error getting v2 
registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:46736->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.887890390Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:46736->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.919142590Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:55081->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.919194590Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:55081->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.941344390Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:55081->192.168.64.1:53: read: connection refused" Jul 13 20:39:49 minikube dockerd[1019]: time="2022-07-13T20:39:48.944262290Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:46736->192.168.64.1:53: read: connection refused" Jul 13 20:39:59 minikube dockerd[1019]: time="2022-07-13T20:39:59.666247490Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:38161->192.168.64.1:53: read: connection refused" Jul 13 20:39:59 minikube dockerd[1019]: time="2022-07-13T20:39:59.666325690Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:38161->192.168.64.1:53: read: connection refused" Jul 13 20:39:59 minikube dockerd[1019]: time="2022-07-13T20:39:59.670292090Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:38161->192.168.64.1:53: read: connection refused" Jul 13 20:40:01 minikube dockerd[1019]: time="2022-07-13T20:40:01.659522990Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:52679->192.168.64.1:53: read: connection refused" Jul 13 20:40:01 minikube dockerd[1019]: time="2022-07-13T20:40:01.661100690Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:52679->192.168.64.1:53: read: connection refused" Jul 13 20:40:01 minikube dockerd[1019]: time="2022-07-13T20:40:01.667374390Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:52679->192.168.64.1:53: read: connection refused" Jul 13 20:40:02 minikube dockerd[1019]: time="2022-07-13T20:40:02.686357090Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 
192.168.64.1:53: read udp 192.168.64.4:59500->192.168.64.1:53: read: connection refused" Jul 13 20:40:02 minikube dockerd[1019]: time="2022-07-13T20:40:02.687165090Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:59500->192.168.64.1:53: read: connection refused" Jul 13 20:40:02 minikube dockerd[1019]: time="2022-07-13T20:40:02.690615190Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:59500->192.168.64.1:53: read: connection refused" Jul 13 20:40:02 minikube dockerd[1019]: time="2022-07-13T20:40:02.698447690Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:46257->192.168.64.1:53: read: connection refused" Jul 13 20:40:02 minikube dockerd[1019]: time="2022-07-13T20:40:02.698799590Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:46257->192.168.64.1:53: read: connection refused" Jul 13 20:40:02 minikube dockerd[1019]: time="2022-07-13T20:40:02.702354690Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:46257->192.168.64.1:53: read: connection refused" Jul 13 20:40:12 minikube dockerd[1019]: time="2022-07-13T20:40:12.663651690Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:37406->192.168.64.1:53: read: connection refused" Jul 13 20:40:12 minikube dockerd[1019]: time="2022-07-13T20:40:12.664381190Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:37406->192.168.64.1:53: read: connection refused" Jul 13 20:40:12 minikube dockerd[1019]: time="2022-07-13T20:40:12.666958690Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:37406->192.168.64.1:53: read: connection refused" Jul 13 20:40:14 minikube dockerd[1019]: time="2022-07-13T20:40:14.660901890Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:45240->192.168.64.1:53: read: connection refused" Jul 13 20:40:14 minikube dockerd[1019]: time="2022-07-13T20:40:14.661681690Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:45240->192.168.64.1:53: read: connection refused" Jul 13 20:40:14 minikube dockerd[1019]: time="2022-07-13T20:40:14.664159890Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:45240->192.168.64.1:53: read: connection refused" Jul 13 20:40:15 minikube dockerd[1019]: time="2022-07-13T20:40:15.658460190Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:42617->192.168.64.1:53: read: 
connection refused" Jul 13 20:40:15 minikube dockerd[1019]: time="2022-07-13T20:40:15.658668290Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:42617->192.168.64.1:53: read: connection refused" Jul 13 20:40:15 minikube dockerd[1019]: time="2022-07-13T20:40:15.661769190Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:42617->192.168.64.1:53: read: connection refused" Jul 13 20:40:17 minikube dockerd[1019]: time="2022-07-13T20:40:17.668301390Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:45063->192.168.64.1:53: read: connection refused" Jul 13 20:40:17 minikube dockerd[1019]: time="2022-07-13T20:40:17.669163190Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:45063->192.168.64.1:53: read: connection refused" Jul 13 20:40:17 minikube dockerd[1019]: time="2022-07-13T20:40:17.672660290Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:45063->192.168.64.1:53: read: connection refused" Jul 13 20:40:25 minikube dockerd[1019]: time="2022-07-13T20:40:25.656225690Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused" Jul 13 20:40:25 minikube dockerd[1019]: time="2022-07-13T20:40:25.656314790Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused" Jul 13 20:40:25 minikube dockerd[1019]: time="2022-07-13T20:40:25.659081490Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused" Jul 13 20:40:26 minikube dockerd[1019]: time="2022-07-13T20:40:26.659173190Z" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused" Jul 13 20:40:26 minikube dockerd[1019]: time="2022-07-13T20:40:26.659270590Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused" Jul 13 20:40:26 minikube dockerd[1019]: time="2022-07-13T20:40:26.662340190Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * * ==> describe nodes <== * * ==> dmesg <== * [Jul13 20:10] ERROR: earlyprintk= earlyser already used [ +0.000000] You have booted with nomodeset. 
*
* ==> container status <==
*
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

*
* ==> describe nodes <==
*

*
* ==> dmesg <==
*
[Jul13 20:10] ERROR: earlyprintk= earlyser already used
[ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.035181] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
[ +7.033793] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[ +0.000019] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +0.017872] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.797921] systemd-fstab-generator[125]: Ignoring "noauto" for root device
[ +0.066104] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.318981] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +3.572507] systemd-fstab-generator[547]: Ignoring "noauto" for root device
[ +0.150658] systemd-fstab-generator[560]: Ignoring "noauto" for root device
[Jul13 20:11] systemd-fstab-generator[779]: Ignoring "noauto" for root device
[ +1.797369] kauditd_printk_skb: 16 callbacks suppressed
[ +0.277176] systemd-fstab-generator[931]: Ignoring "noauto" for root device
[ +0.379845] systemd-fstab-generator[988]: Ignoring "noauto" for root device
[ +0.141686] systemd-fstab-generator[999]: Ignoring "noauto" for root device
[ +0.140887] systemd-fstab-generator[1010]: Ignoring "noauto" for root device
[ +1.695567] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
[ +0.196957] systemd-fstab-generator[1171]: Ignoring "noauto" for root device
[ +5.333072] systemd-fstab-generator[1378]: Ignoring "noauto" for root device
[ +1.204491] kauditd_printk_skb: 68 callbacks suppressed
[Jul13 20:15] systemd-fstab-generator[2774]: Ignoring "noauto" for root device
[Jul13 20:39] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large:
[ +0.000058] clocksource: 'hpet' wd_now: acbd3b52 wd_last: abd76324 mask: ffffffff
[ +0.000052] clocksource: 'tsc' cs_now: 75eab2b27cda cs_last: 73ade58db5d6 mask: ffffffffffffffff
[ +0.006392] systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
[ +0.000326] systemd[1]: systemd-resolved.service: Watchdog timeout (limit 3min)!
[ +0.000338] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[ +0.000305] kauditd_printk_skb: 7 callbacks suppressed
[ +0.004921] systemd[1]: systemd-logind.service: Watchdog timeout (limit 3min)!
[ +0.000876] systemd[1]: systemd-journald.service: Main process exited, code=killed, status=6/ABRT
[ +0.000044] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[ +0.005898] clocksource: Checking clocksource tsc synchronization from CPU 0.
[ +0.003860] systemd[1]: systemd-udevd.service: Main process exited, code=killed, status=6/ABRT
[ +0.000197] systemd[1]: systemd-udevd.service: Failed with result 'watchdog'.
[ +0.015577] systemd[1]: systemd-resolved.service: Main process exited, code=killed, status=6/ABRT
[ +0.000159] systemd[1]: systemd-resolved.service: Failed with result 'watchdog'.
[ +0.007130] systemd[1]: systemd-logind.service: Main process exited, code=killed, status=6/ABRT
[ +0.000194] systemd[1]: systemd-logind.service: Failed with result 'watchdog'.
[ +0.035166] systemd[1]: systemd-journal-flush.service: Failed with result 'exit-code'.
[ +1.357113] systemd-journald[4962]: File /run/log/journal/c534ab67d16b45f99ed46596451609d6/system.journal corrupted or uncleanly shut down, renaming and replacing.
[ +2.424146] hrtimer: interrupt took 4252100 ns

*
* ==> kernel <==
*
20:40:28 up 29 min, 0 users, load average: 0.74, 0.22, 0.08
Linux minikube 5.10.57 #1 SMP Thu Jun 16 23:36:20 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
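Separately, the dmesg tail shows the guest marking its 'tsc' clocksource unstable at 20:39, after which the systemd watchdog killed journald, udevd, resolved and logind. That pattern usually points at the VM being paused or starved (for example by host sleep), not at Kubernetes itself. Which clocksource the guest is left on can be read from sysfs (a sketch; the path is standard Linux, not minikube-specific):

  $ minikube ssh -- cat /sys/devices/system/clocksource/clocksource0/current_clocksource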
\"minikube\" not found" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.535052 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.635371 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.660487 2780 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.661115 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused" pod="kube-system/kube-controller-manager-minikube" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.661355 2780 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused" pod="kube-system/kube-controller-manager-minikube" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.664262 2780 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-minikube_kube-system(852f03e6fe9ac86ddd174fb038c47d74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-minikube_kube-system(852f03e6fe9ac86ddd174fb038c47d74)\\\": rpc error: code = Unknown desc = failed pulling image \\\"k8s.gcr.io/pause:3.6\\\": Error response from daemon: Get \\\"https://k8s.gcr.io/v2/\\\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:35773->192.168.64.1:53: read: connection refused\"" pod="kube-system/kube-controller-manager-minikube" podUID=852f03e6fe9ac86ddd174fb038c47d74 Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.735899 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.836453 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:25 minikube kubelet[2780]: E0713 20:40:25.937698 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.039398 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.140232 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.241767 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.342286 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.443724 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: W0713 20:40:26.457293 2780 reflector.go:324] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.4:8443: connect: connection refused Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.457468 2780 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.4:8443: connect: connection refused Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.544579 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.645363 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.663134 2780 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.663208 2780 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused" pod="kube-system/kube-apiserver-minikube" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.663265 2780 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused" pod="kube-system/kube-apiserver-minikube" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.663347 2780 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-minikube_kube-system(2c8ea627c99027911a2869906fb60543)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-minikube_kube-system(2c8ea627c99027911a2869906fb60543)\\\": rpc error: code = Unknown desc = failed pulling image \\\"k8s.gcr.io/pause:3.6\\\": Error response from daemon: Get \\\"https://k8s.gcr.io/v2/\\\": dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.4:43401->192.168.64.1:53: read: connection refused\"" pod="kube-system/kube-apiserver-minikube" podUID=2c8ea627c99027911a2869906fb60543 Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.746142 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.846790 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:26 minikube kubelet[2780]: E0713 20:40:26.947347 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.048050 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.148218 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 
20:40:27 minikube kubelet[2780]: E0713 20:40:27.248645 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.350360 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.451508 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.459426 2780 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.64.4:8443: connect: connection refused Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.552515 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.653334 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.754259 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.854588 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.955461 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:27 minikube kubelet[2780]: E0713 20:40:27.962805 2780 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"minikube\" not found" Jul 13 20:40:28 minikube kubelet[2780]: E0713 20:40:28.055957 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:28 minikube kubelet[2780]: E0713 20:40:28.096422 2780 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.17017cc4d4eb3048", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:time.Date(2022, time.July, 13, 20, 15, 22, 676670536, time.Local), LastTimestamp:time.Date(2022, time.July, 13, 20, 15, 22, 676670536, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.64.4:8443: connect: connection refused'(may retry after sleeping) Jul 13 20:40:28 minikube kubelet[2780]: E0713 20:40:28.156386 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:28 minikube kubelet[2780]: E0713 20:40:28.257351 
2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:28 minikube kubelet[2780]: E0713 20:40:28.358260 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found" Jul 13 20:40:28 minikube kubelet[2780]: E0713 20:40:28.458664 2780 kubelet.go:2419] "Error getting node" err="node \"minikube\" not found"
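Everything in the kubelet journal above is downstream of two linked blockers: the sandbox image k8s.gcr.io/pause:3.6 cannot be pulled because of the DNS refusal shown in the dockerd section, so the kube-apiserver pod never starts, and every call to https://control-plane.minikube.internal:8443 (node lookups, CSRs, informer lists, event writes) then fails with "connection refused". Once the host answers DNS on 192.168.64.1:53 again (for example after disconnecting a VPN), the quickest check is a clean retry; the flags below are placeholders, substitute whatever the failing start used:

  $ minikube delete
  $ minikube start --driver hyperkit

If the registry still cannot be reached from inside the VM but the macOS host itself resolves it fine, pre-seeding the pause image through minikube's long-standing cache subcommand is another option:

  $ minikube cache add k8s.gcr.io/pause:3.6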