*
* ==> Audit <==
* |---------|--------------------------------|----------|--------------|---------|---------------------|---------------------|
| Command |              Args              | Profile  |     User     | Version |     Start Time      |      End Time       |
|---------|--------------------------------|----------|--------------|---------|---------------------|---------------------|
| start   | --driver=qemu2                 | minikube | caerulescens | v1.32.0 | 31 Jan 24 13:57 EST | 31 Jan 24 13:58 EST |
|         | --network=builtin --nodes=1    |          |              |         |                     |                     |
|         | --alsologtostderr              |          |              |         |                     |                     |
|---------|--------------------------------|----------|--------------|---------|---------------------|---------------------|

*
* ==> Last Start <==
* Log file created at: 2024/01/31 13:57:02
Running on machine: hpc
Binary: Built with gc go1.21.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0131 13:57:02.709999  235348 out.go:296] Setting OutFile to fd 1 ...
I0131 13:57:02.710196  235348 out.go:348] isatty.IsTerminal(1) = true
I0131 13:57:02.710209  235348 out.go:309] Setting ErrFile to fd 2...
I0131 13:57:02.710224  235348 out.go:348] isatty.IsTerminal(2) = true
I0131 13:57:02.710513  235348 root.go:338] Updating PATH: /home/caerulescens/.minikube/bin
W0131 13:57:02.710681  235348 root.go:314] Error reading config file at /home/caerulescens/.minikube/config/config.json: open /home/caerulescens/.minikube/config/config.json: no such file or directory
I0131 13:57:02.711295  235348 out.go:303] Setting JSON to false
I0131 13:57:02.724030  235348 start.go:128] hostinfo: {"hostname":"hpc","uptime":22258,"bootTime":1706705165,"procs":540,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"12.4","kernelVersion":"6.1.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"43f27311-08f8-4675-bc7c-07b3d71ac9a3"}
I0131 13:57:02.724137  235348 start.go:138] virtualization: kvm host
I0131 13:57:02.727769  235348 out.go:177] 😄 minikube v1.32.0 on Debian 12.4
W0131 13:57:02.730978  235348 preload.go:295] Failed to list preload files: open /home/caerulescens/.minikube/cache/preloaded-tarball: no such file or directory
I0131 13:57:02.731026  235348 notify.go:220] Checking for updates...
I0131 13:57:02.731217  235348 driver.go:378] Setting default libvirt URI to qemu:///system
I0131 13:57:02.734447  235348 out.go:177] ✨ Using the qemu2 driver based on user configuration
I0131 13:57:02.740103  235348 start.go:298] selected driver: qemu2
I0131 13:57:02.740131  235348 start.go:902] validating driver "qemu2" against <nil>
I0131 13:57:02.740150  235348 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0131 13:57:02.741630  235348 start_flags.go:309] no existing cluster config was found, will generate one from the flags
W0131 13:57:02.741713  235348 out.go:239] ❗ You are using the QEMU driver without a dedicated network, which doesn't support `minikube service` & `minikube tunnel` commands.
I0131 13:57:02.744414  235348 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=64144MB, container=0MB
I0131 13:57:02.744609  235348 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
I0131 13:57:02.744640  235348 cni.go:84] Creating CNI manager for ""
I0131 13:57:02.744660  235348 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0131 13:57:02.744673  235348 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0131 13:57:02.744694  235348 start_flags.go:323] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/caerulescens:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0131 13:57:02.744965  235348 iso.go:125] acquiring lock: {Name:mk1740523c59e03d8c54140cae0a0a85e92a5211 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0131 13:57:02.751019  235348 out.go:177] 💿 Downloading VM boot image ...
I0131 13:57:02.753870  235348 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso.sha256 -> /home/caerulescens/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
I0131 13:57:06.764505  235348 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0131 13:57:06.767582  235348 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0131 13:57:06.823614  235348 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
I0131 13:57:06.823638  235348 cache.go:56] Caching tarball of preloaded images
I0131 13:57:06.823749  235348 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0131 13:57:06.826993  235348 out.go:177] 💾 Downloading Kubernetes v1.28.3 preload ...
I0131 13:57:06.832916  235348 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
I0131 13:57:06.897295  235348 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /home/caerulescens/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
I0131 13:57:14.520116  235348 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
I0131 13:57:14.520209  235348 preload.go:256] verifying checksum of /home/caerulescens/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
I0131 13:57:15.135848  235348 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I0131 13:57:15.136147  235348 profile.go:148] Saving config to /home/caerulescens/.minikube/profiles/minikube/config.json ...
I0131 13:57:15.136172  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/config.json: {Name:mkb4c835d3ef9a84824c464ad333bc4b49016801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:57:15.136318  235348 start.go:365] acquiring machines lock for minikube: {Name:mk730b7b7d5d10067abf794b343f53c46eaa903c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0131 13:57:15.136343  235348 start.go:369] acquired machines lock for "minikube" in 12.844µs
I0131 13:57:15.136361  235348 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/caerulescens:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0131 13:57:15.136451  235348 start.go:125] createHost starting for "" (driver="qemu2")
I0131 13:57:15.156299  235348 out.go:204] 🔥 Creating qemu2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0131 13:57:15.156462  235348 start.go:159] libmachine.API.Create for "minikube" (driver="qemu2")
I0131 13:57:15.156480  235348 client.go:168] LocalClient.Create starting
I0131 13:57:15.156570  235348 main.go:141] libmachine: Creating CA: /home/caerulescens/.minikube/certs/ca.pem
I0131 13:57:15.295284  235348 main.go:141] libmachine: Creating client certificate: /home/caerulescens/.minikube/certs/cert.pem
I0131 13:57:15.520453  235348 main.go:141] libmachine: port range: 0 -> 65535
I0131 13:57:15.520641  235348 main.go:141] libmachine: Downloading /home/caerulescens/.minikube/cache/boot2docker.iso from file:///home/caerulescens/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
I0131 13:57:15.734017  235348 main.go:141] libmachine: Creating SSH key...
I0131 13:57:15.887622  235348 main.go:141] libmachine: Creating Disk image...
I0131 13:57:15.887639  235348 main.go:141] libmachine: Creating 20000 MB hard disk image...
I0131 13:57:15.887751  235348 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /home/caerulescens/.minikube/machines/minikube/disk.qcow2.raw /home/caerulescens/.minikube/machines/minikube/disk.qcow2
I0131 13:57:15.922956  235348 main.go:141] libmachine: STDOUT:
I0131 13:57:15.922975  235348 main.go:141] libmachine: STDERR:
I0131 13:57:15.923036  235348 main.go:141] libmachine: executing: qemu-img resize /home/caerulescens/.minikube/machines/minikube/disk.qcow2 +20000M
I0131 13:57:15.939675  235348 main.go:141] libmachine: STDOUT: Image resized.
I0131 13:57:15.939699  235348 main.go:141] libmachine: STDERR:
I0131 13:57:15.939721  235348 main.go:141] libmachine: DONE writing to /home/caerulescens/.minikube/machines/minikube/disk.qcow2.raw and /home/caerulescens/.minikube/machines/minikube/disk.qcow2
I0131 13:57:15.939746  235348 main.go:141] libmachine: Starting QEMU VM...
I0131 13:57:15.939859  235348 main.go:141] libmachine: executing: qemu-system-x86_64 -cpu max -display none -accel kvm -m 6000 -smp 2 -boot d -cdrom /home/caerulescens/.minikube/machines/minikube/boot2docker.iso -qmp unix:/home/caerulescens/.minikube/machines/minikube/monitor,server,nowait -pidfile /home/caerulescens/.minikube/machines/minikube/qemu.pid -nic user,model=virtio,hostfwd=tcp::41275-:22,hostfwd=tcp::42115-:2376,hostname=minikube -daemonize /home/caerulescens/.minikube/machines/minikube/disk.qcow2
I0131 13:57:16.030043  235348 main.go:141] libmachine: STDOUT:
I0131 13:57:16.030070  235348 main.go:141] libmachine: STDERR:
I0131 13:57:16.030084  235348 main.go:141] libmachine: Waiting for VM to start (ssh -p 41275 docker@127.0.0.1)...
I0131 13:57:48.299066  235348 machine.go:88] provisioning docker machine ...
I0131 13:57:48.299105  235348 buildroot.go:166] provisioning hostname "minikube"
I0131 13:57:48.299180  235348 main.go:141] libmachine: Using SSH client type: native
I0131 13:57:48.299629  235348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil> [] 0s} localhost 41275 <nil> <nil>}
I0131 13:57:48.299648  235348 main.go:141] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0131 13:57:48.400627  235348 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0131 13:57:48.400716  235348 main.go:141] libmachine: Using SSH client type: native
I0131 13:57:48.401137  235348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil> [] 0s} localhost 41275 <nil> <nil>}
I0131 13:57:48.401171  235348 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0131 13:57:48.490271  235348 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0131 13:57:48.490303  235348 buildroot.go:172] set auth options {CertDir:/home/caerulescens/.minikube CaCertPath:/home/caerulescens/.minikube/certs/ca.pem CaPrivateKeyPath:/home/caerulescens/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/caerulescens/.minikube/machines/server.pem ServerKeyPath:/home/caerulescens/.minikube/machines/server-key.pem ClientKeyPath:/home/caerulescens/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/caerulescens/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/caerulescens/.minikube}
I0131 13:57:48.490367  235348 buildroot.go:174] setting up certificates
I0131 13:57:48.490389  235348 provision.go:83] configureAuth start
I0131 13:57:48.490407  235348 provision.go:138] copyHostCerts
I0131 13:57:48.490501  235348 exec_runner.go:151] cp: /home/caerulescens/.minikube/certs/ca.pem --> /home/caerulescens/.minikube/ca.pem (1094 bytes)
I0131 13:57:48.490707  235348 exec_runner.go:151] cp: /home/caerulescens/.minikube/certs/cert.pem --> /home/caerulescens/.minikube/cert.pem (1139 bytes)
I0131 13:57:48.490873  235348 exec_runner.go:151] cp: /home/caerulescens/.minikube/certs/key.pem --> /home/caerulescens/.minikube/key.pem (1675 bytes)
I0131 13:57:48.491357  235348 provision.go:112] generating server cert: /home/caerulescens/.minikube/machines/server.pem ca-key=/home/caerulescens/.minikube/certs/ca.pem private-key=/home/caerulescens/.minikube/certs/ca-key.pem org=caerulescens.minikube san=[127.0.0.1 localhost localhost 127.0.0.1 minikube minikube]
I0131 13:57:48.805100  235348 provision.go:172] copyRemoteCerts
I0131 13:57:48.805165  235348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0131 13:57:48.805182  235348 sshutil.go:53] new ssh client: &{IP:localhost Port:41275 SSHKeyPath:/home/caerulescens/.minikube/machines/minikube/id_rsa Username:docker}
I0131 13:57:48.855039  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1094 bytes)
I0131 13:57:48.870422  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0131 13:57:48.885977  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0131 13:57:48.901189  235348 provision.go:86] duration metric: configureAuth took 410.781348ms
I0131 13:57:48.901211  235348 buildroot.go:189] setting minikube options for container-runtime
I0131 13:57:48.901428  235348 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0131 13:57:48.901503  235348 main.go:141] libmachine: Using SSH client type: native
I0131 13:57:48.901914  235348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil> [] 0s} localhost 41275 <nil> <nil>}
I0131 13:57:48.901932  235348 main.go:141] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0131 13:57:48.992823  235348 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0131 13:57:48.992855  235348 buildroot.go:70] root file system type: tmpfs
I0131 13:57:48.993018  235348 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0131 13:57:48.993110  235348 main.go:141] libmachine: Using SSH client type: native
I0131 13:57:48.993579  235348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil> [] 0s} localhost 41275 <nil> <nil>}
I0131 13:57:48.993697  235348 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0131 13:57:49.097505  235348 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0131 13:57:49.097618  235348 main.go:141] libmachine: Using SSH client type: native
I0131 13:57:49.098048  235348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil> [] 0s} localhost 41275 <nil> <nil>}
I0131 13:57:49.098075  235348 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0131 13:57:49.887803  235348 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0131 13:57:49.887839  235348 machine.go:91] provisioned docker machine in 1.588746756s
I0131 13:57:49.887854  235348 client.go:171] LocalClient.Create took 34.731364764s
I0131 13:57:49.887870  235348 start.go:167] duration metric: libmachine.API.Create for "minikube" took 34.731406552s
I0131 13:57:49.887880  235348 start.go:300] post-start starting for "minikube" (driver="qemu2")
I0131 13:57:49.887892  235348 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0131 13:57:49.887981  235348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0131 13:57:49.888002  235348 sshutil.go:53] new ssh client: &{IP:localhost Port:41275 SSHKeyPath:/home/caerulescens/.minikube/machines/minikube/id_rsa Username:docker}
I0131 13:57:49.939139  235348 ssh_runner.go:195] Run: cat /etc/os-release
I0131 13:57:49.942578  235348 info.go:137] Remote host: Buildroot 2021.02.12
I0131 13:57:49.942609  235348 filesync.go:126] Scanning /home/caerulescens/.minikube/addons for local assets ...
I0131 13:57:49.942702  235348 filesync.go:126] Scanning /home/caerulescens/.minikube/files for local assets ...
I0131 13:57:49.942737  235348 start.go:303] post-start completed in 54.849085ms
I0131 13:57:49.943159  235348 profile.go:148] Saving config to /home/caerulescens/.minikube/profiles/minikube/config.json ...
I0131 13:57:49.943328  235348 start.go:128] duration metric: createHost completed in 34.806865305s
I0131 13:57:49.943408  235348 main.go:141] libmachine: Using SSH client type: native
I0131 13:57:49.943836  235348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil> [] 0s} localhost 41275 <nil> <nil>}
I0131 13:57:49.943854  235348 main.go:141] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING)
I0131 13:57:50.033031  235348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706727470.028266359
I0131 13:57:50.033060  235348 fix.go:206] guest clock: 1706727470.028266359
I0131 13:57:50.033073  235348 fix.go:219] Guest: 2024-01-31 13:57:50.028266359 -0500 EST Remote: 2024-01-31 13:57:49.943345661 -0500 EST m=+47.287549519 (delta=84.920698ms)
I0131 13:57:50.033102  235348 fix.go:190] guest clock delta is within tolerance: 84.920698ms
I0131 13:57:50.033112  235348 start.go:83] releasing machines lock for "minikube", held for 34.896759551s
I0131 13:57:50.033343  235348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0131 13:57:50.033405  235348 sshutil.go:53] new ssh client: &{IP:localhost Port:41275 SSHKeyPath:/home/caerulescens/.minikube/machines/minikube/id_rsa Username:docker}
I0131 13:57:50.033507  235348 ssh_runner.go:195] Run: cat /version.json
I0131 13:57:50.033533  235348 sshutil.go:53] new ssh client: &{IP:localhost Port:41275 SSHKeyPath:/home/caerulescens/.minikube/machines/minikube/id_rsa Username:docker}
I0131 13:57:50.138867  235348 ssh_runner.go:195] Run: systemctl --version
I0131 13:57:50.143862  235348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0131 13:57:50.148228  235348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0131 13:57:50.148334  235348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name *bridge* -or -name *podman* \) -and -not -name *.mk_disabled \) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" \;
I0131 13:57:50.160766  235348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0131 13:57:50.160801  235348 start.go:472] detecting cgroup driver to use...
I0131 13:57:50.160931  235348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0131 13:57:50.176209  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0131 13:57:50.184885  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0131 13:57:50.193276  235348 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0131 13:57:50.193391  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0131 13:57:50.201681  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0131 13:57:50.209943  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0131 13:57:50.218397  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0131 13:57:50.226600  235348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0131 13:57:50.235305  235348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0131 13:57:50.242533  235348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0131 13:57:50.247588  235348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0131 13:57:50.252977  235348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0131 13:57:50.357637  235348 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0131 13:57:50.367992  235348 start.go:472] detecting cgroup driver to use...
I0131 13:57:50.368113  235348 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0131 13:57:50.377998  235348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0131 13:57:50.387211  235348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0131 13:57:50.401516  235348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0131 13:57:50.408732  235348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0131 13:57:50.416438  235348 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0131 13:57:50.477794  235348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0131 13:57:50.489008  235348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0131 13:57:50.504553  235348 ssh_runner.go:195] Run: which cri-dockerd
I0131 13:57:50.507917  235348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0131 13:57:50.515612  235348 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0131 13:57:50.529840  235348 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0131 13:57:50.633889  235348 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0131 13:57:50.713973  235348 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0131 13:57:50.714139  235348 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0131 13:57:50.724797  235348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0131 13:57:50.812451  235348 ssh_runner.go:195] Run: sudo systemctl restart docker
I0131 13:57:52.183874  235348 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.371377539s)
I0131 13:57:52.183972  235348 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0131 13:57:52.273069  235348 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0131 13:57:52.377118  235348 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0131 13:57:52.456402  235348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0131 13:57:52.558950  235348 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0131 13:57:52.568918  235348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0131 13:57:52.651301  235348 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0131 13:57:52.701585  235348 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0131 13:57:52.701659  235348 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0131 13:57:52.705321  235348 start.go:540] Will wait 60s for crictl version
I0131 13:57:52.705398  235348 ssh_runner.go:195] Run: which crictl
I0131 13:57:52.707974  235348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0131 13:57:52.750330  235348 start.go:556] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  24.0.7
RuntimeApiVersion:  v1
I0131 13:57:52.750431  235348 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0131 13:57:52.765663  235348 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0131 13:57:52.783041  235348 out.go:204] 🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
I0131 13:57:52.783171  235348 ssh_runner.go:195] Run: grep 10.0.2.2 host.minikube.internal$ /etc/hosts
I0131 13:57:52.785556  235348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0131 13:57:52.792254  235348 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0131 13:57:52.792334  235348 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0131 13:57:52.803175  235348 docker.go:671] Got preloaded images:
I0131 13:57:52.803201  235348 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
I0131 13:57:52.803268  235348 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0131 13:57:52.808254  235348 ssh_runner.go:195] Run: which lz4
I0131 13:57:52.810370  235348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0131 13:57:52.812337  235348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0131 13:57:52.812367  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
I0131 13:57:54.977429  235348 docker.go:635] Took 2.167132 seconds to copy over tarball
I0131 13:57:54.977523  235348 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0131 13:57:57.249206  235348 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.271635749s)
I0131 13:57:57.249260  235348 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0131 13:57:57.280725  235348 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0131 13:57:57.289708  235348 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
I0131 13:57:57.300215  235348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0131 13:57:57.383841  235348 ssh_runner.go:195] Run: sudo systemctl restart docker
I0131 13:57:59.859309  235348 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.475423638s)
I0131 13:57:59.859427  235348 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0131 13:57:59.877061  235348 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0131 13:57:59.877094  235348 cache_images.go:84] Images are preloaded, skipping loading
I0131 13:57:59.877178  235348 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0131 13:57:59.900687  235348 cni.go:84] Creating CNI manager for ""
I0131 13:57:59.900719  235348 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0131 13:57:59.900750  235348 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0131 13:57:59.900781  235348 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0131 13:57:59.900940  235348 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.2.15
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 10.0.2.15
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0131 13:57:59.901132  235348 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15

[Install]
 config: {KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0131 13:57:59.901253  235348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
I0131 13:57:59.907652  235348 binaries.go:44] Found k8s binaries, skipping transfer
I0131 13:57:59.907739  235348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0131 13:57:59.912934  235348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (366 bytes)
I0131 13:57:59.922434  235348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0131 13:57:59.931181  235348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2082 bytes)
I0131 13:57:59.940202  235348 ssh_runner.go:195] Run: grep 10.0.2.15 control-plane.minikube.internal$ /etc/hosts
I0131 13:57:59.942267  235348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0131 13:57:59.949147  235348 certs.go:56] Setting up /home/caerulescens/.minikube/profiles/minikube for IP: 10.0.2.15
I0131 13:57:59.949190  235348 certs.go:190] acquiring lock for shared ca certs: {Name:mk64b385ec688d7401f4709104c32dac3b04a23d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:57:59.949300  235348 certs.go:204] generating minikubeCA CA: /home/caerulescens/.minikube/ca.key
I0131 13:58:00.316024  235348 crypto.go:156] Writing cert to /home/caerulescens/.minikube/ca.crt ...
I0131 13:58:00.316050  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/ca.crt: {Name:mkea392895a8bb3bc0ee1e2c8902e4da1727c85f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.316190  235348 crypto.go:164] Writing key to /home/caerulescens/.minikube/ca.key ...
I0131 13:58:00.316199  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/ca.key: {Name:mk3596b46616e55e3ac9aa9af1007cbb4b1cad1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.316254  235348 certs.go:204] generating proxyClientCA CA: /home/caerulescens/.minikube/proxy-client-ca.key
I0131 13:58:00.377338  235348 crypto.go:156] Writing cert to /home/caerulescens/.minikube/proxy-client-ca.crt ...
I0131 13:58:00.377357  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/proxy-client-ca.crt: {Name:mkb6e88edd27b248d7348503c93f5f9ad6fbfe43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.377455  235348 crypto.go:164] Writing key to /home/caerulescens/.minikube/proxy-client-ca.key ...
I0131 13:58:00.377465  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/proxy-client-ca.key: {Name:mk1395edbb3c953f2ae8b6c386a19e7f024b8b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.377544  235348 certs.go:319] generating minikube-user signed cert: /home/caerulescens/.minikube/profiles/minikube/client.key
I0131 13:58:00.377557  235348 crypto.go:68] Generating cert /home/caerulescens/.minikube/profiles/minikube/client.crt with IP's: []
I0131 13:58:00.592582  235348 crypto.go:156] Writing cert to /home/caerulescens/.minikube/profiles/minikube/client.crt ...
I0131 13:58:00.592611  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/client.crt: {Name:mka910e33bfcf0286bb5219c275f2bc84005f8e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.592741  235348 crypto.go:164] Writing key to /home/caerulescens/.minikube/profiles/minikube/client.key ...
I0131 13:58:00.592752  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/client.key: {Name:mk934c30dfef619fa88fad0531fe4872d903d0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.592802  235348 certs.go:319] generating minikube signed cert: /home/caerulescens/.minikube/profiles/minikube/apiserver.key.49504c3e
I0131 13:58:00.592823  235348 crypto.go:68] Generating cert /home/caerulescens/.minikube/profiles/minikube/apiserver.crt.49504c3e with IP's: [10.0.2.15 10.96.0.1 127.0.0.1 10.0.0.1]
I0131 13:58:00.637228  235348 crypto.go:156] Writing cert to /home/caerulescens/.minikube/profiles/minikube/apiserver.crt.49504c3e ...
I0131 13:58:00.637248  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/apiserver.crt.49504c3e: {Name:mk9efce07b4d27f85d20bf0f4ae78c18c70e78c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.637565  235348 crypto.go:164] Writing key to /home/caerulescens/.minikube/profiles/minikube/apiserver.key.49504c3e ...
I0131 13:58:00.637577  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/apiserver.key.49504c3e: {Name:mk92bbe8d42e10892bf6f7ff921cc17b6aab0ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.637630  235348 certs.go:337] copying /home/caerulescens/.minikube/profiles/minikube/apiserver.crt.49504c3e -> /home/caerulescens/.minikube/profiles/minikube/apiserver.crt
I0131 13:58:00.637694  235348 certs.go:341] copying /home/caerulescens/.minikube/profiles/minikube/apiserver.key.49504c3e -> /home/caerulescens/.minikube/profiles/minikube/apiserver.key
I0131 13:58:00.637736  235348 certs.go:319] generating aggregator signed cert: /home/caerulescens/.minikube/profiles/minikube/proxy-client.key
I0131 13:58:00.637750  235348 crypto.go:68] Generating cert /home/caerulescens/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0131 13:58:00.773443  235348 crypto.go:156] Writing cert to /home/caerulescens/.minikube/profiles/minikube/proxy-client.crt ...
I0131 13:58:00.773463  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/proxy-client.crt: {Name:mk86a56137ce970ae22a6a1b3bad3a378cb36cc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.773584  235348 crypto.go:164] Writing key to /home/caerulescens/.minikube/profiles/minikube/proxy-client.key ...
I0131 13:58:00.773593  235348 lock.go:35] WriteFile acquiring /home/caerulescens/.minikube/profiles/minikube/proxy-client.key: {Name:mk498e30f07b2cd7bc0c33a762e376a56a700723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0131 13:58:00.773713  235348 certs.go:437] found cert: /home/caerulescens/.minikube/certs/home/caerulescens/.minikube/certs/ca-key.pem (1679 bytes)
I0131 13:58:00.773742  235348 certs.go:437] found cert: /home/caerulescens/.minikube/certs/home/caerulescens/.minikube/certs/ca.pem (1094 bytes)
I0131 13:58:00.773762  235348 certs.go:437] found cert: /home/caerulescens/.minikube/certs/home/caerulescens/.minikube/certs/cert.pem (1139 bytes)
I0131 13:58:00.773783  235348 certs.go:437] found cert: /home/caerulescens/.minikube/certs/home/caerulescens/.minikube/certs/key.pem (1675 bytes)
I0131 13:58:00.774529  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0131 13:58:00.798549  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0131 13:58:00.819250  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0131 13:58:00.839925  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0131 13:58:00.861161  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0131 13:58:00.881262  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0131 13:58:00.901427  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0131 13:58:00.921599  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0131 13:58:00.941986  235348 ssh_runner.go:362] scp /home/caerulescens/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0131 13:58:00.962084  235348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0131 13:58:00.976308  235348 ssh_runner.go:195] Run: openssl version
I0131 13:58:00.981011  235348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0131 13:58:00.989493  235348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0131 13:58:00.993437  235348 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 18:58 /usr/share/ca-certificates/minikubeCA.pem
I0131 13:58:00.993493  235348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0131 13:58:00.998405  235348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0131 13:58:01.004508  235348 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0131 13:58:01.006849  235348 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0131 13:58:01.006895  235348 kubeadm.go:404] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:35335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/caerulescens:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0131 13:58:01.007032  235348 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0131 13:58:01.018913  235348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0131 13:58:01.026925  235348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0131 13:58:01.034363  235348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0131 13:58:01.041893  235348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0131 13:58:01.041950  235348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0131 13:58:01.080802  235348 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
I0131 13:58:01.080901  235348 kubeadm.go:322] [preflight] Running pre-flight checks
I0131 13:58:01.169223  235348 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0131 13:58:01.169407  235348 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0131 13:58:01.169570  235348 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0131 13:58:01.365305  235348 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0131 13:58:01.369297  235348 out.go:204] ▪ Generating certificates and keys ...
I0131 13:58:01.369434  235348 kubeadm.go:322] [certs] Using existing ca certificate authority
I0131 13:58:01.369554  235348 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0131 13:58:01.532011  235348 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0131 13:58:01.723378  235348 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0131 13:58:02.168095  235348 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0131 13:58:02.295091  235348 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0131 13:58:02.474416  235348 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0131 13:58:02.474635  235348 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
I0131 13:58:02.853199  235348 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0131 13:58:02.853399  235348 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
I0131 13:58:03.057943  235348 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0131 13:58:03.267631  235348 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0131 13:58:03.437137  235348 kubeadm.go:322] [certs] Generating "sa" key and public key
I0131 13:58:03.437243  235348 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0131 13:58:03.570656  235348 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0131 13:58:03.646366  235348 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0131 13:58:03.884408  235348 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0131 13:58:04.124476  235348 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0131 13:58:04.126468  235348 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0131 13:58:04.129921  235348 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0131 13:58:04.146872  235348 out.go:204] ▪ Booting up control plane ...
I0131 13:58:04.147063 235348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver" I0131 13:58:04.147191 235348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0131 13:58:04.147304 235348 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler" I0131 13:58:04.147471 235348 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0131 13:58:04.147608 235348 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0131 13:58:04.147673 235348 kubeadm.go:322] [kubelet-start] Starting the kubelet I0131 13:58:04.263375 235348 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s I0131 13:58:10.269013 235348 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.005809 seconds I0131 13:58:10.269219 235348 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0131 13:58:10.295979 235348 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster I0131 13:58:10.839755 235348 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs I0131 13:58:10.840199 235348 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] I0131 13:58:11.366027 235348 kubeadm.go:322] [bootstrap-token] Using token: hizwfy.0rdkoxytwtebdc50 I0131 13:58:11.372044 235348 out.go:204] ▪ Configuring RBAC rules ... I0131 13:58:11.372228 235348 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0131 13:58:11.380754 235348 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes I0131 13:58:11.391304 235348 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0131 13:58:11.396158 235348 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0131 13:58:11.401164 235348 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0131 13:58:11.410726 235348 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0131 13:58:11.427110 235348 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0131 13:58:11.650298 235348 kubeadm.go:322] [addons] Applied essential addon: CoreDNS I0131 13:58:11.785396 235348 kubeadm.go:322] [addons] Applied essential addon: kube-proxy I0131 13:58:11.786058 235348 kubeadm.go:322] I0131 13:58:11.786153 235348 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully! 
I0131 13:58:11.786170 235348 kubeadm.go:322] I0131 13:58:11.786318 235348 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user: I0131 13:58:11.786335 235348 kubeadm.go:322] I0131 13:58:11.786381 235348 kubeadm.go:322] mkdir -p $HOME/.kube I0131 13:58:11.786487 235348 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0131 13:58:11.786584 235348 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config I0131 13:58:11.786602 235348 kubeadm.go:322] I0131 13:58:11.786700 235348 kubeadm.go:322] Alternatively, if you are the root user, you can run: I0131 13:58:11.786712 235348 kubeadm.go:322] I0131 13:58:11.786797 235348 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf I0131 13:58:11.786816 235348 kubeadm.go:322] I0131 13:58:11.786904 235348 kubeadm.go:322] You should now deploy a pod network to the cluster. I0131 13:58:11.787026 235348 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0131 13:58:11.787134 235348 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0131 13:58:11.787145 235348 kubeadm.go:322] I0131 13:58:11.787309 235348 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities I0131 13:58:11.787465 235348 kubeadm.go:322] and service account keys on each node and then running the following as root: I0131 13:58:11.787475 235348 kubeadm.go:322] I0131 13:58:11.787599 235348 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hizwfy.0rdkoxytwtebdc50 \ I0131 13:58:11.787776 235348 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:582c9fc9c53268a0bb9b2b580aba9df8d844401a9b7493e7681bdab671a89413 \ I0131 13:58:11.787820 235348 kubeadm.go:322] --control-plane I0131 13:58:11.787832 235348 kubeadm.go:322] I0131 13:58:11.787929 235348 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root: I0131 13:58:11.787942 235348 kubeadm.go:322] I0131 13:58:11.788028 235348 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hizwfy.0rdkoxytwtebdc50 \ I0131 13:58:11.788191 235348 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:582c9fc9c53268a0bb9b2b580aba9df8d844401a9b7493e7681bdab671a89413 I0131 13:58:11.788353 235348 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0131 13:58:11.788368 235348 cni.go:84] Creating CNI manager for "" I0131 13:58:11.788398 235348 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0131 13:58:11.791468 235348 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ... 
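The join commands printed above embed the bootstrap token hizwfy.0rdkoxytwtebdc50, and kubeadm bootstrap tokens expire after 24 hours by default, so a node joining later needs a fresh one. A minimal sketch, assuming kubeadm is present at minikube's usual binary path inside the VM:

    # Mint a new bootstrap token and print a ready-to-use join command
    minikube ssh -- sudo /var/lib/minikube/binaries/v1.28.3/kubeadm token create --print-join-command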
I0131 13:58:11.794404 235348 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d I0131 13:58:11.800212 235348 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes) I0131 13:58:11.814421 235348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0131 13:58:11.814491 235348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2024_01_31T13_58_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0131 13:58:11.814493 235348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0131 13:58:11.876569 235348 ops.go:34] apiserver oom_adj: -16 I0131 13:58:11.954482 235348 kubeadm.go:1081] duration metric: took 140.068916ms to wait for elevateKubeSystemPrivileges. I0131 13:58:11.954546 235348 host.go:66] Checking if "minikube" exists ... I0131 13:58:11.955744 235348 main.go:141] libmachine: Using SSH client type: external I0131 13:58:11.955795 235348 main.go:141] libmachine: Using SSH private key: /home/caerulescens/.minikube/machines/minikube/id_rsa (-rw-------) I0131 13:58:11.955873 235348 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /home/caerulescens/.minikube/machines/minikube/id_rsa -p 41275] /usr/bin/ssh } I0131 13:58:11.955929 235348 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /home/caerulescens/.minikube/machines/minikube/id_rsa -p 41275 -f -NTL 35335:localhost:8443 I0131 13:58:12.050842 235348 kubeadm.go:406] StartCluster complete in 11.043938253s I0131 13:58:12.050894 235348 settings.go:142] acquiring lock: {Name:mk8d27447c1c9e545903b1773b29083297adc301 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0131 13:58:12.050996 235348 settings.go:150] Updating kubeconfig: /home/caerulescens/.kube/config I0131 13:58:12.051994 235348 lock.go:35] WriteFile acquiring /home/caerulescens/.kube/config: {Name:mkd625f49e00c6b458a797492be28d65bff7f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0131 13:58:12.052241 235348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0131 13:58:12.052283 235348 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false 
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] I0131 13:58:12.052382 235348 addons.go:69] Setting storage-provisioner=true in profile "minikube" I0131 13:58:12.052411 235348 addons.go:231] Setting addon storage-provisioner=true in "minikube" I0131 13:58:12.052386 235348 addons.go:69] Setting default-storageclass=true in profile "minikube" I0131 13:58:12.052475 235348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0131 13:58:12.052481 235348 host.go:66] Checking if "minikube" exists ... I0131 13:58:12.052576 235348 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.3 I0131 13:58:12.057413 235348 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0131 13:58:12.054594 235348 addons.go:231] Setting addon default-storageclass=true in "minikube" I0131 13:58:12.057471 235348 host.go:66] Checking if "minikube" exists ... I0131 13:58:12.060833 235348 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml I0131 13:58:12.060858 235348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0131 13:58:12.060884 235348 sshutil.go:53] new ssh client: &{IP:localhost Port:41275 SSHKeyPath:/home/caerulescens/.minikube/machines/minikube/id_rsa Username:docker} I0131 13:58:12.066297 235348 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml I0131 13:58:12.066321 235348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0131 13:58:12.066340 235348 sshutil.go:53] new ssh client: &{IP:localhost Port:41275 SSHKeyPath:/home/caerulescens/.minikube/machines/minikube/id_rsa Username:docker} I0131 13:58:12.112166 235348 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas I0131 13:58:12.112214 235348 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0131 13:58:12.115192 235348 out.go:177] 🔎 Verifying Kubernetes components... I0131 13:58:12.118267 235348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0131 13:58:12.149310 235348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0131 13:58:12.184793 235348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0131 13:58:12.205828 235348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 10.0.2.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0131 13:58:12.206588 235348 api_server.go:52] waiting for apiserver process to appear ... 
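The last Run above pipes the coredns ConfigMap through sed to splice in a hosts block that maps host.minikube.internal to 10.0.2.2 (the QEMU user-mode network's host gateway) and then replaces the ConfigMap in place. Once that lands, the result can be verified from the host, assuming kubectl is already pointed at this cluster:

    # Show the injected host record and the deployment that was rescaled to 1 replica
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    kubectl -n kube-system get deployment coredns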
I0131 13:58:12.206676 235348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0131 13:58:13.121749 235348 start.go:926] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap I0131 13:58:13.121799 235348 api_server.go:72] duration metric: took 1.009547047s to wait for apiserver process to appear ... I0131 13:58:13.121824 235348 api_server.go:88] waiting for apiserver healthz status ... I0131 13:58:13.121848 235348 api_server.go:253] Checking apiserver healthz at https://localhost:35335/healthz ... I0131 13:58:13.130179 235348 api_server.go:279] https://localhost:35335/healthz returned 200: ok I0131 13:58:13.133169 235348 api_server.go:141] control plane version: v1.28.3 I0131 13:58:13.133196 235348 api_server.go:131] duration metric: took 11.354984ms to wait for apiserver health ... I0131 13:58:13.133214 235348 system_pods.go:43] waiting for kube-system pods to appear ... I0131 13:58:13.139903 235348 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass I0131 13:58:13.143044 235348 addons.go:502] enable addons completed in 1.090770119s: enabled=[storage-provisioner default-storageclass] I0131 13:58:13.140940 235348 system_pods.go:59] 5 kube-system pods found I0131 13:58:13.143105 235348 system_pods.go:61] "etcd-minikube" [e851f902-7ea2-4583-862e-47959bfc7b16] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd]) I0131 13:58:13.143133 235348 system_pods.go:61] "kube-apiserver-minikube" [2ff26bf7-5ba4-4ac9-839c-d4dbd60ee3f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0131 13:58:13.143153 235348 system_pods.go:61] "kube-controller-manager-minikube" [685ae7fe-6ac2-4b9b-a301-032544921588] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0131 13:58:13.143171 235348 system_pods.go:61] "kube-scheduler-minikube" [48f753bf-cefd-424e-b5dd-b8b880be5d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler]) I0131 13:58:13.143185 235348 system_pods.go:61] "storage-provisioner" [693daffa-d0b4-4779-a5ae-639f5e7fc9dc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..) I0131 13:58:13.143205 235348 system_pods.go:74] duration metric: took 9.977512ms to wait for pod list to return data ... I0131 13:58:13.143218 235348 kubeadm.go:581] duration metric: took 1.030972948s to wait for : map[apiserver:true system_pods:true] ... I0131 13:58:13.143235 235348 node_conditions.go:102] verifying NodePressure condition ... I0131 13:58:13.146618 235348 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki I0131 13:58:13.146645 235348 node_conditions.go:123] node cpu capacity is 2 I0131 13:58:13.146666 235348 node_conditions.go:105] duration metric: took 3.422548ms to run NodePressure ... I0131 13:58:13.146682 235348 start.go:228] waiting for startup goroutines ... I0131 13:58:13.146696 235348 start.go:233] waiting for cluster config update ... 
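The healthz probe above targets https://localhost:35335 because the apiserver is only reachable through the SSH tunnel opened earlier (the -NTL 35335:localhost:8443 forward); the local port is chosen per run. Both waits can be reproduced by hand:

    # The same endpoint minikube polls; -k because the serving cert is signed by minikube's own CA
    curl -sk https://localhost:35335/healthz
    # Or let kubectl route the request via the kubeconfig minikube just wrote
    kubectl get --raw /healthz
    kubectl -n kube-system get pods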
I0131 13:58:13.146718 235348 start.go:242] writing updated cluster config ... I0131 13:58:13.147022 235348 ssh_runner.go:195] Run: rm -f paused I0131 13:58:13.213862 235348 start.go:600] kubectl: 1.29.1, cluster: 1.28.3 (minor skew: 1) I0131 13:58:13.217442 235348 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default * * ==> Docker <== * -- Journal begins at Wed 2024-01-31 18:57:38 UTC, ends at Wed 2024-01-31 18:59:16 UTC. -- Jan 31 18:58:05 minikube dockerd[1109]: time="2024-01-31T18:58:05.739111879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:05 minikube dockerd[1109]: time="2024-01-31T18:58:05.739151944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:05 minikube dockerd[1109]: time="2024-01-31T18:58:05.739187261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:05 minikube dockerd[1109]: time="2024-01-31T18:58:05.739208310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:05 minikube cri-dockerd[994]: time="2024-01-31T18:58:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac45cce04e852f23b3ddb586fcb2be6c7e8b8e6a0dbb7468a321e19ebed932a1/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.060046724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.060133407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.060157762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.060176557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube cri-dockerd[994]: time="2024-01-31T18:58:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/571f11b5f7b3a68d60a86a86fb49a2b8029dbf91974e9165039a178adfcf40bf/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:06 minikube cri-dockerd[994]: time="2024-01-31T18:58:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86013561e3c342698a5cc0b697fd85a3d07dadd4adb4df4886bc874e0b75ef34/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:06 minikube cri-dockerd[994]: time="2024-01-31T18:58:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae6943ee617d5b062ce0ad4d53cd6eeb3f14a8d3c1ca6388043f58cfa1d63d74/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.389583828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.389734440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.389942761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.390114132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.514532226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.514620932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.514649105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.514667970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.604829763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.604941722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.604967390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:06 minikube dockerd[1109]: time="2024-01-31T18:58:06.604985996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.333501557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.333550618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.333567540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.333576667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.450400077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.450478915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.450499784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.450511797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube cri-dockerd[994]: time="2024-01-31T18:58:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29fd6eede3dfdfad74262cf25fa48e066fc629dd9cba457ad3fde8b2771cc504/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:24 minikube cri-dockerd[994]: time="2024-01-31T18:58:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/003643e6f6302adf4d2748468b77eec4bdd3721966b09b3f2fa533f70b4bcafd/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.786717907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.786775566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.786790374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.786799951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.799248575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.799319237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.799343052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.799684041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.834038444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.834271781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.834368793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:24 minikube dockerd[1109]: time="2024-01-31T18:58:24.834456969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:25 minikube cri-dockerd[994]: time="2024-01-31T18:58:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/642fdacce3963c5b2689fd74549570c317ac2aa84b8bbe58e6d3b8814e093937/resolv.conf as [nameserver 10.0.2.3]" Jan 31 18:58:25 minikube dockerd[1109]: time="2024-01-31T18:58:25.296494113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:25 minikube dockerd[1109]: time="2024-01-31T18:58:25.296583521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:25 minikube dockerd[1109]: time="2024-01-31T18:58:25.296607176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:25 minikube dockerd[1109]: time="2024-01-31T18:58:25.296627163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:32 minikube cri-dockerd[994]: time="2024-01-31T18:58:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.053617130Z" level=info msg="shim disconnected" id=27cb80e9ce2b8c575411f55042e10ecf8a3d9ae9674caf9517d69df4c05d88db namespace=moby Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.053720045Z" level=warning msg="cleaning up after shim disconnected" id=27cb80e9ce2b8c575411f55042e10ecf8a3d9ae9674caf9517d69df4c05d88db namespace=moby Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.053737998Z" level=info msg="cleaning up dead shim" namespace=moby Jan 31 18:58:55 minikube dockerd[1103]: time="2024-01-31T18:58:55.054056962Z" level=info msg="ignoring event" container=27cb80e9ce2b8c575411f55042e10ecf8a3d9ae9674caf9517d69df4c05d88db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.496276876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.496383458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.496425898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 31 18:58:55 minikube dockerd[1109]: time="2024-01-31T18:58:55.496511721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID          POD
63395e7bc630b   6e38f40d628db   22 seconds ago       Running   storage-provisioner       1         003643e6f6302   storage-provisioner
701ac6713ab01   ead0a4a53df89   52 seconds ago       Running   coredns                   0         642fdacce3963   coredns-5dd5756b68-4hflx
27cb80e9ce2b8   6e38f40d628db   53 seconds ago       Exited    storage-provisioner       0         003643e6f6302   storage-provisioner
4c23277d297aa   bfc896cf80fba   53 seconds ago       Running   kube-proxy                0         29fd6eede3dfd   kube-proxy-597th
cb0e4bf8fe12a   73deb9a3f7025   About a minute ago   Running   etcd                      0         ae6943ee617d5   etcd-minikube
c950b0c8e7421   10baa1ca17068   About a minute ago   Running   kube-controller-manager   0         86013561e3c34   kube-controller-manager-minikube
9f3ab3ecd2731   5374347291230   About a minute ago   Running   kube-apiserver            0         571f11b5f7b3a   kube-apiserver-minikube
078a3c6c8d526   6d1b4fd1b182d   About a minute ago   Running   kube-scheduler            0         ac45cce04e852   kube-scheduler-minikube

*
* ==> coredns [701ac6713ab0] <==
*
.:53
[INFO] plugin/reload: Running configuration SHA512 = 4369d49e705690634e66dc4876ba448687add67b4e702a1c8bd9cbe26bf81de42209d08c6b52f2167c69004abbe79b233480d7bb5830c218d455f30e7efd3686
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:57631 - 53570 "HINFO IN 5917772123254788665.5460374687365259801. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028468956s

*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_01_31T13_58_11_0700
                    minikube.k8s.io/version=v1.32.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 31 Jan 2024 18:58:08 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 31 Jan 2024 18:59:12 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 31 Jan 2024 18:58:32 +0000   Wed, 31 Jan 2024 18:58:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 31 Jan 2024 18:58:32 +0000   Wed, 31 Jan 2024 18:58:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 31 Jan 2024 18:58:32 +0000   Wed, 31 Jan 2024 18:58:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 31 Jan 2024 18:58:32 +0000   Wed, 31 Jan 2024 18:58:12 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  17784760Ki
  hugepages-2Mi:      0
  memory:             5925660Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784760Ki
  hugepages-2Mi:      0
  memory:             5925660Ki
  pods:               110
System Info:
  Machine ID:                 aeab9b0e17c64381bde5fc57866ada89
  System UUID:                aeab9b0e17c64381bde5fc57866ada89
  Boot ID:                    2c89648b-04d3-4d0c-908f-9642eba0e965
  Kernel Version:             5.10.57
  OS Image:                   Buildroot 2021.02.12
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://24.0.7
  Kubelet Version:            v1.28.3
  Kube-Proxy Version:         v1.28.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-5dd5756b68-4hflx            100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     53s
  kube-system                 etcd-minikube                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
  kube-system                 kube-apiserver-minikube             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
  kube-system                 kube-controller-manager-minikube    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
  kube-system                 kube-proxy-597th                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
  kube-system                 kube-scheduler-minikube             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                750m (37%)   0 (0%)
  memory             170Mi (2%)   170Mi (2%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 52s                kube-proxy
  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 66s                kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  66s                kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    66s                kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     66s                kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeReady                65s                kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           54s                node-controller  Node minikube event: Registered Node minikube in Controller

*
* ==> dmesg <==
*
[Jan31 18:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +2.922737] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.958927] systemd-fstab-generator[113]: Ignoring "noauto" for root device
[ +0.052045] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.845487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +9.383685] systemd-fstab-generator[535]: Ignoring "noauto" for root device
[ +0.100042] systemd-fstab-generator[546]: Ignoring "noauto" for root device
[ +0.949928] systemd-fstab-generator[719]: Ignoring "noauto" for root device
[ +0.282028] systemd-fstab-generator[757]: Ignoring "noauto" for root device
[ +0.095949] systemd-fstab-generator[768]: Ignoring "noauto" for root device
[ +0.093271] systemd-fstab-generator[781]: Ignoring "noauto" for root device
[ +1.462196] systemd-fstab-generator[939]: Ignoring "noauto" for root device
[ +0.084705] systemd-fstab-generator[950]: Ignoring "noauto" for root device
[ +0.101947] systemd-fstab-generator[961]: Ignoring "noauto" for root device
[ +0.077829] systemd-fstab-generator[972]: Ignoring "noauto" for root device
[ +0.115397] systemd-fstab-generator[986]: Ignoring "noauto" for root device
[ +4.731411] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
[ +2.374325] kauditd_printk_skb: 53 callbacks suppressed
[Jan31 18:58] systemd-fstab-generator[1459]: Ignoring "noauto" for root device
[ +7.295450] systemd-fstab-generator[2392]: Ignoring "noauto" for root device
[ +13.535106] kauditd_printk_skb: 39 callbacks suppressed

*
* ==> etcd [cb0e4bf8fe12] <==
*
{"level":"warn","ts":"2024-01-31T18:58:06.713545Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-01-31T18:58:06.713637Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://10.0.2.15:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://10.0.2.15:2380","--initial-cluster=minikube=https://10.0.2.15:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://10.0.2.15:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://10.0.2.15:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"warn","ts":"2024-01-31T18:58:06.713719Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port.
This is not recommended for production."} {"level":"info","ts":"2024-01-31T18:58:06.713858Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.2.15:2380"]} {"level":"info","ts":"2024-01-31T18:58:06.713883Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-01-31T18:58:06.71428Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"]} {"level":"info","ts":"2024-01-31T18:58:06.714504Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://10.0.2.15:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-01-31T18:58:06.726525Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"11.690739ms"} {"level":"info","ts":"2024-01-31T18:58:06.741516Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"f074a195de705325","cluster-id":"ef296cf39f5d9d66"} {"level":"info","ts":"2024-01-31T18:58:06.741645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=()"} {"level":"info","ts":"2024-01-31T18:58:06.741718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became follower at term 0"} {"level":"info","ts":"2024-01-31T18:58:06.741769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f074a195de705325 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2024-01-31T18:58:06.741836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became follower at term 1"} {"level":"info","ts":"2024-01-31T18:58:06.741917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"} {"level":"warn","ts":"2024-01-31T18:58:06.757479Z","caller":"auth/store.go:1238","msg":"simple token 
is not cryptographically signed"} {"level":"info","ts":"2024-01-31T18:58:06.766542Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2024-01-31T18:58:06.775474Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2024-01-31T18:58:06.783733Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"f074a195de705325","local-server-version":"3.5.9","cluster-version":"to_be_decided"} {"level":"info","ts":"2024-01-31T18:58:06.78548Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2024-01-31T18:58:06.785657Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2024-01-31T18:58:06.785744Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2024-01-31T18:58:06.785814Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2024-01-31T18:58:06.786227Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-01-31T18:58:06.788467Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"10.0.2.15:2380"} {"level":"info","ts":"2024-01-31T18:58:06.788737Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"10.0.2.15:2380"} {"level":"info","ts":"2024-01-31T18:58:06.788639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"} {"level":"info","ts":"2024-01-31T18:58:06.788906Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]} {"level":"info","ts":"2024-01-31T18:58:06.789463Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2024-01-31T18:58:06.789551Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2024-01-31T18:58:07.042393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"} {"level":"info","ts":"2024-01-31T18:58:07.042596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"} 
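The raft lines here and immediately below show a normal single-member bootstrap: the lone voter pre-votes and votes for itself, then becomes leader at term 2. If etcd ever needs to be interrogated directly, the member can be queried over its client TLS listener with the healthcheck-client cert generated in the [certs] phase; a sketch assuming etcdctl is available in the guest image:

    # Ask the single member for its status over the client listener on 127.0.0.1:2379
    minikube ssh -- sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
      --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
      endpoint status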
{"level":"info","ts":"2024-01-31T18:58:07.042763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"} {"level":"info","ts":"2024-01-31T18:58:07.042861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"} {"level":"info","ts":"2024-01-31T18:58:07.042983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"} {"level":"info","ts":"2024-01-31T18:58:07.043069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"} {"level":"info","ts":"2024-01-31T18:58:07.043207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"} {"level":"info","ts":"2024-01-31T18:58:07.056756Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2024-01-31T18:58:07.058206Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:minikube ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"} {"level":"info","ts":"2024-01-31T18:58:07.05832Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-01-31T18:58:07.059643Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.0.2.15:2379"} {"level":"info","ts":"2024-01-31T18:58:07.060335Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-01-31T18:58:07.060571Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2024-01-31T18:58:07.060669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2024-01-31T18:58:07.061655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"} {"level":"info","ts":"2024-01-31T18:58:07.061858Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2024-01-31T18:58:07.062001Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2024-01-31T18:58:07.066351Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"} * * ==> kernel <== * 18:59:17 up 1 min, 0 users, load average: 0.48, 0.24, 0.09 Linux minikube 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2021.02.12" * * ==> kube-apiserver [9f3ab3ecd273] <== * I0131 18:58:08.148025 1 secure_serving.go:213] Serving securely on [::]:8443 I0131 18:58:08.148115 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0131 18:58:08.148146 1 controller.go:80] Starting OpenAPI V3 AggregationController I0131 18:58:08.148236 1 aggregator.go:164] waiting for initial CRD sync... 
I0131 18:58:08.148852 1 gc_controller.go:78] Starting apiserver lease garbage collector I0131 18:58:08.148878 1 apf_controller.go:372] Starting API Priority and Fairness config controller I0131 18:58:08.148966 1 controller.go:78] Starting OpenAPI AggregationController I0131 18:58:08.149153 1 controller.go:116] Starting legacy_token_tracking_controller I0131 18:58:08.149165 1 shared_informer.go:311] Waiting for caches to sync for configmaps I0131 18:58:08.149297 1 customresource_discovery_controller.go:289] Starting DiscoveryController I0131 18:58:08.149362 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0131 18:58:08.149475 1 available_controller.go:423] Starting AvailableConditionController I0131 18:58:08.149485 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0131 18:58:08.149898 1 controller.go:134] Starting OpenAPI controller I0131 18:58:08.149963 1 controller.go:85] Starting OpenAPI V3 controller I0131 18:58:08.149980 1 naming_controller.go:291] Starting NamingConditionController I0131 18:58:08.150020 1 establishing_controller.go:76] Starting EstablishingController I0131 18:58:08.150036 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0131 18:58:08.150077 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0131 18:58:08.150093 1 crd_finalizer.go:266] Starting CRDFinalizer I0131 18:58:08.151407 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0131 18:58:08.151436 1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller I0131 18:58:08.151759 1 system_namespaces_controller.go:67] Starting system namespaces controller I0131 18:58:08.151952 1 gc_controller.go:78] Starting apiserver lease garbage collector I0131 18:58:08.151985 1 handler_discovery.go:412] Starting ResourceDiscoveryManager I0131 18:58:08.152217 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0131 18:58:08.152241 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0131 18:58:08.152264 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0131 18:58:08.152275 1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister I0131 18:58:08.177959 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0131 18:58:08.178514 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0131 18:58:08.249102 1 apf_controller.go:377] Running API Priority and Fairness config worker I0131 18:58:08.249119 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process I0131 18:58:08.249182 1 shared_informer.go:318] Caches are synced for configmaps I0131 18:58:08.249798 1 cache.go:39] Caches are synced for AvailableConditionController controller I0131 18:58:08.255374 1 shared_informer.go:318] Caches are synced for crd-autoregister I0131 18:58:08.260649 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller I0131 18:58:08.260931 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0131 18:58:08.260984 1 aggregator.go:166] initial CRD sync complete... 
I0131 18:58:08.261028 1 autoregister_controller.go:141] Starting autoregister controller I0131 18:58:08.261060 1 cache.go:32] Waiting for caches to sync for autoregister controller I0131 18:58:08.261090 1 cache.go:39] Caches are synced for autoregister controller I0131 18:58:08.263432 1 controller.go:624] quota admission added evaluator for: namespaces I0131 18:58:08.270795 1 shared_informer.go:318] Caches are synced for node_authorizer I0131 18:58:08.300806 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io I0131 18:58:09.184171 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0131 18:58:09.190174 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0131 18:58:09.190338 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0131 18:58:09.835298 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0131 18:58:09.891599 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0131 18:58:09.977591 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"} W0131 18:58:09.993712 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [10.0.2.15] I0131 18:58:09.994685 1 controller.go:624] quota admission added evaluator for: endpoints I0131 18:58:10.000334 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io I0131 18:58:10.268514 1 controller.go:624] quota admission added evaluator for: serviceaccounts I0131 18:58:11.628583 1 controller.go:624] quota admission added evaluator for: deployments.apps I0131 18:58:11.644307 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"} I0131 18:58:11.658299 1 controller.go:624] quota admission added evaluator for: daemonsets.apps I0131 18:58:23.933036 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps I0131 18:58:23.992101 1 controller.go:624] quota admission added evaluator for: replicasets.apps * * ==> kube-controller-manager [c950b0c8e742] <== * I0131 18:58:23.063154 1 shared_informer.go:311] Waiting for caches to sync for garbage collector I0131 18:58:23.072261 1 shared_informer.go:318] Caches are synced for certificate-csrapproving I0131 18:58:23.074812 1 shared_informer.go:318] Caches are synced for TTL I0131 18:58:23.081138 1 shared_informer.go:318] Caches are synced for node I0131 18:58:23.081341 1 range_allocator.go:174] "Sending events to api server" I0131 18:58:23.081491 1 range_allocator.go:178] "Starting range CIDR allocator" I0131 18:58:23.081589 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator I0131 18:58:23.081676 1 shared_informer.go:318] Caches are synced for cidrallocator I0131 18:58:23.081742 1 shared_informer.go:318] Caches are synced for namespace I0131 18:58:23.089170 1 shared_informer.go:318] Caches are synced for expand I0131 18:58:23.092598 1 range_allocator.go:380] "Set node PodCIDR" node="minikube" podCIDRs=["10.244.0.0/24"] I0131 18:58:23.101647 1 shared_informer.go:318] Caches are synced for service account I0131 18:58:23.124327 1 shared_informer.go:318] Caches are synced for PV protection I0131 18:58:23.125483 1 shared_informer.go:318] Caches are synced for crt configmap I0131 18:58:23.132722 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving I0131 
18:58:23.133938 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client I0131 18:58:23.136202 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client I0131 18:58:23.137405 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown I0131 18:58:23.137420 1 shared_informer.go:318] Caches are synced for bootstrap_signer I0131 18:58:23.167864 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator I0131 18:58:23.183255 1 shared_informer.go:318] Caches are synced for endpoint_slice I0131 18:58:23.184366 1 shared_informer.go:318] Caches are synced for taint I0131 18:58:23.184519 1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone="" I0131 18:58:23.184772 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="minikube" I0131 18:58:23.184852 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal" I0131 18:58:23.184918 1 taint_manager.go:206] "Starting NoExecuteTaintManager" I0131 18:58:23.185009 1 taint_manager.go:211] "Sending events to api server" I0131 18:58:23.185266 1 event.go:307] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0131 18:58:23.186735 1 shared_informer.go:318] Caches are synced for stateful set I0131 18:58:23.218639 1 shared_informer.go:318] Caches are synced for persistent volume I0131 18:58:23.224610 1 shared_informer.go:318] Caches are synced for ReplicationController I0131 18:58:23.224831 1 shared_informer.go:318] Caches are synced for daemon sets I0131 18:58:23.224929 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring I0131 18:58:23.233206 1 shared_informer.go:318] Caches are synced for resource quota I0131 18:58:23.239080 1 shared_informer.go:318] Caches are synced for GC I0131 18:58:23.263489 1 shared_informer.go:318] Caches are synced for attach detach I0131 18:58:23.272407 1 shared_informer.go:318] Caches are synced for disruption I0131 18:58:23.272430 1 shared_informer.go:318] Caches are synced for ephemeral I0131 18:58:23.272489 1 shared_informer.go:318] Caches are synced for HPA I0131 18:58:23.274646 1 shared_informer.go:318] Caches are synced for deployment I0131 18:58:23.274647 1 shared_informer.go:318] Caches are synced for PVC protection I0131 18:58:23.280836 1 shared_informer.go:318] Caches are synced for ReplicaSet I0131 18:58:23.281418 1 shared_informer.go:318] Caches are synced for endpoint I0131 18:58:23.288314 1 shared_informer.go:318] Caches are synced for TTL after finished I0131 18:58:23.309768 1 shared_informer.go:318] Caches are synced for resource quota I0131 18:58:23.319943 1 shared_informer.go:318] Caches are synced for job I0131 18:58:23.329317 1 shared_informer.go:318] Caches are synced for cronjob I0131 18:58:23.633398 1 shared_informer.go:318] Caches are synced for garbage collector I0131 18:58:23.633959 1 garbagecollector.go:166] "All resource monitors have synced. 
Proceeding to collect garbage" I0131 18:58:23.670736 1 shared_informer.go:318] Caches are synced for garbage collector I0131 18:58:23.947832 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-597th" I0131 18:58:24.000905 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1" I0131 18:58:24.145793 1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4hflx" I0131 18:58:24.176369 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="176.397073ms" I0131 18:58:24.219251 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.814827ms" I0131 18:58:24.219387 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.133µs" I0131 18:58:24.238262 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.406µs" I0131 18:58:26.172866 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.977µs" I0131 18:58:26.234141 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.161933ms" I0131 18:58:26.235320 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.056µs" * * ==> kube-proxy [4c23277d297a] <== * I0131 18:58:24.961876 1 server_others.go:69] "Using iptables proxy" I0131 18:58:24.969626 1 node.go:141] Successfully retrieved node IP: 10.0.2.15 I0131 18:58:24.993266 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6" I0131 18:58:24.993284 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4" I0131 18:58:24.994356 1 server_others.go:152] "Using iptables Proxier" I0131 18:58:24.994414 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" I0131 18:58:24.994629 1 server.go:846] "Version info" version="v1.28.3" I0131 18:58:24.994770 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0131 18:58:24.995311 1 config.go:188] "Starting service config controller" I0131 18:58:24.995365 1 shared_informer.go:311] Waiting for caches to sync for service config I0131 18:58:24.995432 1 config.go:97] "Starting endpoint slice config controller" I0131 18:58:24.995508 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config I0131 18:58:24.997199 1 config.go:315] "Starting node config controller" I0131 18:58:24.997236 1 shared_informer.go:311] Waiting for caches to sync for node config I0131 18:58:25.096524 1 shared_informer.go:318] Caches are synced for endpoint slice config I0131 18:58:25.096560 1 shared_informer.go:318] Caches are synced for service config I0131 18:58:25.097825 1 shared_informer.go:318] Caches are synced for node config * * ==> kube-scheduler [078a3c6c8d52] <== * W0131 18:58:08.191784 1 authentication.go:368] Error looking up 
*
* ==> kube-scheduler [078a3c6c8d52] <==
*
W0131 18:58:08.191784 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0131 18:58:08.191802 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0131 18:58:08.191815 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0131 18:58:08.225151 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
I0131 18:58:08.225178 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0131 18:58:08.233886 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0131 18:58:08.234347 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0131 18:58:08.237377 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0131 18:58:08.237635 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0131 18:58:08.253657 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0131 18:58:08.253745 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0131 18:58:08.254059 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0131 18:58:08.254108 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0131 18:58:08.254195 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0131 18:58:08.254252 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0131 18:58:08.254338 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 18:58:08.254383 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0131 18:58:08.254469 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:08.255426 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:08.255572 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 18:58:08.255701 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0131 18:58:08.255789 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:08.255836 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:08.255918 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 18:58:08.255965 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0131 18:58:08.256048 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 18:58:08.256096 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0131 18:58:08.256169 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:08.256215 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:08.256367 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 18:58:08.258473 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0131 18:58:08.258592 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0131 18:58:08.258641 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0131 18:58:08.258715 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0131 18:58:08.258764 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0131 18:58:08.258837 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 18:58:08.258890 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0131 18:58:08.258966 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:08.259013 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:09.170699 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0131 18:58:09.170735 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0131 18:58:09.246648 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:09.246700 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:09.272222 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0131 18:58:09.272291 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0131 18:58:09.344682 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:09.344708 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:09.353056 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0131 18:58:09.353086 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0131 18:58:09.372205 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0131 18:58:09.372234 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0131 18:58:09.380990 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0131 18:58:09.381158 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0131 18:58:09.382134 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0131 18:58:09.382224 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0131 18:58:09.511096 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0131 18:58:09.511124 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0131 18:58:09.594135 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0131 18:58:09.594315 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0131 18:58:11.034974 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
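Note: the burst of "forbidden" warnings above is a routine startup race, not a misconfiguration — the scheduler's informers start before the API server has finished installing the system:kube-scheduler RBAC bindings, so the initial list/watch calls are rejected; once the bindings exist the reflectors retry and succeed, which the final "Caches are synced" line confirms. Had the errors persisted, one way to verify the permissions (hypothetical follow-up commands, not from the log) would be:

  kubectl auth can-i list nodes --as=system:kube-scheduler
  kubectl get clusterrolebinding system:kube-scheduler -o yaml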
podNamespace="kube-system" podName="kube-scheduler-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.851423 2411 kubelet_node_status.go:108] "Node was previously registered" node="minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.851517 2411 kubelet_node_status.go:73] "Successfully registered node" node="minikube" Jan 31 18:58:11 minikube kubelet[2411]: E0131 18:58:11.859273 2411 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube" Jan 31 18:58:11 minikube kubelet[2411]: E0131 18:58:11.865676 2411 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910241 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d751ca6ec9cbeb15e51b214833bee7cc-etcd-data\") pod \"etcd-minikube\" (UID: \"d751ca6ec9cbeb15e51b214833bee7cc\") " pod="kube-system/etcd-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910326 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/842672fed952e0ab68deac178344590a-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"842672fed952e0ab68deac178344590a\") " pod="kube-system/kube-apiserver-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910348 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/842672fed952e0ab68deac178344590a-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"842672fed952e0ab68deac178344590a\") " pod="kube-system/kube-apiserver-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910406 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11fc41667a2819cdb15b7270cb5cd200-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"11fc41667a2819cdb15b7270cb5cd200\") " pod="kube-system/kube-controller-manager-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910425 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11fc41667a2819cdb15b7270cb5cd200-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"11fc41667a2819cdb15b7270cb5cd200\") " pod="kube-system/kube-controller-manager-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910500 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11fc41667a2819cdb15b7270cb5cd200-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"11fc41667a2819cdb15b7270cb5cd200\") " pod="kube-system/kube-controller-manager-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910524 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d751ca6ec9cbeb15e51b214833bee7cc-etcd-certs\") pod \"etcd-minikube\" (UID: \"d751ca6ec9cbeb15e51b214833bee7cc\") " pod="kube-system/etcd-minikube" Jan 31 18:58:11 minikube 
kubelet[2411]: I0131 18:58:11.910541 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/842672fed952e0ab68deac178344590a-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"842672fed952e0ab68deac178344590a\") " pod="kube-system/kube-apiserver-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910589 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11fc41667a2819cdb15b7270cb5cd200-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"11fc41667a2819cdb15b7270cb5cd200\") " pod="kube-system/kube-controller-manager-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910607 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11fc41667a2819cdb15b7270cb5cd200-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"11fc41667a2819cdb15b7270cb5cd200\") " pod="kube-system/kube-controller-manager-minikube" Jan 31 18:58:11 minikube kubelet[2411]: I0131 18:58:11.910664 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75ac196d3709dde303d8a81c035c2c28-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"75ac196d3709dde303d8a81c035c2c28\") " pod="kube-system/kube-scheduler-minikube" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.686646 2411 apiserver.go:52] "Watching apiserver" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.708994 2411 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.726960 2411 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jan 31 18:58:12 minikube kubelet[2411]: E0131 18:58:12.805389 2411 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.824988 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-minikube" podStartSLOduration=1.824925379 podCreationTimestamp="2024-01-31 18:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:12.814226658 +0000 UTC m=+1.229221919" watchObservedRunningTime="2024-01-31 18:58:12.824925379 +0000 UTC m=+1.239920639" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.841683 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-minikube" podStartSLOduration=1.841629749 podCreationTimestamp="2024-01-31 18:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:12.825751908 +0000 UTC m=+1.240747168" watchObservedRunningTime="2024-01-31 18:58:12.841629749 +0000 UTC m=+1.256624999" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.860051 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-minikube" podStartSLOduration=1.860003861 podCreationTimestamp="2024-01-31 18:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:12.842130568 +0000 UTC m=+1.257125818" watchObservedRunningTime="2024-01-31 18:58:12.860003861 +0000 UTC m=+1.274999111" Jan 31 18:58:12 minikube kubelet[2411]: I0131 18:58:12.876264 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-minikube" podStartSLOduration=3.876219053 podCreationTimestamp="2024-01-31 18:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:12.860479401 +0000 UTC m=+1.275474661" watchObservedRunningTime="2024-01-31 18:58:12.876219053 +0000 UTC m=+1.291214313" Jan 31 18:58:23 minikube kubelet[2411]: I0131 18:58:23.210169 2411 topology_manager.go:215] "Topology Admit Handler" podUID="693daffa-d0b4-4779-a5ae-639f5e7fc9dc" podNamespace="kube-system" podName="storage-provisioner" Jan 31 18:58:23 minikube kubelet[2411]: I0131 18:58:23.382087 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/693daffa-d0b4-4779-a5ae-639f5e7fc9dc-tmp\") pod \"storage-provisioner\" (UID: \"693daffa-d0b4-4779-a5ae-639f5e7fc9dc\") " pod="kube-system/storage-provisioner" Jan 31 18:58:23 minikube kubelet[2411]: I0131 18:58:23.382222 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9znm\" (UniqueName: \"kubernetes.io/projected/693daffa-d0b4-4779-a5ae-639f5e7fc9dc-kube-api-access-f9znm\") pod \"storage-provisioner\" (UID: \"693daffa-d0b4-4779-a5ae-639f5e7fc9dc\") " pod="kube-system/storage-provisioner" Jan 31 18:58:23 minikube kubelet[2411]: E0131 18:58:23.490571 2411 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 31 18:58:23 minikube kubelet[2411]: E0131 18:58:23.490613 2411 projected.go:198] Error preparing data for projected volume kube-api-access-f9znm for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Jan 31 18:58:23 minikube kubelet[2411]: E0131 18:58:23.490692 2411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/693daffa-d0b4-4779-a5ae-639f5e7fc9dc-kube-api-access-f9znm podName:693daffa-d0b4-4779-a5ae-639f5e7fc9dc nodeName:}" failed. No retries permitted until 2024-01-31 18:58:23.990665752 +0000 UTC m=+12.405661022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f9znm" (UniqueName: "kubernetes.io/projected/693daffa-d0b4-4779-a5ae-639f5e7fc9dc-kube-api-access-f9znm") pod "storage-provisioner" (UID: "693daffa-d0b4-4779-a5ae-639f5e7fc9dc") : configmap "kube-root-ca.crt" not found Jan 31 18:58:23 minikube kubelet[2411]: I0131 18:58:23.958643 2411 topology_manager.go:215] "Topology Admit Handler" podUID="dc4d164d-3af5-4f75-906a-59526b31580f" podNamespace="kube-system" podName="kube-proxy-597th" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.085310 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5mnd\" (UniqueName: \"kubernetes.io/projected/dc4d164d-3af5-4f75-906a-59526b31580f-kube-api-access-x5mnd\") pod \"kube-proxy-597th\" (UID: \"dc4d164d-3af5-4f75-906a-59526b31580f\") " pod="kube-system/kube-proxy-597th" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.085387 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc4d164d-3af5-4f75-906a-59526b31580f-lib-modules\") pod \"kube-proxy-597th\" (UID: \"dc4d164d-3af5-4f75-906a-59526b31580f\") " pod="kube-system/kube-proxy-597th" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.085465 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc4d164d-3af5-4f75-906a-59526b31580f-kube-proxy\") pod \"kube-proxy-597th\" (UID: \"dc4d164d-3af5-4f75-906a-59526b31580f\") " pod="kube-system/kube-proxy-597th" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.085497 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc4d164d-3af5-4f75-906a-59526b31580f-xtables-lock\") pod \"kube-proxy-597th\" (UID: \"dc4d164d-3af5-4f75-906a-59526b31580f\") " pod="kube-system/kube-proxy-597th" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.170621 2411 topology_manager.go:215] "Topology Admit Handler" podUID="9cd8cb34-4a35-47b8-aa97-6b346a68b0ab" podNamespace="kube-system" podName="coredns-5dd5756b68-4hflx" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.287043 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cd8cb34-4a35-47b8-aa97-6b346a68b0ab-config-volume\") pod \"coredns-5dd5756b68-4hflx\" (UID: \"9cd8cb34-4a35-47b8-aa97-6b346a68b0ab\") " pod="kube-system/coredns-5dd5756b68-4hflx" Jan 31 18:58:24 minikube kubelet[2411]: I0131 18:58:24.287091 2411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89mw\" (UniqueName: \"kubernetes.io/projected/9cd8cb34-4a35-47b8-aa97-6b346a68b0ab-kube-api-access-d89mw\") pod \"coredns-5dd5756b68-4hflx\" (UID: \"9cd8cb34-4a35-47b8-aa97-6b346a68b0ab\") " pod="kube-system/coredns-5dd5756b68-4hflx" Jan 31 18:58:25 minikube kubelet[2411]: I0131 18:58:25.113412 2411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="642fdacce3963c5b2689fd74549570c317ac2aa84b8bbe58e6d3b8814e093937" Jan 31 18:58:26 minikube kubelet[2411]: I0131 18:58:26.175467 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-597th" podStartSLOduration=3.175399889 podCreationTimestamp="2024-01-31 18:58:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:25.144194176 +0000 UTC m=+13.559189426" watchObservedRunningTime="2024-01-31 18:58:26.175399889 +0000 UTC m=+14.590395159" Jan 31 18:58:26 minikube kubelet[2411]: I0131 18:58:26.175586 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4hflx" podStartSLOduration=2.175562313 podCreationTimestamp="2024-01-31 18:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:26.174843104 +0000 UTC m=+14.589838374" watchObservedRunningTime="2024-01-31 18:58:26.175562313 +0000 UTC m=+14.590557583" Jan 31 18:58:26 minikube kubelet[2411]: I0131 18:58:26.217101 2411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.217052337 podCreationTimestamp="2024-01-31 18:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-31 18:58:26.200793498 +0000 UTC m=+14.615788768" watchObservedRunningTime="2024-01-31 18:58:26.217052337 +0000 UTC m=+14.632047607" Jan 31 18:58:32 minikube kubelet[2411]: I0131 18:58:32.573908 2411 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Jan 31 18:58:32 minikube kubelet[2411]: I0131 18:58:32.575279 2411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Jan 31 18:58:55 minikube kubelet[2411]: I0131 18:58:55.353836 2411 scope.go:117] "RemoveContainer" containerID="27cb80e9ce2b8c575411f55042e10ecf8a3d9ae9674caf9517d69df4c05d88db" Jan 31 18:59:11 minikube kubelet[2411]: E0131 18:59:11.764082 2411 iptables.go:575] "Could not set up iptables canary" err=< Jan 31 18:59:11 minikube kubelet[2411]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Jan 31 18:59:11 minikube kubelet[2411]: Perhaps ip6tables or your kernel needs to be upgraded. Jan 31 18:59:11 minikube kubelet[2411]: > table="nat" chain="KUBE-KUBELET-CANARY" * * ==> storage-provisioner [27cb80e9ce2b] <== * I0131 18:58:25.031750 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0131 18:58:55.039057 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout * * ==> storage-provisioner [63395e7bc630] <== * I0131 18:58:55.554391 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0131 18:58:55.561356 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0131 18:58:55.561384 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0131 18:58:55.577010 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0131 18:58:55.577156 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_8575ff6b-a0a8-4c50-9cf3-3ee5a0adf39d! 
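Note: the kubelet section ends with a recurring but benign error — the canary chain it probes in the ip6tables nat table cannot be created because the guest kernel has no ip6table_nat module loaded. On this IPv4-only single-stack cluster that costs nothing (see kube-proxy's notice above); if dual-stack were ever needed, a sketch of the check and workaround inside the VM (commands assumed, not from the log) would be:

  minikube ssh -- sudo modprobe ip6table_nat
  minikube ssh -- sudo ip6tables -t nat -L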
*
* ==> storage-provisioner [27cb80e9ce2b] <==
*
I0131 18:58:25.031750 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0131 18:58:55.039057 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner [63395e7bc630] <==
*
I0131 18:58:55.554391 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0131 18:58:55.561356 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0131 18:58:55.561384 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0131 18:58:55.577010 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0131 18:58:55.577156 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_8575ff6b-a0a8-4c50-9cf3-3ee5a0adf39d!
I0131 18:58:55.578275 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc65fc02-e947-4675-ad86-47982f2a2c1f", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_8575ff6b-a0a8-4c50-9cf3-3ee5a0adf39d became leader
I0131 18:58:55.678287 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_8575ff6b-a0a8-4c50-9cf3-3ee5a0adf39d!
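Note: the two storage-provisioner sections show one failed attempt and one clean recovery. The first container (27cb80e9ce2b) exits fatally because the API server's ClusterIP 10.96.0.1:443 is unreachable within its 32s timeout — typical while the bridge CNI and kube-proxy rules are still being programmed; the kubelet's "RemoveContainer" entry above shows it being replaced, and the second container (63395e7bc630) acquires the leader lease and starts normally. A hedged way to confirm the recovered state (commands not part of the log):

  kubectl -n kube-system get pod storage-provisioner
  minikube ssh -- curl -sk https://10.96.0.1/version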