BHAVNA MEHTA

Error on executing kubeadm init

I1218 00:30:10.785071 984164 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8masterdec-2022 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.29.211]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8masterdec-2022 localhost] and IPs [192.168.29.211 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8masterdec-2022 localhost] and IPs [192.168.29.211 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
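Following kubeadm's hint above, the failing control-plane containers can be listed and inspected directly with the Docker CLI. This is only a sketch, assuming Docker is the container runtime on this node (which the /etc/docker working directory in the prompts below suggests); the container ID is a placeholder.

docker ps -a | grep kube | grep -v pause    # list Kubernetes containers, including exited ones
docker logs <CONTAINER_ID>                  # replace <CONTAINER_ID> with the ID of a failing container, e.g. etcd or kube-apiserver
journalctl -xeu kubelet | tail -n 50        # the kubelet log usually points at the root cause as well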
root@k8masterdec-2022:/etc/docker#
root@k8masterdec-2022:/etc/docker#
root@k8masterdec-2022:/etc/docker# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2022-12-18 00:30:18 IST; 4min 15s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 984315 (kubelet)
Tasks: 17 (limit: 5215)
Memory: 37.4M
CGroup: /system.slice/kubelet.service
└─984315 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni >

Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664718 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpuset/kubepods.slice/kubepods-burstable.slice/kubepods-burst>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664829 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/hugetlb/kubepods.slice/kubepods-burstable.slice/kubepods-burs>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664842 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/perf_event/kubepods.slice/kubepods-burstable.slice/kubepods-b>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664900 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods->
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664911 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/kubepods.slice/kubepods-burstable.slice/kubepods-bursta>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664923 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/kubepods.slice/kubepods-burstable.slice/kubepods-burst>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664932 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstab>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: W1218 00:34:33.664942 984315 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpuset/kubepods.slice/kubepods-burstable.slice/kubepods-burst>
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: E1218 00:34:33.666062 984315 kubelet.go:2461] "Error getting node" err="node \"k8masterdec-2022\" not found"
Dec 18 00:34:33 k8masterdec-2022 kubelet[984315]: E1218 00:34:33.712229 984315 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \">

How can I fix this?
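One frequent cause of this exact failure pattern on Kubernetes 1.23 with Docker (the kubelet stays active but keeps logging "node not found" and etcd ends up in CrashLoopBackOff) is a cgroup driver mismatch: kubeadm configures the kubelet for the systemd driver, while Docker defaults to cgroupfs. Below is a minimal sketch of how to check and, if needed, align the two drivers; it assumes Docker is the container runtime on this node and is not confirmed by the logs above.

# 1. Compare the cgroup drivers on both sides.
docker info 2>/dev/null | grep -i 'cgroup driver'
grep -i cgroupdriver /var/lib/kubelet/config.yaml

# 2. If Docker reports "cgroupfs" while the kubelet expects "systemd",
#    switch Docker to the systemd driver. (This overwrites /etc/docker/daemon.json;
#    merge by hand if the file already has other settings.)
cat <<'EOF' > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# 3. Wipe the failed attempt and initialise the control plane again.
kubeadm reset -f
kubeadm init

If the drivers already match, the next place to look is the log of the exited etcd container itself (docker logs on that container), since it is the component shown crash-looping above.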
