How Kubernetes Works

Kubernetes works by managing and orchestrating containerized applications within a cluster of nodes. The cluster consists of one or more master nodes and multiple worker nodes. The master node(s) are responsible for controlling and coordinating the cluster, while the worker nodes are where the containers run.

Here's an overview of how Kubernetes works:

Master Node Components:
    API Server: The central control point for all interactions with the cluster. It validates and processes API requests, maintaining the desired state of the system.
    etcd: A distributed key-value store that stores the configuration data and state of the cluster.
    Scheduler: Responsible for selecting suitable worker nodes for newly created pods (groups of one or more containers) based on resource requirements and constraints.
    Controller Manager: Runs the various controllers that keep the cluster converging on its desired state, such as the node, replication, and endpoints controllers.
    Cloud Controller Manager (optional): When running in cloud environments, this component integrates with the cloud provider's API to manage external resources like load balancers and storage.
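
To make this concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes a reachable cluster, a local kubeconfig, and a kubeadm-style setup where these control-plane components themselves run as pods in the kube-system namespace.

```python
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config (assumption: it points at your cluster)
v1 = client.CoreV1Api()     # every call below goes through the API Server

# On kubeadm-style clusters, expect names like kube-apiserver-..., etcd-...,
# kube-scheduler-... and kube-controller-manager-... in this namespace.
for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    print(pod.metadata.name, pod.status.phase)
```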

Worker Node Components:
    Kubelet: The agent running on each worker node, responsible for interacting with the master node, starting and stopping pods, and reporting node status.
    Container Runtime: The software responsible for running containers, such as Docker or containerd.
    kube-proxy: Manages network communication to and from the pods. It enables service discovery and load balancing among the pods.
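
Each worker node reports what it is running. A small sketch (same kubeconfig assumption as above) that prints the kubelet and container runtime versions from every Node object:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info          # reported by the kubelet on that node
    print(node.metadata.name,
          "kubelet:", info.kubelet_version,
          "runtime:", info.container_runtime_version)
```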

Pod:
    The basic scheduling unit in Kubernetes, representing one or more containers that are deployed together on the same host and share the same network namespace.
    Containers within a pod can communicate with each other using localhost, simplifying network configurations.
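
As a rough illustration of that shared network namespace, here is a sketch of a two-container Pod built with the Python client; the Pod name, container names, and images are illustrative only. Both containers land on the same node, and the sidecar could reach the web container on localhost:80.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="web", image="nginx:1.25"),
        client.V1Container(name="sidecar", image="busybox:1.36",
                           command=["sh", "-c", "sleep 3600"]),
    ]),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```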

A typical Kubernetes workflow looks like this:

Define the Desired State: Users or administrators define the desired state of the application and its components in Kubernetes. This is done using YAML or JSON files that describe the pods, deployments, services, and other resources.
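
These definitions are normally written as YAML manifests. As a rough illustration, the sketch below builds the equivalent of a small Deployment manifest as Python client objects; the name "web", the label, the image, and the replica count are all illustrative.

```python
from kubernetes import client

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                              # desired number of pods
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25"),
            ]),
        ),
    ),
)
```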

API Server Receives Requests: Users interact with the Kubernetes cluster through the API server. They can create, update, or delete resources using commands or tools like kubectl.
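
kubectl ultimately talks to this same REST API. The sketch below performs the equivalent of "kubectl get deployments" with the Python client; the default namespace and the presence of the illustrative "web" Deployment are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Equivalent of: kubectl get deployments -n default
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name,
          "desired:", dep.spec.replicas,
          "available:", dep.status.available_replicas)
```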

etcd Stores the Configuration: The API server stores the desired state in etcd, which acts as the persistent data store for the entire cluster.
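
As a small, hedged illustration: every object read back from the API server carries a resourceVersion in its metadata, a revision counter the API server derives from etcd's storage. This sketch assumes the illustrative "web" Deployment from the earlier sketches exists in the default namespace.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="web", namespace="default")
print(dep.metadata.name, "resourceVersion:", dep.metadata.resource_version)
```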

Scheduler Assigns Nodes: When a new pod is created, the scheduler selects a suitable worker node on which to place it, based on resource requirements, node availability, and other constraints.
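
The sketch below shows the kind of scheduling inputs this step relies on: resource requests the scheduler must reserve, plus an optional nodeSelector constraint. The "disktype: ssd" label and the pod/image names are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="scheduled-demo"),
    spec=client.V1PodSpec(
        node_selector={"disktype": "ssd"},      # only nodes carrying this label qualify
        containers=[client.V1Container(
            name="app",
            image="nginx:1.25",
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "128Mi"},   # capacity the scheduler reserves
                limits={"cpu": "500m", "memory": "256Mi"},
            ),
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```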

Kubelet Launches Containers: The kubelet on the selected worker node receives instructions from the API server and ensures the desired containers are running within the pod.
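
Once the kubelet has started the containers, it reports their state back through the API server. A small sketch, assuming the illustrative "scheduled-demo" pod from the previous sketch:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="scheduled-demo", namespace="default")
print("runs on node:", pod.spec.node_name)
for cs in pod.status.container_statuses or []:     # filled in by the kubelet
    print(cs.name, "ready:", cs.ready, "restarts:", cs.restart_count)
```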

kube-proxy for Networking: The kube-proxy sets up networking rules to enable communication between pods and services.
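
A Service is the object kube-proxy translates into forwarding rules on every node. Below is a hedged sketch of a Service that selects the illustrative "app: web" pods from the earlier Deployment sketch and load-balances traffic across them on port 80.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},                              # which pods receive the traffic
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```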

Monitoring and Self-Healing: Kubernetes continuously monitors the state of the cluster and automatically takes action to maintain the desired state. If a pod or node fails, Kubernetes reschedules the affected containers elsewhere to maintain application availability.
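
One way to see this reconciliation loop in action is to watch pod events while deleting a Deployment-managed pod: a replacement appears almost immediately. A hedged sketch, where the default namespace and the illustrative "web" Deployment are assumptions:

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()

# Streams ADDED / MODIFIED / DELETED events; delete a pod from another
# terminal and watch its replacement get created and scheduled.
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=60):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```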

By continuously monitoring and reconciling the actual state with the desired state, Kubernetes ensures that applications run reliably, are highly available, and can scale seamlessly to meet changing demands.
