Debug School



Assignment - Day 3

What is Kubernetes?
Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications.
It is a portable, extensible platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes has a large, rapidly growing ecosystem, and its services, support, and tools are widely available.

It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
The main objective of Kubernetes is to hide the complexity of managing a fleet of containers by providing REST APIs for the required functionality. Kubernetes is portable in nature, meaning it can run on various public or private platforms such as AWS, Azure, or OpenStack, as well as on bare-metal servers.

Why Kubernetes
Containers are a good way to bundle and run your applications. In a production environment, however, you need to manage the containers that run those applications and ensure there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior were handled by a system?
That's where Kubernetes comes to the rescue. Kubernetes provides you with a framework to run distributed systems resiliently: it takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
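As a rough illustration of the canary pattern, two Deployments can sit behind one Service by sharing the same `app` label, so that a small fraction of traffic reaches the new version. This is a hedged sketch, not a production recipe; every name here (`myapp`, `web`, `registry.example.com`) is made up:

```yaml
# One Service selects pods from BOTH Deployments below via the shared app label.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp            # matches stable and canary pods alike
  ports:
    - port: 80
      targetPort: 8080
---
# Stable version: 9 replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
        - name: web
          image: registry.example.com/myapp:1.0
---
# Canary version: 1 replica, so roughly 10% of requests hit the new image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
        - name: web
          image: registry.example.com/myapp:1.1
```

Because the Service selects only on `app: myapp`, shifting more traffic to the canary is just a matter of scaling the two Deployments' replica counts.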

• Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
• Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
• Automated rollouts and rollbacks: You describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
• Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks, and you tell it how much CPU and memory (RAM) each container needs. Kubernetes then fits containers onto your nodes to make the best use of your resources.
• Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
• Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
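Several of these features come together in a single Deployment manifest. The sketch below is illustrative only: the names (`demo`, `app-secrets`) are hypothetical, and it assumes a Secret called `app-secrets` already exists in the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3                  # desired state: Kubernetes keeps 3 pods running (self-healing)
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.25
          resources:
            requests:          # used by the scheduler for automatic bin packing
              cpu: "250m"
              memory: "128Mi"
          livenessProbe:       # user-defined health check; failing containers are restarted
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
          env:
            - name: API_TOKEN  # secret injected at runtime, no image rebuild needed
              valueFrom:
                secretKeyRef:
                  name: app-secrets   # assumed to exist
                  key: token
```

Applying a new image tag to this manifest triggers an automated rollout; `kubectl rollout undo` reverts it.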

Kubernetes Architecture Explained
Kubernetes offers a loosely coupled mechanism for service discovery across a cluster. A Kubernetes cluster has a control plane and one or more compute nodes. Overall, the control plane is responsible for managing the cluster, exposing the application programming interface (API), and scheduling the starting and stopping of pods on compute nodes based on the desired configuration.
The main components of a Kubernetes cluster include:
Nodes: Nodes are VMs or physical servers that host containerized applications. Each node in a cluster can run one or more application instances. There can be as few as one node; however, a typical Kubernetes cluster will have several nodes (and deployments with hundreds or more nodes are not uncommon).
Image Registry: Container images are kept in a registry, from which nodes pull them for execution in container pods.
Pods: Pods are where containerized applications run. They can include one or more containers and are the smallest unit of deployment for applications in a Kubernetes cluster.

Master Components
Kubernetes master components are the core components that make up the control plane of a Kubernetes cluster. They are responsible for managing the overall state of the cluster and making decisions about how to schedule and deploy containers. The main Kubernetes master components are:
1. API Server: The API server is the central management point for Kubernetes. It exposes the Kubernetes API and is responsible for validating and processing API requests from users and other Kubernetes components. The API server acts as the front end for the Kubernetes control plane and is responsible for all communication between the various Kubernetes components.
2. etcd: etcd is a distributed key-value store that is used by Kubernetes to store all configuration data and state information for the cluster. This includes information about the nodes in the cluster, the containers running on each node, and the current state of Kubernetes objects such as deployments and services. The etcd database is highly available and can be distributed across multiple nodes for increased fault tolerance.
3. Controller Manager: The controller manager is responsible for managing various controllers that automate the state of the system. These controllers are responsible for tasks such as maintaining the desired state of deployments and replica sets, performing rolling updates, and managing endpoints and services. The controller manager watches the state of the cluster and makes changes as necessary to ensure that the desired state is always maintained.
4. Scheduler: The scheduler is responsible for scheduling containers onto nodes based on available resources and other constraints. It selects the best node for each container and schedules the container to run on that node. The scheduler takes into account factors such as the resource requirements of each container, the available resources on each node, and any affinity or anti-affinity rules that have been defined.
Together, these Kubernetes master components provide the foundation for managing a Kubernetes cluster. They are responsible for managing the overall state of the cluster, handling user requests, and automating the management of containers and other Kubernetes objects.
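To make the scheduler's job concrete, here is a hedged Pod sketch combining the constraints mentioned above: resource requests, a node selector, and an anti-affinity rule. The `disktype: ssd` node label and all names are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
  labels:
    app: sched-demo
spec:
  nodeSelector:
    disktype: ssd          # only schedule onto nodes carrying this (assumed) label
  affinity:
    podAntiAffinity:       # spread copies: avoid nodes already running this app
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: sched-demo
          topologyKey: kubernetes.io/hostname
  containers:
    - name: main
      image: nginx:1.25
      resources:
        requests:          # the scheduler only picks a node with this much free capacity
          cpu: "500m"
          memory: "256Mi"
```

If no node satisfies all three constraints, the pod stays Pending, which is the scheduler reporting that the desired state is currently unsatisfiable.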
Worker Components?
Kubernetes worker node components are the components that run on each worker node in a Kubernetes cluster. They are responsible for running containers and providing the necessary resources for the containers to function properly. The main Kubernetes worker node components are:
1. Kubelet: The kubelet is an agent that runs on each node and is responsible for managing the containers on that node. It communicates with the Kubernetes API server to receive instructions on what containers to run, and then ensures that the containers are running and healthy. The kubelet also monitors the containers for any issues and reports back to the API server if there are any problems.
2. Container Runtime: The container runtime is responsible for running the containers on each node. Kubernetes supports multiple container runtimes, including Docker, containerd, and CRI-O. The container runtime is responsible for pulling the container images from a registry and then creating and managing the containers themselves.
3. Kube-proxy: The kube-proxy is a network proxy that runs on each node and is responsible for implementing Kubernetes services and network policies. It manages network traffic to and from the containers and ensures that the traffic is routed to the correct destination. The kube-proxy also enforces network policies that have been defined in the cluster, such as restricting access to certain services.
4. Add-ons: Add-ons are optional components that provide additional functionality to Kubernetes worker nodes, such as logging and monitoring. Common add-ons include the Kubernetes Dashboard, which provides a web-based UI for managing the cluster, and various logging and monitoring tools that help administrators monitor the health of the cluster and troubleshoot issues.
Together, these Kubernetes worker node components provide the necessary resources for running containers and ensuring that they are healthy and functioning properly. The kubelet manages the containers, the container runtime runs them, the kube-proxy manages network traffic, and add-ons provide additional functionality as needed.
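As a small illustration of what kube-proxy implements, a Service manifest like the following (names hypothetical) gives a set of pods a stable virtual IP and DNS name; kube-proxy then programs each node so that traffic to that IP is load-balanced across the matching pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # resolvable in-cluster simply as "backend"
spec:
  type: ClusterIP        # the default: a cluster-internal virtual IP
  selector:
    app: backend         # kube-proxy routes to pods carrying this label
  ports:
    - port: 80           # port clients connect to
      targetPort: 8080   # port the container actually listens on
```

Pods come and go, but the Service's name and IP stay stable, which is the loose coupling the architecture section described.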

Workstation Components?
Here are some of the components developers and administrators typically use on a local workstation.
1. Kubernetes CLI (kubectl): The Kubernetes CLI, or kubectl, is a command-line tool that allows developers and administrators to interact with Kubernetes clusters from their local workstations. With kubectl, users can create, modify, and delete Kubernetes objects such as pods, services, and deployments.
2. Kubernetes Dashboard: The Kubernetes Dashboard is a web-based UI that provides a graphical interface for managing Kubernetes clusters. It allows users to view the state of the cluster, create and modify objects, and monitor the health of the cluster.
3. Container Registry: A container registry is a service that allows users to store and distribute container images. Developers can use a container registry to store their container images and then deploy those images to a Kubernetes cluster.
4. Text Editor or Integrated Development Environment (IDE): Developers can use a text editor or IDE to write and edit Kubernetes manifests and other configuration files. This can be useful for creating and modifying objects, debugging issues, and troubleshooting problems.
5. Continuous Integration/Continuous Deployment (CI/CD) Tools: CI/CD tools such as Jenkins, CircleCI, and GitLab can be used to automate the deployment of containerized applications to Kubernetes clusters. These tools can be used to build container images, test them, and then deploy them to a Kubernetes cluster.
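As a hedged sketch of what such a pipeline might look like in GitLab CI, the fragment below builds and pushes an image, then updates a Deployment. The registry URL, image and Deployment names, and the cluster credentials (a `KUBECONFIG` CI/CD variable) are all assumptions, not a ready-to-use configuration:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind     # Docker-in-Docker service to build the image
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # assumes a KUBECONFIG variable pointing at the target cluster
    - kubectl set image deployment/myapp web=registry.example.com/myapp:$CI_COMMIT_SHORT_SHA
```

Tagging each image with the commit SHA makes every deployment traceable back to the exact source revision.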

What is a Pod?

A pod is the smallest execution unit in Kubernetes. A pod encapsulates one or more applications. Pods are ephemeral by nature; if a pod (or the node it executes on) fails, Kubernetes can automatically create a new replica of that pod to continue operations. Pods include one or more containers (such as Docker containers).
Pods also provide environmental dependencies, including persistent storage volumes (storage that outlives any individual pod) and the configuration data needed to run the container(s) within the pod.
What does a Pod do?
Pods represent the processes running on a cluster. By limiting pods to a single process, Kubernetes can report on the health of each process running in the cluster. Pods have:
• a unique IP address (which allows them to communicate with each other)
• persistent storage volumes (as required)
• configuration information that determines how a container should run.
Although most pods contain a single container, many will have a few containers that work closely together to execute a desired function.
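The multi-container case can be sketched as a pod in which a sidecar shares a volume with the main container. Names, images, and paths below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs      # emptyDir lives as long as the pod and is visible to both containers
      emptyDir: {}
  containers:
    - name: web              # main container writes access logs
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper      # sidecar tails the same files for shipping elsewhere
      image: busybox:1.36
      command: ["sh", "-c", "tail -n+1 -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Both containers share the pod's network namespace and storage, which is why tightly coupled helpers belong in the same pod rather than in a separate one.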
