Kubernetes

1. What is Kubernetes?

In Development (Dev) environments, running containers on a single host for development and testing of applications may be a suitable option. However, when migrating to Quality Assurance (QA) and Production (Prod) environments, that is no longer a viable option because the applications and services need to meet specific requirements:
• Fault-tolerance
• On-demand scalability
• Optimal resource usage
• Auto-discovery, so that services can automatically find and communicate with each other
• Accessibility from the outside world
• Seamless updates/rollbacks without any downtime (see the Deployment sketch after this list).
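
To make these requirements concrete, here is a minimal sketch of a Kubernetes Deployment manifest that addresses two of them: on-demand scalability (the replicas field) and seamless updates without downtime (the RollingUpdate strategy). The name web-app and the nginx:1.25 image are illustrative placeholders, not something prescribed by Kubernetes:

```yaml
# A minimal Deployment sketch: "replicas" gives on-demand scalability,
# and the RollingUpdate strategy enables updates/rollbacks without downtime.
# All names (web-app, nginx:1.25) are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                # scale up or down on demand
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # keep every replica serving during an update
      maxSurge: 1            # add one new Pod at a time
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Changing the image tag and re-applying this manifest would roll the update out one Pod at a time, so some replicas keep serving traffic throughout.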

Container orchestrators are tools that group systems together to form clusters, where the deployment and management of containers is automated at scale while meeting the requirements mentioned above.

Kubernetes is one such container orchestrator, and it is by far the most widely used.

Kubernetes comes from the Greek word κυβερνήτης, which means helmsman or ship pilot. With this analogy in mind, we can think of Kubernetes as the pilot of a ship of containers.

Kubernetes is heavily inspired by the Google Borg system, the container and workload orchestrator that has run Google's global operations for more than a decade. It is an open source project written in the Go language and licensed under the Apache License, Version 2.0.

Kubernetes was started by Google and, with its v1.0 release in July 2015, Google donated it to the Cloud Native Computing Foundation (CNCF), one of the largest sub-foundations of the Linux Foundation.

New Kubernetes versions are released in four-month cycles. The current stable version is 1.26 (as of December 2022).

2. Why do we need Kubernetes?

Although we can manually maintain a couple of containers, or write scripts to manage the lifecycle of dozens of them, orchestrators make things much easier for users, especially when it comes to managing hundreds or thousands of containers running on a global infrastructure.

Most container orchestrators can:
• Group hosts together while creating a cluster.
• Schedule containers to run on hosts in the cluster based on resource availability.
• Enable containers in a cluster to communicate with each other regardless of the host they are deployed to in the cluster.
• Bind containers and storage resources.
• Group sets of similar containers and bind them to load-balancing constructs to simplify access to containerized applications, creating a level of abstraction between the containers and the client (see the Service sketch after this list).
• Manage and optimize resource usage.
• Allow for implementation of policies to secure access to applications running inside containers.
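
As a concrete illustration of the load-balancing bullet above, here is a minimal sketch of a Kubernetes Service, the construct Kubernetes uses to group similar containers behind one stable interface. The web-app name matches the hypothetical Deployment shown earlier; both names are illustrative:

```yaml
# A Service groups all Pods labeled app: web-app behind one stable
# virtual IP and DNS name, load-balancing client traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app        # matches the Pods created by the Deployment above
  ports:
    - port: 80          # port clients connect to
      targetPort: 80    # port the containers listen on
```

Clients inside the cluster can then reach the application through the Service's DNS name (web-app) regardless of which nodes the individual Pods land on, assuming the cluster DNS add-on is running, as it is in most distributions.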

With all these configurable yet flexible features, container orchestrators are an obvious choice when it comes to managing containerized applications at scale.

Most container orchestrators can be deployed on the infrastructure of our choice - on bare metal, Virtual Machines, on-premises, on public and hybrid clouds. Kubernetes, for example, can be deployed on a workstation, with or without an isolation layer such as a local hypervisor or container runtime, inside a company's data center, in the cloud on AWS Elastic Compute Cloud (EC2) instances, Google Compute Engine (GCE) VMs, DigitalOcean Droplets, OpenStack, etc.

There are turnkey solutions which allow Kubernetes clusters to be installed, with only a few commands, on top of cloud Infrastructures-as-a-Service, such as GCE, AWS EC2, IBM Cloud, Rancher, VMware Tanzu, and multi-cloud solutions through IBM Cloud Private or StackPointCloud.

Last but not least, there is the managed container orchestration as-a-Service, more specifically the managed Kubernetes as-a-Service solution, offered and hosted by the major cloud providers, such as Amazon Elastic Kubernetes Service (Amazon EKS), Azure Kubernetes Service (AKS), DigitalOcean Kubernetes, Google Kubernetes Engine (GKE), IBM Cloud Kubernetes Service, Oracle Container Engine for Kubernetes, or VMware Tanzu Kubernetes Grid.

3. How does Kubernetes work?

At a very high level, Kubernetes is a cluster of compute systems categorized by their distinct roles:

One or more control plane nodes
One or more worker nodes (optional, but recommended).

The control plane node provides a running environment for the control plane agents responsible for managing the state of a Kubernetes cluster, and it is the brain behind all operations inside the cluster. The control plane components are agents with very distinct roles in the cluster's management. In order to communicate with the Kubernetes cluster, users send requests to the control plane via a Command Line Interface (CLI) tool, a Web User-Interface (Web UI) Dashboard, or an Application Programming Interface (API).

It is important to keep the control plane running at all costs. Losing the control plane may introduce downtime, causing service disruption to clients, with possible loss of business. To ensure the control plane's fault tolerance, control plane node replicas can be added to the cluster, configured in High-Availability (HA) mode. While only one of the control plane nodes is dedicated to actively managing the cluster, the control plane components stay in sync across the control plane node replicas. This type of configuration adds resiliency to the cluster's control plane, should the active control plane node fail.

A control plane node runs the following essential control plane components and agents:

API Server
Scheduler
Controller Managers
Key-Value Data Store.
In addition, the control plane node runs:

Container Runtime
Node Agent
Proxy
Optional add-ons for cluster-level monitoring and logging.

A worker node provides a running environment for client applications. These applications are microservices running as application containers. In Kubernetes the application containers are encapsulated in Pods, controlled by the cluster control plane agents running on the control plane node. Pods are scheduled on worker nodes, where they find required compute, memory and storage resources to run, and networking to talk to each other and the outside world. A Pod is the smallest scheduling work unit in Kubernetes. It is a logical collection of one or more containers scheduled together, and the collection can be started, stopped, or rescheduled as a single unit of work.
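
A minimal Pod manifest, sketched below, shows this smallest scheduling unit in practice. All names are illustrative, and in production Pods are rarely created directly like this (controllers such as Deployments usually manage them), but the shape is useful for illustration:

```yaml
# The smallest schedulable unit: a Pod wrapping a single container.
# Applied with: kubectl apply -f pod.yaml (all names are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.25
      resources:
        requests:            # compute resources the scheduler considers
          cpu: "100m"        # when picking a worker node with capacity
          memory: "128Mi"
```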

Also, in a multi-worker Kubernetes cluster, the network traffic between client users and the containerized applications deployed in Pods is handled directly by the worker nodes, and is not routed through the control plane node.
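
One way to see this routing in practice is a NodePort Service, sketched below: it opens the same port on every worker node, so external clients reach the Pods through the workers directly, never through the control plane node. The names and the port number are illustrative:

```yaml
# NodePort: every worker node listens on port 30080 and forwards
# traffic straight to the matching Pods; the control plane node
# is not in the data path. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web-app-external
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must fall in the default 30000-32767 range
```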

4. What are Pods?

A Kubernetes pod is a collection of one or more Linux® containers, and is the smallest unit of a Kubernetes application. Any given pod can be composed of multiple, tightly coupled containers (an advanced use case) or just a single container (a more common use case). Containers are grouped into Kubernetes pods in order to increase the intelligence of resource sharing, as described below.

Within the Kubernetes system, containers in the same pod will share the same compute resources. These compute resources are pooled together in Kubernetes to form clusters, which can provide a more powerful and intelligently distributed system for executing applications. The pieces of Kubernetes, from containers to pods and nodes to clusters, can be challenging to understand at first, but the most relevant pieces to understanding the benefits of Kubernetes pods break down as follows:

Hardware units
Node: the smallest unit of computing hardware in Kubernetes, easily thought of as one individual machine.

Cluster: a collection of nodes that are grouped together to provide intelligent resource sharing and balancing.

Software units
Linux container: a set of one or more processes, including all necessary files to run, making them portable across machines.

Kubernetes pod: a collection of one or more Linux containers, packaged together to maximize the benefits of resource sharing via cluster management.

In essence, individual hardware is represented in Kubernetes as a node. Multiple nodes are collected into clusters, allowing compute power to be distributed as needed. Running on those clusters are pods, which ensure that any tightly coupled containers within them run together on the same node.
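
The resource sharing described above can be sketched as a two-container Pod: both containers share the Pod's network namespace (they can talk over localhost) and, in this example, also share a volume. This is the "tightly coupled containers" advanced use case; all names are illustrative:

```yaml
# Two tightly coupled containers in one Pod: they share the Pod's
# IP/network namespace and an emptyDir volume. Illustrative names.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}           # scratch space that lives as long as the Pod
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "touch /data/log && tail -f /data/log"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers share the Pod's network namespace, they could equally communicate over localhost; the shared volume is simply the easiest sharing mechanism to demonstrate.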
