What is Kubernetes?
Kubernetes is an open-source platform that manages containerized applications (such as Docker containers) as a cluster. Along with automated deployment and scaling of containers, it provides self-healing by automatically restarting failed containers and rescheduling them when their host nodes die. This capability improves the application's availability.
Features of Kubernetes:
- Automated scheduling
- Self-healing capabilities
- Automated rollouts and rollbacks
- Horizontal scaling and load balancing
- Resource optimization and utilization
- Support for multiple clouds and hybrid clouds
- Extensibility
- Community support
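Several of these features, such as self-healing, rolling updates, and horizontal scaling, are configured through a Deployment object. Below is a minimal sketch of such a manifest; the name `nginx-demo` and all values are illustrative, not taken from any real system:

```yaml
# Hypothetical Deployment illustrating self-healing, scaling, and rollouts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo          # illustrative name
spec:
  replicas: 3               # horizontal scaling: keep 3 pods running
  selector:
    matchLabels:
      app: nginx-demo
  strategy:
    type: RollingUpdate     # automated rollouts with minimal downtime
    rollingUpdate:
      maxUnavailable: 1     # at most one pod down during an update
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        livenessProbe:      # self-healing: restart the container if this fails
          httpGet:
            path: /
            port: 80
```

If a pod crashes or its node dies, the Deployment's controller recreates it elsewhere to keep the replica count at three.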
**Why do we need K8s? (Scenarios where we can use K8s)**
Applications of Kubernetes:
- Microservices architecture: Kubernetes is well-suited for managing microservices architectures, which involve breaking down complex applications into smaller, modular components that can be independently deployed and managed.
- Cloud-native development: Kubernetes is a key component of cloud-native development, which involves building applications that are designed to run on cloud infrastructure and take advantage of the scalability, flexibility, and resilience of the cloud.
- Continuous integration and delivery: Kubernetes integrates well with CI/CD pipelines, making it easier to automate the deployment process and roll out new versions of your application with minimal downtime.
- Hybrid and multi-cloud deployments: Kubernetes provides a consistent deployment and management experience across different cloud providers, on-premise data centers, and even developer laptops, making it easier to build and manage hybrid and multi-cloud deployments.
- High-performance computing: Kubernetes can be used to manage high-performance computing workloads, such as scientific simulations, machine learning, and big data processing.
- Edge computing: Kubernetes is also being used in edge computing applications, where it can be used to manage containerized applications running on edge devices such as IoT devices or network appliances.
How does K8s work?
Kubernetes follows a client-server architecture, with the master installed on one machine and the nodes on separate Linux machines. It follows the master-slave model, in which the master manages containers across multiple Kubernetes nodes. A master and its controlled nodes (worker nodes) constitute a "Kubernetes cluster". A developer can deploy an application in containers with the assistance of the Kubernetes master.
- The Kubernetes master is responsible for managing the entire cluster: it coordinates all activities inside the cluster and communicates with the worker nodes to keep Kubernetes and your application running. It is the entry point for all administrative tasks. When we install Kubernetes on our system, four primary components of the Kubernetes master get installed. The components of the Kubernetes master node are:
a.) API Server – The entry point for all the REST commands used to control the cluster. All administrative tasks go through the API server within the master node.
b.) Scheduler – A service in the master responsible for distributing the workload. It tracks the resource utilization of each worker node and places new workloads on nodes that have enough free resources to accept them.
c.) Controller Manager – Also known as controllers. It is a daemon that runs in a non-terminating loop, watching the cluster's state through the API server and making changes to move the current state toward the desired state.
d.) etcd – A lightweight, distributed key-value database. In Kubernetes, it is the central store for the cluster's current state at any point in time, and it also holds configuration details such as subnets, ConfigMaps, etc.
- Kubernetes Worker Node Components:
a.) Kubelet – The primary node agent; it runs on each worker node inside the cluster and communicates with the master node.
b.) Kube-Proxy – The core networking component inside the Kubernetes cluster; it maintains the network rules that route traffic to pods on each node.
c.) Pods – A pod is a group of one or more containers that are deployed together on the same host.
d.) Docker – The containerization platform used to package your application and all of its dependencies together as containers, so that your application works seamlessly in any environment, whether development, test, or production.
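The scheduler's placement decisions described above are driven by the resource requests declared in a pod spec. The fragment below is a hedged sketch; the pod name, image, and request values are illustrative assumptions:

```yaml
# Illustrative pod spec: the scheduler will only place this pod on a
# node that has at least 250m CPU and 128Mi memory unreserved.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"         # a quarter of a CPU core
        memory: "128Mi"
```

Once the scheduler binds the pod to a node, that node's kubelet pulls the image and starts the container through the container runtime.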
**What is a Pod?**
Kubernetes pods are the smallest deployable computing units.
In Kubernetes, the term "pod" describes one or more containers that operate together. Although a pod can encapsulate many containers, each pod is typically home to only one container or a small number of tightly integrated containers.
A pod's contents are co-located and co-scheduled, modeling an application-specific "logical host". Kubernetes users should host tightly integrated application containers in the same pod because, before containers, these applications or services would have run together on the same virtual or physical machine.
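The "logical host" idea above can be sketched as a pod holding two tightly coupled containers that share a volume. This is a minimal illustrative example; all names and images are assumptions, not from the original article:

```yaml
# Hypothetical two-container pod: a web server plus a sidecar that
# writes content into a shared volume, deployed as one co-scheduled unit.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # illustrative name
spec:
  volumes:
  - name: shared-html
    emptyDir: {}            # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-html
      mountPath: /usr/share/nginx/html
  - name: content-writer
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /html/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-html
      mountPath: /html
```

Both containers start on the same node, share the pod's network namespace and volumes, and are scheduled and restarted as a single unit.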