<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Debug School: Subham Chowdhury</title>
    <description>The latest articles on Debug School by Subham Chowdhury (@contactsubham_750).</description>
    <link>https://www.debug.school/contactsubham_750</link>
    <image>
      <url>https://www.debug.school/images/nOKhdSi_hPeUH0sLX5UBj6SgYATub9Srar_R9Y2G6C4/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvdXNl/ci9wcm9maWxlX2lt/YWdlLzcyNS8yMDc5/Yzk3MC05YWUzLTQz/NDItYmMxNy00ODNk/YzFiZjQ0NDIucG5n</url>
      <title>Debug School: Subham Chowdhury</title>
      <link>https://www.debug.school/contactsubham_750</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://www.debug.school/feed/contactsubham_750"/>
    <language>en</language>
    <item>
      <title>DaemonSet, Job, CronJob, Services, Ingress, Ingress Controller by Subham</title>
      <dc:creator>Subham Chowdhury</dc:creator>
      <pubDate>Wed, 18 Oct 2023 12:05:39 +0000</pubDate>
      <link>https://www.debug.school/contactsubham_750/daemonset-job-cronjobservices-ingress-ingress-controller-by-subham-3idb</link>
      <guid>https://www.debug.school/contactsubham_750/daemonset-job-cronjobservices-ingress-ingress-controller-by-subham-3idb</guid>
      <description>&lt;p&gt;Daemon Set :&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1.  A DaemonSet is a type of workload in Kubernetes that ensures that one and only one Pod runs on every node in a cluster. It is typically used for node-level agents such as proxies and log collectors.
2.  DaemonSets ensure that exactly one instance of a specified Pod is scheduled and running on each node that matches the Pod's node selector.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
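
&lt;p&gt;A minimal DaemonSet manifest sketch (the name and image are illustrative, not from the notes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent    # must match spec.selector.matchLabels
    spec:
      containers:
      - name: agent
        image: nginx:1.25  # one copy of this Pod runs on every matching node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;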

&lt;p&gt;Jobs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. A Job runs a specified task to completion using the container image provided in the template.
2. Once the task completes successfully, the Job is considered finished.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
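
&lt;p&gt;A minimal Job manifest sketch (name, image, and command are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never   # Jobs require Never or OnFailure
  backoffLimit: 3            # retries before the Job is marked failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;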

&lt;p&gt;CronJobs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. A CronJob is a resource that allows you to run Jobs at scheduled intervals.
2. Each time the schedule fires, a one-time Job is created and runs to completion.
3. The schedule field specifies when the Job should run, in cron syntax, e.g. schedule: "*/5 * * * *" runs the Job every 5 minutes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
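
&lt;p&gt;A CronJob manifest sketch using the every-5-minutes schedule from the notes (name, image, and command are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: every-five-minutes
spec:
  schedule: "*/5 * * * *"    # cron syntax: a new Job every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command: ["sh", "-c", "date"]
          restartPolicy: OnFailure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;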

&lt;p&gt;ConfigMap:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. In Kubernetes, a ConfigMap is a resource that allows you to store configuration data separately from your application code.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
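
&lt;p&gt;A ConfigMap sketch holding both a simple key and a file-style entry (names and values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"          # simple key/value, e.g. injected as an env var
  config.properties: |       # file-style entry, e.g. mounted as a volume
    feature.enabled=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;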

&lt;p&gt;Services:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. A Service acts as an internal network load balancer.
2. A Service only sends traffic to healthy Pods.
3. Services can be made accessible from outside the cluster.
4. It uses TCP by default, and load balancing is random by default.
5. A Service can be configured for session affinity.
6. The labels of the Pods must match the selector labels of the Service for it to balance the load.
7. The expose command automatically takes the labels from a Deployment, creates a Service, and balances the load.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
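
&lt;p&gt;A Service manifest sketch tying these points together (names and ports are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web                  # must match the labels on the Pods
  ports:
  - protocol: TCP             # TCP is the default
    port: 80                  # Service port
    targetPort: 8080          # container port
  sessionAffinity: ClientIP   # optional session affinity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;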

&lt;p&gt;NodePort:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. NodePort creates a Service with a cluster IP and type NodePort, which makes it accessible from outside the cluster.
2. NodePort is used to communicate from outside the cluster to a Pod.
3. To access a Pod from inside the cluster, use the Service IP and port; to access it from outside, use a Node IP and the node port.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
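
&lt;p&gt;A NodePort Service sketch (names and ports are illustrative; the nodePort must fall in the default 30000-32767 range):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # Service (cluster IP) port, used from inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # reachable from outside as NodeIP:30080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;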

&lt;p&gt;Ingress Controller:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. An Ingress Controller in Kubernetes is a component that manages and configures access to Services within the cluster from external network traffic.
2. It acts as a reverse proxy, handling incoming HTTP and HTTPS traffic and forwarding requests to the appropriate Services and Pods based on the rules defined in the Ingress resource.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
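
&lt;p&gt;An Ingress resource sketch that such a controller would act on (the host, Service name, and the assumption of an installed NGINX Ingress Controller are all illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx     # assumes an NGINX Ingress Controller is installed
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc     # HTTP(S) requests for example.local/ go here
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;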

</description>
    </item>
    <item>
      <title>What is Namespace, troubleshoot Pods, ReplicationController, ReplicaSet &amp; Deployment by Subham</title>
      <dc:creator>Subham Chowdhury</dc:creator>
      <pubDate>Tue, 17 Oct 2023 12:06:46 +0000</pubDate>
      <link>https://www.debug.school/contactsubham_750/what-is-namespacetroubleshoot-podsreplicationcontrollerreplicaset-deployement-by-subham-3kd4</link>
      <guid>https://www.debug.school/contactsubham_750/what-is-namespacetroubleshoot-podsreplicationcontrollerreplicaset-deployement-by-subham-3kd4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Namespace:&lt;/strong&gt;- &lt;br&gt;
Namespace is the logical separation of the resources of k8s cluster which will be shared across multiple pods.&lt;br&gt;
There are 4 types of Namespaces&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;System namespace - used for system Pods&lt;/li&gt;
&lt;li&gt;Public namespace - used for Pods to be shared across Namespaces&lt;/li&gt;
&lt;li&gt;User namespace - used for user-defined Pods&lt;/li&gt;
&lt;li&gt;Default namespace - used for Pods that are not mapped to any Namespace&lt;/li&gt;
&lt;/ol&gt;
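
&lt;p&gt;A user-defined Namespace is just a small manifest (the name here is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: dev-team   # Pods created with namespace: dev-team land here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;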

&lt;p&gt;&lt;strong&gt;Troubleshoot Pods:&lt;/strong&gt;&lt;br&gt;
To troubleshoot Pods there are a few frequently used kubectl subcommands:&lt;br&gt;
logs&lt;br&gt;
exec&lt;br&gt;
cp&lt;br&gt;
port-forward&lt;br&gt;
auth&lt;br&gt;
debug&lt;/p&gt;
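
&lt;p&gt;Typical invocations of these subcommands look like this (the Pod name my-pod and file paths are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs my-pod                           # container logs
kubectl exec -it my-pod -- sh                 # open a shell inside a container
kubectl cp my-pod:/var/log/app.log ./app.log  # copy a file out of a Pod
kubectl port-forward my-pod 8080:80           # forward a local port to the Pod
kubectl auth can-i delete pods                # check RBAC permissions
kubectl debug -it my-pod --image=busybox      # attach an ephemeral debug container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;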

&lt;p&gt;&lt;strong&gt;ReplicationController&lt;/strong&gt;:&lt;br&gt;
With this feature we can create n number of Pods through a ReplicationController YAML file (kind: ReplicationController). The controller automatically detects any mismatch between the desired and actual number of Pods and reconciles them. It is now deprecated, as it was too buggy, and its function is covered by ReplicaSet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReplicaSet&lt;/strong&gt;:&lt;br&gt;
This resource does the same job as a ReplicationController, but it is more stable and can also match labels under spec.&lt;/p&gt;
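
&lt;p&gt;A ReplicaSet sketch showing the label matching under spec (names and image are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:       # label matching under spec
      app: web
  template:
    metadata:
      labels:
        app: web       # Pods carrying this label are counted toward replicas
    spec:
      containers:
      - name: web
        image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;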

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;:&lt;br&gt;
This is the most preferred way of deploying Pods; we can provide the replica count, image, and more in a single command. It consists of 5 parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Replica&lt;/li&gt;
&lt;li&gt;Controller&lt;/li&gt;
&lt;li&gt;Versioning&lt;/li&gt;
&lt;li&gt;Rollout&lt;/li&gt;
&lt;li&gt;Rollback&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Deployment follows the RollingUpdate deployment pattern by default; the other pattern is Recreate. With RollingUpdate, containers are replaced one at a time (say, one out of 3), so there is zero downtime during an update. With Recreate, all containers are brought down and then recreated, so there is some downtime.&lt;/p&gt;
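
&lt;p&gt;A Deployment sketch with the rolling-update strategy spelled out (names, image, and strategy values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate   # default; the alternative is Recreate
    rollingUpdate:
      maxUnavailable: 1   # replace one Pod at a time for zero downtime
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;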

</description>
    </item>
    <item>
      <title>What is Pod by Subham</title>
      <dc:creator>Subham Chowdhury</dc:creator>
      <pubDate>Mon, 16 Oct 2023 11:52:58 +0000</pubDate>
      <link>https://www.debug.school/contactsubham_750/what-is-pod-by-subham-2gpe</link>
      <guid>https://www.debug.school/contactsubham_750/what-is-pod-by-subham-2gpe</guid>
      <description>&lt;p&gt;What is a Pod?&lt;/p&gt;

&lt;p&gt;Pods are the atomic unit of work/scheduling in Kubernetes. A Pod can contain one or more containers. Pods are ephemeral and have no state. A Pod has an IP address, and that same IP applies to all the containers inside the Pod; each container, however, uses a different port for communication. A Pod is alive as long as all of its containers are alive. Pods communicate with each other over the Pod network, while two containers inside the same Pod can talk to each other via localhost. As a best practice, design one container per Pod. You cannot log into a Pod itself, but you can log into the containers in the Pod. Kubernetes continuously checks the desired state of the Pod count: if the required number of Pods is X and the current count reported by the Controller Manager is X-1, the API server instructs the scheduler to bring it back to X, and this continues until the desired state is met. If the desired state is 1 Pod with 5 containers and one of the 5 containers fails to come up, the Pod becomes unusable; Kubernetes will try to bring up only the failed container, but will not otherwise fix the Pod.&lt;/p&gt;
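
&lt;p&gt;A minimal Pod manifest sketch following the one-container-per-Pod practice (name and image are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:            # one container per Pod is the best practice
  - name: app
    image: nginx:1.25
    ports:
    - containerPort: 80  # each container gets its own port on the shared Pod IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;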

</description>
    </item>
    <item>
      <title>What is Kubernetes by Subham</title>
      <dc:creator>Subham Chowdhury</dc:creator>
      <pubDate>Mon, 16 Oct 2023 07:32:32 +0000</pubDate>
      <link>https://www.debug.school/contactsubham_750/what-is-kubernetes-by-subham-10e8</link>
      <guid>https://www.debug.school/contactsubham_750/what-is-kubernetes-by-subham-10e8</guid>
      <description>&lt;h2&gt;
  
  
  What is Kubernetes ?
&lt;/h2&gt;

&lt;p&gt;Ans: Kubernetes, in simple terms, is a container orchestration tool. It was developed by Google, open-sourced in 2014, and is now part of the CNCF. It is developed in Go (Golang).&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do we need Kubernetes? Explain in 10 lines
&lt;/h2&gt;

&lt;p&gt;Ans: Docker helped us run our application in a single container. A real production application typically needs hundreds of Docker containers to run efficiently. However, you need someone to ensure that all these containers are working as expected, and that when there is downtime these containers can be reinitialized without any human intervention. That is where Kubernetes comes in. Kubernetes also ensures that whatever X number of containers are desired to be running at any given point in time, that desire is always fulfilled. So K8s will listen, persist, monitor, and schedule all the containers in Pods, and this is a guaranteed service.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Kubernetes Works?
&lt;/h2&gt;

&lt;p&gt;Ans: Kubernetes works in the following manner: humans share instructions via a deployment script, which is sent to the K8s cluster.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The master node receives the instruction.&lt;/li&gt;
&lt;li&gt;The master deciphers the instruction and delegates the work to be performed to the scheduler.&lt;/li&gt;
&lt;li&gt;The master also ensures that these instructions are persisted in the etcd cluster.&lt;/li&gt;
&lt;li&gt;The master instructs the controller manager to monitor and report the status of the minions (worker nodes).&lt;/li&gt;
&lt;li&gt;The scheduler now passes the instruction to a worker node, which in turn creates the desired Pods where the containers will run.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Kubernetes Architecture. Explain each component with  1 one-liner.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.debug.school/images/ORalt1OpDXBGqT6wWUY3n8uWgnPT2c16ZzdZApT9c3o/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNzEzNTB5/cnZuMDIwYzlkampn/a20uSlBH" class="article-body-image-wrapper"&gt;&lt;img src="https://www.debug.school/images/ORalt1OpDXBGqT6wWUY3n8uWgnPT2c16ZzdZApT9c3o/rt:fit/w:800/g:sm/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvYXJ0/aWNsZXMvNzEzNTB5/cnZuMDIwYzlkampn/a20uSlBH" alt="Image description" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The K8s architecture is divided into two parts &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Master Node:&lt;/strong&gt;&lt;br&gt;
The master node comprises 4 major software components:&lt;br&gt;
1) &lt;strong&gt;API Server&lt;/strong&gt; - This is the heart of the master node; it is a conglomeration of many APIs. All communication, from the external world and between internal components, goes through the API server. It accepts JSON. It communicates with the other components of the master node, viz. the scheduler, etcd, and the controller manager.&lt;br&gt;
2) &lt;strong&gt;etcd&lt;/strong&gt; - This is the database of the master node. It stores all information about the k8s cluster as key-value pairs. It is the single source of truth: if an entry is present in etcd, it is certain to be available in the cluster, and vice versa. etcd can itself be clustered for HA.&lt;br&gt;
3) &lt;strong&gt;Controller Manager&lt;/strong&gt; - This component's job is to monitor and report the state of the desired items as per the client's request. It contains many controllers, each with a mutually exclusive responsibility to monitor an individual resource and report back to the API server.&lt;br&gt;
4) &lt;strong&gt;Scheduler&lt;/strong&gt; - The scheduler's job is to tell the worker what work needs to be done. It only instructs and does not do any work itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worker Node:&lt;/strong&gt;&lt;br&gt;
The worker node comprises 3 main components:&lt;br&gt;
1) &lt;strong&gt;Kubelet&lt;/strong&gt;: The kubelet registers the worker node with the master. It connects to the API server, which in turn issues a certificate that allows the worker to join the k8s cluster as a worker node. It constantly communicates with the master to get the required instructions.&lt;/p&gt;

&lt;p&gt;2) &lt;strong&gt;Container Engine&lt;/strong&gt; - This is the container runtime, responsible for pulling images from a trusted repository and running the containers. The runtime can be Docker, containerd, rkt, etc.&lt;/p&gt;

&lt;p&gt;3) &lt;strong&gt;Kube Proxy&lt;/strong&gt;: This is the networking component on each worker node that routes Service traffic to Pods. Note that the Pod network itself is provided by a separate CNI plugin, which is not bundled with K8s and must be installed from a CNCF option like Calico, etc.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
