<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Debug School: Nagendra</title>
    <description>The latest articles on Debug School by Nagendra (@nagendrakr29_569).</description>
    <link>https://www.debug.school/nagendrakr29_569</link>
    <image>
      <url>https://www.debug.school/images/qKYokfRq74SNt9WdMsh9X3Ke3HdonfapqvVtkAEmj8Y/rs:fill:90:90/g:sm/mb:500000/ar:1/aHR0cHM6Ly93d3cu/ZGVidWcuc2Nob29s/L3VwbG9hZHMvdXNl/ci9wcm9maWxlX2lt/YWdlLzcyNy8wZjhl/ZDhhOC1kMDZlLTQ2/ZWQtOTRlMy00OTY3/NjBmODZiMDUucG5n</url>
      <title>Debug School: Nagendra</title>
      <link>https://www.debug.school/nagendrakr29_569</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://www.debug.school/feed/nagendrakr29_569"/>
    <language>en</language>
    <item>
      <title>Services Resources in K8s</title>
      <dc:creator>Nagendra</dc:creator>
      <pubDate>Thu, 19 Oct 2023 04:31:32 +0000</pubDate>
      <link>https://www.debug.school/nagendrakr29_569/services-resources-in-k8s-3j67</link>
      <guid>https://www.debug.school/nagendrakr29_569/services-resources-in-k8s-3j67</guid>
      <description>&lt;p&gt;Basically Services means the Network load balancer.&lt;/p&gt;

&lt;p&gt;There will be a problem like if some one has to call the rest call where inside the POD its running in that case he needs the ipaddress and POrt.&lt;/p&gt;

&lt;p&gt;But these ip address keeps on changing when the pod fone k8s will create one more pod with different ip.&lt;/p&gt;

&lt;p&gt;In this case we have the use the Services Basically this will works based on the labels&lt;/p&gt;

&lt;p&gt;" POd label should match with the services selector label " then those Services provides one of the ip with that ip we can access that "&lt;/p&gt;

&lt;p&gt;For example, with the default --type=ClusterIP : kubectl expose rs nagendrareplica --port=1234 --target-port=80&lt;/p&gt;

&lt;p&gt;There are three types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;ClusterIP : using this cluster IP plus the Service port, we can access the application from inside the cluster only, not from outside, e.g.&lt;br&gt;
&lt;a href="http://10.100.6.37:80"&gt;http://10.100.6.37:80&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;--type=NodePort : this does the ClusterIP step and additionally opens a node port that is accessible from outside the cluster&lt;br&gt;
root@ip-172-31-46-170:/home/ubuntu/nagendra# kubectl expose rs nagendrareplica --port=1234 --target-port=80 --type=NodePort -n=nagendra&lt;br&gt;
service/nagendrareplica exposed&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the ClusterIP and the Service port we can access the application from inside the cluster.&lt;br&gt;
With the IP of a node where the Service is running, plus the port assigned by NodePort, we can access it from outside; that traffic is redirected to the corresponding application.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;--type=LoadBalancer : this creates a load-balancer IP in addition to the node port. Here Kubernetes itself cannot provision the load balancer, which is why the EXTERNAL-IP shows as pending:
root@ip-172-31-46-170:/home/ubuntu/nagendra# kubectl expose rs nagendrareplica --port=1234 --target-port=80 --type=LoadBalancer -n=nagendra
servicel1           LoadBalancer   10.111.196.233   &lt;pending&gt;     1234:30649/TCP   5s&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Workflow: a user curls &lt;a href="http://www.app.xxx"&gt;www.app.xxx&lt;/a&gt; (which was added in the DNS mapping; there we have to mention the load-balancer IP and node port)&lt;/p&gt;

&lt;p&gt;Then the request goes from the load-balancer IP to one of the nodes on the corresponding node port, which redirects it to port 80 of the container and returns the result&lt;/p&gt;

&lt;p&gt;Note : there is a scale problem here: if there are many Service resources, say 100, then 100 load balancers have to be created, which is costly.&lt;/p&gt;

&lt;p&gt;Examples :&lt;/p&gt;

&lt;p&gt;root@ip-172-31-46-170:/home/ubuntu/nagendra# kubectl get services -n=nagendra&lt;br&gt;
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE&lt;br&gt;
nag-cs              ClusterIP   10.100.6.37      &lt;none&gt;        5678/TCP         66m&lt;br&gt;
nagendrareplica     NodePort    10.100.64.125    &lt;none&gt;        1234:30710/TCP   25s&lt;br&gt;
nagendrareplicav2   ClusterIP   10.111.113.155   &lt;none&gt;        80/TCP           56m&lt;/p&gt;

</description>
    </item>
    <item>
      <title>DaemonSets, Jobs, CronJobs</title>
      <dc:creator>Nagendra</dc:creator>
      <pubDate>Thu, 19 Oct 2023 04:16:48 +0000</pubDate>
      <link>https://www.debug.school/nagendrakr29_569/deamon-sets-jobs-cron-jobs-35g3</link>
      <guid>https://www.debug.school/nagendrakr29_569/deamon-sets-jobs-cron-jobs-35g3</guid>
      <description>&lt;p&gt;DeamonSets :&lt;br&gt;
This deamon sets creates only one POD in all the nodes not more than that Which was managed by the deamon controller (Here controller make sure all the nodes contains one pod of it)&lt;/p&gt;

&lt;p&gt;We can create a DaemonSet using a YAML file like the one below (kind: DaemonSet)&lt;/p&gt;

&lt;p&gt;apiVersion: apps/v1&lt;br&gt;
kind: DaemonSet&lt;br&gt;
metadata:&lt;br&gt;
  name: logging&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: httpd-logging&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: httpd-logging&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
        - name: webserver&lt;br&gt;
          image: httpd&lt;br&gt;
          ports:&lt;br&gt;
          - containerPort: 80&lt;br&gt;
In the same way we can describe, edit, and delete DaemonSet resources,&lt;br&gt;
and we can list them &lt;/p&gt;

&lt;p&gt;kubectl get ds (ds is the short name for DaemonSet)&lt;/p&gt;

&lt;p&gt;Jobs / CronJobs : &lt;br&gt;
A Job is a resource used only to run a task to completion; once the task is done, so is the Pod &lt;/p&gt;

&lt;p&gt;CronJobs : a CronJob schedules that Pod on a timetable, e.g. "execute the container every 1 hour" &lt;/p&gt;
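&lt;p&gt;The job.yaml and cron.yaml applied in the commands that follow are not shown in the post; a minimal sketch, assuming the classic "pi" Job and an hourly "hello" CronJob (batch/v1 CronJob assumes Kubernetes 1.21+), could be:&lt;/p&gt;

```yaml
# Hypothetical job.yaml: runs once to completion, then the Pod is done
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
---
# Hypothetical cron.yaml: schedules a Pod every hour
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "0 * * * *"     # at minute 0 of every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
          restartPolicy: OnFailure
```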

&lt;p&gt;kubectl get jobs&lt;br&gt;
kubectl get pods&lt;br&gt;
kubectl apply -f job.yaml&lt;br&gt;
kubectl get pods&lt;br&gt;
kubectl describe job pi&lt;br&gt;
vi cron.yaml&lt;br&gt;
kubectl apply -f cron.yaml&lt;br&gt;
kubectl get cronjobs&lt;br&gt;
kubectl get pods&lt;br&gt;
kubectl get cronjobs&lt;br&gt;
kubectl describe cronjob hello&lt;/p&gt;

&lt;p&gt;ConfigMap : &lt;/p&gt;

&lt;p&gt;A ConfigMap is stored in the Kubernetes cluster in the form of key-value pairs (and can be accessed by everything in the cluster)&lt;/p&gt;

&lt;p&gt;If we want it in a Pod, attach the ConfigMap to the Pod and mount it into the container; then we can use those key-value pairs inside the container&lt;/p&gt;
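&lt;p&gt;The cm.yaml used in the commands is not shown in the post; a minimal sketch of such a Pod, assuming the my-config ConfigMap from the kubectl create configmap command and the helloworld-nginx name from the exec commands, might be:&lt;/p&gt;

```yaml
# Hypothetical cm.yaml: a Pod mounting the my-config ConfigMap
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx/conf.d   # keys appear here as files
  volumes:
  - name: config-volume
    configMap:
      name: my-config                # the ConfigMap to mount
```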

&lt;p&gt;Commands : ls&lt;br&gt;
vi reverseproxy.conf&lt;br&gt;
kubectl create configmap my-config --from-file=reverseproxy.conf&lt;br&gt;
kubectl get cm&lt;br&gt;
kubectl describe cm my-config&lt;br&gt;
ls&lt;br&gt;
vi cm.yaml&lt;br&gt;
kubectl apply -f cm.yaml&lt;br&gt;
kubectl get pods&lt;br&gt;
kubectl exec helloworld-nginx ls /etc/nginx/conf.d&lt;br&gt;
kubectl exec helloworld-nginx more /etc/nginx/conf.d/myconfo.conf&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Working With Namespace, Pod, ReplicationController and Deployment</title>
      <dc:creator>Nagendra</dc:creator>
      <pubDate>Wed, 18 Oct 2023 04:31:19 +0000</pubDate>
      <link>https://www.debug.school/nagendrakr29_569/working-with-namespacepodreplicationcontroller-and-deployment-26ak</link>
      <guid>https://www.debug.school/nagendrakr29_569/working-with-namespacepodreplicationcontroller-and-deployment-26ak</guid>
      <description>&lt;p&gt;NameSpace : (its kind of resources) (logical seperatin of cluster&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;There are situations where each Kubernetes cluster is used by multiple teams. If one team's container consumes too much memory it affects the other teams too, and every team member can access all teams' Pods, which is not correct&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In that case we use namespaces, where each team can create its own resources and have limits allocated (e.g. it cannot consume more than a set amount of CPU and memory)&lt;/p&gt;
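&lt;p&gt;Such a limit can be sketched with a ResourceQuota (the quota name and the numbers here are illustrative assumptions, not from the post):&lt;/p&gt;

```yaml
# Hypothetical ResourceQuota capping CPU/memory in one team's namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: nagendra
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace may request
    requests.memory: 8Gi     # total memory the namespace may request
    limits.cpu: "8"          # total CPU limit across all Pods
    limits.memory: 16Gi      # total memory limit across all Pods
```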

&lt;p&gt;To create a namespace : &lt;br&gt;
kubectl create ns &amp;lt;name&amp;gt;, e.g. kubectl create ns nagendra (which creates a namespace called nagendra)&lt;/p&gt;

&lt;p&gt;To describe the namespace : &lt;br&gt;
 kubectl describe ns nagendra&lt;/p&gt;

&lt;p&gt;To edit the namespace&lt;br&gt;
  kubectl edit ns nagendra&lt;/p&gt;

&lt;p&gt;To delete the namespace &lt;br&gt;
   kubectl delete ns nagendra&lt;/p&gt;

&lt;p&gt;To list all the namespaces &lt;br&gt;
   kubectl get ns&lt;/p&gt;

&lt;p&gt;Note : these CRUD operations are common to all resource types&lt;/p&gt;

&lt;p&gt;POD :&lt;/p&gt;

&lt;p&gt;A Pod is a logical unit which cannot be built directly; we can only instantiate a Pod (which creates its own network namespace, i.e. its own port space) &lt;/p&gt;

&lt;p&gt;Write a pod.yaml file (the content of pod.yaml is to be attached once we get access)&lt;/p&gt;
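&lt;p&gt;Until then, a minimal pod.yaml sketch (assuming the nagendrapod name used in the commands and the course image that appears elsewhere in these notes) could look like:&lt;/p&gt;

```yaml
# Hypothetical pod.yaml: the actual file from the course is not shown
apiVersion: v1
kind: Pod
metadata:
  name: nagendrapod
spec:
  containers:
  - name: nginx
    image: scmgalaxy/nginx-devopsschoolv1
    ports:
    - containerPort: 80     # port the container listens on
```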

&lt;p&gt;To create a Pod : &lt;br&gt;
kubectl create -f pod.yaml (which creates the Pod mentioned in the pod.yaml file, here e.g. nagendrapod) &lt;/p&gt;

&lt;p&gt;To describe the POD: &lt;br&gt;
 kubectl describe pod nagendrapod&lt;/p&gt;

&lt;p&gt;To edit the Pod&lt;br&gt;
  kubectl edit pod nagendrapod (which edits the object directly in etcd)&lt;/p&gt;

&lt;p&gt;To delete the pod&lt;br&gt;
   kubectl delete -f pod.yaml&lt;/p&gt;

&lt;p&gt;To list all the Pods&lt;br&gt;
   kubectl get pod&lt;/p&gt;

&lt;p&gt;kubectl get pods -o wide (shows the IP address and full details of the Pods)&lt;/p&gt;

&lt;p&gt;Troubleshooting Pods (each Pod can contain multiple containers):&lt;/p&gt;

&lt;p&gt;kubectl logs &amp;lt;pod&amp;gt; --- &amp;gt; gets the logs of the Pod&lt;br&gt;
kubectl attach &amp;lt;pod&amp;gt; --&amp;gt; attaches to PID 1 of the Pod (like docker attach)&lt;br&gt;
kubectl exec &amp;lt;pod&amp;gt; ls --&amp;gt; executes a command inside the Pod (like docker exec)&lt;br&gt;
kubectl exec -it &amp;lt;pod&amp;gt; /bin/bash --&amp;gt; interactive shell into the Pod&lt;br&gt;
kubectl port-forward --address 0.0.0.0 pod/rajesh 8888:80 (explanation: --address is the listen address, then pod/&amp;lt;nameofthepod&amp;gt; hostport:containerport; 0.0.0.0 means any address can communicate with the Pod, so if an outside client sends data to port 8888 it is forwarded to port 80)&lt;/p&gt;

&lt;p&gt;kubectl auth can-i (checks the permissions of the current user)&lt;/p&gt;

&lt;p&gt;kubectl auth can-i create ns (if the output is yes, the user has access to create namespaces)&lt;/p&gt;

&lt;p&gt;Replication controller : (another resource; Replication means one-to-many, and the controller watches and maintains the desired state mentioned in the .yaml)&lt;/p&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: ReplicationController&lt;br&gt;
metadata:&lt;br&gt;
  name: nagendrarc&lt;br&gt;
spec:&lt;br&gt;
  replicas: 5&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: nginx&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: nginx&lt;br&gt;
        image: scmgalaxy/nginx-devopsschoolv1&lt;/p&gt;

&lt;p&gt;kubectl scale --replicas=1 rc nagendrarc (once we apply the YAML above it creates 5 Pods of that image; after we execute this command the Pod count comes down to 1)&lt;/p&gt;

&lt;p&gt;kubectl create -f rc.yaml&lt;br&gt;
kubectl get rc&lt;br&gt;
kubectl edit rc nagendrarc &lt;br&gt;
kubectl get rc nagendrarc -o yaml (Which gives the yaml file )&lt;br&gt;
kubectl delete rc nagendrarc &lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master"&gt;https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now ReplicationController is deprecated; instead of it we use the ReplicaSet resource, i.e. in the .yaml file we use kind: ReplicaSet&lt;/p&gt;
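&lt;p&gt;A sketch of the ReplicaSet equivalent of the ReplicationController above (the nagendrars name is illustrative; note the apps/v1 apiVersion and the selector, which ReplicaSet requires):&lt;/p&gt;

```yaml
# Hypothetical ReplicaSet: same effect as the ReplicationController above
apiVersion: apps/v1          # ReplicaSet lives in apps/v1, not v1
kind: ReplicaSet
metadata:
  name: nagendrars
spec:
  replicas: 5
  selector:                  # explicit selector is required for ReplicaSet
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx           # must match the selector above
    spec:
      containers:
      - name: nginx
        image: scmgalaxy/nginx-devopsschoolv1
```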

&lt;p&gt;The commands are the same as for the replication controller&lt;/p&gt;

&lt;p&gt;Deployment (Another resource most of the time we use this one)&lt;/p&gt;

&lt;p&gt;We can create a deployment without a YAML file, like below &lt;/p&gt;

&lt;p&gt;kubectl create deployment my-dep --image=scmgalaxy/nginx-devopsschoolv1 --replicas=5&lt;/p&gt;

&lt;p&gt;This creates 5 Pods in which that image's container is running&lt;/p&gt;

&lt;p&gt;kubectl describe deploy my-dep (which will describes the deployment)&lt;/p&gt;

&lt;p&gt;kubectl scale --replicas=2 deploy/my-dep (which brings the replicas down from 5 to 2)&lt;/p&gt;

&lt;p&gt;kubectl rollout history deploy/my-dep (which shows the revision history of the deployment)&lt;/p&gt;

&lt;p&gt;Suppose there is a situation where we have to upgrade to a higher version; that is easy using this:&lt;/p&gt;

&lt;p&gt;kubectl patch deployment my-dep --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"scmgalaxy/nginx-devopsschoolv2"}]' (applies the patch in place)&lt;br&gt;
kubectl rollout status deploy/my-dep (which shows the status of the rollout)&lt;/p&gt;

&lt;p&gt;If we want to go back to a previous version of the deployment (basically the previous image in the Pods), that is easy too &lt;br&gt;
kubectl rollout undo deploy/my-dep --to-revision=2 &lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is POD?</title>
      <dc:creator>Nagendra</dc:creator>
      <pubDate>Mon, 16 Oct 2023 11:48:07 +0000</pubDate>
      <link>https://www.debug.school/nagendrakr29_569/what-is-pod-2c0i</link>
      <guid>https://www.debug.school/nagendrakr29_569/what-is-pod-2c0i</guid>
      <description>&lt;ol&gt;
&lt;li&gt;POD cant create its only can instantiate by kubetl&lt;/li&gt;
&lt;li&gt;EaCh Node can contain multiple pods&lt;/li&gt;
&lt;li&gt;Each pod can contains multiple container (or) single containers&lt;/li&gt;
&lt;li&gt;If container is not there then POD also not there viceversa&lt;/li&gt;
&lt;li&gt;POD is the logical unit (not like docker and all)&lt;/li&gt;
&lt;li&gt;kubernetes will only schedulde the pods not others&lt;/li&gt;
&lt;li&gt;Ideal design is each pod contains single container not more than that. (We can add but its not ideal)&lt;/li&gt;
&lt;li&gt;Internally POD means containers in pods communicate with through localhost&lt;/li&gt;
&lt;li&gt;When 2 pods communicates it will go through the pod network &lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Kubernetes From Nagendra</title>
      <dc:creator>Nagendra</dc:creator>
      <pubDate>Mon, 16 Oct 2023 07:07:12 +0000</pubDate>
      <link>https://www.debug.school/nagendrakr29_569/kubernetes-from-nagendra-4dp6</link>
      <guid>https://www.debug.school/nagendrakr29_569/kubernetes-from-nagendra-4dp6</guid>
      <description>&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;What is Kubernetes?&lt;/strong&gt;
Kubernetes invented from google but now managing by the CNFC (opensource) which manages/orchestrator the dockers in a physical/ virtual machines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;why do we need kuberenetes?&lt;/strong&gt;
Kubernetes solves the below problems
a. It will manage both the work (dockers ) and workers (node) in efficient manner. If it manage by the ppl takes lot of time but it does in a seconds.
b. Only dockers manages on one node if its multiple nodes then this k8s will take care&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How Kubernetes Works? and architecture ?&lt;/strong&gt;
K8s has master and worker nodes
All the k8s communication goes through the master nodes (from outside world to the minion (node)) while managing the server
Each master nodes has 4 components

&lt;ol&gt;
&lt;li&gt;Apiserver -- &amp;gt; Worker nodes and the workstation (who given the instrucation) can talk only through the api server&lt;/li&gt;
&lt;li&gt;CLuster store --&amp;gt; Which stores all the information (Which is SOT) related to Worker nodes (like how many pods and there state information everything)&lt;/li&gt;
&lt;li&gt;Controller --&amp;gt; Which looks up the status of the nodes and the pods and sending that information to the apiserver and resposible to manintain the desired state&lt;/li&gt;
&lt;li&gt;Schedular --&amp;gt; Which schedules the work to nodes  has given by the apiserver (like which pod has to run in which node ) and all.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
  </channel>
</rss>
