Day 3 summary - Adam

ConfigMap

How can we save data in a k8s cluster (such as certificates, passwords, and important configuration files) so that it can be used by pods?

  • This is where ConfigMaps are useful. You can list the ConfigMaps with kubectl get cm. (For sensitive values like passwords, the closely related Secret resource is the usual choice; the mechanics are similar.)
  • Note that ConfigMaps are namespaced resources, not cluster-level ones.
  • By default, one ConfigMap is created per [[namespace]]. You can create additional ConfigMaps as needed using kubectl create configmap adam-cm --from-file=configFile.conf. This creates a ConfigMap called adam-cm whose contents are taken from the configuration file we specified.
  • Alternatively, you can create a ConfigMap declaratively using something like kubectl apply -f cm.yaml
  • A [[pod]] can access the contents of a ConfigMap by declaring a volume in the pod's spec that maps to the ConfigMap. All the containers within the pod can then read the ConfigMap's contents by mounting that volume. Here is an example of a pod that accesses a ConfigMap via a volume mount: pod2.yaml (sketched after this list).
  • Once it is mounted in a pod, you can validate it with kubectl exec -it helloworld-nginx -- /bin/bash and then cd /etc/nginx/conf.d to see the mounted files.
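
The post links pod2.yaml without showing its contents, so here is a minimal sketch of what the ConfigMap and the pod that mounts it might look like. The names adam-cm, configFile.conf, and helloworld-nginx come from the commands above; the nginx config body is an assumption.

```yaml
# cm.yaml -- a ConfigMap holding an nginx config file (contents assumed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: adam-cm
data:
  configFile.conf: |
    server {
      listen 80;
      location / { return 200 'hello from configmap\n'; }
    }
---
# pod2.yaml -- a pod that mounts the ConfigMap into nginx's conf.d directory
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d   # each key in the cm appears here as a file
  volumes:
    - name: config-volume
      configMap:
        name: adam-cm                    # must match the ConfigMap's name
```

After applying both, the key configFile.conf shows up as the file /etc/nginx/conf.d/configFile.conf inside the container, which is exactly what the kubectl exec check above verifies.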

DaemonSet

  • kubectl get ds will show you the DaemonSets; ds is the shortcut for daemonsets in k8s.
  • A DaemonSet allows you to run a daemon [[Pod]] on each worker (i.e. each node). It is useful for running logging and monitoring applications that keep an eye on the nodes. A DaemonSet ensures that exactly one of its pods is active per [[worker]] node.
  • DaemonSet pods, like any other pods, are logically separated by namespaces. Therefore, if you have multiple namespaces, each of those namespaces can have its own DaemonSet pod, even when those pods run on the same node.
  • You can start a DaemonSet using kubectl apply -f ds.yaml, optionally specifying a namespace. Here's a sample yaml: ds.yaml (sketched after this list).
  • Like most other k8s resources, you can run kubectl describe ds on DaemonSets.
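
The referenced ds.yaml isn't shown in the post, so this is a minimal DaemonSet sketch; the name, label, and the fluentd logging image are assumptions standing in for whatever agent you want running on every node.

```yaml
# ds.yaml -- runs one copy of the pod on every worker node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: adam-ds
spec:
  selector:
    matchLabels:
      app: node-logger
  template:
    metadata:
      labels:
        app: node-logger        # must match spec.selector above
    spec:
      containers:
        - name: logger
          image: fluentd        # e.g. a logging agent watching the node
```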

Service/Ingress/NodePort

Service in Kubernetes is essentially network load balancing for the various [[pod]]s in the cluster.
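
For reference, a minimal Service manifest looks roughly like this; the name, selector label, and ports are hypothetical, chosen to match the walkthrough later in this section.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: adam-svc
spec:
  selector:
    app: helloworld     # pods carrying this label get load balanced
  ports:
    - port: 5678        # port the service listens on (at its cluster IP)
      targetPort: 80    # port the traffic is forwarded to on the pods
```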

  • The application load balancer in Kubernetes is known as Ingress (covered below).
  • A Service acts as a bridge for network communication between pods. The Service is somewhat pod-like in that it has its own IP address, but it is an abstraction managed by the cluster rather than a running container.
  • For example, suppose you have 5 frontend pods and 3 backend pods, and the IP addresses of all eight pods keep changing as pods come up and go down. The frontend pods then cannot know which backend pod address to call. A Service solves this problem: the frontend calls the Service's IP address, and the Service redirects the request to one of the backend pods.
  • Labels are used with Services to set up filtering and to inform the Service of new pods. If a pod's labels match the Service's selector, that pod will be load balanced by the Service.
    • Therefore, if you have pods and a Service running and you want the Service to load balance those pods, you can either update all the pod labels to match the Service's selector, or update the Service's selector to match the pods.
    • Once any pods with a label matching the service selector come up, the service will automatically discover those new pods and start load balancing them as well.
  • Services should only be used to load balance similar or interchangeable pods! Therefore, the pod labels and the Service's selector should be identical. This is natural, since the whole point of load balancing is that any of the pods can service the request. For example, you should not have one Service load balancing both backend pods and DB pods, since they are not the same thing!
  • The load balancing behavior of a Service is essentially random allocation (kube-proxy in its default iptables mode picks a backend pod at random).
  • The Service is therefore the entry point for a particular type of microservice (i.e. a particular type of pod). The Service can also be made accessible from outside the cluster if configured accordingly.
  • You can create a service from the command line by typing kubectl create svc (svc is the shortcut for service).
  • Once you create a service, it will have a cluster IP address and port number. This IP is called the cluster IP because it is only reachable at that address from within the cluster.
  • To start a service for existing pods (a condensed command sketch follows this list):
    • First, check that the pods exist and get their labels using kubectl get pods -n=adam --show-labels. This command displays the pods in [[namespace]] adam, assuming it exists.
    • Check the IP addresses of these pods with kubectl get pods -o wide -n=adam. You can try hitting one of them with curl to make sure it's responsive (assuming these pods are running web servers).
    • Create a service with kubectl create service clusterip adam-svc --tcp=5678:80 -n=adam. This creates a service that listens on port 5678 and forwards requests to port 80 on the pods.
    • Verify the service exists with kubectl get svc -n=adam. Then get more details about it with kubectl describe svc adam-svc -n=adam, which shows the service's cluster IP, its selector, and other info.
    • If you hit the service's cluster IP at the configured port, for example with curl http://10.103.249.69:5678, nothing much happens, because the service's selector does not yet match any pods, so there is nothing to load balance.
    • To make the service start load balancing our pods, edit it with kubectl edit svc adam-svc -n=adam and set the selector under spec.selector (e.g. the app key) to match the label on our pods. Verify that the selector has changed with kubectl describe svc adam-svc -n=adam
    • Watch the service's cluster IP and port, using for example watch curl http://10.96.48.229:5678, to see the results. Note that if all the pods run completely identical containers, you may not see anything change: even though different pods are being hit, they all return the same response!
  • The command kubectl expose can be used to expose a resource as a new service.
  • Note: It's important to understand that the service acts as a load balancer between pods within the cluster! It does not act as a load balancer between the actual nodes.
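
Here is the walkthrough above condensed into one command sequence. The namespace adam, the service name adam-svc, and the label app: helloworld are example values; substitute your own.

```bash
kubectl get pods -n=adam --show-labels    # confirm the pods exist and note their labels
kubectl get pods -o wide -n=adam          # note the pod IPs; curl one to check it responds
kubectl create service clusterip adam-svc --tcp=5678:80 -n=adam
kubectl get svc -n=adam                   # verify the service and note its cluster IP
kubectl edit svc adam-svc -n=adam         # set spec.selector to the pods' label, e.g. app: helloworld
kubectl describe svc adam-svc -n=adam     # confirm the selector changed
watch curl http://<cluster-ip>:5678       # replace <cluster-ip> with the IP noted above
```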

NodePort

  • By default, pods running in a cluster are not accessible outside the cluster.
  • NodePort is a type of service that acts as a bridge between outside users and the cluster. Setting a service's spec.type to NodePort enables this functionality. You can do this at creation time with the --type=NodePort option, or by running kubectl edit svc on an existing service and changing its spec.type field to NodePort.
  • When you create a NodePort service, all the nodes in the cluster start listening on the node port number for external traffic! When any of the nodes receives traffic on that port, kube-proxy forwards it to the cluster IP address of the service.
  • Now kubectl describe the NodePort svc you created to get its node port; it will be something like 31319/TCP. If you go to http://nodeip:31319, you are hitting the NodePort service from outside the cluster. Note that services, including NodePorts, only do load balancing at the pod level, not the node level. So if you keep hitting http://nodeip:31319, the traffic will always enter through that particular node.
  • Therefore, to achieve true node-level load balancing, we would need a network load balancer residing outside the cluster that balances the traffic between the different nodes, all of which are listening on port 31319 (see the sketch after this list).
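
Expressed declaratively, the only change from a ClusterIP service is the type field (plus, optionally, a pinned nodePort). The values below reuse the examples from this section; the selector label is hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: adam-svc
spec:
  type: NodePort        # the one change compared to a ClusterIP service
  selector:
    app: helloworld     # hypothetical pod label
  ports:
    - port: 5678        # cluster-internal service port
      targetPort: 80    # container port on the pods
      nodePort: 31319   # port every node listens on (30000-32767; auto-assigned if omitted)
```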

Ingress

  • Ingress is another mechanism for allowing external traffic to reach the services, and it achieves application-layer load balancing.
  • Ingress allows you to set rules that route external traffic to the various services. This eliminates the problem of needing one load balancer per service inside the cluster, which gets expensive; with Ingress you only need one load balancer.
  • You generally need a domain name to use Ingress.
  • Ingress rules can be name based, path based, certificate based, and so on (see the sketch below).
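
As a sketch, a host- and path-based rule in the networking.k8s.io/v1 API might look like this; the host and service names are hypothetical, and an Ingress controller (e.g. ingress-nginx) must be installed in the cluster for the rule to take effect.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: adam-ingress
spec:
  rules:
    - host: app.example.com      # name (host) based routing
      http:
        paths:
          - path: /api           # path based routing
            pathType: Prefix
            backend:
              service:
                name: adam-svc   # matching traffic goes to this service
                port:
                  number: 5678
```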
