Debug School

Suyash Sambhare
Moving resources in OpenShift

Moving resources to infrastructure machine sets

Some infrastructure resources are installed in your cluster by default. You can relocate them to the infrastructure machine sets you have created.
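Before relocating workloads, the target nodes need the infra role label, and optionally a taint so that only workloads with a matching toleration land there. A minimal sketch, assuming a node named worker-1 (a placeholder name):

```shell
# Label the node so a nodeSelector can target it (node name is hypothetical)
oc label node worker-1 node-role.kubernetes.io/infra=""

# Optionally taint the node so only workloads with a matching toleration schedule there
oc adm taint node worker-1 node-role.kubernetes.io/infra=reserved:NoSchedule
oc adm taint node worker-1 node-role.kubernetes.io/infra=reserved:NoExecute
```

The taint value "reserved" matches the tolerations shown in the examples that follow.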

Moving the router

You can move the router pod to a different compute machine set. By default, the pod is deployed on a worker node.
Configure additional compute machine sets for your OpenShift Container Platform cluster.
View the IngressController custom resource for the router Operator: oc get ingresscontroller default -n openshift-ingress-operator -o yaml

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: 2019-04-18T12:35:39Z
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 1
  name: default
  namespace: openshift-ingress-operator
  resourceVersion: "11341"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
  uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-04-18T12:36:15Z
    status: "True"
    type: Available
  domain: apps.<cluster>.example.com
  endpointPublishingStrategy:
    type: LoadBalancerService
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default

Edit the ingresscontroller resource and modify the nodeSelector to use the infra label: oc edit ingresscontroller default -n openshift-ingress-operator

  spec:
    nodePlacement:
      nodeSelector: 
        matchLabels:
          node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
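Instead of editing the resource interactively, the same node placement can be applied non-interactively. A sketch using oc patch, where the merge-patch JSON mirrors the spec shown above:

```shell
# Merge-patch the default IngressController with the infra node placement
oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"nodePlacement":{
        "nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},
        "tolerations":[
          {"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},
          {"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}}'
```

A merge patch only changes the listed fields, which makes it easier to script than an editor session.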

To move a component, add a nodeSelector stanza to it with the appropriate value. You can use a nodeSelector in the format shown, or <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Then verify that the router pod is running on the infra node: list the router pods and note the node name of the running pod.

$ oc get pod -n openshift-ingress -o wide

NAME                              READY     STATUS        RESTARTS   AGE       IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1      Running       0          28s       10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1      Terminating   0          19h       10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

View the node status of the running pod:

$ oc get node ip-10-0-217-226.ec2.internal
NAME                          STATUS  ROLES         AGE   VERSION
ip-10-0-217-226.ec2.internal  Ready   infra,worker  17h   v1.29.4

Because infra is included in the ROLES list, the pod is running on the correct node.

Moving the default registry

The registry Operator is configured to spread its pods across nodes. Configure additional compute machine sets in your OpenShift Container Platform cluster, then view the config/instance object: oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: 2019-02-05T13:52:05Z
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 1
  name: cluster
  resourceVersion: "56174"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
  httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
  logging: 2
  managementState: Managed
  proxy: {}
  replicas: 1
  requests:
    read: {}
    write: {}
  storage:
    s3:
      bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
      region: us-east-1
status:
...

Edit the config/instance object: oc edit configs.imageregistry.operator.openshift.io/cluster

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          namespaces:
          - openshift-image-registry
          topologyKey: kubernetes.io/hostname
        weight: 100
  logLevel: Normal
  managementState: Managed
  nodeSelector: 
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    value: reserved
  - effect: NoExecute
    key: node-role.kubernetes.io/infra
    value: reserved
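As with the router, the registry placement can be applied without an interactive editor. A sketch using oc patch, merge-patching the same fields shown in the spec above:

```shell
# Merge-patch the image registry config with the infra node placement
oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge \
  -p '{"spec":{
        "nodeSelector":{"node-role.kubernetes.io/infra":""},
        "tolerations":[
          {"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},
          {"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}'
```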

To move a component, add a nodeSelector stanza to it with the appropriate value. You can use a nodeSelector in the format shown, or <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Verify that the registry pod has been moved to the infrastructure node.
Find the node hosting the registry pod: $ oc get pods -n openshift-image-registry -o wide
Verify that the node carries the label you specified: $ oc describe node ip-10-0-217-226.ec2.internal
Confirm that node-role.kubernetes.io/infra appears in the LABELS list in the command output.
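A quicker check than reading the full describe output is to filter the node's labels directly, assuming the same node name as above:

```shell
# Show only the node's label line and confirm the infra role label is present
oc get node ip-10-0-217-226.ec2.internal --show-labels | grep node-role.kubernetes.io/infra
```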


Moving the monitoring solution

The monitoring stack consists of several components, such as Alertmanager, Thanos Querier, and Prometheus, and is managed by the Cluster Monitoring Operator. To redeploy the monitoring stack onto infrastructure nodes, you can create and apply a custom config map.
Edit the cluster-monitoring-config config map and modify the nodeSelector to use the infra label: oc edit configmap cluster-monitoring-config -n openshift-monitoring

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      nodeSelector: 
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    metricsServer:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    monitoringPlugin:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
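If the cluster-monitoring-config config map does not exist yet, it can be created rather than edited. A sketch, assuming the YAML above is saved as cluster-monitoring-config.yaml (a hypothetical filename):

```shell
# Create the config map, or apply it idempotently if it may already exist
oc apply -f cluster-monitoring-config.yaml
```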

To move a component, add a nodeSelector parameter to it with the appropriate value. You can use <key>: <value> pairs or a nodeSelector in the format shown, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new nodes: $ watch 'oc get pod -n openshift-monitoring -o wide'
If a pod has not moved to the infra node, delete it: $ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is recreated on the infra node.

Ref: https://docs.openshift.com/container-platform/4.16/post_installation_configuration/cluster-tasks.html
