
Cluster Upgrade

What is a Cluster?

A cluster consists of one master (control plane) machine and several worker machines, or nodes.

Nodes are basically VMs that provide a suitable environment for services to run in.

Since containerization has become widespread in recent years, we can deploy multiple applications on the same or different nodes using Docker containers, grouped into pods. The master coordinates all the nodes.

The Kubernetes community is growing very fast, and it has become one of the most active projects on GitHub, having amassed more than 98k commits and 750 releases to date. So to take advantage of new features, security patches and bug fixes, we may need to upgrade the cluster often.

Steps to Upgrade a Cluster-

A cluster can be upgraded either manually or in an automated way. Upgrading a managed cluster is quite simple, while upgrading a cluster created manually with kubeadm or another utility can be a little tricky, but the high-level upgrade process remains the same.

At a high level, the upgrade process of a cluster involves upgrading the control plane first and then the worker nodes.

Performing Azure Kubernetes Cluster Upgrade:-

It is better to perform cluster upgrades using the CLI, for which there are some prerequisites.

You can only upgrade one minor version at a time.

That means you can upgrade from 1.14.x to 1.15.x, but cannot upgrade from 1.14.x to 1.16.x directly. To upgrade from 1.14.x to 1.16.x, first upgrade from 1.14.x to 1.15.x, then perform another upgrade from 1.15.x to 1.16.x.
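The one-minor-version rule means that a jump of several versions is really a sequence of upgrades. As a rough sketch, the hypothetical helper below (not part of the az CLI) computes the intermediate versions you would have to pass through:

```shell
# Hypothetical helper: list the intermediate minor versions needed to
# go from the current version to the target, one upgrade per step.
upgrade_path() {
  local cur="$1" tgt="$2"
  local major="${cur%%.*}"
  local cur_minor tgt_minor m
  cur_minor="${cur#*.}"; cur_minor="${cur_minor%%.*}"
  tgt_minor="${tgt#*.}"; tgt_minor="${tgt_minor%%.*}"
  for ((m = cur_minor + 1; m <= tgt_minor; m++)); do
    echo "${major}.${m}"
  done
}

upgrade_path 1.14.8 1.16.9
# prints:
# 1.15
# 1.16
```

Each printed minor version is one `az aks upgrade` invocation; a patch-only upgrade (e.g. 1.15.0 to 1.15.7) needs no intermediate step.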

The Kubernetes version on the control plane nodes must be no more than two minor versions ahead of the Kubernetes version on the worker nodes.
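You can check this skew rule before starting. A minimal sketch, assuming version strings like those printed by kubectl (the `minor_of` and `skew_ok` helpers are hypothetical):

```shell
# Hypothetical helper: extract the minor number from a version string
# like "v1.16.9" or "1.14.8".
minor_of() {
  local v="${1#v}"
  v="${v#*.}"
  echo "${v%%.*}"
}

# Control plane may be at most two minor versions ahead of the workers.
skew_ok() {
  local cp_minor node_minor
  cp_minor="$(minor_of "$1")"
  node_minor="$(minor_of "$2")"
  [ $((cp_minor - node_minor)) -le 2 ]
}

skew_ok v1.16.9 v1.14.8 && echo "skew OK"   # prints: skew OK
```

Feed it the server version (`kubectl version --short`) and the oldest node version (`kubectl get nodes`).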

Prerequisites:-

Make sure your workloads and Helm charts do not use Kubernetes APIs that are deprecated or removed in the target version ( https://helm.sh/docs/topics/kubernetes_apis/ )

Once you have logged into Azure, you will be able to upgrade your Azure cluster.

There can be two scenarios:

1- Kubernetes cluster is using node pools

2- Kubernetes cluster is not using node pools

Follow the steps below to upgrade your cluster:-

Step 1:- Check the current version of your Kubernetes cluster

$ kubectl get nodes

Or

$ kubectl version --short | grep Server

Step 2:- List the available upgrades for your cluster in table format

$ az aks get-upgrades --resource-group <ResourceGroupName> --name <ClusterName> --output table

For a nodepool-enabled cluster:

$ az aks nodepool get-upgrades --resource-group <ResourceGroupName> --cluster-name <ClusterName> --nodepool-name <NodepoolName>

If no upgrade is available, you may get an error message like the one below:

ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
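When table output is unavailable, you can work with the JSON output instead. A rough sketch that pulls every `kubernetesVersion` field out of the payload (the `list_upgrades` helper is hypothetical, and the sample JSON below is a trimmed, illustrative stand-in for a real `az aks get-upgrades -o json` response; in practice `--query` or jq would be more precise):

```shell
# Sketch: extract every kubernetesVersion value from JSON on stdin.
# Note: this also lists the current version, not only the upgrades.
list_upgrades() {
  grep -o '"kubernetesVersion": *"[^"]*"' \
    | sed 's/.*"kubernetesVersion": *"\([^"]*\)"/\1/' \
    | sort -u
}

cat <<'EOF' | list_upgrades
{
  "controlPlaneProfile": {
    "kubernetesVersion": "1.14.8",
    "upgrades": [
      {"kubernetesVersion": "1.15.10"},
      {"kubernetesVersion": "1.15.11"}
    ]
  }
}
EOF
# prints:
# 1.14.8
# 1.15.10
# 1.15.11
```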

Step 3:- Run the below command to kick off the upgrade process

$ az aks upgrade --resource-group <ResourceGroupName> --name <ClusterName> --kubernetes-version <TargetVersion> --no-wait --yes --debug --verbose

For a nodepool-enabled cluster:

$ az aks nodepool upgrade --cluster-name <ClusterName> --name <NodepoolName> --resource-group <ResourceGroupName> --kubernetes-version <TargetVersion> --no-wait --debug --verbose

After Step 3 the control plane is upgraded first. You might not be able to see this happening, as it is an internal process (AKS is a managed service), but after some time your Kubernetes nodes will start upgrading.
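Because `--no-wait` returns immediately, you may want to poll until the operation completes. A hedged sketch (the `wait_for_upgrade` function and the 30-second interval are assumptions, not an az feature; it relies on the real `az aks show --query provisioningState`):

```shell
# Sketch: poll the cluster's provisioningState until the upgrade
# either succeeds or fails. Assumes `az` is already logged in.
# Usage: wait_for_upgrade <ResourceGroupName> <ClusterName>
wait_for_upgrade() {
  local rg="$1" cluster="$2" state
  while true; do
    state="$(az aks show --resource-group "$rg" --name "$cluster" \
               --query provisioningState --output tsv)"
    echo "provisioningState: $state"
    case "$state" in
      Succeeded) return 0 ;;
      Failed)    return 1 ;;
    esac
    sleep 30  # arbitrary polling interval
  done
}
```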


Nodes are upgraded in a rolling manner: a new node running the target version is added to the pool, then each old node is cordoned and drained one at a time so that its workloads reschedule onto the other nodes, after which it is upgraded and uncordoned.

It is advised to keep an eye on the process while it is running, using the below command

$ kubectl get pods --all-namespaces -o wide | grep <name-of-cordoned-node>

There can be scenarios in which the daemonset pods do not terminate; in that case you will have to delete them manually using the below command-

$ kubectl get pods --all-namespaces --no-headers --field-selector spec.nodeName=<Node_Name> | awk '{print $2 " --namespace=" $1}' | xargs -I '{}' bash -c 'kubectl delete pods {}'

Step 4:- Confirm the upgrade was successful

$ az aks show --resource-group <ResourceGroupName> --name <ClusterName> --output table

Post check:-

Check 1:- After the cluster upgrade, you can deploy a Helm chart, check whether the PVC and pods are created as expected, and then delete the chart.

$ helm install --name post-check stable/postgresql
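To make this check repeatable, the pod check can be scripted. A small sketch (the `pods_running` helper is hypothetical, and it assumes the chart labels its pods with `release=<release-name>`, as the stable charts conventionally do):

```shell
# Hypothetical helper: succeed only if every pod of the given Helm
# release reports Running (an empty pod list also passes).
pods_running() {
  kubectl get pods -l release="$1" --no-headers \
    | awk '$3 != "Running" {bad=1} END {exit bad}'
}

# Usage after `helm install --name post-check stable/postgresql`:
#   pods_running post-check && echo "post-check OK"
```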

Check 2:- If the manifests were not updated as per the prerequisites, you can follow the link https://helm.sh/docs/topics/kubernetes_apis/ or, if using Helm 2 for your Helm releases, use the mapkubeapis utility ( https://github.com/hickeyma/helm-mapkubeapis ) to map the manifests according to the cluster version requirements.

First delete any failed Helm releases of the chart (stored as a configmap or secret, depending on the Tiller storage backend) and then run the mapkubeapis command

$ kubectl delete configmap/secret <release_name>.v<failed_version_number> --namespace <tiller_namespace>

$ helm mapkubeapis <helm-release-name> --namespace kube-system --v2