Cluster Upgrade

By Ashish Dhirwan

Last updated May 10, 2021

What is a Cluster?

A cluster consists of one master machine and several worker machines, or nodes.

Nodes are typically VMs that provide a suitable environment for services to run in.

Since containerization has gained wide adoption in recent years, we can deploy multiple applications on the same or different nodes as pods running Docker containers. The master coordinates all the nodes.

The Kubernetes community is growing very fast, and it has become one of the most active projects on GitHub, having amassed more than 98k commits and 750 releases. So to take advantage of new features, security patches and bug fixes, we may need to upgrade the cluster often.

Steps to Upgrade a Cluster-

Upgrading a cluster can be done either manually or in an automated way. Upgrading a managed cluster is quite simple to perform, while clusters created manually with kubeadm or any other utility can be a little trickier, but the high-level upgrade process remains the same.

The upgrade process of a cluster includes the following steps:

  • Upgrade the control plane, which includes kube-apiserver, etcd, kube-scheduler and kube-controller-manager
  • Upgrade the nodes, which includes new VMs along with kubelet, kube-proxy and the container runtime
  • Upgrade clients such as kubectl
  • Upgrade the manifests and other resources so that they are compatible with your Kubernetes cluster version
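On a self-managed cluster, that ordering can be sketched as the script below. This is an illustrative sketch only, assuming a kubeadm-managed cluster on a Debian-based OS: the target version, node name and package commands are placeholder assumptions, and on AKS the control-plane step is handled for you. The `run` helper echoes each command unless `DRY_RUN=0` is set.

```shell
#!/usr/bin/env bash
# Sketch of the high-level upgrade order on a self-managed (kubeadm) cluster.
# TARGET, node-1 and the apt package names are illustrative assumptions.
set -euo pipefail

TARGET="${TARGET:-1.15.5}"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually execute the commands

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

# 1. Control plane first (run on the master node)
run kubeadm upgrade plan
run kubeadm upgrade apply "v${TARGET}"

# 2. Then each worker node: drain, upgrade kubelet, restart, uncordon
run kubectl drain node-1 --ignore-daemonsets
run apt-get install -y "kubelet=${TARGET}-00"
run systemctl restart kubelet
run kubectl uncordon node-1

# 3. Finally, upgrade clients such as kubectl on your workstation
run apt-get install -y "kubectl=${TARGET}-00"
```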

Performing an Azure Kubernetes Service (AKS) Cluster Upgrade:-

It is better to perform cluster upgrades using the Azure CLI, and there are some prerequisites.

You can only upgrade one minor version at a time.

That means you can upgrade from 1.14.x to 1.15.x, but cannot upgrade from 1.14.x to 1.16.x directly. To upgrade from 1.14.x to 1.16.x, first upgrade from 1.14.x to 1.15.x, then perform another upgrade from 1.15.x to 1.16.x.
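The one-minor-version rule can be checked with a small shell function before you start. This is a minimal sketch: the `can_upgrade` helper name and the example version numbers are my own illustrations, and it compares only the minor component of `major.minor.patch` versions.

```shell
#!/usr/bin/env bash
# Sketch: verify that a proposed upgrade jumps at most one minor version,
# mirroring the AKS rule described above. Versions are "major.minor.patch".
set -euo pipefail

can_upgrade() {
  local from="$1" to="$2"
  local from_minor to_minor
  from_minor=$(echo "$from" | cut -d. -f2)
  to_minor=$(echo "$to" | cut -d. -f2)
  if [ $((to_minor - from_minor)) -le 1 ]; then
    echo "OK: $from -> $to"
  else
    echo "BLOCKED: $from -> $to (upgrade one minor version at a time)"
    return 1
  fi
}

can_upgrade 1.14.8 1.15.10          # allowed: one minor version
can_upgrade 1.14.8 1.16.2 || true   # blocked: must go through 1.15.x first
```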

The Kubernetes version on the control plane nodes must be no more than two minor versions ahead of the Kubernetes version on the worker nodes.


  • Azure CLI
  • kubectl
  • az login
  • az account set -s <Subscription>
  • At least contributor access to the cluster
  • Update your Kubernetes manifests if they are outdated. This can also be done after the upgrade, but until then Kubernetes won’t be able to deploy resources that use deprecated manifests.

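A quick way to spot outdated manifests is to grep them for API groups that newer Kubernetes releases have removed. The sketch below is an assumption of mine, not part of the official upgrade tooling, and the list of deprecated apiVersions shown is illustrative rather than exhaustive (these three were removed in Kubernetes 1.16).

```shell
#!/usr/bin/env bash
# Sketch: scan manifest files for apiVersions removed in Kubernetes 1.16.
# The pattern list is illustrative, not exhaustive.
set -euo pipefail

check_manifests() {
  local dir="$1"
  grep -rn -E 'apiVersion: *(extensions/v1beta1|apps/v1beta1|apps/v1beta2)' "$dir" \
    || echo "No deprecated apiVersions found in $dir"
}

# Example: a Deployment still using the old extensions/v1beta1 group
mkdir -p /tmp/manifest-check
cat > /tmp/manifest-check/deploy.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
EOF

check_manifests /tmp/manifest-check
```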

Once you have logged in to Azure, you will be able to upgrade your AKS cluster.

There can be two scenarios:

1. The Kubernetes cluster is using node pools

2. The Kubernetes cluster is not using node pools

Follow the steps below to upgrade your cluster:-

Step 1:- Check the current version of your Kubernetes cluster

$ kubectl get nodes

$ kubectl version --short | grep Server

Step 2:- List the available upgrades for your cluster in table format

$ az aks get-upgrades --resource-group <ResourceGroupName> --name <ClusterName> --output table

For a cluster with node pools enabled:

$ az aks nodepool get-upgrades --resource-group <ResourceGroupName> --cluster-name <ClusterName> --nodepool-name <NodepoolName>

If no upgrade is available, you may get an error message like the one below:

ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.

Step 3:- Run the below command to kick off the upgrade process

$ az aks upgrade --resource-group <ResourceGroupName> --name <ClusterName> --kubernetes-version <TargetVersion> --no-wait --yes --debug --verbose

For a cluster with node pools enabled:

$ az aks nodepool upgrade --cluster-name <ClusterName> --name <NodepoolName> --resource-group <ResourceGroupName> --kubernetes-version <TargetVersion> --no-wait --debug --verbose

After Step 3 the control plane is upgraded first. Since AKS is a managed service, this is an internal process you won’t be able to observe, but after some time your Kubernetes nodes will start upgrading.

Nodes are upgraded in the following manner:

  • Kubernetes cordons and drains one node, which means no new pods will be scheduled on that node, and the pods already running on it are rescheduled onto other nodes.
  • A new node spins up with the new version and the old one is removed.
  • This process repeats until all nodes in the cluster have been upgraded.

It is advised to keep an eye on the process while it runs, using the command below:

$ kubectl get pods --all-namespaces -o wide | grep <name-of-cordoned-node>
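You can also track which nodes are still on the old version. The helper below is a sketch of my own: it parses `kubectl get nodes` output piped into it (the AKS node names and versions in the example are made up), relying on VERSION being the fifth column of the default output.

```shell
#!/usr/bin/env bash
# Sketch: report nodes not yet on the target version. Pipe in the output of
# `kubectl get nodes` (header included); node names below are made up.
set -euo pipefail

pending_nodes() {
  local target="$1"
  # Default `kubectl get nodes` columns: NAME STATUS ROLES AGE VERSION
  awk -v t="v${target}" 'NR > 1 && $5 != t { print $1, "still on", $5 }'
}

# Example with captured output (during a real upgrade, run:
#   kubectl get nodes | pending_nodes 1.15.10 )
pending_nodes 1.15.10 <<'EOF'
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-12345678-0   Ready    agent   40d   v1.15.10
aks-nodepool1-12345678-1   Ready    agent   40d   v1.14.8
EOF
# prints: aks-nodepool1-12345678-1 still on v1.14.8
```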

There can be scenarios in which DaemonSet pods do not terminate; in that case you will have to delete them manually using the command below:

$ kubectl get pods --all-namespaces --field-selector spec.nodeName=<Node_Name> | awk '{print $2 " --namespace=" $1}' | xargs -I '{}' bash -c 'kubectl delete pods {}'

Step 4:- Confirm the upgrade was successful

$ az aks show --resource-group <ResourceGroupName> --name <ClusterName> --output table

Post check:-

Check 1:- After the cluster upgrade, try deploying a Helm chart and check that the PVCs and pods are created as expected, then delete the chart.

$ helm install --name post-check stable/postgresql

Check 2:- If the manifests were not updated as part of the prerequisites, update them now; or, if you are using Helm 2 for your releases, you can use the mapkubeapis utility to map the manifests to your cluster version’s API requirements.

First delete any failed Helm releases of the chart, then run the mapkubeapis command:

$ kubectl delete configmap/secret <release_name>.v<failed_version_number> --namespace <tiller_namespace>

$ helm mapkubeapis <helm-release-name> --namespace kube-system --v2
