AKS auto upgrade

A much-requested feature for the Azure Kubernetes Service (AKS) has been the ability to automatically upgrade the Kubernetes (K8s) versions of your clusters. Where you are happy to trade a controlled upgrade for the ease of administration of an automatic one, this is a big benefit of managed AKS.


One of the big considerations when using the AKS service is that you need to stay within the supported K8s versions; otherwise you go out of support from a Microsoft perspective and lose their technical support and platform stability guarantees. Given the K8s release and support windows, this effectively means a minor version upgrade at least every 9 months up to K8s 1.18, and every 12 months from K8s 1.19 onwards thanks to the extended support window introduced with that release.

Kubernetes versioning

Kubernetes uses the well-known semantic versioning scheme, with the latest release at the time of writing being 1.20.4:

Semantic versioning is defined as:

“Given a version number MAJOR.MINOR.PATCH, increment the:

  • MAJOR version when you make incompatible API changes,

  • MINOR version when you add functionality in a backwards compatible manner, and

  • PATCH version when you make backwards compatible bug fixes.”
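As an aside, if you ever need to script against these version strings, GNU sort understands the scheme natively. A minimal sketch (purely illustrative, not AKS-specific) ordering some of the versions mentioned in this post:

```shell
# sort -V (version sort) compares MAJOR.MINOR.PATCH numerically per
# component, unlike plain lexicographic sort (which would put 1.9 after 1.18).
printf '%s\n' 1.20.4 1.17.16 1.19.7 1.18.14 | sort -V
# 1.17.16
# 1.18.14
# 1.19.7
# 1.20.4
```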

AKS in turn follows the well-known N-2 version support policy, and the Microsoft documentation is good here, so I won't regurgitate it other than to note the currently supported versions:

So AKS supports:

  • The latest K8s GA minor version and the 2 previous minor versions (1.19, 1.18 & 1.17 above)
  • A maximum of 2 stable patch releases per minor version (1.19.7, 1.19.6, 1.18.14, 1.18.10, 1.17.16 & 1.17.13 above)
  • There is also usually a preview version available, which at the moment is 1.20.2.

Auto upgrade

Up to now you have needed to keep on top of this yourself by checking the available versions via the CLI:

az aks get-versions --location uksouth --output table

...or by checking the release calendar, Azure updates, or similar.
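If you only want a summary from that command, the output can be filtered with a JMESPath query. A sketch, assuming the orchestrators/orchestratorVersion/isPreview field names in the current CLI output (worth double-checking against your az version):

```shell
# List just the version strings, flagging the default and any preview releases
az aks get-versions --location uksouth \
  --query "orchestrators[].{version: orchestratorVersion, default: default, preview: isPreview}" \
  --output table
```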


But on 21st Jan 2021 Microsoft announced that automatic upgrades are now in public preview. The auto-upgrade channel is configurable and can be set to none, patch, stable or rapid. None obviously leaves the upgrades as a manual process, but the other channels behave as follows (note: I'll run with the Microsoft examples here but draw them out visually just to make them a little clearer):

Patch

"Automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same. For example, if a cluster is running version 1.17.7 and versions 1.17.9, 1.18.4, 1.18.6, and 1.19.1 are available, your cluster is upgraded to 1.17.9."

Stable

"Automatically upgrade the cluster to the latest supported patch release on minor version N-1, where N is the latest supported minor version. For example, if a cluster is running version 1.17.7 and versions 1.17.9, 1.18.4, 1.18.6, and 1.19.1 are available, your cluster is upgraded to 1.18.6."

Rapid

"Automatically upgrade the cluster to the latest supported patch release on the latest supported minor version. In cases where the cluster is at a version of Kubernetes that is at an N-2 minor version where N is the latest supported minor version, the cluster first upgrades to the latest supported patch version on N-1 minor version. For example, if a cluster is running version 1.17.7 and versions 1.17.9, 1.18.4, 1.18.6, and 1.19.1 are available, your cluster first is upgraded to 1.18.6, then is upgraded to 1.19.1."
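To make the three channels concrete, here is a small shell sketch (purely illustrative, not an AKS tool) that derives each channel's target from the example version list used in the quotes above:

```shell
# The running cluster version and available versions from the Microsoft examples
current="1.17.7"
available="1.17.9 1.18.4 1.18.6 1.19.1"

# patch channel: latest patch on the cluster's current minor version
# (${current%.*} strips the patch component, leaving "1.17")
patch_target=$(printf '%s\n' $available | grep "^${current%.*}\." | sort -V | tail -n1)

# Latest supported minor N, and its predecessor N-1
latest_minor=$(printf '%s\n' $available | sort -V | tail -n1 | cut -d. -f1-2)
stable_minor="$(echo "$latest_minor" | cut -d. -f1).$(( $(echo "$latest_minor" | cut -d. -f2) - 1 ))"

# stable channel: latest patch on minor N-1; rapid channel: latest patch on N
stable_target=$(printf '%s\n' $available | grep "^${stable_minor}\." | sort -V | tail -n1)
rapid_target=$(printf '%s\n' $available | grep "^${latest_minor}\." | sort -V | tail -n1)

echo "patch:  $patch_target"   # 1.17.9
echo "stable: $stable_target"  # 1.18.6
echo "rapid:  $rapid_target"   # 1.19.1 (stepping via 1.18.6 first when starting at N-2)
```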

Enable

As this is a preview feature you need to register the feature with:

az feature register --namespace Microsoft.ContainerService -n AutoUpgradePreview

...then check (takes a few mins):

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AutoUpgradePreview')].{Name:name,State:properties.state}"

...then refresh the registration with:

az provider register --namespace Microsoft.ContainerService

Here is a terminal session of me running the commands:

The eagle-eyed will notice that the feature is already registered in the output from the first command. This is because it wasn’t the first time I had run the commands. In reality there was about 9 mins between executing the first command and the second reporting successful registration.
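If you would rather not re-run the list command by hand while you wait, a convenience sketch that polls the registration state instead (az feature show is the per-feature equivalent of the list command above):

```shell
# Poll the AutoUpgradePreview feature until it reports "Registered",
# then refresh the provider registration as in the final step above
until [ "$(az feature show --namespace Microsoft.ContainerService \
    -n AutoUpgradePreview --query properties.state -o tsv)" = "Registered" ]; do
  echo "Still registering, waiting 30s..."
  sleep 30
done
az provider register --namespace Microsoft.ContainerService
```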

The Microsoft documentation then goes on to give you the az CLI commands to create or update a cluster with auto-upgrade, but as we try to do this kind of thing with Terraform it's nice to see that the setting is already available on the azurerm_kubernetes_cluster resource in the azurerm provider (I'm on version 2.49.0). To flesh out the provider's example you simply add the automatic_channel_upgrade argument, setting your desired channel:

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks1"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks1"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  automatic_channel_upgrade = "stable"
}

My first thought here was to apply the change against a cluster that did not have an upgrade channel set, just to make sure it does not force a recreate, and all seems well on that front. Here is the output from my Terraform plan: an existing cluster with no channel set, and the plan setting automatic_channel_upgrade to stable:

Terraform will perform the following actions:
  # azurerm_kubernetes_cluster.example will be updated in-place
  ~ resource "azurerm_kubernetes_cluster" "example" {
      + automatic_channel_upgrade       = "stable"
        id                              = "/subscriptions/<hidden-for-brevity>"
        name                            = "example-aks1"
        tags                            = {}
        # (15 unchanged attributes hidden)
        # (5 unchanged blocks hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.

Conclusion

This is an intentionally short blog article that expands a little on the Microsoft examples and adds in some Terraform rather than implementing with the az CLI commands; hopefully some of you find it useful. There are some other considerations for cluster upgrades, such as the max surge and pod disruption budget settings, but we will leave those for another time.

As already noted, this is still an opt-in preview feature, so it won't be generally available for a little while yet, but it is definitely worth testing if you have clusters where auto-upgrade is suitable.

Stuart Anderson

Chief Engineer
