
How to upgrade the Kubernetes version on a Kapsule cluster

Reviewed on 06 June 2023 · Published on 21 December 2022
Security & Identity (IAM):

You may need certain IAM permissions to carry out some actions described on this page. This means:

  • you are the Owner of the Scaleway Organization in which the actions will be carried out, or
  • you are an IAM user of the Organization, with a policy granting you the necessary permission sets

You can upgrade your Kubernetes Kapsule cluster directly from the Scaleway console, or upgrade it to the next minor version using the CLI. The CLI section of this how-to also covers the mandatory checks to run on important components before proceeding with the upgrade of your cluster.

Upgrading a Kapsule cluster from the Scaleway console

  • You have an account and are logged into the Scaleway console
  • You have created a Kubernetes Kapsule cluster running on a Kubernetes version older than the latest release
  1. Click Kubernetes under Containers on the side menu. A list of your Kubernetes Kapsule clusters displays.
  2. Click the cluster name you wish to upgrade the Kubernetes version for. The cluster information page displays.
  3. Click Upgrade next to the Kubernetes version of your cluster. A pop-up displays.
  4. Select the latest patch or next minor version to upgrade to. Tick the Upgrade cluster node pools as well checkbox if you want to upgrade the version of Kubernetes on the node pools in your cluster to the same version.

    Be careful when upgrading the Kubernetes versions of your node pools, as it may lead to data loss on data stored locally on any node.

  5. Click Upgrade.

    It is not possible to downgrade your Kubernetes version once it has been upgraded.

Upgrading a Kapsule cluster to the next minor version using the CLI

  • You have a working CLI with your credentials set up
  • You want to upgrade your Kubernetes Kapsule cluster to the latest Kubernetes version available on the Kapsule API.

First, it is essential to verify that the most recent version of Kapsule adequately supports your workload. We maintain a compatibility matrix for various components, as your current cluster might use components that are deprecated or unavailable in the latest version. For further details, consult our version policy.

We recommend you read the Kubernetes changelog to stay informed on the latest version upgrades.

Checking which components must be changed

Run the following command in your terminal to retrieve a list of the components that need to be changed.

scw k8s version list

You should get an output similar to the following, providing a list of all relevant components:

1.25.9 [cilium calico kilo] [none] [containerd]
1.24.13 [cilium calico weave flannel kilo] [none] [containerd crio]
1.23.17 [cilium calico weave flannel kilo] [none nginx traefik2] [containerd crio docker]
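Each line lists a version followed by its supported CNIs, managed Ingress Controllers, and container runtimes. As a rough sketch (not an official tool), you can grep this output to check whether a component you rely on is still available in a given version. The `supports` helper below is a hypothetical example, run here against the sample output above rather than a live CLI call:

```shell
# Hypothetical helper: check whether a component (CNI, ingress, runtime)
# appears on the line for a given version of `scw k8s version list` output.
# The sample output below stands in for a live call to the CLI.
versions='1.25.9 [cilium calico kilo] [none] [containerd]
1.24.13 [cilium calico weave flannel kilo] [none] [containerd crio]
1.23.17 [cilium calico weave flannel kilo] [none nginx traefik2] [containerd crio docker]'

supports() {  # usage: supports VERSION COMPONENT
  printf '%s\n' "$versions" | grep "^$1 " | grep -qw -- "$2"
}

supports 1.25.9 containerd && echo "containerd is available on 1.25.9"
supports 1.25.9 docker || echo "docker is NOT available on 1.25.9"
```

In a real check, you would pipe `scw k8s version list` into the helper instead of the pasted sample.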

Ingress controllers

Managed Ingress Controllers have been deprecated since the 1.24 minor release. As a replacement, we provide the Easy Deploy feature to set up an Ingress Controller. You can also deploy an Ingress Controller yourself and fine-tune its settings according to your needs.

Before updating a cluster to a version above 1.23, it is necessary to deactivate the managed Ingress Controller by running the following command:

scw k8s cluster update $CLUSTER_ID ingress=none

Container runtimes

We only provide support for containerd from version 1.25 and above. To migrate your existing pools, you must create new Kapsule pools with containerd as a runtime. Complete the following steps to do so:

  1. Create the pool:
    scw k8s pool create container-runtime=containerd zone=$POOL_ZONE size=$SIZE_OF_YOUR_OLD_POOL version=$YOUR_CLUSTER_VERSION cluster-id=$CLUSTER_ID
  2. Wait for the nodes to be provisioned:
    scw k8s pool wait $POOL_ID
  3. In parallel, you can start cordoning the nodes using the old runtime. This prevents new Pods from being scheduled on them, so evicted workloads get rescheduled directly onto the containerd nodes.
    kubectl cordon $OLD_NODE_A $OLD_NODE_B $OLD_NODE_C ...
  4. Start draining the old nodes once the new nodes are ready:

    You may need to add the option --delete-emptydir-data if you used local disk as a scratchpad.

    kubectl drain --ignore-daemonsets $NODE_TO_DRAIN

    Do not drain all the nodes simultaneously. Make sure to do it sequentially, while checking whether your workload is behaving as expected.

  5. Delete nodes from your old pool, since at this point, they should be emptied of any workload:
    kubectl delete node $OLD_NODE_A $OLD_NODE_B $OLD_NODE_C ...
  6. Delete your old pool:
    scw k8s pool delete $OLD_POOL_ID
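The steps above can be sketched as a small script. This is a hypothetical dry-run wrapper, not an official tool: `$POOL_ZONE`, `$POOL_SIZE`, `$CLUSTER_VERSION`, `$CLUSTER_ID`, `$NEW_POOL_ID`, `$OLD_POOL_ID`, and `$OLD_NODES` are placeholders you fill in yourself (the new pool ID comes from the output of `scw k8s pool create`). Leave `DRY_RUN` set to print the commands instead of executing them:

```shell
# Sketch of the pool migration above. With DRY_RUN set, commands are
# printed instead of executed, so you can review the plan first.
run() { if [ -n "${DRY_RUN:-}" ]; then echo "$@"; else "$@"; fi; }

migrate_pool() {
  # 1. Create the replacement pool with containerd as the runtime
  run scw k8s pool create container-runtime=containerd zone="$POOL_ZONE" \
      size="$POOL_SIZE" version="$CLUSTER_VERSION" cluster-id="$CLUSTER_ID"
  # 2. Wait for the new nodes to be provisioned
  run scw k8s pool wait "$NEW_POOL_ID"
  # 3. Cordon the old nodes so new Pods land on the containerd nodes
  for node in $OLD_NODES; do
    run kubectl cordon "$node"
  done
  # 4./5. Drain the old nodes one at a time, then delete them
  for node in $OLD_NODES; do
    run kubectl drain --ignore-daemonsets "$node"
    run kubectl delete node "$node"
  done
  # 6. Delete the old, now-empty pool
  run scw k8s pool delete "$OLD_POOL_ID"
}

# Example: preview the plan without executing anything
# DRY_RUN=1 POOL_ZONE=... OLD_NODES="node-a node-b" ... migrate_pool
```

Remember the warning above: in a real migration, check that your workload behaves as expected after each drain before moving on to the next node.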


If your cluster is using a deprecated CNI and you want to upgrade to a newer Kubernetes version, you will most likely have to spin up a new Kapsule cluster. As of now, we do not provide an easy way to change the CNI: this component is integrated within each cluster node, which makes the transition tricky. For more help, check out the following resources:

Effective upgrade

From here, two options are available: upgrading by one minor version, or by several.

One minor version

This option is the most straightforward. Start by upgrading your control plane:

scw k8s cluster upgrade $CLUSTER_ID version=$NEW_K8S_VERSION

You can also upgrade the pools in the same operation by appending upgrade-pools=true to the previous command.

Additionally, you can upgrade one pool independently by running the following command:

scw k8s pool upgrade $POOL_ID version=$NEW_K8S_VERSION
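Once a pool upgrade finishes, it is worth confirming that every node actually reports the new kubelet version. A minimal sketch, assuming the default `kubectl get nodes` column layout; the pasted sample output stands in for a live cluster:

```shell
# Check that every node's kubelet version (the last column of
# `kubectl get nodes --no-headers` output) matches the target version.
# The sample below stands in for a live cluster.
nodes='node-a   Ready   <none>   12d   v1.25.9
node-b   Ready   <none>   12d   v1.25.9'

all_on_version() {  # usage: all_on_version vX.Y.Z
  ! printf '%s\n' "$nodes" | awk '{print $NF}' | grep -qv "^$1\$"
}

all_on_version v1.25.9 && echo "all nodes upgraded"
```

In a real check, you would feed `kubectl get nodes --no-headers` into the helper instead of the sample string.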

If you wish to migrate your workload manually, you can do so by following the steps described in the runtimes section.


Make sure to adapt the pool creation step.

scw k8s pool create zone=$OLD_POOL_ZONE size=$SIZE_OF_YOUR_OLD_POOL version=$NEW_CLUSTER_VERSION cluster-id=$CLUSTER_ID

Multiple minor versions

The process is similar to the previous one, except that you need to repeat the steps for each minor version: Kubernetes does not support skipping minor versions, so upgrade one minor version at a time until you reach the target.
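You can sketch the sequence of intermediate minor versions to pass through. This `minor_chain` helper is a hypothetical illustration, not part of the scw CLI; at each step you still pick the exact patch release from `scw k8s version list` and run the upgrade commands shown above:

```shell
# List the minor versions to upgrade through, one step at a time.
# E.g. going from 1.23 to 1.26 means upgrading via 1.24 and 1.25.
minor_chain() {  # usage: minor_chain FROM_MINOR TO_MINOR (e.g. 1.23 1.26)
  major=${1%%.*}; from=${1#*.}; to=${2#*.}
  for m in $(seq $((from + 1)) "$to"); do
    printf '%s.%s\n' "$major" "$m"
  done
}

minor_chain 1.23 1.26
```

For each version printed, repeat the control-plane upgrade and pool upgrade (or manual migration) described in the previous section before moving to the next one.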

See Also