How to manage Kubernetes Kapsule node pools
This documentation provides step-by-step instructions on how to manage Kubernetes Kapsule node pools using the Scaleway console.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- Created a Kubernetes Kapsule cluster
How to create a new Kubernetes Kapsule node pool
- Navigate to Kubernetes under the Containers section of the Scaleway console side menu. The Kubernetes dashboard displays.
- Click the Kapsule cluster name you want to manage. The cluster information page displays.
- Click the Pools tab to display the pool configuration of the cluster.
- Click Add pool to launch the pool creation wizard.
- Configure the pool:
- Choose the Availability Zone for the pool.
- Choose the commercial type of Instance for the pool.
- Configure the system volume.
- Configure pool options.
- Enter the pool's details.
- Click Add pool. The pool is added to your basket. Repeat the steps above to configure additional pools.
- Click Review once you have configured the desired pools. A summary of your configuration displays.
- Verify your configuration and click Submit to add the pool(s) to your Kapsule cluster.
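Node pools can also be created from the command line. The sketch below uses the Scaleway CLI (`scw`); the cluster ID, pool name, node type, and zone are placeholder values you must replace with your own:

```shell
# Hedged sketch: create a node pool with the Scaleway CLI.
# <cluster-id> is your Kapsule cluster's ID; "my-pool", DEV1-M,
# and fr-par-1 are example values — adjust to your needs.
scw k8s pool create \
    cluster-id=<cluster-id> \
    name=my-pool \
    node-type=DEV1-M \
    size=3 \
    zone=fr-par-1
```

Run `scw k8s pool list cluster-id=<cluster-id>` to check that the new pool appears in the cluster.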
How to edit an existing Kubernetes Kapsule node pool
- Navigate to Kubernetes under the Containers section of the Scaleway console side menu. The Kubernetes dashboard displays.
- Click the Kapsule cluster name you want to manage. The cluster information page displays.
- Click the Pools tab to display the pool configuration of the cluster.
- Click the more icon > Edit next to the node pool you want to edit.
- Configure the pool:
- Update pool tags
- Configure autoscaling
- Enable or disable the autoheal feature
- Click Update pool to update the pool configuration.
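The same settings can be changed from the command line. The sketch below assumes the Scaleway CLI's `scw k8s pool update` command with key=value arguments; `<pool-id>` and the autoscaling bounds are placeholders:

```shell
# Hedged sketch: enable autoscaling and autohealing on an existing
# pool via the Scaleway CLI. Replace <pool-id> with your pool's ID;
# min-size/max-size values are examples.
scw k8s pool update <pool-id> \
    autoscaling=true \
    min-size=2 \
    max-size=5 \
    autohealing=true
```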
How to migrate existing workloads to a new Kubernetes Kapsule node pool
- Create the new node pool with the desired configuration, either from the console or by using the Scaleway CLI tool `scw`.
- Run `kubectl get nodes` to check that the new nodes are in a `Ready` state.
- Cordon the nodes in the old node pool to prevent new pods from being scheduled there. For each node, run:
  `kubectl cordon <node-name>`
- Drain the nodes to evict the pods gracefully. For each node, run:
  `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data`
  - The `--ignore-daemonsets` flag is required because DaemonSets manage pods across all nodes and will automatically reschedule them.
  - The `--delete-emptydir-data` flag is necessary if your pods use `emptyDir` volumes. Use this option carefully, as it deletes the data stored in these volumes.
  - Refer to the official Kubernetes documentation for further information.
- Run `kubectl get pods -o wide` after draining to verify that the pods have been rescheduled to the new node pool.
- Delete the old node pool once you confirm that all workloads are running smoothly on the new node pool.
How to delete an existing Kubernetes Kapsule node pool
- Navigate to Kubernetes under the Containers section of the Scaleway console side menu. The Kubernetes dashboard displays.
- Click the Kapsule cluster name you want to manage. The cluster information page displays.
- Click the Pools tab to display the pool configuration of the cluster.
- Click the more icon > Delete next to the node pool you want to delete.
- Click Delete pool in the pop-up to confirm deletion of the pool.
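Pool deletion is also available from the command line. The sketch below assumes the Scaleway CLI's `scw k8s pool delete` command; `<pool-id>` is a placeholder for the pool's ID:

```shell
# Hedged sketch: delete a node pool via the Scaleway CLI.
# Find the pool ID with: scw k8s pool list cluster-id=<cluster-id>
scw k8s pool delete <pool-id>
```

Deleting a pool removes all of its nodes, so make sure any workloads have been migrated first.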