Getting started with the Cloud Controller Manager
A Cloud Controller Manager is a daemon that embeds cloud-specific control loops. It can be used to deploy resources in the Scaleway ecosystem.

Currently, the `scaleway-cloud-controller-manager` implements:

- Instances interface: updates nodes with cloud provider-specific labels and addresses, and deletes Kubernetes nodes when they are deleted from the cloud provider.
- LoadBalancer interface: responsible for creating Load Balancers when a service of type `LoadBalancer` is created in Kubernetes.
- Zone interface: makes Kubernetes aware of the failure domain of each node.

The Scaleway Cloud Controller Manager is currently under active development and released as an open-source project on GitHub.
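As an illustration of the Instances and Zone interfaces, a node's metadata ends up carrying cloud-specific labels once the Cloud Controller Manager has synced it. The excerpt below is a sketch with illustrative values; the exact label names depend on your Kubernetes version:

```yaml
# Illustrative node metadata after the CCM has synced a node
# (label names and values are examples, not guaranteed output)
metadata:
  labels:
    failure-domain.beta.kubernetes.io/region: fr-par
    failure-domain.beta.kubernetes.io/zone: fr-par-1
    beta.kubernetes.io/instance-type: DEV1-M
```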
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- An SSH key
- A valid API key
- 3 Instances running Ubuntu Bionic
Creating a Kubernetes cluster using kubeadm on Scaleway
The goal of this step is to create a Kubernetes cluster using `kubeadm` on Scaleway Instances.

To follow this example, you need to create three Ubuntu Bionic Instances:

- `main1`
- `node1`
- `node2`

- Run the following commands on each of your Instances:

  ```
  apt-get update && apt-get install -y \
    iptables \
    arptables \
    ebtables \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  add-apt-repository \
    "deb [arch=amd64] https://apt.kubernetes.io kubernetes-xenial main"
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
  apt-get update && apt-get install -y \
    docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl
  apt-mark hold \
    docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl
  echo KUBELET_EXTRA_ARGS=\"--cloud-provider=external\" > /etc/default/kubelet
  ```
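Before moving on, it can be worth checking that the last command above did what you expect. The snippet below is a minimal sketch of that check, not part of the official procedure; it writes to a temporary file rather than `/etc/default/kubelet` so it can run anywhere:

```shell
# Sketch: verify the kubelet drop-in requests an external cloud provider.
# A temporary file stands in for /etc/default/kubelet in this illustration.
conf=$(mktemp)
echo KUBELET_EXTRA_ARGS=\"--cloud-provider=external\" > "$conf"
if grep -q -- '--cloud-provider=external' "$conf"; then
  echo "kubelet will defer to an external cloud-controller-manager"
fi
```

With this flag set, the kubelet marks new nodes with a taint that keeps workloads off them until a cloud controller manager has initialized the node.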
- Initialize the Kubernetes control plane on the Instance `main1`:

  ```
  root@main1:~# kubeadm init --control-plane-endpoint=$(scw-metadata PUBLIC_IP_ADDRESS) --apiserver-cert-extra-sans=$(scw-metadata PUBLIC_IP_ADDRESS)
  root@main1:~# mkdir -p ~/.kube
  root@main1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  root@main1:~# chown $(id -u):$(id -g) $HOME/.kube/config
  root@main1:~# kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
  ```
  During the installation, the `kubeadm join` command is displayed. Note it down, as it is required for the worker nodes to join the cluster. You can also copy the `kubeconfig` file and save it on your local computer.
- Execute the `kubeadm join` command on each worker node to join the cluster:

  ```
  root@node1:~# kubeadm join 10.68.34.145:6443 --token itvo0b.kwoao79ptlj22gno \
      --discovery-token-ca-cert-hash sha256:07bc3f9601f1659771a7a6fd696c2969cbc757b088ec752ba95d5a42c06ed91f
  ```
- Verify the status of the cluster on `main1` by running the `kubectl get nodes` command:

  ```
  root@main1:~# kubectl get nodes
  NAME    STATUS   ROLES    AGE     VERSION
  main1   Ready    main     18m     v1.17.4
  node1   Ready    <none>   8m38s   v1.17.4
  node2   Ready    <none>   2m31s   v1.17.4
  ```

  The cluster is ready and working. Continue by deploying the `cloud-controller-manager`.
Deploying the cloud-controller-manager on the cluster
To deploy the `cloud-controller-manager`, the following information is required:

- Your access key.
- Your secret key.
- Your Organization ID.
- The Scaleway region.
- Create a `k8s-scaleway-secret.yml` file containing the following information:

  ```
  root@main1:~# nano k8s-scaleway-secret.yml
  ```

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: scaleway-secret
    namespace: kube-system
  stringData:
    SCW_ACCESS_KEY: 'xxxxxxxxxxxxxxxx'
    SCW_SECRET_KEY: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
    SCW_DEFAULT_REGION: 'fr-par'
    SCW_DEFAULT_ORGANIZATION_ID: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx'
  ```
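Instead of editing the manifest by hand, you can also generate it from shell variables. The snippet below is a hedged sketch of that approach, not part of the official procedure; the credential values are placeholders you must replace with your own:

```shell
# Sketch: generate k8s-scaleway-secret.yml from shell variables.
# All credential values below are placeholders.
workdir=$(mktemp -d) && cd "$workdir"   # temporary dir keeps the sketch self-contained
SCW_ACCESS_KEY='xxxxxxxxxxxxxxxx'
SCW_SECRET_KEY='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
SCW_DEFAULT_REGION='fr-par'
SCW_DEFAULT_ORGANIZATION_ID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx'
cat > k8s-scaleway-secret.yml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: scaleway-secret
  namespace: kube-system
stringData:
  SCW_ACCESS_KEY: '${SCW_ACCESS_KEY}'
  SCW_SECRET_KEY: '${SCW_SECRET_KEY}'
  SCW_DEFAULT_REGION: '${SCW_DEFAULT_REGION}'
  SCW_DEFAULT_ORGANIZATION_ID: '${SCW_DEFAULT_ORGANIZATION_ID}'
EOF
echo "wrote $(pwd)/k8s-scaleway-secret.yml"
```

Using `stringData` (rather than `data`) lets you provide the values in plain text; Kubernetes base64-encodes them when the Secret is stored.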
- Create the secret and deploy the controller:

  ```
  root@main1:~# kubectl create -f k8s-scaleway-secret.yml
  root@main1:~# kubectl apply -f https://raw.githubusercontent.com/scaleway/scaleway-cloud-controller-manager/main/examples/k8s-scaleway-ccm-latest.yml
  ```
Checking that the cloud-controller-manager is working
- Verify that the `cloud-controller-manager` is running from the `main1` Instance:

  ```
  root@main1:~# kubectl get pods -n kube-system -l app=scaleway-cloud-controller-manager
  NAME                                                 READY   STATUS    RESTARTS   AGE
  scaleway-cloud-controller-manager-584558b994-rln4j   1/1     Running   0          12s
  root@main1:~# kubectl get nodes
  NAME    STATUS   ROLES    AGE     VERSION
  main1   Ready    main     18m     v1.17.4
  node1   Ready    <none>   8m38s   v1.17.4
  node2   Ready    <none>   2m31s   v1.17.4
  ```
- Deploy a `LoadBalancer` service and make sure a public IP is assigned to this service. The service will automatically create a managed Load Balancer on the Scaleway platform. Create a `lb.yml` file that contains the following information:

  ```
  root@main1:~# nano lb.yml
  ```

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: example-service
  spec:
    selector:
      app: example
    ports:
    - port: 8765
      targetPort: 9376
    type: LoadBalancer
  ```
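Note that the service selects pods labeled `app: example`, so the Load Balancer will only have backends to route traffic to once such pods exist. A minimal, hypothetical Deployment matching that selector could look like the following; the image name is a placeholder, and whatever you use must listen on `targetPort` 9376:

```yaml
# Hypothetical backend for example-service (image name is a placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: my-example-image:latest   # placeholder: must serve on port 9376
        ports:
        - containerPort: 9376
```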
- Create the service from the configuration file:

  ```
  root@main1:~# kubectl create -f lb.yml
  ```
- Verify that the service has been created:

  ```
  root@main1:~# kubectl get services
  NAME              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
  example-service   LoadBalancer   10.106.144.144   51.159.26.121   8765:30175/TCP   7s
  kubernetes        ClusterIP      10.96.0.1        <none>          443/TCP          21m
  ```
The `LoadBalancer` service with the internal IP `10.106.144.144` and external IP `51.159.26.121` has been created.

You have successfully deployed a cluster with `kubeadm` and the `scaleway-cloud-controller-manager`.
To learn more about the function of a Cloud Controller Manager within Kubernetes, refer to the official documentation.
For more information about the Scaleway Cloud Controller Manager, follow the project on GitHub.