Kubernetes - Getting Started with the Cloud Controller Manager
- compute
- kubernetes
- Kubernetes-cluster
- kubeadm
- cloud-controller-manager
Cloud Controller Manager Overview
A Cloud Controller Manager is a daemon that embeds cloud-specific control loops. It can be used to deploy resources in the Scaleway ecosystem.
Currently, the scaleway-cloud-controller-manager implements:
- Instances interface: updates nodes with cloud-provider-specific labels and addresses, and deletes Kubernetes nodes when the corresponding Instances are deleted on the cloud provider.
- LoadBalancer interface: responsible for creating Load Balancers when a service of type LoadBalancer is created in Kubernetes.
- Zone interface: makes Kubernetes aware of the failure domain of each node.
The Scaleway Cloud Controller Manager is currently under active development and released as an open-source project on GitHub.
- You have an account and are logged into the Scaleway console
- You have configured your SSH Key
- You have created your API Key
- You have created three Instances running Ubuntu Bionic
Creating a Kubernetes Cluster using kubeadm on Scaleway
The goal of this step is to create a Kubernetes cluster using kubeadm on Scaleway Instances.
To follow this example, you need to create three Ubuntu Bionic Instances:
- master1
- node1
- node2
Run the following commands on each of your Instances:
apt-get update && apt-get install -y \
iptables \
arptables \
ebtables \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
add-apt-repository \
"deb [arch=amd64] https://apt.kubernetes.io kubernetes-xenial main"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update && apt-get install -y \
docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl
apt-mark hold \
docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl
echo KUBELET_EXTRA_ARGS=\"--cloud-provider=external\" > /etc/default/kubelet
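The --cloud-provider=external flag tells the kubelet that an external cloud controller manager (deployed later in this tutorial) will initialize the node. As a quick sanity check, you can confirm on each Instance that the file was written; it should print KUBELET_EXTRA_ARGS="--cloud-provider=external":
cat /etc/default/kubelet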
Initialize the Kubernetes master on the Instance master1:
root@master1:~# kubeadm init --control-plane-endpoint=$(scw-metadata PUBLIC_IP_ADDRESS) --apiserver-cert-extra-sans=$(scw-metadata PUBLIC_IP_ADDRESS)
root@master1:~# mkdir -p ~/.kube
root@master1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:~# chown $(id -u):$(id -g) $HOME/.kube/config
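At this point, you can optionally check that the control-plane pods are starting. Keep in mind that nodes usually report a NotReady status until a pod network add-on (Calico, deployed in the next step) is installed:
root@master1:~# kubectl get pods -n kube-system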
root@master1:~# kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
During the installation, the kubeadm join command displays. Note it down, as it is required for the worker nodes to join the cluster. You can also copy the kubeconfig file and save it on your local computer.
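If you did not note down the join command, or if its token has expired, you can print a fresh one on the master at any time:
root@master1:~# kubeadm token create --print-join-command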
Execute the kubeadm join command on each worker node to join the cluster:
root@node1:~# kubeadm join 10.68.34.145:6443 --token itvo0b.kwoao79ptlj22gno \
    --discovery-token-ca-cert-hash sha256:07bc3f9601f1659771a7a6fd696c2969cbc757b088ec752ba95d5a42c06ed91f
Verify the status of the cluster on the master by running the kubectl get nodes command:
root@master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 18m v1.17.4
node1 Ready <none> 8m38s v1.17.4
node2 Ready <none> 2m31s v1.17.4
The cluster is ready and working. Continue by deploying the cloud-controller-manager.
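Because the kubelets were started with --cloud-provider=external, each node typically carries the node.cloudprovider.kubernetes.io/uninitialized taint until the cloud controller manager has initialized it. You can check this on the master; the taint should disappear once the controller deployed in the next section is running:
root@master1:~# kubectl describe node node1 | grep Taints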
Deploying the cloud-controller-manager on the Cluster
To deploy the cloud-controller-manager, the following information is required:
- Your access key.
- Your secret key.
- Your Organization ID.
- The Scaleway region.
You can find this information in the Scaleway Console.
Create a k8s-scaleway-secret.yml file containing the following information:
root@master1:~# nano k8s-scaleway-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: scaleway-secret
  namespace: kube-system
stringData:
  SCW_ACCESS_KEY: 'xxxxxxxxxxxxxxxx'
  SCW_SECRET_KEY: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
  SCW_DEFAULT_REGION: 'fr-par'
  SCW_DEFAULT_ORGANIZATION_ID: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx'
Create the secret and deploy the controller:
root@master1:~# kubectl create -f k8s-scaleway-secret.yml
root@master1:~# kubectl apply -f https://raw.githubusercontent.com/scaleway/scaleway-cloud-controller-manager/master/examples/k8s-scaleway-ccm-latest.yml
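Optionally, confirm that the secret exists and that the controller deployment has rolled out. The deployment name used below is an assumption based on the pod names created by the manifest; adjust it if the manifest you applied names the deployment differently:
root@master1:~# kubectl -n kube-system get secret scaleway-secret
root@master1:~# kubectl -n kube-system rollout status deployment scaleway-cloud-controller-manager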
Checking that the cloud-controller-manager is working
Verify that the cloud-controller-manager is running from the master1 Instance:
root@master1:~# kubectl get pods -n kube-system -l app=scaleway-cloud-controller-manager
NAME READY STATUS RESTARTS AGE
scaleway-cloud-controller-manager-584558b994-rln4j 1/1 Running 0 12s
root@master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 18m v1.17.4
node1 Ready <none> 8m38s v1.17.4
node2 Ready <none> 2m31s v1.17.4
Deploy a LoadBalancer service and make sure a public IP is assigned to this service. The service will automatically create a managed Load Balancer on the Scaleway platform.
Create a lb.yml file that contains the following information:
root@master1:~# nano lb.yml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
Create the service from the configuration file:
root@master1:~# kubectl create -f lb.yml
Verify that the service has been created:
root@master1:~# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service LoadBalancer 10.106.144.144 51.159.26.121 8765:30175/TCP 7s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21m
The LoadBalancer service with the internal IP 10.106.144.144 and the external IP 51.159.26.121 has been created.
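If pods matching the selector app: example are running and listening on port 9376 (such a workload is assumed but not deployed in this tutorial), you can reach them through the Load Balancer from any machine:
curl http://51.159.26.121:8765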
You have successfully deployed a cluster with kubeadm and the scaleway-cloud-controller-manager. For more information about the Scaleway Cloud Controller Manager, follow the project on GitHub.