Rancher is an open-source container management platform that provides a graphical interface to make container management easier.
The GUI of Rancher makes it easy to manage secrets and handle roles and permissions. It allows you to scale nodes and pods and to set up load balancers without needing a command-line tool or editing hard-to-read YAML files.
1. Log into the Scaleway console.
2. Click on Instances in the menu on the left:
3. To deploy instances with Docker pre-installed, click on + Create an instance:
4. Click on the InstantApps tab and choose the Docker image:
5. Choose the region for the instance, the instance type, and a name for the instance (e.g. rancher1), then click on Create a new instance.
6. Repeat these steps two more times to spin up a total of three instances running Docker.
1. Log into the first instance (rancher1) via SSH.
2. Run the following command to fetch the rancher/rancher Docker image and run it in a container with automatic restarting enabled in case the container fails. Replace the value rancher.example.com with your own domain name pointing to the instance to generate a Let's Encrypt certificate automatically:
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain rancher.example.com
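The same command can also be sketched with the domain pulled out into a variable, which makes it easier to adapt; RANCHER_DOMAIN below is a placeholder for your own domain, and the command is only printed so it can be reviewed before being run:

```shell
# Sketch: the Rancher start command with the domain as a variable.
# RANCHER_DOMAIN is a placeholder; it must already resolve to this
# instance's public IP, or Let's Encrypt issuance will fail.
RANCHER_DOMAIN="rancher.example.com"

CMD="docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/rancher:/var/lib/rancher \
  rancher/rancher --acme-domain ${RANCHER_DOMAIN}"

# Print the command for review; execute it with: eval "$CMD"
echo "$CMD"
```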
1. Once Rancher is installed, open a web browser and point it to your Rancher domain (e.g. https://rancher.example.com). The following page displays:
2. Enter a password, confirm it, and click on Continue to move forward with the installation of Rancher.
3. Enter the domain name for Rancher. Normally this value is already pre-filled with the domain name configured when launching the Docker container. Check that the value is correct and click on Save URL to continue with the configuration:
4 . The (empty) Rancher Dashboard displays:
1. Click on Add Cluster to configure a new Kubernetes cluster.
2. The cluster creation page displays. Click on Custom to deploy the cluster on the previously launched Scaleway compute instances.
3. Enter a name for the cluster, choose the desired Kubernetes version and network provider, and select None as the cloud provider.
4. Choose the options for the nodes. A Kubernetes cluster needs at least one etcd node, one control plane node, and one worker node:
- etcd is a key-value storage system used by Kubernetes to keep the state of the entire environment. It is recommended to run an odd number of etcd copies (e.g. 1, 3, 5, …) for redundancy.
- The control plane maintains a record of all objects (e.g. Pods) in a cluster and updates them with the configuration provided in the Rancher admin interface.
- Workers run the actual workloads and monitoring tools that ensure the health of the containers. All Pod deployments are made on worker nodes.
Choose the roles for each of the instances in the cluster and run the command shown on the page on each of them to install the required software and link them with Rancher:
Once all instances are ready, click on Done to initialize the cluster.
5. Once the cluster is ready, the dashboard displays:
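For reference, the node-registration command generated by Rancher typically has the shape sketched below. Every value here is a placeholder (the real command contains a generated token, and may include additional flags such as a CA checksum), so always copy the exact command from the Rancher interface rather than this sketch:

```shell
# Placeholder sketch of the rancher-agent registration command.
# NODE_TOKEN is hypothetical; the real one is generated by Rancher.
RANCHER_URL="https://rancher.example.com"
NODE_TOKEN="<token-from-the-rancher-ui>"
NODE_ROLES="--etcd --controlplane --worker"   # choose the roles per instance

AGENT_CMD="sudo docker run -d --privileged --restart=unless-stopped \
  --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent \
  --server ${RANCHER_URL} --token ${NODE_TOKEN} ${NODE_ROLES}"

# Print the command for review instead of running it.
echo "$AGENT_CMD"
```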
The cluster is now ready and a first Pod can be deployed. A Pod is the smallest and simplest execution unit of a Kubernetes application that you can create or deploy.
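For illustration, a single Pod similar to the one created in the next steps can also be described as a manifest; the names below are arbitrary, and the tutorial itself uses the Rancher UI rather than kubectl:

```shell
# Write a minimal Pod manifest (illustrative only; the pod name
# "nginx-hello" is arbitrary).
cat > nginx-hello-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hello
spec:
  containers:
  - name: hello
    image: nginxdemos/hello
    ports:
    - containerPort: 80
EOF

# With CLI access to the cluster, it could be created with:
#   kubectl apply -f nginx-hello-pod.yaml
```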
1. Head over to Global in the header bar, select the cluster, and click on Default in the drop-down menu:
2. The cluster dashboard displays. Click on Deploy:
3. Enter the details of the workload: give it a name and use the Docker image nginxdemos/hello to deploy an Nginx demo application. Map the container's port 80 to NodePort 30000 so the application is reachable from outside the cluster. Click on Launch to create the workload.
4. Once deployed, open a web browser and point it to http://<rancher.example.com>:30000/. The Nginx demo application displays:
Currently the Nginx demo application lives in a single pod deployed on one instance only. Rancher provides the possibility to scale your deployment to multiple pods directly from the web interface.
1. From the cluster dashboard, click on … and then on Edit in the pop-up menu:
2. Edit the Workload type and set the number of scalable deployments to 3:
3. Click on Save. Rancher now sends instructions to Kubernetes to update the workload configuration and to run three pods of the Nginx demo application in parallel.
4. Access the application running on the second instance by pointing a web browser to http://<second_instance_ip>:30000/. The Nginx demo application displays.
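The UI scaling step corresponds to changing the Deployment's replica count in Kubernetes. With kubectl access to the cluster, the equivalent call could be sketched as below; the workload name hello-world is a placeholder for the name chosen in the Deploy form:

```shell
# Sketch of the CLI equivalent of the UI scaling step; the deployment
# name "hello-world" is a placeholder for the workload name chosen
# earlier, and the namespace matches the "Default" project shown above.
NAMESPACE="default"
WORKLOAD="hello-world"

SCALE_CMD="kubectl -n ${NAMESPACE} scale deployment ${WORKLOAD} --replicas=3"

# Print the command for review instead of running it.
echo "$SCALE_CMD"
```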