
Setting up a Kubernetes cluster using Rancher on Ubuntu Bionic Beaver

Published on 12 August 2019 · Reviewed on 21 February 2024
  • Kubernetes
  • Rancher
  • k8s
  • containers

Rancher Overview

Rancher is an open-source container management platform providing a graphical interface that makes container management easier.

The Rancher UI makes it easy to manage secrets, roles, and permissions. It allows you to scale nodes and pods and set up load balancers without requiring a command line tool or editing hard-to-read YAML files.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • An SSH key
  • A domain name configured to point to the first Instance

Spinning up the required Instances

  1. Click Instances in the Compute section of the side menu. The Instances page displays.
  2. Click Create Instance. The Instance creation wizard displays.
  3. Click the InstantApps tab and choose the Docker image to deploy an Instance with Docker preinstalled.
  4. Choose a region, type, and name for your Instance (e.g. rancher1), then click Create Instance.
  5. Repeat these steps twice more to spin up a total of three Instances running Docker.
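Before moving on, you can optionally check from your local machine that Docker is up on each Instance. The hostnames below are examples; substitute your Instances' public IPs or SSH aliases:

```shell
# Hypothetical hosts; replace with your Instances' public IPs or SSH aliases.
for host in rancher1 rancher2 rancher3; do
  ssh root@"$host" 'docker --version && systemctl is-active docker'
done
```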

Installing Rancher

  1. Log into the first Instance (rancher1) via SSH.
  2. Run the following command to fetch the rancher/rancher Docker image and run it in a container with automatic restarting enabled, in case the container fails. Replace <your_rancher_domain> with the domain name pointing to the Instance so that a Let's Encrypt certificate is generated automatically:
    docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /host/rancher:/var/lib/rancher rancher/rancher --acme-domain <your_rancher_domain>
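You can verify that the Rancher container started correctly before moving on. Rancher 2.x serves a /ping health endpoint that answers "pong" once the server is ready; rancher.example.com below is a placeholder for your own domain:

```shell
# Check that the Rancher container is running.
docker ps --filter "ancestor=rancher/rancher" --format '{{.Names}}: {{.Status}}'

# Query Rancher's health endpoint; replace rancher.example.com with your domain.
curl -sk https://rancher.example.com/ping
```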

Configuring Rancher

  1. Once Rancher is installed, open a web browser and point it to your Rancher domain. The following page displays:
  2. Enter a password and its confirmation, and click Continue to move forward with the installation.
  3. The (empty) Rancher dashboard displays:

Creating a cluster

  1. Click Add cluster to configure a new Kubernetes cluster.

  2. The cluster creation page displays. Click Custom to deploy the cluster on the already launched Scaleway Instances.

  3. Enter a name for the cluster, choose the desired Kubernetes version and network provider, and select None as cloud provider.

  4. Choose the options for the worker nodes. A Kubernetes cluster must have at least one etcd node and one control plane node.

    • etcd is a key-value store used by Kubernetes to keep the state of the entire environment. For redundancy, we recommend running an odd number of etcd nodes (e.g. 1, 3, 5…).
    • The Kubernetes control plane maintains a record of all objects (e.g. pods) in a cluster and updates them with the configuration provided in the Rancher admin interface.
    • Kubernetes workers run the actual workloads, along with monitoring tools that ensure the health of the containers. All pod deployments happen on the worker nodes.

    Choose the roles for each of the Instances in the cluster and run the command shown on the page to install the required software and link them with Rancher:

    Once all Instances are ready, click Done to initialize the cluster.

  5. Once the cluster is ready, the dashboard displays:
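The node registration command that Rancher displays during step 4 typically has the following shape. The server URL, agent version, and registration token are placeholders here; use the exact command generated by your own Rancher installation, keeping only the role flags you selected for each node:

```shell
# Sketch of a Rancher 2.x custom-cluster registration command (placeholders).
# Run on each Instance, keeping only the role flags chosen for that node.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://<your_rancher_domain> \
  --token <registration_token> \
  --etcd --controlplane --worker
```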

Deploying a cluster workload

The cluster is now ready, and the deployment of the first pod can take place. A pod is the smallest and simplest execution unit of a Kubernetes application that you can create or deploy.

  1. Head over to Global in the header bar, select the cluster, and click Default in the drop-down menu:

  2. The clusters dashboard displays. Click Deploy:

  3. Enter the details of the workload:

    • Name: A friendly name for the workload.
    • Docker Image: Enter nginxdemos/hello to deploy a Nginx demo application.
    • Click Add port to configure the port mapping.
      • Publish the container port: Set the value to port 80
      • Protocol: Set the value to TCP
      • As a: Set the Value to NodePort
      • Listening port: Set the value to port 30000

    Click Launch to create the workload.

  4. Once deployed, open a web browser and point it to http://<first_instance_ip>:30000/. The Nginx demo application displays:
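If you prefer the command line, a roughly equivalent workload can be created with kubectl, using a kubeconfig file downloaded from the Rancher UI. The deployment name hello is arbitrary, and the JSON patch pins the auto-assigned NodePort to 30000 to match the settings above:

```shell
# Create the deployment and expose it as a NodePort service.
kubectl create deployment hello --image=nginxdemos/hello
kubectl expose deployment hello --type=NodePort --port=80

# NodePorts are auto-assigned from 30000-32767; pin it to 30000 to match the tutorial.
kubectl patch service hello --type='json' \
  -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 30000}]'
```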

Scaling a cluster workload

Currently, the Nginx demo application lives in a single pod and is deployed only on one Instance. Rancher provides the possibility to scale your deployment to multiple pods directly from the web interface.

  1. From the cluster dashboard, click the ⋮ (options) button next to the workload. Then, click Edit in the pop-up menu:

  2. Edit the Workload type and set the number of scalable deployments to 3:

  3. Click Save. Rancher will now send instructions to Kubernetes to update the workload configuration and to deploy 3 pods running the Nginx demo application in parallel.

  4. Access the application running on the second Instance by typing: http://<second_instance_ip>:30000/ in a web browser. The Nginx demo application displays.
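The same scaling operation can be done with kubectl; the hello deployment name assumes the workload created earlier (kubectl create deployment labels its pods app=hello):

```shell
# Scale the workload to 3 replicas and watch the pods spread across the nodes.
kubectl scale deployment hello --replicas=3
kubectl get pods -l app=hello -o wide
```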

    For more information about Rancher and Kubernetes, refer to the official documentation.
