Set Up SSL Offloading on a Load Balancer

SSL on Load Balancers Overview

Encrypted HTTPS traffic can be handled in two ways on the Scaleway Managed Load Balancer service.

It is possible to configure:

  • SSL passthrough: encrypted traffic is forwarded to the backend servers without being decrypted on the Load Balancer.
  • SSL offloading: HTTPS traffic is decrypted on the Load Balancer and forwarded to the backend servers as plain HTTP.

It is also possible to configure SSL on Load Balancers directly by using the API.

Requirements

Configuring SSL Passthrough

Passthrough is the simplest way of handling HTTPS traffic on a Load Balancer. As the name suggests, traffic is simply passed through the Load Balancer without being decrypted on it. Whilst this option generates very little overhead, no layer 7 actions can be carried out. This means that no cookie-based sticky sessions are possible with this method, and if an application does not share sessions between servers, users’ sessions may be lost by being redirected to different servers of the group.

To configure SSL passthrough, create a frontend listening on port 443 and a backend listening on port 443 in TCP mode:
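The same passthrough setup can also be sketched with the Load Balancer API. The following is a hypothetical example, assuming $REGION, $TOKEN and $LB_ID are set as described in the API section of this page; <SERVER-IP> and <BACKEND-ID> are placeholders:

```shell
# Hypothetical sketch of SSL passthrough via the Load Balancer API.
# Backend and frontend both use TCP on port 443, so TLS is never
# terminated on the Load Balancer itself.
BACKEND_PAYLOAD='{"forward_port":443,"forward_protocol":"tcp","name":"passthrough backend","server_ip":["<SERVER-IP>"]}'
FRONTEND_PAYLOAD='{"backend_id":"<BACKEND-ID>","inbound_port":443,"name":"passthrough frontend"}'

# The payloads would then be POSTed to the backends and frontends endpoints:
# curl -s -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs/$LB_ID/backends" \
#   -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d "$BACKEND_PAYLOAD"
# curl -s -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs/$LB_ID/frontends" \
#   -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d "$FRONTEND_PAYLOAD"
```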

Configuring SSL Offloading

SSL offloading means that all HTTPS traffic is decrypted on the Load Balancer and passed to the backend servers in plain HTTP. Any layer 7 actions may therefore be carried out on the traffic before it is passed to the backend hosts. Traffic that has gone through the offloading process is marked with a new header, called X-Forwarded-Proto, which tells the backend which protocol the original request used.
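A backend application can inspect this header to recover the original scheme. A minimal sketch, assuming a CGI-style environment where the header arrives as the HTTP_X_FORWARDED_PROTO variable (the helper name is illustrative):

```shell
# Illustrative helper: with SSL offloading the request reaches the backend as
# plain HTTP, so the original scheme must be read from X-Forwarded-Proto
# (exposed as HTTP_X_FORWARDED_PROTO in CGI-style environments).
original_scheme() {
    if [ "${HTTP_X_FORWARDED_PROTO:-http}" = "https" ]; then
        echo "https"
    else
        echo "http"
    fi
}

# Behind an offloading Load Balancer the header is set on forwarded requests:
HTTP_X_FORWARDED_PROTO=https original_scheme   # prints "https"
```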

1 . In the Scaleway Management Console click on Load Balancer in the menu on the left.

2 . The list of your Load Balancers displays. Click on the icon next to the Load Balancer you want to configure, then click on More info.

3 . The Load Balancer information displays. Click on the SSL Certificates tab:

4 . Click on Create a SSL certificate.

5 . Choose the SSL certificate name, a common name (the main domain name linked to the certificate) and, if required, alternative names (additional domain names linked to the certificate). The certificate type is already pre-filled (Let’s Encrypt).

6 . Click on Create SSL certificate to request the certificate from the Let’s Encrypt authority.

7 . Enter the frontend section of the Load Balancer by clicking on the Frontend tab. Click on Add Frontend to create a new frontend for SSL offloading.

8 . Enter the frontend details and configure the backend:

  • For the frontend enter:
    • Frontend name: A friendly name for the frontend
    • Port: The port number on which the frontend listens. Enter 443 to configure the Load Balancer to listen on the standard HTTPS port.
    • SSL Certificate: Choose the SSL Certificate to use from the drop-down list.
  • For the backend enter:
    • Backend name: A friendly name for the backend
    • Protocol: The protocol to use for the backend. Choose HTTP from the drop-down list.
    • Port: The port on which the backend application listens. Enter 80 to configure the load balancer to communicate with the backend on the standard HTTP port.
    • Proxy: Enable this option to use the Proxy protocol v2.
    • Sticky Session: Enables the Load Balancer to bind a user’s session to a specific instance. This ensures that all subsequent sessions from the user are sent to the same instance while there is at least one active session.
    • Health Check: Configure an HTTP health check to detect if the backend application is available.
    • Server IP(s): Enter the IP address of the server(s) running the backend application.

9 . Click on Add Frontend to set up the new frontend (and backend).

10 . Open a web browser and point it to https://common_name (replace common_name with the main domain configured in the SSL certificate). The connection is now encrypted with SSL:

Configuring SSL Offloading via the API

It is also possible to configure SSL offloading by using the Load Balancer API.

Before configuring the Load Balancer from the API, prepare your environment to facilitate API usage. Retrieve the <secret_key> and the <organization_id> from the management console or the API and set them as environment variables, together with the geographical location of your Load Balancer:

export TOKEN="<secret_key>"
export REGION="<choose your location (nl-ams/fr-par)>"
export ORGANIZATION_ID="<your organization ID>"

1 . Create a new Load Balancer by running the following API call. Customize the name, description and tags:

curl -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs" -H "accept: application/json" -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
-d "{\"description\":\"YOUR DESCRIPTION\",\"name\":\"TEST\",\"organization_id\":\"$ORGANIZATION_ID\",\"tags\":[\"test\", \"step by step\"]}"

The API call returns a JSON response similar to the following example:

{
  "id": "6208ec73-2b0e-4b60-b449-7f6bd72fd522",
  "name": "TEST",
  "description": "YOUR DESCRIPTION",
  "status": "pending",
  "instances": [],
  "organization_id": "ORGANIZATION_ID",
  "ip": [
    {
      "id": "7906bc2b-00cd-4548-8e06-ebfdf1e850be",
      "ip_address": "51.159.11.11",
      "organization_id": "a6a05c73-fa53-46a4-9ea1-e53b4f625527",
      "lb_id": "6208ec73-2b0e-4b60-b449-7f6bd72fd522",
      "reverse": "",
      "region": "fr-par"
    }
  ],
  "tags": [
    "test",
    "step by step"
  ],
  "frontend_count": 0,
  "backend_count": 0,
  "region": "fr-par"
}

The first line starting with "id" displays the ID of the newly created Load Balancer.

The line starting with "ip_address" displays the load-balanced IP.

2 . Copy the "id" field of the response and save it to a variable, as it will be used in the following steps:

export LB_ID="REPLACE-BY-ID-OF-YOUR-LOAD-BALANCER"
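Alternatively, the ID can be extracted from the JSON response with jq instead of being copied by hand. A sketch, using a shortened sample of the response shown above:

```shell
# RESPONSE holds a shortened sample of the creation response; in practice it
# would come straight from the API call, e.g. RESPONSE=$(curl -s ... ).
RESPONSE='{"id":"6208ec73-2b0e-4b60-b449-7f6bd72fd522","name":"TEST","status":"pending"}'

# Extract the "id" field with jq and save it for the following steps:
export LB_ID=$(echo "$RESPONSE" | jq -r '.id')
echo "$LB_ID"   # prints 6208ec73-2b0e-4b60-b449-7f6bd72fd522
```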

3 . Create a new backend. This tutorial assumes that a web application is running on port 80 of the backend machines. Replace <REPLACE-BY-IP-OF-YOUR-SERVER1> and <REPLACE-BY-IP-OF-YOUR-SERVER2> with the IPs of your backend servers:

curl -s -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs/$LB_ID/backends" -H "accept: application/json" -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
-d "{\"forward_port\":80,\"forward_port_algorithm\":\"roundrobin\",\"forward_protocol\":\"tcp\",\"health_check\":{\"check_delay\":2000,\"check_max_retries\":3,\"check_timeout\":1000,\"port\":80,\"tcp_config\":{}},\"name\":\"main backend\",\"send_proxy_v2\":false,\"server_ip\":[\"<REPLACE-BY-IP-OF-YOUR-SERVER1>\", \"<REPLACE-BY-IP-OF-YOUR-SERVER2>\"]}" | jq .

4 . A JSON output similar to the first request appears. Copy the value of the first line starting with id and set it as a variable:

export BACKEND_ID="<REPLACE-BY-ID-OF-YOUR-BACKEND>"
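Escaping every quote in the -d string of step 3 by hand is error-prone. As an alternative sketch, the same payload can be assembled with jq -n; the health-check intervals are in milliseconds, and the server IPs below are placeholder documentation addresses:

```shell
# Build the backend payload with jq instead of hand-escaped quotes.
# The IPs are placeholders; replace them with your backend servers' IPs.
PAYLOAD=$(jq -n \
  --arg ip1 "198.51.100.10" --arg ip2 "198.51.100.11" \
  '{forward_port: 80, forward_port_algorithm: "roundrobin", forward_protocol: "tcp",
    health_check: {check_delay: 2000, check_max_retries: 3, check_timeout: 1000,
                   port: 80, tcp_config: {}},
    name: "main backend", send_proxy_v2: false, server_ip: [$ip1, $ip2]}')

# The result can then be passed directly to curl:
# curl -s -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs/$LB_ID/backends" \
#   -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d "$PAYLOAD"
```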

5 . Create the certificate by calling the API endpoint, after replacing <YOUR-CERTIFICATE-NAME> with a friendly name for the certificate and <REPLACE-BY-YOUR-DOMAIN-NAME> with your domain name (e.g. lb.example.com):

curl -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs/$LB_ID/certificates" -H "accept: application/json" -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d "{\"name\":\"<YOUR-CERTIFICATE-NAME>\",\"letsencrypt\":{\"common_name\":\"<REPLACE-BY-YOUR-DOMAIN-NAME>\"}}"

6 . The certificate details are presented in the form of a JSON list. Copy the value of the first line starting with id and set it as a variable:

export CERT_ID="<REPLACE-BY-ID-OF-YOUR-CERTIFICATE>"

7 . Create a new frontend by specifying the IDs of the Load Balancer, an existing backend, and the certificate. Then specify the inbound_port (port 443 for the default HTTPS port) on which the frontend will listen for incoming connections:

curl -X POST "https://api.scaleway.com/lb/v1/regions/$REGION/lbs/$LB_ID/frontends" -H "accept: application/json" -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
-d "{\"backend_id\":\"$BACKEND_ID\",\"inbound_port\":443,\"name\":\"main frontend\",\"timeout_client\":5000,\"certificate_id\": \"$CERT_ID\"}"

8 . The Load Balancer is now up and configured with a Let’s Encrypt SSL/TLS certificate. It accepts HTTPS connections on port 443 and terminates the HTTPS sessions on the Load Balancer before connecting to the backends via a plain HTTP connection.

For more information about the configuration of a Load Balancer via the API, refer to the API documentation.
