Scaleway Load Balancer

Load Balancer Overview

Load Balancers are highly available and fully managed instances which allow you to distribute the workload among your various services. They ensure the scaling of your applications while securing their continuous availability, even in the event of heavy traffic. They are commonly used to improve the performance and reliability of websites, applications, databases and other services by distributing the workload across multiple servers.

Load Balancers are built to use an Internet-facing frontend server to shuttle information to and from backend servers. They provide a dedicated public IP address and automatically forward each request to one of the backend servers based on resource availability. Backend servers may include Scaleway Cloud instances as well as Dedibox dedicated servers.
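To make this flow concrete, here is a minimal, illustrative sketch of the principle in Python (not Scaleway’s implementation): a frontend socket listens on a port and relays each incoming connection to the next server in a backend pool, round-robin. All addresses and ports are placeholders.

    import itertools
    import socket
    import threading

    # Placeholder values: in a real setup the backends would be the addresses of
    # your instances or Dedibox servers, and the frontend would listen on 80/443.
    BACKENDS = [("10.0.0.10", 8080), ("10.0.0.11", 8080)]
    FRONTEND_PORT = 8000

    pool = itertools.cycle(BACKENDS)  # naive round-robin backend selection

    def pipe(src, dst):
        """Copy bytes from one socket to the other until the connection closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass  # the other direction was closed first
        finally:
            dst.close()

    def handle(client):
        backend = socket.create_connection(next(pool))
        # Relay traffic in both directions between the client and the backend.
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    with socket.create_server(("0.0.0.0", FRONTEND_PORT)) as frontend:
        while True:
            client, _ = frontend.accept()
            handle(client)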

Requirements

Core Concepts

Master and Backup Load Balancer: Each Load Balancer is implemented as two instances: a master instance and a backup instance, which together provide active-passive high availability. Both instances are pre-configured, meaning that if the master fails, the backup is ready to handle the traffic. The master and backup run on different hardware clusters so that they do not share physical resources, minimizing the risk of a simultaneous failure.

Frontends: Each Load Balancer is configured with one or several frontends. Each frontend listens on a configured port and forwards the incoming traffic to one or several backends.

Backends: A backend is a set of servers that receives forwarded requests.

High Availability: A high availability (HA) setup is an infrastructure without a single point of failure. It protects against server failure by adding redundancy to every layer of your architecture.

Highly Available IP address: A highly available IP address which is, by default, routed to the master Load Balancer instance. In the event of a master instance failure, this address is automatically re-routed to the backup instance. A highly available IP address is created automatically when a Load Balancer is created. It can also be kept when the Load Balancer is deleted and re-used later.

Health Checks: A Load Balancer should only forward traffic to “healthy” backend servers. Health checks are configured for every backend according to the protocol it serves. Once configured, the Load Balancer runs them automatically at regular intervals; a client never runs a health check manually.
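As an illustration of what a TCP health check does (a conceptual sketch only, not the Load Balancer’s internal code), the probe below attempts a TCP connection to each backend at a fixed interval and treats servers that refuse the connection or time out as unhealthy. The addresses, interval and timeout are placeholders.

    import socket
    import time

    BACKENDS = [("10.0.0.10", 8080), ("10.0.0.11", 8080)]  # placeholder servers
    CHECK_INTERVAL = 5  # seconds between check rounds
    CHECK_TIMEOUT = 2   # seconds before a probe is considered failed

    def tcp_health_check(address):
        """Return True if a TCP connection can be established within the timeout."""
        try:
            with socket.create_connection(address, timeout=CHECK_TIMEOUT):
                return True
        except OSError:
            return False

    while True:
        healthy = [backend for backend in BACKENDS if tcp_health_check(backend)]
        # Only the servers in `healthy` would receive forwarded traffic.
        print("healthy backends:", healthy)
        time.sleep(CHECK_INTERVAL)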

Sticky Session: Enables the Load Balancer to bind a user’s session to a specific instance. This ensures that all subsequent requests from that user are sent to the same instance for as long as at least one session remains active.
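The principle behind cookie-based stickiness can be sketched as follows (illustrative only; the cookie name lb_backend and the backend addresses are placeholders): the first response pins the client to a backend via a cookie, and later requests carrying that cookie are routed to the same server.

    import random

    BACKENDS = ["10.0.0.10:8080", "10.0.0.11:8080"]  # placeholder backend pool
    COOKIE_NAME = "lb_backend"                       # placeholder cookie name

    def choose_backend(request_cookies):
        """Return (backend, set_cookie) honouring an existing sticky cookie."""
        pinned = request_cookies.get(COOKIE_NAME)
        if pinned in BACKENDS:
            # The client already has a session pinned to a live backend.
            return pinned, None
        # First request (or the pinned backend is gone): pick one and pin it.
        backend = random.choice(BACKENDS)
        return backend, f"{COOKIE_NAME}={backend}"

    # First request: no cookie, so a backend is chosen and a cookie is emitted.
    backend, cookie = choose_backend({})
    print(backend, cookie)

    # Follow-up request: the cookie keeps the client on the same backend.
    print(choose_backend({COOKIE_NAME: backend}))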

Operation Procedures

Creating a Load Balancer

In the Network section of the side menu, click Load Balancer. If you have not yet created a Load Balancer, the product presentation is displayed.

1 . Click Create a Load Balancer.

The creation page displays.

create load balancer page

2 . Enter the Load Balancer name and description. Optionally, you can assign tags to organize your Load Balancer.

3 . Choose the geographical Availability Zone in which your Load Balancer will be deployed.

list of availability zones

Currently we provide the following Availability Zones:

  • PAR1: Paris 1, France.
  • PAR2: Paris 2, France (innovative and sustainable availability zone).
  • AMS1: Amsterdam, The Netherlands.
  • WAW1: Warsaw, Poland.

4 . Select a Load Balancer type.

5 . Select an IP address that will be assigned to your Load Balancer. If left empty, a new IP address is allocated automatically.

6 . Configure your Frontend and Backend.

Frontend rules include:

  • Frontend rule name
  • Port

Backend rules include:

  • Backend rule name
  • Protocol: TCP or HTTP
  • Port
  • Proxy
  • Health check type and health check options
  • Sticky session with the cookie name associated
  • Server IP

7 . Click Create a Load Balancer.
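The same configuration can also be created programmatically. The sketch below uses Python with the requests library against the Load Balancer API; the endpoint path, zone identifier, Load Balancer type and field names are assumptions based on the public API documentation and should be verified against the API reference before use.

    import requests  # pip install requests

    # Assumptions: the endpoint path, zone identifier, type and field names are
    # placeholders to be checked against the Load Balancer API documentation.
    API_URL = "https://api.scaleway.com/lb/v1/zones/fr-par-1/lbs"
    HEADERS = {"X-Auth-Token": "SCW_SECRET_KEY"}  # placeholder secret key

    payload = {
        "name": "my-load-balancer",
        "description": "Load Balancer for my web application",
        "project_id": "PROJECT_ID",  # placeholder project identifier
        "type": "LB-S",              # placeholder Load Balancer type
        "tags": ["web", "production"],
    }

    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    lb = response.json()
    print("Load Balancer ID:", lb.get("id"), "IP(s):", lb.get("ip"))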

Adding Frontends and Backends

Once a Load Balancer is created, it is added to the Load Balancer list. To add a new rule or edit an existing one, click the Load Balancer name, then More Info.


The Load Balancer information page displays.

Adding or Editing Frontends:

You can edit the frontend’s name, its corresponding backend and the port directly on the frontend list page. You can also delete existing frontends.

1 . Click the plus (+) icon. The creation window displays.

2 . Configure the new frontend. The fields are identical to the ones displayed in the Load Balancer creation page.

3 . Click Create Frontend

Adding or Editing Backends:

You can edit the backends directly on the backend list page. You can also delete existing backends.

1 . Click on the pen icon to edit a backend.

2 . Configure the backend. The fields are identical to the ones displayed in the Load Balancer creation page.

3 . Click Create a Backend
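Frontends and backends can likewise be created through the API. As in the previous sketch, the endpoint paths and field names below are assumptions to be checked against the API documentation; the fields mirror the rules listed in the creation form above.

    import requests  # pip install requests

    # Assumptions: paths and field names are placeholders; verify them against
    # the Load Balancer API documentation before use.
    BASE = "https://api.scaleway.com/lb/v1/zones/fr-par-1"
    HEADERS = {"X-Auth-Token": "SCW_SECRET_KEY"}  # placeholder secret key
    LB_ID = "LOAD_BALANCER_ID"                    # placeholder Load Balancer ID

    # 1. Create a backend: protocol, port, health check and server IPs mirror
    #    the backend rules listed in the creation form.
    backend = requests.post(
        f"{BASE}/lbs/{LB_ID}/backends",
        headers=HEADERS,
        json={
            "name": "web-backend",
            "forward_protocol": "http",
            "forward_port": 8080,
            "server_ip": ["10.0.0.10", "10.0.0.11"],  # placeholder servers
            "health_check": {"port": 8080, "tcp_config": {}},  # placeholder check
        },
    ).json()

    # 2. Create a frontend listening on port 80 and pointing at that backend.
    frontend = requests.post(
        f"{BASE}/lbs/{LB_ID}/frontends",
        headers=HEADERS,
        json={"name": "web-frontend", "inbound_port": 80, "backend_id": backend["id"]},
    ).json()

    print("backend:", backend.get("id"), "frontend:", frontend.get("id"))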

Deleting a Load Balancer

1 . Click the Load Balancer you want to delete.

2 . Scroll down and click Delete Load Balancer.

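Deletion can also be performed through the API. The endpoint and the release_ip parameter below follow the same assumed conventions as the earlier sketches and should be verified against the API documentation; leaving release_ip set to false preserves the highly available IP address for later re-use, as described in the Core Concepts section.

    import requests  # pip install requests

    # Assumption: the deletion endpoint mirrors the creation path used above.
    BASE = "https://api.scaleway.com/lb/v1/zones/fr-par-1"
    HEADERS = {"X-Auth-Token": "SCW_SECRET_KEY"}  # placeholder secret key
    LB_ID = "LOAD_BALANCER_ID"                    # placeholder Load Balancer ID

    # release_ip=false keeps the highly available IP so it can be re-used later
    # (parameter name is an assumption to check against the API reference).
    response = requests.delete(
        f"{BASE}/lbs/{LB_ID}", headers=HEADERS, params={"release_ip": "false"}
    )
    response.raise_for_status()
    print("Load Balancer deleted, status:", response.status_code)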

Load Balancer Limitations

The following technical limitations apply when using the Load Balancer product:

  • TLS/SSL offloading is not supported. To balance HTTPS sessions, use TCP mode.

  • Your external highly available IP address can only be IPv4. However, it is possible to use IPv6 between the Load Balancer and the backend servers.

  • Each Load Balancer supports only one highly available frontend IP.

For further information, refer to the Load Balancer FAQ and API documentation.
