Scaleway Load Balancer

Load Balancer Overview

Load Balancers are highly available and fully managed instances which allow you to distribute workloads across your various services. They ensure the scaling of all your applications while securing their continuous availability, even in the event of heavy traffic. They are commonly used to improve the performance and reliability of websites, applications, databases, and other services by distributing the workload across multiple servers.

Load Balancers are built to use Internet-facing frontend servers to shuttle information to and from backend servers. They provide a dedicated public IP address and forward requests automatically to one of the backend servers based on resource availability. Backend servers may include Scaleway Cloud instances, as well as Dedibox dedicated servers.
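The distribution behavior described above can be sketched as a simple round-robin selection over a pool of backend servers. This is a minimal illustration of the concept, not Scaleway's actual forwarding logic, and the backend addresses are hypothetical:

```python
from itertools import cycle

# Hypothetical pool of backend servers behind one public IP address.
BACKENDS = ["10.0.0.10:80", "10.0.0.11:80", "10.0.0.12:80"]

def make_round_robin(backends):
    """Return a function that yields the next backend for each incoming request."""
    pool = cycle(backends)
    return lambda: next(pool)

next_backend = make_round_robin(BACKENDS)
# Three consecutive requests are spread evenly across the three servers.
print([next_backend() for _ in range(3)])
```

A real Load Balancer additionally weighs resource availability and skips unhealthy servers, but the rotation principle is the same.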


Core Concepts

Master and Backup Load Balancer: Each Load Balancer is implemented with two instances: a master instance and a backup instance, which provide active-passive high availability. These instances are pre-configured, which means that if the master fails, the backup is ready to handle the traffic. The master and backup run on different hardware clusters to minimize the risk of a simultaneous failure and to make sure that they do not share physical resources.
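The active-passive pattern boils down to a small routing decision: the highly available IP address points at the master while it is alive and falls back to the backup otherwise. A conceptual sketch, with placeholder instance names:

```python
def resolve_ha_ip(master_alive, master="master-lb", backup="backup-lb"):
    """Route the highly available IP to the master while it responds;
    otherwise re-route it to the pre-configured backup instance."""
    return master if master_alive else backup

print(resolve_ha_ip(True))   # normal operation: traffic goes to the master
print(resolve_ha_ip(False))  # master failure: traffic re-routed to the backup
```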

Frontends: Each Load Balancer is configured with one or several frontends. Each frontend listens on a configured port and has one or more backends to which the traffic is forwarded.

Backends: A backend is a set of servers that receives forwarded requests.

High Availability: A high availability (HA) setup is an infrastructure without a single point of failure. It prevents a server failure by adding redundancy to every layer of your architecture.

Highly Available IP Address: A highly available IP address which is, by default, routed to the master Load Balancer instance. In the event of a master instance failure, this address is automatically re-routed to the backup instance. A highly available IP address is created automatically by default when a Load Balancer is created. It can also be retained when a Load Balancer is deleted and re-used later.

Health Checks: Load balancers should only forward traffic to “healthy” backend servers. To monitor the health of a backend server, health checks regularly attempt to connect to backend servers using the protocol and port defined by the forwarding rules to ensure that servers are listening.
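A basic TCP health check simply verifies that a backend accepts a connection on the forwarding port. The sketch below illustrates the idea with Python's standard library; it is a conceptual example, not Scaleway's health-check implementation:

```python
import socket

def tcp_health_check(host, port, timeout=2.0):
    """Return True if the backend accepts a TCP connection on the
    forwarding port within the timeout (i.e. the server is listening)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A load balancer runs checks like this periodically for every backend
# server and stops forwarding traffic to any server that fails, e.g.:
#   healthy = [b for b in backends if tcp_health_check(*b)]
```

HTTP health checks work the same way but additionally send a request and inspect the response status.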

Sticky Session: Enables the Load Balancer to bind a user’s session to a specific instance. This ensures that all subsequent requests from the user are sent to the same instance, as long as there is at least one active session.
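Cookie-based stickiness can be sketched as follows: the first response records the chosen backend in a cookie, and later requests that present the cookie are routed back to that same backend. The cookie name and backend addresses below are hypothetical, and the backend choice is deliberately simplistic:

```python
BACKENDS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]
COOKIE_NAME = "lb_sticky"  # hypothetical cookie name, configured per backend rule

def pick_backend(cookies):
    """Reuse the backend recorded in the sticky-session cookie; otherwise
    pick one and record it so subsequent requests reach the same instance."""
    backend = cookies.get(COOKIE_NAME)
    if backend not in BACKENDS:              # first request of this session
        backend = BACKENDS[0]                # simplistic choice for the sketch
        cookies = {**cookies, COOKIE_NAME: backend}
    return backend, cookies

# The first request sets the cookie; later requests stick to that backend.
first, cookies = pick_backend({})
again, _ = pick_backend(cookies)
assert first == again
```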

Operation Procedures

Creating a Load Balancer

In the Network section of the side menu, click Load Balancer. If you do not have a Load Balancer already created, the product presentation is displayed.

1. Click Create a Load Balancer.

Create New Load Balancer

2. The creation page displays. Enter the Load Balancer name and description. Optionally, you can assign tags to organize your Load Balancer.

3. Choose the region in which your Load Balancer will be deployed. Currently, Load Balancers are available in the Amsterdam and Paris regions.

4. Select an IP address to assign to your Load Balancer. If left empty, a new IP address is allocated automatically.

5. Configure your Frontend and Backend.

Frontend rules include:

  • Frontend rule name
  • Protocol: TCP or HTTP
  • Port

Backend rules include:

  • Backend rule name
  • Protocol: TCP or HTTP
  • Port
  • Proxy
  • Health check type and Health check option
  • Sticky session with the cookie name associated
  • Server IP

To add a new rule, click Add new rule.

6. Click Create a Load Balancer.
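The same configuration can also be submitted through the Load Balancer API. The sketch below only assembles an illustrative JSON request body mirroring the console fields above; the field names are assumptions, not the documented Scaleway API schema, so refer to the API documentation for the exact format:

```python
import json

def build_lb_request(name, description, region, tags=None,
                     frontends=None, backends=None):
    """Assemble a JSON body mirroring the console creation form.
    Field names here are illustrative placeholders, not the real schema."""
    return json.dumps({
        "name": name,
        "description": description,
        "region": region,              # e.g. an Amsterdam or Paris region
        "tags": tags or [],
        "frontends": frontends or [],  # each: name, protocol (TCP/HTTP), port
        "backends": backends or [],    # each: name, protocol, port, health
                                       # check, sticky session, server IPs
    })

body = build_lb_request("web-lb", "balances the web tier", "fr-par")
```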

Adding Frontends and Backends

Once a Load Balancer is created, it is added to the Load Balancer list. To add a new rule or edit an existing one, click the Load Balancer name, then More Info.


The Load Balancer information page displays.

Click on Frontend rules or Backend rules depending on the one you want to add or edit.

Frontend rules

You can edit the rule’s name, its corresponding backend rule, the protocol, and the port directly on the rule list page. You can also delete the rule.

To add a new frontend rule:

1. Click Add frontend rule. The creation page displays.

2. Enter the rule options. The fields are identical to those displayed on the Load Balancer creation page.

3. Click Create a frontend rule.

Backend rules

You can edit:

  • The rule’s name
  • The protocol
  • The port
  • The proxy
  • The health check type
  • The health check name
  • The sticky session
  • The cookie name
  • The server IP

To add a new backend rule:

1. Click Add backend rule. The creation page displays.

2. Enter the rule options. The fields are identical to those displayed on the Load Balancer creation page.

3. Click Create a backend rule.

Deleting a Load Balancer

On the Load Balancer overview page, scroll down to Delete Load Balancer.

Delete Load Balancer

Load Balancer Limitations

The following technical limitations apply when using the Load Balancer product:

  • TLS/SSL offloading is not supported. To balance HTTPS sessions, use TCP mode.

  • Your external highly available IP address can only be IPv4. However, it is possible to use IPv6 between the Load Balancer and backend servers.

  • Each Load Balancer supports only one highly available frontend IP.

For further information, refer to the Load Balancer FAQ and API documentation.
