
Load Balancers - Concepts

ACL

An Access Control List (ACL) is a list of permissions that control which traffic (based on source and destination IPs, or HTTPS information) is allowed to pass through your Load Balancer. This allows you to build extra security into your Load Balancer, and permit or deny access to the application(s) behind it. Learn more about ACLs in our dedicated how-to.
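As a rough sketch of the idea, the snippet below evaluates an ordered list of allow/deny rules against a client's source IP. The rule fields and the default-deny fallback are illustrative only, not the actual Load Balancer ACL format:

```python
# Illustrative ACL evaluation: rules are checked in order, first match wins.
import ipaddress

ACL = [
    {"action": "deny",  "source": "203.0.113.0/24"},   # block this subnet
    {"action": "allow", "source": "0.0.0.0/0"},         # allow everything else
]

def is_allowed(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    for rule in ACL:
        if addr in ipaddress.ip_network(rule["source"]):
            return rule["action"] == "allow"
    return False                                         # no match: deny by default

print(is_allowed("203.0.113.10"))  # False
print(is_allowed("198.51.100.7"))  # True
```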

Backends

Each Load Balancer is configured with one or several backends. A backend is a set of servers that receives requests forwarded from the frontend. You can add and manage backends via the console.

Frontends

Each Load Balancer is configured with one or several frontends. Frontends listen on a configured port, and forward requests to one or several backends. You can add and manage frontends via the console.
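The relationship between frontends and backends can be pictured with a minimal data model; the field names below are illustrative, not the Load Balancer configuration format:

```python
# Illustrative model: a frontend listens on one port and forwards to a
# backend, which is simply a named group of servers.
from dataclasses import dataclass, field

@dataclass
class Backend:
    name: str
    servers: list[str] = field(default_factory=list)   # backend server IPs

@dataclass
class Frontend:
    port: int               # port the frontend listens on
    backend: Backend        # where matching requests are forwarded

web_backend = Backend(name="web", servers=["10.0.0.10", "10.0.0.11"])
frontend = Frontend(port=443, backend=web_backend)
```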

Flexible IP address

Flexible IP addresses are public IP addresses that you can hold independently of any Load Balancer. By default, each Load Balancer is created with a flexible IP address, on which its frontend listens. If the Load Balancer fails, a replica Load Balancer is immediately spawned and deployed, and the flexible IP address is automatically rerouted to this replica.

Health checks

Load Balancers should only forward traffic to “healthy” backend servers. To monitor the health of a backend server, health checks regularly attempt to connect to backend servers using the protocol and port defined by the forwarding rules, to ensure that servers are listening. Various protocols for health checks are available, including HTTP, HTTPS, MySQL, and more.
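For example, a basic TCP health check simply verifies that a backend server accepts connections on the configured port. A minimal sketch, with placeholder addresses:

```python
# Illustrative TCP health check: a server is healthy if the connection succeeds.
import socket

def tcp_health_check(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True          # the server is listening on the port
    except OSError:
        return False             # connection refused or timed out

backends = [("10.0.0.10", 80), ("10.0.0.11", 80)]       # placeholder addresses
healthy = [b for b in backends if tcp_health_check(*b)]
```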

High availability

A high availability (HA) setup is an infrastructure without a single point of failure. It protects against server failure by adding redundancy to every layer of your architecture.

Load Balancers

Load Balancers are highly available and fully managed instances that allow you to distribute workloads across multiple servers. They allow your applications to scale while ensuring continuous availability, even in the event of heavy traffic. They are commonly used to improve the performance and reliability of websites, applications, databases, and other services.
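One common way to distribute such a workload is round-robin, where each new request goes to the next server in turn. A minimal sketch with placeholder addresses:

```python
# Illustrative round-robin distribution of requests across backend servers.
import itertools

backends = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]   # placeholder addresses
next_backend = itertools.cycle(backends)

for request_id in range(6):
    server = next(next_backend)      # each request goes to the next server in turn
    print(f"request {request_id} -> {server}")
```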

Protocol

A protocol is a standard format for communication over a network. When you configure your Load Balancer’s backend, you choose a protocol (HTTP, HTTPS, or TCP) which it uses to send and receive data.

Proxy protocol

Proxy protocol is an internet protocol used to transfer connection information from the client (e.g. the client’s IP address) through the Load Balancer and on to the destination server.
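For instance, version 1 of the PROXY protocol prepends a single human-readable line to the forwarded connection, so the backend can read the original client address before the application data. A minimal sketch with placeholder addresses and ports:

```python
# Illustrative PROXY protocol v1 header, prepended to a forwarded connection.
def proxy_v1_header(client_ip, client_port, lb_ip, lb_port):
    return f"PROXY TCP4 {client_ip} {lb_ip} {client_port} {lb_port}\r\n"

header = proxy_v1_header("198.51.100.7", 51234, "10.0.0.5", 80)
# The backend reads this line before the application data:
# "PROXY TCP4 198.51.100.7 10.0.0.5 51234 80\r\n"
print(header, end="")
```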

S3 failover

S3 failover is a feature that allows you to redirect users to a static website hosted on Scaleway Object Storage if none of your Load Balancer’s backends are available.

SSL bridging

SSL bridging removes SSL-based encryption from HTTPS traffic coming into the Load Balancer (as with SSL offloading), but then initiates a new SSL connection to re-encrypt traffic between the Load Balancer and the backend servers.

SSL offloading

SSL offloading removes SSL-based encryption from HTTPS traffic coming into the Load Balancer.

SSL passthrough

SSL passthrough is the simplest way to handle HTTPS traffic on a Load Balancer. As the name suggests, traffic is simply passed through the Load Balancer without being decrypted.

Sticky session

A sticky session enables the Load Balancer to bind a user’s session to a specific Instance. This ensures that all subsequent sessions from the user are sent to the same Instance, for as long as there is at least one active session.
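A minimal sketch of the idea, where a dictionary stands in for the Load Balancer’s session table and the backend addresses are placeholders:

```python
# Illustrative sticky-session mapping: once a session is assigned to a
# backend, later requests with the same session ID keep going there.
backends = ["10.0.0.10", "10.0.0.11"]    # placeholder addresses
session_to_backend = {}

def pick_backend(session_id: str) -> str:
    if session_id not in session_to_backend:
        # First request of the session: choose a backend (here, by hash).
        session_to_backend[session_id] = backends[hash(session_id) % len(backends)]
    return session_to_backend[session_id]

print(pick_backend("user-42"))   # always the same backend for this session
print(pick_backend("user-42"))
```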