Configuring your Load Balancer

Load Balancer Overview

Load Balancers allow you to distribute traffic and workload across multiple backend servers. They make it easy to scale your applications while ensuring continuous availability even in the event of heavy traffic. Each Load Balancer that you create is a highly available and fully managed Instance, consisting of one or more frontends and one or more backends. You can configure your Load Balancer according to your needs.

In this document, we explain some of the different configurations available when setting up your Load Balancer, and give examples of how to configure its frontends and backends for different use cases.

Configuring a frontend

The frontend listens on the port you specify, and forwards received requests to the backend(s).

When creating a frontend, the only configurable settings are its name and the port it listens on.

Choosing a port

For HTTP, a frontend will typically listen on port 80, while for HTTPS it will typically listen on port 443.

After creating your Load Balancer, you can create multiple frontends to listen on different ports, and connect them to different backends as required. You can also set up custom ACL rules and routes for each frontend.

Configuring a backend

The configuration that you choose for your Load Balancer’s backend determines the manner in which it transfers requests to its associated backend servers.

A number of configuration settings are available.

Note:

Here, the term “backend” refers to a set of configurations and target backend servers that determine how the Load Balancer forwards traffic. You can create one or more backends for your Load Balancer. The term “backend server” designates the target servers to which the Load Balancer forwards traffic. Each backend must specify one or more backend servers (via their IP addresses) to forward to. All backend servers in a given backend should be able to serve the same requests in the same way.
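
As a rough illustration of this distinction, the sketch below (in Python, with purely hypothetical names that do not correspond to any real API object) models a backend as a bundle of settings plus the list of backend server IPs it forwards to:

from dataclasses import dataclass, field

# Hypothetical model, for illustration only: a "backend" bundles forwarding
# settings with the backend servers (IP addresses) it targets.
@dataclass
class Backend:
    name: str
    protocol: str                      # "HTTP" or "TCP"
    port: int                          # port used to reach the backend servers
    server_ips: list[str] = field(default_factory=list)   # the backend servers

# One Load Balancer can have several backends, each with its own servers.
web_backend = Backend("web", "HTTP", 80, ["10.0.0.10", "10.0.0.11"])
api_backend = Backend("api", "HTTP", 8080, ["10.0.0.20"])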

Choosing a protocol and port

The Load Balancer initiates connections to its backend servers using the protocol and port you define.

  • HTTP protocol (typically port 80, or port 443 if also using TLS): The Load Balancer will initiate an HTTP connection with the backend server. Traffic will be forwarded without encryption, unless you also enable TLS (see below).
  • TCP protocol: The Load Balancer will attempt to initiate a TCP (layer 4) connection to the backend server.

TLS encryption

Selecting TLS encryption encrypts connections between the Load Balancer and the backend server(s). Note that the backend server should have its own SSL/TLS certificate.

Choosing a Verify Certificate setting

If you select TLS encryption, you are prompted to choose a Verify Certificate setting.

When the Load Balancer initiates an encrypted connection to a backend server, the backend server presents its certificate.

  • When Verify Certificate is selected, the Load Balancer checks the backend server’s certificate and only connects if it is valid.
  • When Verify Certificate is unselected, the Load Balancer does not check the backend server’s certificate, connecting to the backend server regardless of the certificate’s validity.

This setting mostly concerns those using self-signed certificates on the backend server(s).

Note that the same Verify Certificate setting you select here will also be used for health checks, if you choose HTTPS for your health check method.
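
To illustrate the difference in behavior, the following sketch (using Python's standard ssl module against a placeholder backend address) performs the equivalent of both settings: with verification enabled, the TLS handshake fails if the backend server's certificate is invalid or untrusted, while with verification disabled the connection is made regardless.

import socket
import ssl

BACKEND_HOST, BACKEND_PORT = "203.0.113.10", 443    # placeholder backend server

# Equivalent of "Verify Certificate" selected: the certificate must be valid.
verify_ctx = ssl.create_default_context()
# verify_ctx.load_verify_locations("backend-ca.pem")  # needed for a self-signed CA

# Equivalent of "Verify Certificate" unselected: connect whatever the certificate.
no_verify_ctx = ssl.create_default_context()
no_verify_ctx.check_hostname = False
no_verify_ctx.verify_mode = ssl.CERT_NONE

for label, ctx in [("verify", verify_ctx), ("no-verify", no_verify_ctx)]:
    try:
        with socket.create_connection((BACKEND_HOST, BACKEND_PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=BACKEND_HOST) as tls:
                print(label, "connected, cipher:", tls.cipher())
    except (ssl.SSLError, OSError) as exc:
        print(label, "failed:", exc)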

Choosing a proxy protocol

Proxy protocol enables the original client IP address to be passed to the backend server in a standardized way. However, in order to work, the backend server must support the selected proxy protocol.

  • No proxy protocol: Disable proxy protocol.
  • Proxy protocol v1: Version one (text format).
  • Proxy protocol v2: Version two (binary format).
  • Proxy protocol v2-ssl: Version two with SSL information extension added, which provides information on SSL connection settings originally sent by the client.
  • Proxy protocol v2-ssl-cn: Version two with the SSL information extension added as above, as well as the common name from the subject of the client certificate (if any).
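
For reference, version one of the proxy protocol is a single human-readable line sent ahead of the application data. The sketch below (Python, with made-up addresses) builds and parses such a header; version two uses an equivalent binary encoding and is not shown.

import socket

def proxy_v1_header(client: socket.socket, backend_addr: tuple) -> bytes:
    # Build a PROXY protocol v1 header for an accepted client connection, so
    # the backend server learns the original client IP address and port.
    src_ip, src_port = client.getpeername()[:2]
    dst_ip, dst_port = backend_addr
    family = "TCP6" if ":" in src_ip else "TCP4"
    return f"PROXY {family} {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode()

def strip_proxy_v1(data: bytes):
    # On the backend server side, split the header from the application data.
    header, _, rest = data.partition(b"\r\n")
    _, family, src_ip, dst_ip, src_port, dst_port = header.decode().split(" ")
    return {"family": family, "src": (src_ip, int(src_port)),
            "dst": (dst_ip, int(dst_port))}, rest

info, payload = strip_proxy_v1(
    b"PROXY TCP4 198.51.100.7 10.0.0.10 51234 80\r\nGET / HTTP/1.1\r\n\r\n")
print(info["src"])    # ('198.51.100.7', 51234)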

Choosing a health check method

Load Balancers should only forward traffic to “healthy” backend servers. To monitor the health of a backend server, health checks regularly attempt to communicate with backend servers using the defined protocol to ensure that servers are up and listening. Various protocols for health checks are available:

HTTP: When selecting HTTP health checks, you are prompted to choose a method (GET, POST, or PUT), the URI to be targeted, and the response code that signals a successful health check.
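
For example, a health check configured with the GET method, the URI /health, and expected response code 200 behaves roughly like this sketch (Python, with a placeholder backend server address):

import urllib.error
import urllib.request

URL = "http://10.0.0.10/health"      # placeholder backend server and URI
EXPECTED_STATUS = 200

def http_health_check(url: str, expected: int, timeout: float = 2.0) -> bool:
    # Send a GET request and report success only for the expected status code.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expected
    except urllib.error.HTTPError as err:     # got a response with an error code
        return err.code == expected
    except (urllib.error.URLError, OSError):  # no usable response at all
        return False

print("healthy" if http_health_check(URL, EXPECTED_STATUS) else "unhealthy")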

HTTPS: The same method and fields as for an HTTP health check apply, but data transmission is encrypted. If you chose HTTP as the main protocol for the Load Balancer’s backend and activated TLS, the Verify Certificate setting you chose there is inherited for health checks. If you chose a different protocol for the Load Balancer’s backend, the backend server’s certificate will not be verified during health checks. See the dedicated Verify Certificate section for more information.

MYSQL: For MYSQL health checks, you are prompted to select the database username. The MYSQL health check sends two MySQL packets to the server: first a Client Authentication packet and then a QUIT packet to properly close the MySQL session. The received MySQL Handshake Initialization packet (and/or Error packet) is then parsed. This basic but useful test does not produce errors or aborted connections on the server. However, bear in mind that it does not check database presence or database consistency (an external check e.g. with xinetd would be required for this).

Tip:

The user specified for the health check must be unlocked and authorized without a password. To create a basic limited user in MySQL with optional resource limits:

CREATE USER '<username>'@'<ip_of_haproxy|network_of_haproxy/netmask>'
/*!50701 WITH MAX_QUERIES_PER_HOUR 1 MAX_UPDATES_PER_HOUR 0 */
/*M!100201 MAX_STATEMENT_TIME 0.0001 */;
Important:

This health check method requires MySQL >=3.22. For older versions, we recommend using a TCP health check.

PGSQL: For PGSQL health checks, you are prompted to select the database username. The check sends a PostgreSQL StartupMessage and waits for either an Authentication request or an ErrorResponse message. This basic but useful test does not produce errors or aborted connections on the server. This check is similar to the MySQL health check.

Important:

There is a known bug with PGSQL health checks when the PGSQL server version is higher than 14. A fix is currently underway. In the meantime, we recommend using a TCP health check for PGSQL servers of version 14 or higher.

LDAP: No additional settings are required for LDAP health checks. LDAP health checks test the connection to the LDAP server by sending an LDAPv3 anonymous simple bind message. This checks that the LDAP service is actually up, and not just that a TCP session can be established.
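
As a rough sketch of this kind of probe, the snippet below (Python, placeholder address) sends a hard-coded LDAPv3 anonymous simple bind request and only reports the server as healthy if it answers with an LDAP message, rather than merely accepting the TCP connection.

import socket

# BER encoding of an LDAPv3 anonymous simple bind request:
# messageID 1, bindRequest, protocol version 3, empty bind DN, empty password.
LDAP_BIND_REQUEST = bytes.fromhex("300c020101600702010304008000")

def ldap_health_check(host: str, port: int = 389, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(LDAP_BIND_REQUEST)
            reply = sock.recv(1024)
            # A live LDAP server answers with an LDAPMessage (a BER SEQUENCE, 0x30).
            return reply.startswith(b"\x30")
    except OSError:
        return False

print(ldap_health_check("10.0.0.30"))    # placeholder LDAP backend server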

Redis: No additional settings are required for Redis health checks. A Redis PING command is sent to the server, and the response is checked for the +PONG reply. This confirms that the server is correctly speaking the Redis protocol, rather than just accepting the TCP connection.
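
A minimal sketch of this check (Python, placeholder address) could look like the following:

import socket

def redis_health_check(host: str, port: int = 6379, timeout: float = 2.0) -> bool:
    # Send a PING and accept only the +PONG reply.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\r\n")                 # inline Redis command
            return sock.recv(64).startswith(b"+PONG")
    except OSError:
        return False

print(redis_health_check("10.0.0.40"))    # placeholder Redis backend server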

TCP: No additional settings are required for TCP health checks. This check consists of a connection attempt that either fails or succeeds.
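
In other words, a TCP health check is equivalent to something like this sketch (Python, placeholder address):

import socket

def tcp_health_check(host: str, port: int, timeout: float = 2.0) -> bool:
    # A TCP health check is simply a connection attempt that succeeds or fails.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_health_check("10.0.0.10", 80))   # placeholder backend server address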

Adding server IPs (backend servers)

You can add one or more IP addresses, either IPv4 or IPv6, of the backend servers your Load Balancer’s backend will forward traffic to.

Using advanced settings

Here, you have the option to activate sticky sessions. When activated, this binds a user’s session to a specific server in the pool of backend servers. This ensures that all subsequent sessions from the user are sent to the same backend server while there is at least one active session. Sticky sessions can be cookie-based or table-based.

  • Cookie-based stickiness uses an HTTP cookie to identify a session, and the server in the backend pool that it will stick to. For each incoming request without a stickiness cookie, the Load Balancer creates a cookie and selects a backend for the session. Subsequent requests providing this cookie will land on the same backend. You can customize the name of the cookie.
  • Table-based stickiness uses the source (client) IP address to stick sessions to backends.

By default, when activating sticky sessions through the console, the cookie-based method is used when HTTP protocol is selected, and the table-based method is used when TCP protocol is selected. Use the API if you want to select table-based stickiness with HTTP protocol.
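
Conceptually, the two strategies behave like the toy sketch below (Python; the cookie name, addresses, and selection logic are made up, and a real Load Balancer also persists and expires these entries):

import random

BACKEND_SERVERS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

# Table-based: the client's source IP is mapped to a backend server and reused.
stick_table = {}

def pick_backend_by_ip(client_ip: str) -> str:
    if client_ip not in stick_table:
        stick_table[client_ip] = random.choice(BACKEND_SERVERS)
    return stick_table[client_ip]

# Cookie-based: the first response sets a cookie naming the chosen backend server;
# later requests presenting that cookie land on the same one.
def pick_backend_by_cookie(cookies: dict, cookie_name: str = "lb-session"):
    server = cookies.get(cookie_name)
    if server not in BACKEND_SERVERS:          # no cookie yet, or a stale value
        server = random.choice(BACKEND_SERVERS)
    set_cookie_header = f"{cookie_name}={server}"   # added to the response
    return server, set_cookie_header

print(pick_backend_by_ip("198.51.100.7"))
print(pick_backend_by_cookie({}))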

Configuring a Load Balancer for SSL bridging

SSL bridging allows the user to initiate a secure, encrypted connection with the Load Balancer thanks to the Load Balancer frontend’s SSL certificate. The Load Balancer decrypts incoming HTTPS traffic. This allows the Load Balancer to carry out layer 7 actions on the received traffic. The Load Balancer’s backend then initiates a new encrypted connection to re-encrypt traffic between the Load Balancer and the backend server, this time using the backend server’s certificate.

To configure your Load Balancer for SSL bridging:

  • The frontend must have a certificate.
  • The frontend must be linked to a backend which has TLS enabled.
  • The backend server should have its own certificate.

Configuring a Load Balancer for SSL offloading

SSL offloading, also known as SSL termination, allows the user to initiate a secure connection with the Load Balancer thanks to the Load Balancer frontend’s SSL certificate. The Load Balancer decrypts incoming HTTPS traffic. Layer 7 actions may therefore be applied to the traffic at this stage. Traffic is not re-encrypted on its way from the Load Balancer to the backend server, unlike with SSL bridging. Traffic that has gone through the offloading process is marked with a new header, called X-Forwarded-Proto, which informs the backend server that the client used HTTPS to contact the Load Balancer.

To configure your Load Balancer for SSL offloading:

  • The frontend must have a certificate.
  • The frontend must be linked to a backend which uses HTTP protocol.
  • The backend server does not need its own certificate.
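
Since traffic reaches the backend server unencrypted, the application can read the X-Forwarded-Proto header mentioned above to find out whether the client originally used HTTPS. A toy backend server (Python's standard http.server module; the port is arbitrary) might do so like this:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Header set by the Load Balancer after SSL offloading.
        original_scheme = self.headers.get("X-Forwarded-Proto", "http")
        body = f"client reached the Load Balancer over {original_scheme}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The backend server itself listens in plain HTTP.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()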

Configuring a Load Balancer for SSL passthrough

Passthrough is the simplest way to handle encrypted traffic on a Load Balancer. As the name suggests, traffic is simply passed through the Load Balancer without being decrypted on it. Whilst this option generates very low overhead, no layer 7 actions can be carried out. This means that no cookie-based sticky sessions are possible with this method. In addition, if an application does not share sessions between servers, users’ sessions may get lost by being redirected to different servers of the group.

To configure your Load Balancer for SSL passthrough:

  • The frontend does not need a certificate and can listen on any port.
  • The frontend must be linked to a backend which uses TCP protocol.
  • The backend server must listen with its HTTP server process on the same port as configured for the backend.
  • The backend server must have its own certificate.