Configuring your Load Balancer's backend

Reviewed on 04 October 2024 | Published on 05 June 2023

Backend overview

Each Load Balancer is configured with one or several backends. A backend represents a set of backend servers that the Load Balancer’s frontend(s) forwards requests to, using the specified configuration (port, protocol, Proxy Protocol, etc.).

You can create and configure frontends and backends while creating a Load Balancer or choose to create an “empty” Load Balancer and add frontends and backends later on.

You have access to the same configuration settings for your backend whether you create it together with the Load Balancer or add it later, and any of these settings can be edited afterwards. Read on to find out more about the different configuration settings available for your backend.

Configuring basic settings

Backend name

You can use an automatically generated random name suggested by the console, or choose your own name for the backend.

Protocol and port

The Load Balancer initiates connections to its backend servers using the protocol and port you define, and can therefore leverage the selected protocol’s capabilities to manage traffic.

  • HTTP protocol (typically port 80, or port 443 if also using TLS): The Load Balancer forwards HTTP requests to backend servers. Traffic will be forwarded without encryption.
  • TCP protocol: The Load Balancer connects to backend servers using TCP, and forwards TCP payloads with no modification (other than encryption/decryption, where configured).

The port number must be between 1 and 65535.

You may also be interested in reading about choosing protocols (and TLS encryption settings, below) for different offloading / passthrough / bridging configurations.

TLS encryption

You can choose to activate or deactivate TLS. Activating TLS encrypts connections between the Load Balancer and the backend server(s). Note that the backend server should have its own SSL/TLS certificate.

Verify Certificate

If you activate TLS encryption, you are prompted to choose a Verify Certificate setting.

When the Load Balancer initiates an encrypted connection to a backend server, the backend server presents its certificate.

  • When Verify Certificate is selected, the Load Balancer checks the backend server’s certificate and only connects if it is valid.
  • When Verify Certificate is unselected, the Load Balancer does not check the backend server’s certificate, connecting to the backend server regardless of the certificate’s validity.

This setting mostly concerns those using self-signed certificates on the backend server(s).

Note that the same Verify Certificate setting you select here will also be used for health checks, and cannot be overridden.
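
To illustrate what this setting means in TLS terms, here is a minimal Python sketch that mimics both behaviors from the point of view of a TLS client, which is the role the Load Balancer plays towards the backend server. The host and port are hypothetical placeholders.

```python
import socket
import ssl

BACKEND_HOST, BACKEND_PORT = "203.0.113.10", 443  # hypothetical backend

# "Verify Certificate" selected: the certificate chain and hostname are
# validated against trusted CAs; invalid or self-signed certificates fail.
verify_ctx = ssl.create_default_context()

# "Verify Certificate" unselected: any certificate is accepted,
# including self-signed ones.
no_verify_ctx = ssl.create_default_context()
no_verify_ctx.check_hostname = False
no_verify_ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((BACKEND_HOST, BACKEND_PORT)) as raw:
    with no_verify_ctx.wrap_socket(raw, server_hostname=BACKEND_HOST) as tls:
        print("Negotiated:", tls.version())
```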

Proxy Protocol

If you activate Proxy Protocol, the Load Balancer forwards information about the original client’s connection, such as their IP address and port, to the backend servers. This is achieved by adding the information to the headers of the TCP stream. The exact information included in the header depends on the version of Proxy Protocol that is chosen (see below).

Passing client connection information to the backend servers is beneficial for use cases including:

  • Preserving the client’s real IP address for your backend servers’ logs and metrics
  • Applications which require the client’s IP address for security measures or IP-based restrictions
  • Applications which require the client’s IP address for geolocation or personalized content

If none of the use cases above apply to the applications or metrics running on your backend server, it may not be necessary for you to activate Proxy Protocol.

Also, note that Proxy Protocol is more commonly activated for Load Balancers using TCP protocol. Load Balancers using HTTP protocol already pass information about the client IP address to the backend servers via an HTTP X-Forwarded-For header, without needing to activate Proxy Protocol. If your Load Balancer uses HTTP protocol and you do not require the standardized information in the Proxy Protocol headers at the backend server, the X-Forwarded-For headers may be sufficient.
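
As an illustration, a backend application behind an HTTP-mode Load Balancer can read the client IP from that header. The sketch below uses only Python’s standard library; the port and the header handling are simplified for brevity.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Behind an HTTP-mode Load Balancer, the TCP peer address is the
        # Load Balancer itself; the original client IP arrives in the
        # X-Forwarded-For header (first entry if several proxies are chained).
        forwarded = self.headers.get("X-Forwarded-For")
        client_ip = forwarded.split(",")[0].strip() if forwarded else self.client_address[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"Client IP: {client_ip}\n".encode())

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```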

For Proxy Protocol to work, the backend server must support the selected Proxy Protocol version. Different server software understands and processes Proxy Protocol header information in different ways, and you may need to carry out specific configuration steps to ensure it is correctly received and processed. For example, for Nginx you might need to install and configure ngx_http_realip_module. Consult the documentation for your own server software, or see our dedicated tutorial on configuring different web servers for Proxy Protocol v2 to help you get started.
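
As a concrete example of what a backend receives, a Proxy Protocol v1 header is a single ASCII line prepended to the TCP stream. The sketch below is a simplified parser, not a production implementation (the specification also defines an UNKNOWN family and caps the v1 header at 107 bytes):

```python
def parse_proxy_v1(sock_file):
    """Parse a Proxy Protocol v1 header from the start of a TCP stream.

    Example header: "PROXY TCP4 192.0.2.10 203.0.113.5 56324 443\r\n"
    """
    line = sock_file.readline(107).decode("ascii").rstrip("\r\n")
    parts = line.split(" ")
    if not parts or parts[0] != "PROXY":
        raise ValueError("not a Proxy Protocol v1 header")
    _, family, src_ip, dst_ip, src_port, dst_port = parts
    return {
        "family": family,             # TCP4 or TCP6
        "client_ip": src_ip,          # the original client's IP address
        "client_port": int(src_port),
        "dest_ip": dst_ip,            # address the client connected to
        "dest_port": int(dst_port),
    }
```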

Proxy Protocol versions

If you choose to activate Proxy Protocol on your Load Balancer, you are prompted to select one of the following versions:

| Version | Advantages | Disadvantages |
|---|---|---|
| Proxy Protocol v1 | Plain-text (ASCII), human-readable headers. Widely supported. Easy to troubleshoot. | Header format not as efficient as v2. |
| Proxy Protocol v2 | Binary, TLV-format headers. Efficient header format, reducing overhead in high-throughput scenarios. Extensible and future-proof. Enhanced header security. | Less widely supported by backend servers. Not human-readable: harder to troubleshoot. |
| Proxy Protocol v2-ssl | As for v2, plus an SSL information extension: details of the client’s SSL connection settings. | As for v2. |
| Proxy Protocol v2-ssl-cn | As for v2-ssl, plus the Common Name from the subject of the client’s certificate (if any). | As for v2. |

For a full specification of the header formats in each case, see the HAProxy documentation.

Ensure that your backend server is correctly configured to handle whichever version of Proxy Protocol you choose.
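
To give an idea of the difference from v1, a v2 header starts with a fixed 12-byte binary signature rather than ASCII text. A minimal detection sketch in Python:

```python
import struct

# Fixed 12-byte signature that opens every Proxy Protocol v2 header.
V2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def sniff_proxy_v2(first_bytes: bytes) -> bool:
    """Return True if a TCP stream begins with a Proxy Protocol v2 header."""
    if len(first_bytes) < 16 or not first_bytes.startswith(V2_SIGNATURE):
        return False
    # Byte 12 holds version (high nibble, always 2) and command (low nibble);
    # byte 13 holds address family/transport; bytes 14-15 the payload length.
    ver_cmd, fam_proto, length = struct.unpack("!BBH", first_bytes[12:16])
    return ver_cmd >> 4 == 2
```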

Configuring traffic management

Balancing method

Load Balancers support three different modes of balancing load (requests) between backend servers. The table below shows the methods and their advantages and use cases; a sketch of the selection logic follows.

| Method | Description | Suitable for |
|---|---|---|
| Round robin | Requests are sent to each backend server in turn, with the Load Balancer acting like a turnstile. This is the most frequently used balancing method, and the easiest to implement. | The majority of infrastructures, where backend servers are identical or very similar. |
| Least connections | Each request is assigned to the server with the fewest active connections at the time. | Infrastructures where sessions are expected to be long, e.g. LDAP, SQL, TSE. Less well suited to protocols with typically short sessions, like HTTP. |
| First available | Each request is directed to the first backend server with available connection slots, based on the max simultaneous setting. Once a server reaches its limit of maximum simultaneous connections, requests are directed to the next server. | Infrastructures where you want to use the smallest number of backend servers at any given time (e.g. to power off extra servers during off-peak hours). |
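
The following Python sketch illustrates the selection logic behind each method. The server addresses and connection counts are hypothetical; the Load Balancer’s real implementation is internal.

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backend pool
active = {s: 0 for s in servers}                # open connections per server
MAX_SIMULTANEOUS = 20                           # see "Backend protection"

rr = itertools.cycle(servers)

def round_robin():
    # Each server takes the next request in turn, like a turnstile.
    return next(rr)

def least_connections():
    # The server with the fewest active connections takes the request.
    return min(servers, key=lambda s: active[s])

def first_available():
    # Fill the first server up to its limit before moving to the next.
    for s in servers:
        if active[s] < MAX_SIMULTANEOUS:
            return s
    return None  # every server is full: queue or reject the request
```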

Server IPs (backend servers)

You can add one or more IP addresses, either IPv4 or IPv6, of the backend servers your Load Balancer’s backend will forward traffic to.

Sticky sessions

When activated, sticky sessions bind a user’s session to a specific server in the pool of backend servers. This ensures that all subsequent sessions from the user are sent to the same backend server while there is at least one active session. Sticky sessions can be cookie-based or IP-based (also known as table-based).

  • IP-based stickiness uses the source (client) IP address to stick sessions to backends. This is the only possible method for TCP backends.
  • Cookie-based stickiness uses an HTTP cookie to identify a session, and the server in the backend pool that it will stick to. For each incoming request without a stickiness cookie, the Load Balancer creates a cookie and selects a backend server for the session. Subsequent requests providing this cookie will land on the same backend server. You can customize the name of the cookie.
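
The sketch below illustrates the cookie-based mechanism described above; the cookie name and the selection function are hypothetical placeholders.

```python
import secrets

COOKIE_NAME = "lb-session"          # customizable cookie name (placeholder)
sticky_table: dict[str, str] = {}   # cookie value -> backend server

def pick_server(request_cookies, choose_backend):
    """Return (server, cookie_value), creating stickiness on first request."""
    value = request_cookies.get(COOKIE_NAME)
    if value and value in sticky_table:
        return sticky_table[value], value    # existing session: same server
    value = secrets.token_hex(8)             # new session cookie
    server = choose_backend()                # e.g. round robin
    sticky_table[value] = server
    return server, value
```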

Configuring advanced settings

Recommended settings for all the parameters below are selected by default; however, you can modify them if you wish.

Timeout

Timeout settings allow you to define the maximum time that backend servers should be given to establish connections and to process requests. The following timeout settings are available:

  • Tunnel timeout: Defines the maximum allowed tunnel inactivity time (in ms) between a client and a backend server after a WebSocket is established. This value takes precedence over the client and server timeouts in determining when to close the connection. The default value is 900 000, the minimum value is 0 and the maximum value is 2 147 483 647.
  • Server timeout: Defines the maximum allowed time (in ms) that a backend server has to process a request. The default value is 300 000, the minimum value is 0 and the maximum value is 2 147 483 647.
  • Connection timeout: Defines the maximum allowed time (in ms) to establish a connection between a client and a backend server. The default value is 5 000, the minimum value is 0 and the maximum value is 2 147 483 647.
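
To make the distinction between the connection and server timeouts concrete, the sketch below mimics them with Python socket timeouts. Values are converted from ms to seconds, and the backend address is a placeholder.

```python
import socket

CONNECTION_TIMEOUT = 5.0   # "Connection timeout" default: 5 000 ms
SERVER_TIMEOUT = 300.0     # "Server timeout" default: 300 000 ms

# Establishing the TCP connection is bounded by the connection timeout...
conn = socket.create_connection(("203.0.113.10", 80), timeout=CONNECTION_TIMEOUT)

# ...while waiting for the backend to process the request is bounded
# separately by the server timeout.
conn.settimeout(SERVER_TIMEOUT)
conn.sendall(b"GET / HTTP/1.1\r\nHost: backend.example\r\n\r\n")
reply = conn.recv(4096)    # raises socket.timeout if the backend is too slow
conn.close()
```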

Backend protection

Backend protection settings define when the Load Balancer should view a backend server as having reached its maximum number of simultaneous connections / requests, and how to handle sticky sessions in this context.

A Limit backend load toggle is displayed in the Backend protection screen.

  • Toggle deactivated: No settings to limit backend load are used. The Load Balancer can send an unlimited number of simultaneous connections/requests to backend servers.

  • Toggle activated: Additional settings to limit backend load are activated and appear for you to configure. These additional settings are Max simultaneous and (if you previously activated sticky sessions) Queue timeout.

    • Max simultaneous: Defines the maximum number of simultaneous requests (for HTTP) or simultaneous connections (for TCP) to any single backend server. A value of 20 means that each backend server will have a limit of 20 connections (even if, for example, there are only three servers in the backend). This setting is particularly relevant when using the First available balancing method.

      The minimum value for this field is 1, and the maximum value depends on the Load Balancer type. You should choose an appropriate value based on your backend server characteristics and traffic patterns.

      When the maximum number of simultaneous connections/requests is reached for a single backend server, the Load Balancer will either:

      • Pass the request/connection to a different backend server that still has slots available. If no backend server has available slots, the Load Balancer indicates to the client that the request cannot be handled (e.g. 503 Service Unavailable for HTTP, or connection closed for TCP), or
      • If sticky sessions are enabled: put the request into a queue for the backend server in question. Therefore, if and only if you activated sticky sessions, you will also be prompted to set the following value:
    • Queue timeout: Defines the maximum length of time (in ms) to queue a request/connection for a given backend server where stickiness is enabled. The default value for this setting is 5 000, the minimum value is 1 and the maximum value is 2 147 483 647. Choose an appropriate value based on the acceptable wait time for your users, and your application’s characteristics and traffic patterns.

      Requests will wait in the queue for an available connection slot. If the queue timeout value is reached, the Load Balancer indicates to the client that the request cannot be handled (e.g. 503 service unavailable for HTTP or connection closed for TCP).
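
The queueing behavior for sticky sessions can be pictured as a bounded pool of connection slots per backend server. A minimal Python sketch, with hypothetical values:

```python
import threading

MAX_SIMULTANEOUS = 20   # per-server connection/request limit
QUEUE_TIMEOUT = 5.0     # seconds; the console value is in ms (default 5 000)

slots = threading.BoundedSemaphore(MAX_SIMULTANEOUS)  # one server's slots

def handle_sticky_request(forward):
    # A request stuck to a full server waits in the queue for a free slot;
    # if none frees up within the queue timeout, the client gets an error.
    if not slots.acquire(timeout=QUEUE_TIMEOUT):
        return "503 Service Unavailable"
    try:
        return forward()
    finally:
        slots.release()
```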

Retries

Retry settings define how the Load Balancer should retry to establish a connection to the backend server if its initial attempt fails. The following settings are available:

  • Retry policy: Select whether the Load Balancer should try again on the same server (default), or redispatch and try different servers.
  • Max. retries: Defines the maximum number of times to retry establishing a connection after the first attempt. A value of 3 for example means that an initial attempt is made, and if that fails then the Load Balancer will try three more times to establish a connection. The default value is 3, the minimum value is 0 and the maximum value is 32.
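
The two settings combine as in the following sketch, where try_connect and the server list are hypothetical placeholders.

```python
import itertools

MAX_RETRIES = 3   # retries after the initial attempt (default)

def connect_with_retries(servers, try_connect, redispatch=False):
    """Make an initial attempt plus up to MAX_RETRIES further attempts."""
    # Retry policy: same server (default) or redispatch across the pool.
    pool = itertools.cycle(servers) if redispatch else itertools.repeat(servers[0])
    last_error = None
    for _ in range(1 + MAX_RETRIES):
        try:
            return try_connect(next(pool))
        except ConnectionError as exc:
            last_error = exc
    raise last_error
```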

Customized error page

Activating the customized error page feature allows you to redirect users to a static website hosted on Scaleway Object Storage if none of your Load Balancer’s backends are available to serve the requested content. This website can be a simple, single webpage, or something much more complex: you build it according to your own requirements.

If you do not activate this feature, and none of your backend servers are available, your users will instead see a standard HTTP error displayed in their browser, e.g. 503 Service Unavailable.

Benefits of this feature include:

  • Displaying customized, branded and user-friendly error messages
  • Providing links to support resources or contact information
  • Providing information on service status or maintenance
  • Redirecting to a mirrored site or skeleton site

Note that when entering the Object Storage link to redirect to, the URL of the bucket endpoint is not sufficient: the bucket website URL is specifically required (e.g. https://my-bucket.s3-website.nl-ams.scw.cloud). See our dedicated documentation for further help.

Health checks

See our dedicated documentation on configuring health checks.
