
I am experiencing problems with my Kubernetes Load Balancer

If you are experiencing errors with your Kubernetes Kapsule Load Balancer, this page may help you find solutions to some of the most common problems.

Important

You should never try to create or modify a Kubernetes Kapsule's Load Balancer via the Scaleway console, the API, or any other developer tools.

Doing so leads to unexpected and unreliable behavior: the cluster's Cloud Controller Manager (CCM) is not aware of changes made outside Kubernetes and will overwrite any configuration applied through the console.

Always provision and modify Kubernetes Load Balancers via the CCM, using annotations on your Service to configure the Load Balancer, as in the sketch below.
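For example, a minimal Service managed through the CCM might look like the following sketch (the name, selector, and ports are placeholders; the annotation is one of those covered later on this page):

apiVersion: v1
kind: Service
metadata:
  name: my-app                       # placeholder name
  annotations:
    # Scaleway Load Balancer annotations go here, for example:
    service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app                      # placeholder label
  ports:
    - port: 80
      targetPort: 8080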

I'm experiencing connectivity issues with my Kubernetes Load Balancer

You may find that your Load Balancer is not connecting to nodes in your Kapsule cluster, meaning that health checks are failing and your application is inaccessible from the internet.

Cause

A configuration issue is preventing successful communication between your Load Balancer and the cluster's nodes.

Solutions

  • Ensure that you provisioned and configured your Load Balancer via Kubernetes and not via the Scaleway console, which causes unexpected behavior and errors.
  • Verify that the required service is running on all nodes targeted by the Load Balancer. If it is missing from some nodes, health checks against those nodes will fail.
  • Check the Service's externalTrafficPolicy setting. When it is set to Local, only nodes running a pod of the Service pass health checks; changing the policy to Cluster lets any node forward traffic to the backend pods (see the commands after this list).
  • Try enabling or disabling Cloudflare's Proxy Mode, which may be affecting connectivity.
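A minimal sketch of checking and switching the policy with kubectl, assuming a placeholder Service name my-service:

# Show the current policy (for LoadBalancer Services the default is Cluster)
kubectl get service my-service -o jsonpath='{.spec.externalTrafficPolicy}'

# Switch the policy so that any node can forward traffic to the backend pods
kubectl patch service my-service -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'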

My certificate is not being resolved when accessing my Kubernetes Load Balancer from within the cluster

You may be able to reach applications from outside your cluster, but encounter the following error message when trying to reach your Load Balancer from inside your Kapsule cluster:

SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:331

Cause

The Load Balancer is not properly configured to handle requests from within the cluster. Because the Service does not use the Load Balancer's hostname, in-cluster requests to its IP are routed directly to the backend pods, bypassing the Load Balancer and its TLS termination.

Solution

Add the following annotation to the Load Balancer's Service configuration so that the hostname is used to route requests:

service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"

With this annotation in place, the Service exposes the Load Balancer's hostname instead of its IP, so requests from within the cluster are routed through the Load Balancer correctly. You can also apply it to an existing Service from the command line, as shown below.
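A minimal sketch of applying the annotation with kubectl, assuming a placeholder Service name my-service:

# Add (or update, thanks to --overwrite) the annotation on the Service
kubectl annotate service my-service \
  service.beta.kubernetes.io/scw-loadbalancer-use-hostname="true" --overwrite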

I'm getting a 400 Bad Request error when accessing my service over HTTPS

You may encounter a 400 Bad Request - The plain HTTP request was sent to HTTPS port error when trying to access your Kubernetes service via HTTPS, even though the setup works correctly over HTTP.

Cause

This issue occurs when using Scaleway Load Balancers with SSL offloading. The Load Balancer terminates the HTTPS connection and forwards traffic as plain HTTP to your backend service on port 443. However, if your backend (e.g., NGINX Ingress) expects HTTPS traffic on port 443, it rejects the unencrypted HTTP request, resulting in the 400 error.

Solutions

  • Never attempt to manage the Load Balancer via the Scaleway console. Ensure you always provision and configure it through Kubernetes manifests for consistent behavior.
  • Ensure your backend service is configured to accept plain HTTP traffic on port 443 when using SSL offload through the Load Balancer (see the sketch after this list).
  • Consider using Traefik as an alternative ingress controller, which handles this setup more seamlessly. Our Traefik v2 and Cert-Manager tutorial provides a working example for secure ingress.
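As an illustration of the second point, if your ingress controller is deployed with the ingress-nginx Helm chart (an assumption; adapt this to your setup), a common approach is to remap the Service's https port to the controller's plain-HTTP target port, so the controller accepts the decrypted traffic the Load Balancer forwards:

# values.yaml for the ingress-nginx Helm chart (sketch)
controller:
  service:
    targetPorts:
      http: http
      https: http   # TLS ends at the Load Balancer, so forward plain HTTP

With this mapping, port 443 on the Load Balancer forwards to the controller's http port instead of its https port, and the 400 error disappears.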

Review the Kubernetes Load Balancer documentation for details on SSL offload and supported configurations.

My Load Balancer is failing to update when a node is initializing or unavailable

You may experience transient failures or delays in your Scaleway Load Balancer syncing its backend nodes, with error messages like Error updating load balancer: node <name> is not yet initialized or SyncLoadBalancerFailed. This can result in temporary service unavailability, even though your Kubernetes workloads appear to be running correctly.

Cause

By default, a Scaleway Load Balancer targets all nodes in the cluster, regardless of whether your application (e.g., Traefik, NGINX Ingress) is scheduled on them. When a new node is being provisioned or an existing one becomes temporarily unreachable (e.g., due to OOM, initialization delays, or autoscaling), the Load Balancer controller attempts to register it. If that node is not ready, the update process can stall or fail, affecting the entire backend configuration, even for healthy nodes.

Solutions

Use targeted node selection: Annotate your LoadBalancer service with service.beta.kubernetes.io/scw-loadbalancer-target-node-labels to restrict backend registration to only the nodes running your ingress controller. For example:

service.beta.kubernetes.io/scw-loadbalancer-target-node-labels: "role=ingress"

This ensures the Load Balancer only tracks nodes you control, avoiding disruptions from unrelated node changes.

Label your ingress nodes: Ensure your ingress pods (e.g., Traefik) run on dedicated nodes with a specific label (e.g., role=ingress), and configure node affinity or taints accordingly; the commands below sketch one way to do this.
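A minimal sketch, assuming a placeholder node name and a Traefik Deployment in the traefik namespace:

# Label the node(s) that should serve ingress traffic
kubectl label node my-node-1 role=ingress

# Pin the ingress controller's pods to those nodes with a nodeSelector
kubectl -n traefik patch deployment traefik \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"role":"ingress"}}}}}'

Keep in mind that labels applied manually with kubectl do not survive node replacement, so prefer applying them through your node pool configuration where possible.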

Monitor node health: High memory or CPU pressure can delay node readiness. Ensure sufficient headroom on your nodes to avoid initialization issues.

Review logs: Check the service-controller logs in your kube-controller-manager for Load Balancer sync errors and failed node registrations.
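Because Kapsule's control plane is managed, the most accessible place to see these errors is usually the Kubernetes events recorded on the Service itself (my-service is a placeholder name):

# Recent events for the Service, including SyncLoadBalancerFailed messages
kubectl describe service my-service

# Alternatively, filter cluster events by the Service's name
kubectl get events --field-selector involvedObject.name=my-service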

For more details on supported annotations, see the Scaleway Cloud Controller Manager documentation.
