Exposing a Kubernetes Kapsule ingress controller service with a Load Balancer
This page walks you through deploying an NGINX ingress controller on Scaleway's Kubernetes Kapsule service. We will configure the Load Balancer that exposes it to use a persistent IP address, so that your DNS records stay valid even if the service is recreated. Additionally, we will enable the PROXY protocol to preserve client information such as the original IP address and port, which is recommended for applications that need to log or act on this data.
We will explore the differences between ephemeral and persistent IP addresses, helping you understand when and why to use each type, and guide you through deploying a demo application that illustrates the entire setup.
By the end of this guide, you should have a robust and well-configured NGINX ingress controller running on Scaleway's Kubernetes platform.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- Created a Kubernetes Kapsule cluster (this guide deploys the NGINX ingress controller itself using Helm)
- Obtained the kubeconfig file for the cluster
- Helm installed on your local machine
- Installed kubectl and the Scaleway CLI on your local machine
Overview of key concepts
Ingress controller
An ingress controller manages external HTTP/HTTPS traffic to services within a Kubernetes cluster. The NGINX ingress controller routes traffic based on ingress resource rules.
LoadBalancer service
On Scaleway Kapsule, the LoadBalancer service provisions a Scaleway Load Balancer with an external IP, exposing the ingress controller via the Scaleway Cloud Controller Manager (CCM).
Ephemeral vs. persistent IPs
- Ephemeral IP: Dynamically assigned by Scaleway when a LoadBalancer service is created. It may change if the service is deleted and recreated, requiring DNS updates.
- Persistent IP: A flexible IP reserved via the Scaleway API, CLI or console, ensuring consistency across service recreations. This is recommended for production to maintain stable DNS records.
PROXY protocol
The PROXY protocol allows the LoadBalancer to forward the client's original IP address to the ingress controller, preserving source information for logging and security.
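To make these concepts concrete, the minimal sketch below shows a Service of type LoadBalancer that pins a reserved flexible IP and enables PROXY protocol v2 through the Scaleway annotation also used later in this guide. The service name, selector, and IP address are placeholders; in this tutorial the same settings are applied through the ingress controller's Helm values instead.
apiVersion: v1
kind: Service
metadata:
  name: example-service              # placeholder name
  annotations:
    # Ask the Scaleway CCM to enable PROXY protocol v2 on the provisioned Load Balancer
    service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: "195.154.72.226"   # reserved flexible IP (placeholder)
  selector:
    app: example-app                 # placeholder selector
  ports:
    - port: 80
      targetPort: 80
Note that the backend receiving the traffic must also be configured to parse the PROXY protocol header, which is why the Helm values later in this guide set use-proxy-protocol: "true" on the NGINX side.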
Deploying the ingress controller
Installation prework
Kapsule clusters use a default security group (kubernetes-<cluster-id>) that blocks incoming traffic. To allow HTTP/HTTPS connections to the cluster:
- Go to the Scaleway console and navigate to Compute > CPU & GPU Instances > Security Groups.
- Locate the security group kubernetes-<cluster-id>.
- Add rules to allow:
  - TCP port 80 (HTTP) from 0.0.0.0/0
  - TCP port 443 (HTTPS) from 0.0.0.0/0
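If you prefer the command line, a minimal sketch for locating the cluster's security group with the Scaleway CLI is shown below; it assumes the CLI is configured for the zone where your nodes run, and the rules themselves can then be added in the console as described above (check scw instance security-group --help for the subcommands available in your CLI version).
# List Instance security groups and look for the one named kubernetes-<cluster-id>
scw instance security-group list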
Reserve a flexible IP
To use a persistent IP with the ingress controller:
- Create a flexible IP using the Scaleway CLI:
scw lb ip create
- Note the IP address (e.g., 195.154.72.226) and the IP ID for use in the LoadBalancer service.
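If you need to retrieve the address or its ID later (for example, when releasing it during cleanup), a minimal sketch using the Scaleway CLI:
# List reserved Load Balancer flexible IPs with their IDs
scw lb ip list

# Show the details of a single flexible IP
scw lb ip get <IP-ID>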
Installing the NGINX ingress controller
Use Helm to deploy the NGINX ingress controller with Scaleway-specific configurations.
- Add the NGINX ingress Helm repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
- Create a file named ingress-values.yaml with the following content, and edit loadBalancerIP to your flexible IP:
controller:
  service:
    type: LoadBalancer
    # Specify reserved flexible IP
    loadBalancerIP: "195.154.72.226"
    annotations:
      # Enable PROXY protocol v2
      service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
      # Use hostname for cert-manager compatibility
      service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
  config:
    # Enable PROXY protocol in NGINX
    use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    compute-full-forwarded-for: "true"
- Deploy the ingress controller:
helm install ingress-nginx ingress-nginx/ingress-nginx -f ingress-values.yaml --namespace ingress-nginx --create-namespace
- Verify the LoadBalancer IP using kubectl:
kubectl get svc -n ingress-nginx ingress-nginx-controller
You will see an output similar to the following example:
NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)          AGE
ingress-nginx-controller   LoadBalancer   10.100.0.1   195.154.72.226   80/TCP,443/TCP   5m
- Configure DNS by setting the A record of your domain (e.g., demo.example.com) to the flexible IP via Scaleway's Domains & DNS product or your DNS provider. Persistent IPs will not change as long as they remain reserved, so the record stays valid across service recreations. A quick way to verify the deployment and DNS resolution is shown below.
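As a quick check, assuming the ingress-nginx namespace and release name used above, the following commands confirm that the controller pods are running and that your domain resolves to the flexible IP (replace demo.example.com with your own record):
# The controller pods should be in the Running state
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx

# The A record should return the reserved flexible IP (195.154.72.226 in this example)
dig +short demo.example.com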
Deploying a demo application
- Create a file named demo-app.yaml and copy the following content into it to deploy a simple web application to test the ingress controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.21
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: default
spec:
  selector:
    app: demo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
- Apply the configuration:
kubectl apply -f demo-app.yaml
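Before testing from outside the cluster, you can check that the objects defined in demo-app.yaml were created and that the Ingress picked up the expected host and address; a minimal sketch using the names from the manifest above:
# Deployment, Service, and Ingress from demo-app.yaml
kubectl get deployment,svc,ingress -n default

# The Ingress should list demo.example.com and the Load Balancer address
kubectl describe ingress demo-app-ingress -n default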
Test the setup
- Access the demo application:
curl http://demo.example.com
Requesting the bare IP (curl http://195.154.72.226/) also confirms that the Load Balancer is reachable, but without the demo.example.com Host header the request hits the ingress controller's default backend and returns a 404.
- You should see the NGINX welcome page. Verify the PROXY protocol by checking the ingress controller logs for the client's real IP:
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
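If your DNS record has not propagated yet, you can still test the host-based routing by forcing curl to resolve the domain to the flexible IP; this sketch assumes the example domain and IP used throughout this guide:
# Keep the Host header demo.example.com while sending the request to the Load Balancer IP
curl --resolve demo.example.com:80:195.154.72.226 http://demo.example.com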
Cleanup (optional)
Once finished, you can remove the demo application and ingress controller from your cluster:
kubectl delete -f demo-app.yaml
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
To release the flexible IP:
scw lb ip delete <IP-ID>