Deploying an NGINX ingress controller on Scaleway Kubernetes Kapsule without a LoadBalancer
Kubernetes wildcard DNS refers to a DNS configuration that routes any subdomain of a domain to a particular service or set of services within a Kubernetes cluster. A wildcard DNS record is indicated by an asterisk (`*`), for example: `*.yourdomain.com`.
Using wildcard DNS with Kubernetes has several advantages:
- Without wildcard DNS, each time you deploy a new service and want to expose it with a domain name, you would have to create a new DNS record. With wildcard DNS, any subdomain of `yourdomain.com` (like `service1.yourdomain.com`, `service2.yourdomain.com`, etc.) will automatically resolve to the IP address specified in the wildcard record.
- Wildcard DNS is especially useful for development and staging environments where you might frequently spin up and tear down services. The wildcard DNS ensures that these services get valid DNS names without additional configuration.
- When used in conjunction with an ingress controller (like Nginx or Traefik), wildcard DNS can be powerful. The ingress controller can route traffic based on the hostname, meaning that while the wildcard DNS points all subdomains to the ingress controller, the controller itself determines which service should handle the request based on its configuration.
In short, Kubernetes wildcard DNS, combined with an ingress controller, provides a powerful way to dynamically route external traffic to different services in the cluster based on hostname patterns.
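As an illustration, the sketch below shows how a single Ingress can fan traffic out by hostname once a wildcard record points at the ingress controller. The hostnames and the Services `service1` and `service2` are hypothetical placeholders, not part of this tutorial's setup:

```bash
# Minimal sketch: *.yourdomain.com points at the ingress controller;
# the Ingress below routes each subdomain to a different (hypothetical) Service.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-routing-demo
spec:
  ingressClassName: nginx
  rules:
    - host: service1.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: service2.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80
EOF
```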
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- Created a Scaleway Kubernetes cluster
- Installed `helm` on your local computer
- A TCP or HTTP service you want to expose
Overview of key concepts
Ingress controller
An ingress controller manages external HTTP/HTTPS traffic to services within a Kubernetes cluster based on ingress resource rules. The NGINX ingress controller is used here.
NodePort service
A `NodePort` service exposes the ingress controller on a specific port (in the 30000–32767 range) on each cluster node’s IP address, allowing external access without a Scaleway LoadBalancer.
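For context, this is how any Deployment can be exposed through a `NodePort` service; the Deployment name `my-app` is a hypothetical placeholder:

```bash
# Expose a hypothetical Deployment as a NodePort service, then read back
# the port Kubernetes allocated from the 30000-32767 range.
kubectl expose deployment my-app --type=NodePort --port=80
kubectl get svc my-app -o jsonpath='{.spec.ports[0].nodePort}'
```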
Round-robin DNS
Scaleway provides a wildcard DNS record (`<cluster-id>.nodes.k8s.<region>.scw.cloud`) that resolves to the public IPs of all nodes in the default pool. This acts as a round-robin DNS, distributing traffic randomly across nodes.
However, this is not true load balancing, as DNS resolution is client-dependent and may not evenly distribute traffic. Consider deploying a LoadBalancer service for production use.
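You can observe the round-robin behavior directly; a quick check, assuming `dig` is installed and substituting your own cluster ID and region:

```bash
# Returns one A record per node in the default pool, in varying order
dig +short <cluster-id>.nodes.k8s.fr-par.scw.cloud
```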
Full isolation node pools
Scaleway Kapsule supports “full isolation” node pools, where nodes lack public IPs. This is incompatible with `NodePort`-based ingress without additional configuration, as external traffic cannot reach the nodes directly. A node selector is required to ensure the ingress controller runs on nodes with public IPs (e.g., in the default pool).
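To see which of your nodes have a public IP, and which pool each belongs to, you can list them with the pool label as an extra column (the label key `k8s.scaleway.com/pool-name` is assumed here; verify it against your own nodes):

```bash
# Nodes in a full-isolation pool show <none> in the EXTERNAL-IP column
kubectl get nodes -o wide -L k8s.scaleway.com/pool-name
```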
Security groups
Kapsule clusters use a default security group (`kubernetes-<cluster-id>`) that blocks incoming traffic by default. You must open the relevant ports (e.g., 80, 443, and the `NodePort` range) to allow external access to the `NodePort` service.
Deployment of the ingress controller
Installation prework
To allow HTTP/HTTPS traffic to the `NodePort`:
- In the Scaleway console, navigate to Compute > CPU & GPU Instances > Security Groups.
- Locate the security group `kubernetes-<cluster-id>`.
- Add rules to allow:
  - TCP port 80 (HTTP) from `0.0.0.0/0`.
  - TCP port 443 (HTTPS) from `0.0.0.0/0`.
  - TCP port range 30000–32767 (the `NodePort` range) from `0.0.0.0/0`, for external access to the ingress controller.
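Once these rules are in place and the ingress controller is running (deployed in the next section), you can confirm a `NodePort` is reachable from outside the cluster; a quick check, assuming `nc` (netcat) is installed and substituting one of your node public IPs:

```bash
# Should report the port as open if the security group rules apply
nc -zv <node-public-ip> 30080
```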
Installing the NGINX ingress controller
Use Helm to deploy the NGINX ingress controller with a `NodePort` service and a node selector to ensure it runs on nodes with public IPs.
- Add the NGINX Ingress Helm repository:

  ```bash
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  ```
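  Optionally, confirm the chart is now available from the freshly added repository (version numbers in the output will vary):

  ```bash
  # List the chart versions published in the ingress-nginx repo
  helm search repo ingress-nginx/ingress-nginx --versions | head
  ```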
- Create a file named `ingress-values.yaml` and copy the following values into it:

  ```yaml
  controller:
    service:
      type: NodePort
      # Specify NodePort values (optional, Kubernetes assigns random ports if omitted)
      nodePorts:
        http: 30080
        https: 30443
    # Node selector to ensure pods run on nodes with public IPs
    nodeSelector:
      k8s.scaleway.com/pool-name: <name-of-your-public-ip-pool>
    config:
      use-forwarded-headers: "true"
      compute-full-forwarded-for: "true"
    admissionWebhooks:
      enabled: true
  ```
- Deploy the ingress controller using `helm`:

  ```bash
  helm install ingress-nginx ingress-nginx/ingress-nginx \
    --version 4.7.0 \
    -f ingress-values.yaml \
    --namespace ingress-nginx --create-namespace
  ```
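  Before moving on, check that the controller pod was scheduled onto a node from the public-IP pool you targeted with the node selector:

  ```bash
  # The NODE column should show a node from your public-IP pool
  kubectl get pods -n ingress-nginx -o wide
  ```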
- Check the `NodePort` service:

  ```bash
  kubectl get svc -n ingress-nginx ingress-nginx-controller
  ```
  You will see an output similar to the following example:

  ```
  NAME                       TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
  ingress-nginx-controller   NodePort   10.32.12.0   <none>        80:30080/TCP,443:30443/TCP   5m
  ```
  - The `PORT(S)` field shows the `NodePort`s (e.g., `30080` for HTTP, `30443` for HTTPS).
  - The `EXTERNAL-IP` is `<none>` because `NodePort` uses the nodes’ IPs.
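  If you need the allocated ports in a script rather than reading them off the table, a JSONPath query extracts them:

  ```bash
  # Prints one name=port pair per service port, e.g. http=30080
  kubectl get svc ingress-nginx-controller -n ingress-nginx \
    -o jsonpath='{range .spec.ports[*]}{.name}={.nodePort}{"\n"}{end}'
  ```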
- Get the public IPs of the nodes in the default pool:

  ```bash
  kubectl get nodes -o wide
  ```
  You will see an output similar to the following example:

  ```
  NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
  node-1   Ready    <none>   1h    v1.32.3   172.16.28.3   195.154.111.100
  node-2   Ready    <none>   1h    v1.32.3   172.16.28.4   195.154.111.101
  ```
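  To pull just the public addresses out of that listing (for scripting DNS records, for example), a JSONPath query works here too; this assumes the nodes expose an address of type `ExternalIP`:

  ```bash
  # One public IP per line; nodes without a public IP print an empty line
  kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
  ```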
- Configure DNS with round-robin DNS:
  - Use Scaleway’s wildcard DNS (`<cluster-id>.nodes.k8s.<region>.scw.cloud`) for external access. For example, for a cluster with ID `39b054d2-5255-4fe3-a327-7ccfac7ffdca` in the `fr-par` region, the DNS name is `39b054d2-5255-4fe3-a327-7ccfac7ffdca.nodes.k8s.fr-par.scw.cloud`. This domain name resolves to the public IPs of all nodes in the default pool (e.g., `195.154.72.100`, `195.154.72.101`).
  - Alternatively, for a custom domain, map your domain (e.g., `demo.example.com`) to all node IPs via your DNS provider by creating multiple A records:

    ```
    demo.example.com A 195.154.111.100
    demo.example.com A 195.154.111.101
    ```
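  Whichever option you choose, verify that the records resolve to the node IPs before testing the ingress (assuming `dig` is installed locally):

  ```bash
  # Both queries should return the node public IPs listed above
  dig +short <cluster-id>.nodes.k8s.fr-par.scw.cloud
  dig +short demo.example.com
  ```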
- Create a file named `demo-app.yaml` and copy the following manifest into it to deploy a simple web application to test the ingress controller:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: demo-app
    namespace: default
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: demo-app
    template:
      metadata:
        labels:
          app: demo-app
      spec:
        containers:
          - name: demo-app
            image: nginx:1.21
            ports:
              - containerPort: 80
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: demo-app
    namespace: default
  spec:
    selector:
      app: demo-app
    ports:
      - protocol: TCP
        port: 80
        targetPort: 80
  ---
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: demo-app-ingress
    namespace: default
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
  spec:
    ingressClassName: nginx
    rules:
      - host: demo.<cluster-id>.nodes.k8s.<region>.scw.cloud
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: demo-app
                  port:
                    number: 80
  ```
- Apply the configuration:

  ```bash
  kubectl apply -f demo-app.yaml
  ```
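  A quick sanity check that the objects were created and the demo pods are ready:

  ```bash
  kubectl get ingress demo-app-ingress -n default
  kubectl get pods -n default -l app=demo-app
  ```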
- Access the demo application using the wildcard DNS and the HTTP `NodePort` (e.g., 30080). Note that the ingress controller routes by hostname, so requests must carry the host configured in the Ingress rule:

  ```bash
  curl http://demo.<cluster-id>.nodes.k8s.fr-par.scw.cloud:30080/
  # or, using a node’s public IP (pass the Host header explicitly so the rule matches)
  curl -H "Host: demo.<cluster-id>.nodes.k8s.fr-par.scw.cloud" http://195.154.111.100:30080/
  # or, with a custom domain (if you set it as the host in your Ingress rule)
  curl http://demo.example.com:30080/
  ```
You should see the NGINX welcome page. Test HTTPS if configured (port 30443).
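Even without your own TLS configuration, the controller answers on the HTTPS `NodePort` (30443 in this setup) with its built-in self-signed default certificate, so a quick test needs `-k` to skip certificate verification:

```bash
# Expect the NGINX welcome page over HTTPS, served with the controller's default certificate
curl -k https://demo.<cluster-id>.nodes.k8s.fr-par.scw.cloud:30443/
```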
Cleanup (optional)
Once finished, you can remove the demo application and ingress controller from your cluster:
```bash
kubectl delete -f demo-app.yaml
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
```