
Deploying an NGINX ingress controller on Scaleway Kubernetes Kapsule without a LoadBalancer

Kubernetes wildcard DNS refers to a DNS configuration that allows for routing any subdomain of a domain to a particular service or set of services within a Kubernetes cluster. A wildcard DNS record is usually indicated by an asterisk (*), for example: *.yourdomain.com.

Using wildcard DNS with Kubernetes has several advantages:

  • Without wildcard DNS, each time you deploy a new service and want to expose it with a domain name, you would have to create a new DNS record. With wildcard DNS, any subdomain of yourdomain.com (like service1.yourdomain.com, service2.yourdomain.com, etc.) will automatically resolve to the IP address specified in the wildcard record.
  • Wildcard DNS is especially useful for development and staging environments where you might frequently spin up and tear down services. The wildcard DNS ensures that these services get valid DNS names without additional configuration.
  • When used in conjunction with an ingress controller (like NGINX or Traefik), wildcard DNS can be powerful. The ingress controller routes traffic based on the hostname: while the wildcard DNS points all subdomains to the ingress controller, the controller itself determines which service should handle each request based on its configuration.

In short, Kubernetes wildcard DNS, combined with an ingress controller, provides a powerful way to dynamically route external traffic to different services in the cluster based on hostname patterns.
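
For illustration, a wildcard record in zone-file syntax could look like the following (yourdomain.com and the IP address are placeholders):

*.yourdomain.com.   300   IN   A   203.0.113.10

Any subdomain then resolves to that address without further DNS changes, which you can verify with a lookup:

dig +short service1.yourdomain.com
# 203.0.113.10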

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • Created a Scaleway Kubernetes cluster
  • Installed Helm on your local computer
  • A TCP or HTTP service you want to expose

Overview of key concepts

Ingress controller

An ingress controller manages external HTTP/HTTPS traffic to services within a Kubernetes cluster based on ingress resource rules. The NGINX ingress controller is used here.
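
To illustrate hostname-based routing, a single ingress resource can fan traffic out to several services; the hostnames and service names below are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hostname-routing-example
spec:
  ingressClassName: nginx
  rules:
  - host: service1.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: service2.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80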

NodePort service

A NodePort service exposes the ingress controller on a specific port (from the range 30000–32767) on each cluster node’s IP address, allowing external access without a Scaleway LoadBalancer.
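
For reference, a minimal NodePort service manifest looks like the following (the names are placeholders; later in this guide, the Helm chart creates an equivalent service for the ingress controller):

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - port: 80        # port inside the cluster
      targetPort: 80  # container port
      nodePort: 30080 # port opened on every node, must be within 30000-32767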

Round-robin DNS

Scaleway provides a wildcard DNS record (<cluster-id>.nodes.k8s.<region>.scw.cloud) that resolves to the public IPs of all nodes in the default pool. This acts as round-robin DNS, spreading requests across nodes as clients resolve the name. However, this is not true load balancing: DNS resolution is client-dependent and may not distribute traffic evenly. Consider deploying a LoadBalancer service for production use.
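
You can observe this behavior by resolving the record directly (example output for a two-node default pool; your cluster ID, region, and IPs will differ):

dig +short <cluster-id>.nodes.k8s.<region>.scw.cloud
# 195.154.111.100
# 195.154.111.101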

Full isolation node pools

Scaleway Kapsule supports “full isolation” node pools, where nodes lack public IPs. This is incompatible with NodePort-based ingress without additional configuration, as external traffic cannot reach nodes directly. A node selector is required to ensure the ingress controller runs on nodes with public IPs (e.g., in the default pool).
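
To check which nodes have public IPs and which pool they belong to, you can list the nodes with their pool label (assuming the standard Kapsule label k8s.scaleway.com/pool-name; verify the exact label with kubectl get nodes --show-labels):

kubectl get nodes -L k8s.scaleway.com/pool-name -o wide

Nodes in a full isolation pool show <none> in the EXTERNAL-IP column.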

Security groups

Kapsule clusters use a default security group (kubernetes-<cluster-id>) that blocks incoming traffic by default. You must open the relevant ports (e.g., 80, 443, and the NodePort range) to allow external access to the ingress controller.

Deployment of the ingress controller

Installation prework

To allow HTTP/HTTPS traffic to the NodePort:

  1. In the Scaleway console, navigate to Compute > CPU & GPU Instances > Security Groups.
  2. Locate the security group kubernetes-<cluster-id>.
  3. Add rules to allow:
    • TCP port 80 (HTTP) from 0.0.0.0/0.
    • TCP port 443 (HTTPS) from 0.0.0.0/0.
    • TCP port range 30000–32767 (NodePort range) from 0.0.0.0/0 for external access to the ingress controller.
Tip

Opening the NodePort range exposes all NodePort services. For production, consider restricting the source IP range or opening only the specific NodePorts you use (e.g., 30080 and 30443, as configured below).

Installing the NGINX ingress controller

Use Helm to deploy the NGINX ingress controller with a NodePort service and a node selector to ensure it runs on nodes with public IPs.

  1. Add the NGINX Ingress Helm repository:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
  2. Create a file named ingress-values.yaml and copy the following values into it:

    controller:
      service:
        type: NodePort
        # Specify NodePort values (optional, Kubernetes assigns random ports if omitted)
        nodePorts:
          http: 30080
          https: 30443
      # Node selector to ensure pods run on nodes with public IPs
      nodeSelector:
        k8s.scaleway.com/pool-name: <name-of-your-public-ip-pool>
      config:
        use-forwarded-headers: "true"
        compute-full-forwarded-for: "true"
      admissionWebhooks:
        enabled: true
    Note
    • type: NodePort exposes the ingress controller on the specified ports (e.g., 30080 for HTTP, 30443 for HTTPS) on each node’s public IP.
    • The nodeSelector ensures the ingress controller pods run on nodes in the <name-of-your-public-ip-pool> pool, which have public IPs. Replace <name-of-your-public-ip-pool> with the name of the pool whose nodes have public IPs (check the node labels with kubectl get nodes --show-labels).
    • Full isolation node pools lack public IPs, making them incompatible with NodePort-based ingress unless routed through nodes with public IPs (e.g., default pool).
    • admissionWebhooks.enabled: true ensures the validating webhook is enabled for ingress resource validation.
  3. Deploy the ingress controller using Helm:

    helm install ingress-nginx ingress-nginx/ingress-nginx --version 4.7.0 -f ingress-values.yaml --namespace ingress-nginx --create-namespace
  4. Check the NodePort service:

    kubectl get svc -n ingress-nginx ingress-nginx-controller

    You will see an output similar to the following example:

    NAME                       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller   NodePort   10.32.12.0    <none>        80:30080/TCP,443:30443/TCP   5m
  • The PORT(S) field shows the NodePorts (e.g., 30080 for HTTP, 30443 for HTTPS).
  • The EXTERNAL-IP is <none> because NodePort uses the nodes’ IPs.
  5. Get the public IPs of the nodes in the default pool:

    kubectl get nodes -o wide

    You will see an output similar to the following example:

    NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
    node-1            Ready    <none>   1h    v1.32.3   172.16.28.3   195.154.111.100
    node-2            Ready    <none>   1h    v1.32.3   172.16.28.4   195.154.111.101
  6. Configure round-robin DNS:

    • Use Scaleway’s wildcard DNS (<cluster-id>.nodes.k8s.<region>.scw.cloud) for external access. For example, for a cluster with ID 39b054d2-5255-4fe3-a327-7ccfac7ffdca in the fr-par region, the DNS name is 39b054d2-5255-4fe3-a327-7ccfac7ffdca.nodes.k8s.fr-par.scw.cloud. This domain name resolves to the public IPs of all nodes in the default pool (e.g., 195.154.111.100, 195.154.111.101).

    • Alternatively, for a custom domain, map your domain (e.g., demo.example.com) to all node IPs via your DNS provider, creating multiple A records:

    demo.example.com   A   195.154.111.100
    demo.example.com   A   195.154.111.101
    Important

    DNS round-robin is not true load balancing. Client DNS resolution may favor one node, leading to uneven traffic distribution. For production, consider using a Scaleway Load Balancer instead.

  7. Create a file named demo-app.yaml and copy the following manifest into it to deploy a simple web application to test the ingress controller:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
      namespace: default
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: nginx:1.21
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app
      namespace: default
    spec:
      selector:
        app: demo-app
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-app-ingress
      namespace: default
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      ingressClassName: nginx
      rules:
      - host: demo.<cluster-id>.nodes.k8s.<region>.scw.cloud
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
    Note

    Replace <cluster-id> with your Kapsule cluster ID (found in the Scaleway console) and <region> with the region of your cluster. For a custom domain, use demo.example.com.

  8. Apply the configuration:

    kubectl apply -f demo-app.yaml
  9. Access the demo application using the wildcard DNS and the HTTP NodePort (e.g., 30080):

    curl http://demo.<cluster-id>.nodes.k8s.<region>.scw.cloud:30080/
    # or, using a node’s public IP
    curl http://195.154.111.100:30080/
    # or, with a custom domain
    curl http://demo.example.com:30080/

You should see the NGINX welcome page. Test HTTPS if configured (port 30443).
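
If the request fails, two quick checks can narrow things down (the resource names below match the Helm release installed above):

# verify the ingress was admitted and lists the expected host
kubectl get ingress demo-app-ingress
# inspect recent controller logs for routing or webhook errors
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=20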

Cleanup (optional)

Once finished, you can remove the demo application and ingress controller from your cluster:

kubectl delete -f demo-app.yaml
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx