Using a Load Balancer to expose your Kubernetes Kapsule ingress controller service
- compute
- kapsule
- kubernetes
- ingress-controller
- k8s
- load-balancer
- wildcard
Overview
An ingress controller is an intelligent HTTP reverse proxy that allows you to expose different websites to the Internet through a single entry point. The ingress controller reconfigures an HTTP web server each time you add or remove an ingress object in Kubernetes.
Our goal in this tutorial is to:
- Deploy a test application on our cluster.
- Expose this test application through an ingress object, using the free DNS wildcard provided by Scaleway.
- Replace the DNS wildcard with a Scaleway Load Balancer.
- Make the Load Balancer IP persistent and reusable between different services.
This tutorial is divided into two parts:
- In the first part, we show how to expose the ingress controller shipped with Kapsule using a Scaleway Load Balancer. We use a simple test application for this tutorial.
- In the second part, we show you how to reserve an IP address for this Load Balancer, since Scaleway Load Balancer IPs are ephemeral by default.
At the end of this tutorial, you will be able to understand and use the ingress controller shipped with Kapsule.
You may need certain IAM permissions to carry out some actions described on this page. This means:
- you are the Owner of the Scaleway Organization in which the actions will be carried out, or
- you are an IAM user of the Organization, with a policy granting you the necessary permission sets
- You have an account and are logged into the Scaleway console
- You have created a Kapsule cluster and deployed an ingress controller using the Application Library in the Easy Deploy feature.
- You have downloaded the corresponding kubeconfig file and kubectl is working
Exposing the ingress controller through a LoadBalancer service
You need a functioning Kubernetes Kapsule cluster with an ingress controller deployed with the Easy Deploy (Application Library) feature to follow this tutorial. To deploy your ingress controller, go to the Easy Deploy tab of your existing cluster, create a new deployment, and select the Ingress controller of your choice in the application library.
By default on Kapsule, these ingress controllers are deployed using a hostPort. This means that the ingress controller is accessible on all the machines of your cluster on ports 80 and 443.
We have chosen this default because adding a Load Balancer incurs an extra cost for the end user. By using a host port, you can use ingress objects right after deploying your cluster, which is a good solution for test and development purposes.
In the second part of this tutorial, we will see how to use a Load Balancer to make this setup production-ready.
Let us check this on the example below. The ingress controller is deployed on the cluster right after cluster creation:
```
# kubectl get ds -n kube-system
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
[..]
nginx-ingress   1         1         1       1            1           <none>          13m
```
The ingress controller is exposed to the Internet with the host port configuration.
```
# kubectl get ds -n kube-system nginx-ingress -o yaml
[..]
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
```
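Thanks to the hostPort configuration above, every node serves ingress traffic directly on ports 80 and 443. The sketch below only illustrates the addressing scheme; the node IP is a placeholder, not a real endpoint:

```shell
# Placeholder node IP: substitute the public IP of any node in your cluster.
NODE_IP="51.15.207.3"

# hostPort binds the controller to ports 80 (HTTP) and 443 (HTTPS) on the
# node itself, so with a real cluster you could reach it directly:
#   curl -H "Host: <your-ingress-host>" "http://${NODE_IP}/"
# Here we only print the two endpoints to show where requests would land:
for PORT in 80 443; do
  echo "${NODE_IP}:${PORT}"
done
```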
Using wildcards
By default on Kapsule, a wildcard round-robin DNS record is created, pointing to all of your cluster nodes. This means that every time you add or delete a node in your cluster, the DNS record is updated to reflect the current state of your nodes.
Once again, we can check it on the example cluster (you can find the FQDN of your cluster in the Scaleway console):
```
# host test.49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud
test.49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud is an alias for 49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud
49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud has address 51.15.207.3
```
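The wildcard record follows a predictable naming scheme: any label prefixed to `<cluster-id>.nodes.k8s.<region>.scw.cloud` resolves to your nodes. A small sketch, using the example cluster ID and region from the output above, of how such names are formed:

```shell
# Example cluster ID and region taken from the example cluster above.
CLUSTER_ID="49087273-8296-46cc-a82c-f08cb9623ce2"
REGION="fr-par"

# Any label works thanks to the wildcard: "test", "coffee", anything else.
for LABEL in test coffee anything; do
  echo "${LABEL}.${CLUSTER_ID}.nodes.k8s.${REGION}.scw.cloud"
done
```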
To demonstrate how it works, we will use a test application called cafe-ingress, available at the URL below. It is a simple application that serves different web pages depending on the URL you request.
```
kubectl create -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/examples/complete-example/cafe.yaml
```
Create the ingress object with the following YAML manifest. Note that we use our DNS wildcard in the host field of this file.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: test.49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
```
```
# kubectl create -f cafe-ingress.yaml
# kubectl get ing
NAME           CLASS    HOSTS                                                                  ADDRESS   PORTS   AGE
cafe-ingress   <none>   test.49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud             80      4m11s
```
You can test that this ingress is configured correctly by accessing your test application:
```
# curl http://test.49087273-8296-46cc-a82c-f08cb9623ce2.nodes.k8s.fr-par.scw.cloud/coffee
Server address: 100.64.0.181:8080
Server name: coffee-5f56ff9788-68xs2
Date: 28/Apr/2020:13:34:26 +0000
URI: /coffee
Request ID: 9d2ee64655b936384a64cf89e7a975b0
```
Using a reserved IP as the IP address of your LoadBalancer
When creating Kubernetes services, you can create a service of the LoadBalancer type. This is what we did above to expose the ingress controller. Creating such a service triggers a call to the cloud-controller-manager, which spawns a new Load Balancer on Scaleway.
By default, a new public IP is associated with this Load Balancer, meaning that if you want to create a DNS record (for instance) pointing at this IP, you have to recreate it every time you spawn a new LoadBalancer service.
There is a way to avoid that and to:
- Reserve an IP address.
- Put DNS records on it.
- Re-use this IP address as many times as you want for different services (one service at a time).
Reserving a LoadBalancer IP using the Scaleway API
Use the Scaleway API to reserve an IP address (this address has to be a Load Balancer IP). To create a secret key, please refer to the corresponding tutorial. Then, reserve the IP address:
```
curl -X POST "https://api.scaleway.com/lb/v1/regions/$SCW_DEFAULT_REGION/ips" \
  -H "X-Auth-Token: $SCW_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"project_id\":\"$SCW_DEFAULT_PROJECT_ID\"}" | jq -r .ip_address
```
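The API returns a JSON object describing the reserved IP, and `jq -r .ip_address` extracts just the address from it. A sketch with a hardcoded sample response (the field values are made up for illustration and only mimic the shape of the real reply):

```shell
# Illustrative sample of the response body (values are made up).
RESPONSE='{"id":"11111111-2222-3333-4444-555555555555","ip_address":"51.159.24.7","reverse":null}'

# Extract the address exactly as the pipeline above does:
echo "$RESPONSE" | jq -r .ip_address
# → 51.159.24.7
```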
This IP address can be reused every time you create a LoadBalancer service (as long as no other Load Balancer is using it) by setting the loadBalancerIP field on the service. Let's say that in our case this IP address is 51.159.24.7.
Using this IP Address on Kubernetes LoadBalancer services
When you create or patch a Load Balancer service, you can specify the previously reserved IP using the loadBalancerIP field. In the example below (where the reserved public IP address is 51.159.24.7), we will:
- Create a new Load Balancer with the IP reserved before by patching the tea-svc service.
- Check the IP address was correctly set on this service and that this service is now a LoadBalancer one.
- Delete the tea-svc service (showing that the IP address is not deleted).
- Create a new Load Balancer with the IP reserved before by patching the coffee-svc service.
- By doing this, we show that we can use a reserved IP when creating a LoadBalancer service, and that we can "move" this IP from one service to another.
```
# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
coffee-svc   ClusterIP   10.32.102.89   <none>        80/TCP    9s
kubernetes   ClusterIP   10.32.0.1      <none>        443/TCP   3m56s
tea-svc      ClusterIP   10.32.57.52    <none>        80/TCP    9s
# kubectl patch svc tea-svc --type merge --patch '{"spec":{"loadBalancerIP": "51.159.24.7","type":"LoadBalancer"}}'
service/tea-svc patched
# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
coffee-svc   ClusterIP      10.32.102.89   <none>        80/TCP         44s
kubernetes   ClusterIP      10.32.0.1      <none>        443/TCP        4m31s
tea-svc      LoadBalancer   10.32.57.52    <pending>     80:32434/TCP   44s
# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
coffee-svc   ClusterIP      10.32.102.89   <none>        80/TCP         45s
kubernetes   ClusterIP      10.32.0.1      <none>        443/TCP        4m32s
tea-svc      LoadBalancer   10.32.57.52    51.159.24.7   80:32434/TCP   45s
# kubectl delete svc tea-svc
service "tea-svc" deleted
# kubectl patch svc coffee-svc --type merge --patch '{"spec":{"loadBalancerIP": "51.159.24.7","type":"LoadBalancer"}}'
service/coffee-svc patched
# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
coffee-svc   LoadBalancer   10.32.102.89   <pending>     80:31094/TCP   100s
kubernetes   ClusterIP      10.32.0.1      <none>        443/TCP        5m27s
# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
coffee-svc   LoadBalancer   10.32.102.89   51.159.24.7   80:31094/TCP   103s
kubernetes   ClusterIP      10.32.0.1      <none>        443/TCP        5m30s
```
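The patch payload used in the transcript above is plain JSON, so it can be sanity-checked locally before applying it to a live cluster. A small sketch using jq (the same tool used earlier in this tutorial) on the exact payload from the transcript:

```shell
# The same merge patch passed to "kubectl patch svc ... --patch".
PATCH='{"spec":{"loadBalancerIP": "51.159.24.7","type":"LoadBalancer"}}'

# Verify the two fields kubectl will merge into the service spec:
echo "$PATCH" | jq -r .spec.type            # → LoadBalancer
echo "$PATCH" | jq -r .spec.loadBalancerIP  # → 51.159.24.7
```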
As you can see, we were able to keep the same IP across different Load Balancers. This way we can, for instance, keep the same DNS configuration while moving it from one Load Balancer instance type to another. We have also seen that Kapsule manages the configuration of the Load Balancer for us: the cloud-controller-manager is in charge of the complete lifecycle of the Load Balancer.
You might be interested in the following tutorial: