Kubernetes Kapsule is a free service: only the resources you allocate to your cluster are billed, with no extra cost.
As microservices architectures become increasingly popular, managing a container-based software solution now comes down to managing the lifecycle of hundreds of containers. This can easily turn all the anticipated benefits of container-based microservices architectures into a system administrator's nightmare. This is where container orchestration comes in and helps simplify the management of such container-based systems.
Kubernetes is currently the most popular and widely adopted container orchestration platform. When managing so many containers, networking becomes a critical aspect of the system, as all these containers need to communicate efficiently with each other and with the outside world. Factor in the ephemeral nature of containers and you end up with a constantly changing networking environment.
As such, networking is one of the most complex aspects of Kubernetes. If you wish to successfully run a publicly accessible service in Kubernetes, you need to understand how to properly leverage the tools Kubernetes gives you. When it comes to exposing an application over the internet, Kubernetes offers several options to choose from, such as Services, Load Balancers, and Ingresses. As these can cause a lot of confusion, let's have a look at these different solutions and understand why Load Balancers are so crucial.
On the surface, Kubernetes networking can seem simple, as its configuration fits in a handful of resources such as Service or Ingress. But beneath the surface, the orchestrator actually leverages many complex networking concepts and technologies that are abstracted away from the user.
If you wish to properly expose your pods, which are ephemeral resources, you need to understand how a crucial Kubernetes resource called "Service" works.
In a nutshell, a Service exposes a set of pods behind a unique IP address called a ClusterIP, and a domain name. They are both only accessible within the Kubernetes cluster and enable the balancing of traffic between the different pods. A Kubernetes service acts as an internal network Load Balancer accessible to workloads in the Kubernetes cluster.
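As a minimal sketch, here is what such a Service looks like. The default type, ClusterIP, gives the pods matching the selector a stable virtual IP and DNS name inside the cluster and balances traffic between them; the resource name, labels, and port numbers below are illustrative.

```yaml
# A minimal ClusterIP Service (the default type).
# Names, labels, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP
  selector:
    app: my-app        # targets pods labeled app=my-app
  ports:
    - port: 80         # the port the Service exposes on its ClusterIP
      targetPort: 8080 # the port the pods actually listen on
```

Inside the cluster, workloads can then reach the pods via `my-app:80` (or the full DNS name `my-app.<namespace>.svc.cluster.local`), regardless of which pods come and go behind the Service.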
There are different Kubernetes Service types to expose your pods to the Internet. One of them is NodePort. A NodePort Service opens a specific port on every node of the cluster, allowing external traffic to reach your pods: traffic arriving at that port on any node is handed to the Service, which then forwards it to a pod in the cluster.
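The same hypothetical Service can be switched to NodePort with one field; every node then listens on the chosen port (which must fall in the default 30000–32767 range) and forwards traffic to the Service.

```yaml
# The same illustrative Service exposed as a NodePort: every node
# now listens on port 30080 and forwards traffic to the Service,
# which routes it to one of the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # optional; Kubernetes picks a free port if omitted
```

With this in place, the application is reachable at `http://<any-node-ip>:30080` from outside the cluster.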
NodePort can be useful for projects that need to test their applications locally before deploying them to production, or when you have only one node in your cluster.
It can also be enough for very low-traffic workloads with no business impact, for example when you don't need advanced features like routing or certificate encryption.
The problem with NodePort is that the available ports are in the 30000–32767 range, which is not appropriate for Internet traffic that usually arrives on ports 80 or 443, and firewalls can easily block such high ports on the Internet.
By doing so, you also create a SPOF (Single Point of Failure) if clients rely on a single node. To avoid this, you would need a public IP address for each node, and you would have to deal with balancing traffic and configuring health checks yourself… In the end, you end up building a Load Balancer in front of your Kubernetes nodes.
This is why the “LoadBalancer” Service type is useful to expose your pods to the outside world. It is configured in front of a NodePort Service and becomes the entry point to your Kubernetes cluster, forwarding external traffic to your pods.
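In the manifest, this comes down to declaring the type; on a managed cluster, the cloud provider's Cloud Controller Manager then provisions the external Load Balancer and wires it to the NodePort that Kubernetes allocates behind the scenes. As before, the names and ports are illustrative.

```yaml
# Declaring type LoadBalancer is enough: the Cloud Controller Manager
# provisions a public Load Balancer that forwards external traffic
# to this Service's pods. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Once the Load Balancer is provisioned, `kubectl get service my-app` shows the public IP assigned to it in the EXTERNAL-IP column.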
When this happens, Scaleway will automatically provision a public network-level Load Balancer that forwards external traffic to the pods behind your Kubernetes LoadBalancer Service. The Load Balancer is automatically kept up to date with configuration changes in your Kubernetes cluster, thanks to the Cloud Controller Manager, and is deleted when you delete your Service in Kubernetes.
One important thing to note is that with a Kubernetes Load Balancer, you are load balancing at layer 4, so there are no SSL/TLS certificates involved at the Load Balancer level: you are effectively doing a passthrough. The SSL/TLS certificate and all HTTP concerns are managed in your pod, which can be an ingress controller.
The standard way to bring the Internet into your cluster is therefore to use a Load Balancer, taking advantage of its ability to balance traffic and increase resilience.
As we just mentioned, one more component you may hear about when it comes to network routing and Kubernetes is the Ingress, a Kubernetes resource that manages external access to services in a cluster, implemented by an Ingress controller. It acts as a reverse proxy, routing incoming traffic to the appropriate service based on the URL path or hostname.
The Ingress controller is very close to a Load Balancer, but it works inside the cluster, at the application level (layer 7), balancing traffic between the different Services in Kubernetes.
In this case, the Load Balancer has the Services as backends, while those same Kubernetes Services have the pods as their backends. The Ingress, placed between the LoadBalancer Service and the backend Services, provides the actual routing configuration (backend services and routes).
Ingress controllers are well known in the community, with names like Nginx, Traefik, and HAProxy being very common. You can use our Application Library to deploy one seamlessly with a Helm chart.
A concrete example
Imagine your company is hosting several services behind the mycompany.com domain. The blog is reachable via blog.mycompany.com, while the shop is accessible via shop.mycompany.com.
In Kubernetes, you will then have pods behind a blog service that will serve the blog, and other pods behind a shop service to serve the shop.
You then use an Ingress to expose both services behind a single public IP address, forwarding traffic for blog.mycompany.com to the blog pods and traffic for shop.mycompany.com to the shop pods.
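The example above can be sketched as an Ingress resource routing each hostname to its own Service; the ingress controller (Nginx, for instance) watches this resource and configures its reverse proxy accordingly. The Service names and ports below follow the example and are illustrative.

```yaml
# An Ingress routing each hostname to its own backend Service.
# Service names and ports follow the blog/shop example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mycompany
spec:
  rules:
    - host: blog.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog   # Service in front of the blog pods
                port:
                  number: 80
    - host: shop.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop   # Service in front of the shop pods
                port:
                  number: 80
```

Both hostnames can then point at the single public IP of the Load Balancer sitting in front of the ingress controller.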
The Load Balancer is the piece of infrastructure you need to expose your containerized application or service on the Internet. Your infrastructure is then able to balance the traffic between pods and endure traffic peaks seamlessly. Resilience is increased, as requests are redirected to available and healthy backend resources, and you get a way to reach the external network without unnecessary exposure.
Other technologies, such as NodePort, exist to provide the same service, but not with the same features in terms of reliability and security. However, they are not mutually exclusive: an Ingress controller has to be associated with a Load Balancer or a NodePort Service.
If you are interested in learning more about Load Balancers, here are a few useful complementary resources:
Configuring a Load Balancer for Your Kubernetes Applications:
Getting Started with Kubernetes: Load Balancers: