
Securing a cluster with a regional Private Network

Reviewed on 05 May 2023 • Published on 05 May 2023
Security & Identity (IAM):

You may need certain IAM permissions to carry out some actions described on this page. This means:

  • you are the Owner of the Scaleway Organization in which the actions will be carried out, or
  • you are an IAM user of the Organization, with a policy granting you the necessary permission sets

Scaleway Kubernetes Kapsule provides a managed environment to create, configure and run a cluster of preconfigured machines for containerized applications. This allows you to create Kubernetes clusters without the complexity of managing the infrastructure.

All new Kubernetes clusters are deployed with a Scaleway regional Private Network. We recommend migrating existing clusters to a regional Private Network as well, using the Scaleway console.

Currently, worker nodes are still shipped with public IP addresses. These IPs are used only for outgoing traffic from your nodes towards the Internet; no services listen on them by default. Although nodes keep public IP addresses for limited maintenance and operational reasons, your cluster remains secure: inbound traffic on these addresses is blocked by default. Find more details below.

Why have a Private Network for your Kubernetes Kapsule cluster?

A Private Network brings additional features to your cluster, such as:

  • Implementation of security best practices: all Scaleway resources (Instances, Load Balancers, Managed Databases) can communicate securely, with a reduced attack surface
  • Compliance with market expectations (enterprise customers)
  • Less manual configuration work, such as security group and IP range configuration
  • Lower latency

You can use the Scaleway console both to create new regional Private Networks and to add clusters to existing ones. Each cluster is automatically configured with a /22 IP subnet thanks to Scaleway’s IPAM.

Why do worker nodes still have public IPs?

Public IP addresses are currently required for the administration of your worker nodes and their communication with the control plane, to ensure the full functionality of your clusters. However, incoming traffic on these public IPs is blocked by default by a security group set to DROP ALL, and only the outgoing traffic necessary for Kubernetes to run (e.g., apt, Docker image pulls) is allowed.

Can I use a Public Gateway with my Private Network to exit all outgoing traffic from the nodes?

At the moment, outgoing traffic exits through the public IP of each node. The Public Gateway will become compatible with IPAM and will therefore be supported by Kapsule in a future release.

How can I migrate my existing clusters to regional Private Networks?

Important:

If you manage your cluster using Terraform, be aware that the private_network_id field can only be set at cluster creation and cannot be updated later: any change to this field will cause the cluster to be destroyed and then recreated. You can guard against accidental recreation with a lifecycle block, as sketched below.
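
A minimal sketch of such a guard, assuming the scaleway_vpc_private_network.kapsule and data.scaleway_k8s_version.latest definitions from the full Terraform example further down this page; prevent_destroy is a generic Terraform lifecycle argument, not something specific to Kapsule:

resource "scaleway_k8s_cluster" "kapsule" {
  name                        = "open-pn-test"
  version                     = data.scaleway_k8s_version.latest.name
  cni                         = "cilium"
  private_network_id          = scaleway_vpc_private_network.kapsule.id
  delete_additional_resources = true

  lifecycle {
    # Any change that would force replacement (for example, editing
    # private_network_id) makes Terraform reject the plan instead of
    # destroying and recreating the cluster.
    prevent_destroy = true
  }
}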

You can use the Scaleway console to migrate your existing clusters (recommended).

  1. In the Kubernetes section of the Scaleway console, locate your public clusters in the list.
  2. Click the cluster’s name to navigate to your cluster information.
  3. Use the recommended migration action in your cluster information, or go to the Private Network tab to start the migration.
    Note:

    Migrating an existing cluster is only available for clusters with a Kubernetes version above v1.22.0. Note that once a cluster is migrated, you cannot roll back this change.

What will happen after starting the migration:

  1. Your control plane will restart for the first time: the Kubernetes API of your cluster will be temporarily unavailable.
  2. Your pools will be upgraded to migrate the nodes into the Private Network. All of your nodes will be rebooted according to the specified Upgrade Policy of your pools.
  3. Once all your nodes have rebooted, your control plane will be configured to use the Private Network and restarted: your Load Balancers will be reconfigured and migrated to the Private Network.
  4. The CNI of your cluster will be reconfigured and restarted to use the Private Network.
    Important:

    During step 4, the Pod network of your cluster will be temporarily unavailable for 1 to 10 minutes while all Pods of the CNI are restarted, depending on the size of your cluster and the CNI you are using. Your Pods will be unable to communicate with each other during this step.

To further improve the security of your Kubernetes cluster, you can configure a security group to block inbound traffic on the public network interface of your nodes. Be careful when updating the security group: it must be stateful, and if you currently use Kubernetes services with NodePorts, need SSH access, or need ICMP access, you must add rules to allow this traffic (see the Terraform sketch below).
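
As a hedged illustration (the resource name and the exact rule set are assumptions to adapt to your own needs), such a security group could be expressed in Terraform as follows, using the same resource type as the fuller example later on this page:

resource "scaleway_instance_security_group" "kapsule_nodes" {
  # Illustrative stand-alone resource; in practice these rules belong in the
  # security group that protects your cluster's nodes.
  name                    = "kubernetes-example-rules"
  stateful                = true      # stateful mode must be enabled
  inbound_default_policy  = "drop"    # block all inbound traffic by default
  outbound_default_policy = "accept"

  inbound_rule {
    # Allow SSH access to the nodes, only if you need it.
    action   = "accept"
    protocol = "TCP"
    port     = 22
  }

  inbound_rule {
    # Allow ICMP (ping) towards the nodes, only if you need it.
    action   = "accept"
    protocol = "ICMP"
  }

  inbound_rule {
    # Allow the default Kubernetes NodePort range, only if you expose
    # services through NodePorts on the public interface.
    action     = "accept"
    protocol   = "TCP"
    port_range = "30000-32767"
  }
}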

Is Kosmos compatible with Private Networks?

Only Kapsule clusters can use a Private Network. Kosmos uses Kilo as its CNI, which relies on WireGuard to create a VPN mesh between nodes for communication between Pods. Any node in a Kosmos cluster, whether inside Scaleway or outside, uses these VPN tunnels to communicate securely by design.

What future options will there be for isolation?

We are currently at the first phase of an isolated Kubernetes infrastructure. In the future, additional features are planned, such as:

  • Compatibility with Scaleway Public Gateways, to be used as a single exit point for the cluster
  • Extended VPC integration of Kubernetes Kapsule, with support for Multi-AZ features
  • Isolation of the control plane on top of your worker nodes

Will the control plane also be located inside the Private Network?

Currently, only worker nodes are located in the Private Network of your cluster. Communication between the nodes and the control plane uses the public IP of the node.

Which IP ranges are used for the Private Network of my cluster?

We automatically assign a /22 IP subnet from a Private Network to your cluster.

How can I access my cluster via my nodes’ public IPs for specific use cases?

Once you create a Kapsule cluster, all nodes, in particular those with the Private Network feature enabled, are protected by a security group named kubernetes-<cluster-id>. Any changes made to this security group apply to all nodes in the cluster.

If you wish to allow access to the nodes through a public IP using a specific port/protocol, you can modify the security group after creating the cluster by following these steps:

From the Scaleway console

  1. Go to the Instances section of the Scaleway console.
  2. Click the Security Groups tab. A list of your existing security groups displays.
  3. Click the name of the security group that is configured for your Instance, which is named kubernetes-<cluster-id>.
  4. Click the Rules tab. A list of rules configured for this group displays.
  5. Click the edit icon to edit the security group rules.
  6. Click Add inbound rule to configure a new rule and customize it according to your requirements.
  7. Apply your custom rules by clicking the validate icon.

Using Terraform

Important:

Adding the private_network_id attribute to an existing Terraform cluster definition will cause the existing Kapsule cluster to be destroyed and recreated. For more information, refer to Scaleway’s Terraform provider documentation (v2.19.0).

If you are using Terraform to create your cluster, you can create a security group resource after creating the cluster resource and before creating the pool resource.
You can find a Terraform configuration example below:

data "scaleway_k8s_version" "latest" {
name = "latest"
}
resource "scaleway_vpc_private_network" "kapsule" {
name = "pn_kapsule"
tags = ["kapsule"]
}
resource "scaleway_k8s_cluster" "kapsule" {
name = "open-pn-test"
version = data.scaleway_k8s_version.latest.name
cni = "cilium"
private_network_id = scaleway_vpc_private_network.kapsule.id
delete_additional_resources = true
depends_on = [scaleway_vpc_private_network.kapsule]
}
resource "scaleway_instance_security_group" "kapsule" {
name = "kubernetes ${split("/", scaleway_k8s_cluster.kapsule.id)[1]}"
inbound_default_policy = "drop"
outbound_default_policy = "accept"
stateful = true
inbound_rule {
action = "accept"
protocol = "UDP"
port = "500"
}
depends_on = [scaleway_k8s_cluster.kapsule]
}
resource "scaleway_k8s_pool" "default" {
cluster_id = scaleway_k8s_cluster.kapsule.id
name = "default"
node_type = "DEV1-M"
size = 1
autohealing = true
wait_for_pool_ready = true
depends_on = [scaleway_instance_security_group.kapsule]
}
resource "scaleway_rdb_instance" "main" {
name = "pn-rdb"
node_type = "DB-DEV-S"
engine = "PostgreSQL-14"
is_ha_cluster = true
disable_backup = true
user_name = "username"
password = "thiZ_is_v&ry_s3cret" # Obviously change password here or generate one at runtime through null_resource and display it via output.
private_network {
pn_id = scaleway_vpc_private_network.kapsule.id
}
}

Are Managed Databases compatible with Kubernetes Kapsule on Private Network?

Yes, they are. You have to create a new endpoint on your Database Instance, specifying an automated configuration of your Private Network with the Scaleway IPAM service. You can find an example below:

curl --request POST \
  --url "https://api.scaleway.com/rdb/v1/regions/$REGION/instances/$INSTANCE_ID/endpoints" \
  --header "Content-Type: application/json" \
  --header "X-Auth-Token: $SCW_TOKEN" \
  --data '{
    "endpoint_spec": {
      "private_network": {
        "ipam_config": {},
        "private_network_id": "'"$PRIVATE_NETWORK_ID"'"
      }
    }
  }'

Please refer to the Managed Database for PostgreSQL and MySQL API documentation for further information.

Note:
  • This action replaces your current endpoint, which means you might need to update any environment configurations that point to the old endpoint.
  • This feature is not yet provided in the Scaleway console.

Are Managed Load Balancers compatible with Kubernetes Kapsule Private Networks?

Managed Load Balancers support Private Networks with private backends and public frontends, meaning traffic is forwarded to your worker nodes through your cluster’s Private Network.

Note:

If you have a trusted IP configured on your ingress controller, note that the request will come from a private IP.