
Securing a cluster with a Private Network

Reviewed on 08 April 2024 • Published on 05 May 2023

Scaleway Kubernetes Kapsule provides a managed environment to create, configure, and run a cluster of preconfigured machines for containerized applications. This allows you to create Kubernetes clusters without the complexity of managing the infrastructure.

All new Kubernetes clusters are deployed with a Scaleway Private Network using controlled isolation.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • Created a Kubernetes Kapsule cluster
Tip

We recommend migrating existing clusters to a Private Network via the Scaleway console. Public-only endpoints are deprecated and will reach end of support in Q1/2024.

By default, worker nodes are currently delivered with public IP addresses (controlled isolation). These IPs are used solely for outgoing traffic from your nodes to the internet; no services listen on them by default.

Even though these nodes have public IP addresses for specific maintenance and operational purposes, your cluster’s security remains uncompromised. See below for more information. Optionally, you can configure your nodes inside an entirely private network using full isolation.

Why have a Private Network for your Kubernetes Kapsule cluster?

A Private Network offers crucial functionalities to your cluster, including:

  • Implementation of security best practices: all Scaleway resources (Instances, Load Balancers, Managed Databases) can communicate securely, with a reduced attack surface. For further information, refer to our blog post 10 best practices to configure your VPC.
  • Compliance with market expectations, particularly from enterprise customers
  • Less manual configuration work such as security group configuration, IP range configuration, etc.
  • Multi-AZ compatibility allows you to create node pools in several Availability Zones for better resilience.
  • Lower latency

How can I migrate my existing clusters to Private Networks?

Tip

We recommend using the Scaleway console to migrate your existing clusters, although you can also use the CLI or Terraform. A Terraform sketch is shown after the steps below.

  1. In the Kubernetes section of the Scaleway console, locate your public clusters in the list.
  2. Click the cluster’s name to navigate to your cluster information.
  3. Use the recommended migration action in your cluster information, or go to the Private Network tab to start the migration.
    Note

    Migrating an existing cluster is only available for clusters with a Kubernetes version above v1.22.0. Note that once a cluster is migrated, you cannot roll back this change.
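
If you manage your cluster with Terraform, the migration can be triggered by attaching a Private Network to the existing cluster resource. The following is a minimal sketch, assuming your cluster is already described by a scaleway_k8s_cluster resource; the resource names and attribute values shown are illustrative, not taken from your configuration.

# Hypothetical example: attaching a Private Network to an existing Kapsule cluster.
resource "scaleway_vpc_private_network" "kapsule" {
  name = "pn-kapsule"
}

resource "scaleway_k8s_cluster" "existing" {
  # Keep all other existing attributes of your cluster unchanged.
  name                        = "my-existing-cluster" # illustrative value
  version                     = "1.28"                # illustrative value
  cni                         = "cilium"              # illustrative value
  delete_additional_resources = false

  # Adding this attribute to an already-created cluster starts the migration
  # to the Private Network on the next terraform apply.
  private_network_id = scaleway_vpc_private_network.kapsule.id
}

Once applied, the migration proceeds as described below; because the migration cannot be rolled back, the attribute cannot simply be removed afterwards.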

What will happen after starting the migration:

  1. Your control plane will restart a first time: the Kubernetes API of your cluster will be temporarily unavailable.
  2. Your pools will be upgraded to migrate the nodes into the Private Network. All of your nodes will be rebooted according to the specified Upgrade Policy of your pools.
  3. Once all your nodes have rebooted, your control plane will be configured to use the Private Network and restarted: your Load Balancers will be reconfigured and migrated to the Private Network.
  4. The CNI of your cluster will be reconfigured and restarted to use the Private Network.
    Important

    During step 4, the pod network of your cluster will be temporarily unavailable for 1 to 10 minutes while all CNI pods restart; the exact duration depends on the size of your cluster and the CNI you are using. Your pods will be unable to communicate with each other during this step.

To further improve the security of your Kubernetes cluster, you can configure a security group to block inbound traffic on the public network interface of your nodes. However, be careful when updating the security group: the stateful option must remain enabled, and if you use Kubernetes services of type NodePort, or need SSH or ICMP access to your nodes, you must add rules to allow this traffic (see the sketch below).
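
As an illustration, here is a minimal Terraform sketch of such a security group, assuming the standard Kubernetes NodePort range (30000-32767) and that you want to keep SSH and ICMP access. The resource name and the exact set of rules are assumptions to adapt to your own needs.

# Hypothetical sketch of a hardened security group for Kapsule nodes.
resource "scaleway_instance_security_group" "kapsule_nodes" {
  name                    = "kubernetes <cluster-id>" # must match the security group of your cluster
  stateful                = true                      # the stateful option must remain enabled
  inbound_default_policy  = "drop"                    # block all inbound traffic on the public interface
  outbound_default_policy = "accept"

  # Allow NodePort services (default Kubernetes range), only if you expose NodePorts publicly.
  inbound_rule {
    action     = "accept"
    protocol   = "TCP"
    port_range = "30000-32767"
  }

  # Allow SSH access to the nodes, only if you need it.
  inbound_rule {
    action   = "accept"
    protocol = "TCP"
    port     = "22"
  }

  # Allow ICMP (ping), only if you need it.
  inbound_rule {
    action   = "accept"
    protocol = "ICMP"
  }
}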

What is the difference between controlled isolation and full isolation?

Worker node pools with controlled isolation inside a Private Network have both private and public IPs, with the public IPs used only for outgoing traffic. Fully isolated nodes get only a private IP, with all outgoing traffic channeled through a Public Gateway for secure external connections.

Controlled isolation (default)
  • Description: Worker nodes are assigned both private IPs and public IPs. All inbound traffic on the public interface is dropped by default using security groups.
  • Benefits: Strong security; dynamic public IPs to reach out to external providers while avoiding rate limiting.
  • Notice: Default choice for new clusters. Can be used in combination with pools using full isolation.

Full isolation (optional)
  • Description: Worker nodes are set up without public IPs (100% Private Network). A Public Gateway is required.
  • Benefits: Maximum security; a stable egress IP for secure connections to external providers.
  • Notice: Requires a Public Gateway, which incurs additional costs.

None (deprecated)
  • Description: Clusters without a Private Network attached. Nodes have public-only endpoints.
  • Benefits: n/a
  • Notice: Deprecated in October 2023.
Important

Removing or detaching the Public Gateway from the Private Network creates a single point of failure for node pools with full isolation, as those nodes will no longer be able to reach their control plane.

What are the migration strategies provided?

As we transition towards Private Networks for Kapsule clusters, we understand that you may have varying needs and preferences when it comes to migrating your existing clusters.

We recommend two distinct migration strategies tailored to various requirements and limitations.

Tip

Migration Strategy 1 is very similar to upgrading the Kubernetes version of a cluster. If you are familiar with upgrading your Kubernetes cluster, this method should feel intuitive. For detailed steps on how a cluster upgrade is performed, refer to the cluster upgrade documentation.

Migration Strategy 1: API-Guided Migration

  • Description: Uses the migration API provided by Kubernetes Kapsule.
  • Implementation:
    1. Access the migration API: Scaleway API Documentation.
    2. Initiate the migration via the Private Network tab of the Scaleway console.
    3. For Terraform users: attach a Private Network ID to the cluster resource to trigger the migration.
  • Downtime: Expected 1 to 10 minutes, caused by the pod network being unavailable while all CNI pods restart. The duration depends on cluster size and the CNI used.
  • Pros:
    • Detailed migration guidelines: Scaleway Documentation.
    • Efficient migration via the provided API.
  • Cons:
    • Downtime during migration.

Migration Strategy 2: Parallel Cluster Deployment

  • Description: Deploys a new Kapsule cluster on a Private Network in parallel to the existing public cluster.
  • Implementation:
    1. Create a new Kapsule cluster on a Private Network.
    2. Manually deploy services and containers.
    3. Run both clusters simultaneously.
    4. Once the new cluster is verified functional, redirect traffic to it.
    5. Decommission the old cluster. Configuration intricacy can affect this method.
  • Downtime: Aims to reduce or even avoid downtime. The exact duration depends on the efficiency of the parallel deployment and the migration process.
  • Pros:
    • Potential for negligible downtime.
    • Thorough testing and validation of the new cluster before traffic rerouting.
  • Cons:
    • Needs a new cluster, leading to extra expenses.
    • Complexity varies with workload and configuration.
    • Replication and validation can prolong the migration.
Important

If you use persistent storage in Kubernetes, consider migrating your cluster using Migration Strategy 1.

Scaleway product compatibility

Can I use a Public Gateway with my Private Network to route all outgoing traffic from the nodes?

Yes. You are required to attach a Public Gateway when setting up a node pool with full isolation. This allows Kapsule nodes with private IPs to route their outgoing traffic through the Public Gateway. For detailed steps on setting up a Public Gateway, refer to our Public Gateway documentation; a Terraform sketch is also shown after the note below. Keep in mind that removing or detaching the Public Gateway from the Private Network can cause a single point of failure in the cluster, preventing fully isolated node pools from accessing the control plane.

Note

To use a Public Gateway with a Private Network on a Kapsule cluster, make sure that:

  • The Public Gateway is located in the same region as the Kapsule cluster.
  • Dynamic NAT is activated (enabled by default).
  • Advertise DefaultRoute is activated (enabled by default).
  • Your Public Gateway is fully integrated with IPAM and is not a legacy gateway.
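
Below is a minimal Terraform sketch of a Public Gateway attached to the cluster's Private Network that satisfies these requirements. The resource names are illustrative, and attribute names may differ slightly depending on your provider version, so treat this as an outline rather than a definitive configuration.

# Hypothetical sketch: Public Gateway for a fully isolated Kapsule node pool.
resource "scaleway_vpc_private_network" "kapsule" {
  name = "pn-kapsule" # the Private Network attached to your cluster (may already exist in your configuration)
}

# Reserve a flexible IP so the gateway keeps a stable egress IP.
resource "scaleway_vpc_public_gateway_ip" "main" {}

resource "scaleway_vpc_public_gateway" "main" {
  name  = "kapsule-gateway"
  type  = "VPC-GW-S"
  ip_id = scaleway_vpc_public_gateway_ip.main.id
}

# Attach the gateway to the Private Network. Masquerade provides Dynamic NAT, and
# pushing the default route lets fully isolated nodes send their outgoing traffic
# (including traffic to the control plane) through the gateway.
resource "scaleway_vpc_gateway_network" "kapsule" {
  gateway_id         = scaleway_vpc_public_gateway.main.id
  private_network_id = scaleway_vpc_private_network.kapsule.id
  enable_masquerade  = true # Dynamic NAT

  ipam_config {
    push_default_route = true # Advertise DefaultRoute
  }
}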

Is Kosmos compatible with Private Networks?

Only Kapsule can use a Private Network.

Kosmos uses Kilo as its CNI, which relies on WireGuard to create a VPN mesh between nodes for communication between pods. Any node in Kosmos, whether hosted at Scaleway or elsewhere, uses these VPN tunnels to communicate securely by design.

Are Managed Databases compatible with Kubernetes Kapsule on Private Networks?

Yes, they are. Since July 2023, the automatic allocation of IP addresses via IPAM is available for Managed Databases. These IP addresses are compatible with Scaleway’s VPC, which is now in General Availability. For more information about product compatibility, refer to the VPC documentation.

For any new Private Networks you create and attach to Managed Databases after July 2023, your private IP addresses are automatically allocated.

If you have set up Private Network endpoints for your Managed Databases before July 2023 and want to connect to Kapsule via a Private Network, you must first delete your old Private Network endpoint. Then, you can create a new one, either via the Scaleway console or the API.

In the example below, we show you how to do so via the API. The empty "ipam_config": {} object requests the automated configuration of your Private Network via IPAM.

curl --request POST \
  --url https://api.scaleway.com/rdb/v1/regions/$REGION/instances/$INSTANCE_ID/endpoints \
  --header "Content-Type: application/json" \
  --header "X-Auth-Token: $SCW_SECRET_KEY" \
  --data '{
    "endpoint_spec": {
      "private_network": {
        "ipam_config": {},
        "private_network_id": "<PRIVATE_NETWORK_ID>"
      }
    }
  }'
Note

Replace <PRIVATE_NETWORK_ID> with the ID of the Private Network in question.

Important

This action adds a new endpoint. If you want to use it in your environment, you need to update the endpoint in your configuration.

Refer to the Managed Database for PostgreSQL and MySQL API documentation for further information.

Are managed Load Balancers compatible with Kubernetes Kapsule Private Networks?

Managed Load Balancers support Private Networks with private backends and public frontends, meaning the traffic is forwarded to your worker nodes through your cluster's Private Network.

Additionally, private Load Balancers are supported. These Load Balancers have no public IPs on either their backends or frontends.

Note

If you have a trusted IP configured on your ingress controller, note that the request will come from a private IP.

Which IP ranges are used for the Private Network of my cluster?

We automatically assign a /22 IP subnet from a Private Network to your cluster.

How can I access my cluster via my nodes’ public IPs for specific use cases?

Once you create a cluster in Kapsule, all nodes, particularly those with the Private Network feature enabled, are protected by a security group named kubernetes <cluster-id>. Any changes made to this security group will apply to all nodes in the cluster.

If you wish to allow access to the nodes through a public IP using a specific port/protocol, you can modify the security group after creating the cluster by following these steps:

From the Scaleway console

  1. Go to the Instances section of the Scaleway console.
  2. Click the Security groups tab. A list of your existing security groups displays.
  3. Click the name of the security group that is configured for your Instance, which is named kubernetes <cluster-id>.
  4. Click the Rules tab. A list of rules configured for this group displays.
  5. Click the edit icon to edit the security group rules.
  6. Click Add inbound rule to configure a new rule and customize it according to your requirements.
  7. Apply your custom rules by clicking the validate icon.

Using Terraform

Important

Existing Kapsule clusters can be migrated by adding the private_network_id attribute to an existing Terraform definition. For more information, refer to Scaleway's Terraform provider documentation.

If you are using Terraform to create your cluster, you can create a security group resource after creating the cluster resource and before creating the pool resource. You can find a Terraform configuration example below:

data "scaleway_k8s_version" "latest" {
name = "latest"
}
resource "scaleway_vpc_private_network" "kapsule" {
name = "pn_kapsule"
tags = ["kapsule"]
}
resource "scaleway_k8s_cluster" "kapsule" {
name = "open-pn-test"
version = data.scaleway_k8s_version.latest.name
cni = "cilium"
private_network_id = scaleway_vpc_private_network.kapsule.id
delete_additional_resources = true
depends_on = [scaleway_vpc_private_network.kapsule]
}
resource "scaleway_instance_security_group" "kapsule" {
name = "kubernetes ${split("/", scaleway_k8s_cluster.kapsule.id)[1]}"
inbound_default_policy = "drop"
outbound_default_policy = "accept"
stateful = true
inbound_rule {
action = "accept"
protocol = "UDP"
port = "500"
}
depends_on = [scaleway_k8s_cluster.kapsule]
}
resource "scaleway_k8s_pool" "default" {
cluster_id = scaleway_k8s_cluster.kapsule.id
name = "default"
node_type = "DEV1-M"
size = 1
autohealing = true
wait_for_pool_ready = true
depends_on = [scaleway_instance_security_group.kapsule]
}
resource "scaleway_rdb_instance" "main" {
name = "pn-rdb"
node_type = "DB-DEV-S"
engine = "PostgreSQL-14"
is_ha_cluster = true
disable_backup = true
user_name = "username"
password = "thiZ_is_v&ry_s3cret" # Obviously change password here or generate one at runtime through null_resource and display it via output.
private_network {
pn_id = scaleway_vpc_private_network.kapsule.id
}
}

Will the control plane also be located inside the Private Network?

Currently, only worker nodes are located in the Private Network of your cluster. Communication between the nodes and the control plane uses the public IP of each node. Nodes using full isolation reach the control plane through the Public Gateway attached to the cluster's Private Network.

What future options will there be for isolation?

  • Control plane in isolation with nodes and API communicating in the same isolated network. The CNI’s network policies will restrict/allow a range of IPs or ports to control who can access the API server.
  • End of support for legacy public clusters. All clusters will need to migrate to controlled isolation (Private Networks) by Q1/2024.