Deploying a demo application on Scaleway Kubernetes Kapsule
This tutorial guides you through deploying a demo application (`whoami`) on Scaleway Kubernetes Kapsule. You will create a managed Kubernetes cluster, deploy a sample application, configure an ingress controller for external access, set up auto-scaling, and test the setup.
This tutorial is designed for users with a basic understanding of Kubernetes concepts like pods, deployments, services, and ingress.
Before you start
To complete the actions presented below, you must have:
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- A valid Scaleway API key
- Installed `kubectl`, `scw`, and `helm` on your local computer
- Basic familiarity with Kubernetes concepts (Pods, Deployments, Services, Ingress)
Configure Scaleway CLI
Configure the Scaleway CLI (v2) to manage your Kubernetes Kapsule cluster.
- Install the Scaleway CLI (if not already installed):

  ```bash
  curl -s https://raw.githubusercontent.com/scaleway/scaleway-cli/master/scripts/get.sh | sh
  ```

- Initialize the CLI with your API key:

  ```bash
  scw init
  ```

  Follow the prompts to enter your `SCW_ACCESS_KEY`, `SCW_SECRET_KEY`, and default region (e.g., `pl-waw` for Warsaw, Poland).
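After `scw init` completes, it can help to confirm the CLI actually picked up your profile before creating resources. A quick sanity check (output fields vary by CLI version):

```bash
# Confirm the CLI is installed and show the active configuration
# (access key, default region, and so on).
if command -v scw >/dev/null 2>&1; then
  scw info
  cli_present=yes
else
  echo "scw not found on PATH" >&2
  cli_present=no
fi
```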
Create a Kubernetes Kapsule cluster
Create a managed Kubernetes cluster using the Scaleway CLI.
- Run the following command to create a cluster with a single node pool:

  ```bash
  scw k8s cluster create name=demo-cluster version=1.32.7 \
    pools.0.size=2 pools.0.node-type=DEV1-M pools.0.name=default \
    pools.0.min-size=1 pools.0.max-size=3 pools.0.autoscaling=true \
    region=pl-waw
  ```

  - `version=1.32.7`: specifies a recent Kubernetes version.
  - `pools.0.size=2`: starts with two nodes.
  - `pools.0.min-size=1`, `pools.0.max-size=3`, `pools.0.autoscaling=true`: enable node auto-scaling between one and three nodes.
  - `region=pl-waw`: deploys the cluster in the Warsaw region.
- Retrieve the cluster ID and download the kubeconfig file:

  ```bash
  CLUSTER_ID=$(scw k8s cluster list | grep demo-cluster | awk '{print $1}')
  scw k8s kubeconfig get $CLUSTER_ID > ~/.kube/demo-cluster-config
  export KUBECONFIG=~/.kube/demo-cluster-config
  ```

- Verify cluster connectivity:

  ```bash
  kubectl get nodes
  ```

  Ensure all nodes are in the `Ready` state.
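For scripting, the `Ready` check can be automated. A small sketch with a hypothetical `count_ready` helper (not part of `kubectl` or `scw`) that parses the default tabular output:

```bash
# count_ready: count lines whose STATUS column (field 2) is exactly "Ready"
# in the output of `kubectl get nodes --no-headers`.
count_ready() {
  printf '%s\n' "$1" | awk '$2 == "Ready"' | grep -c .
}

# Capture the node listing; empty if the cluster is unreachable.
nodes="$(kubectl get nodes --no-headers 2>/dev/null)"
echo "Ready nodes: $(count_ready "$nodes")"
```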
Deploy a sample application
Deploy the `whoami` application (a well-known demo application for testing cluster deployments) using a Kubernetes Deployment and a Service.
- Create a file named `whoami-deployment.yaml` with the following content:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: whoami
    namespace: default
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: whoami
    template:
      metadata:
        labels:
          app: whoami
      spec:
        containers:
          - name: whoami
            image: traefik/whoami:latest
            ports:
              - containerPort: 80
            resources:
              requests:
                cpu: "100m"
                memory: "128Mi"
              limits:
                cpu: "200m"
                memory: "256Mi"
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: whoami-service
    namespace: default
  spec:
    selector:
      app: whoami
    ports:
      - protocol: TCP
        port: 80
        targetPort: 80
    type: ClusterIP
  ```
- Apply the configuration:

  ```bash
  kubectl apply -f whoami-deployment.yaml
  ```

- Verify the deployment and service:

  ```bash
  kubectl get deployments
  kubectl get pods
  kubectl get services
  ```
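Instead of polling `kubectl get pods`, you can block until the rollout finishes. A sketch (the 120-second timeout is an arbitrary choice):

```bash
# Wait for the whoami Deployment to become fully available, or give up
# after 120 seconds so scripts do not hang indefinitely.
if kubectl rollout status deployment/whoami --timeout=120s 2>/dev/null; then
  rollout_ok=yes
else
  rollout_ok=no
fi
echo "rollout complete: $rollout_ok"
```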
Configure an ingress controller
Expose the `whoami` application externally using an Nginx ingress controller.
- Install the Nginx ingress controller using Helm:

  ```bash
  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
  ```
- Create a file named `whoami-ingress.yaml` with the following content:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: whoami-ingress
    namespace: default
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
  spec:
    ingressClassName: nginx
    rules:
      - host: whoami.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: whoami-service
                  port:
                    number: 80
  ```
- Apply the Ingress configuration:

  ```bash
  kubectl apply -f whoami-ingress.yaml
  ```

- Retrieve the external IP of the Ingress controller:

  ```bash
  kubectl get svc -n ingress-nginx ingress-nginx-controller
  ```

  The `EXTERNAL-IP` column may show `<pending>` for a few minutes while the Load Balancer is provisioned.
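The Ingress above serves plain HTTP. If you own the domain and have already created a TLS certificate Secret in the cluster, the same Ingress can terminate HTTPS. A sketch of the additional `spec.tls` section, assuming a hypothetical Secret named `whoami-tls`:

```yaml
spec:
  tls:
    - hosts:
        - whoami.example.com
      secretName: whoami-tls   # a kubernetes.io/tls Secret you create beforehand
  # rules: unchanged from the manifest above
```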
Set up auto-scaling
Configure a Horizontal Pod Autoscaler (HPA) to dynamically scale the `whoami` application based on CPU usage.
- Create a file named `whoami-hpa.yaml` with the following content:

  ```yaml
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: whoami-hpa
    namespace: default
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: whoami
    minReplicas: 2
    maxReplicas: 5
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70
  ```
- Apply the HPA configuration:

  ```bash
  kubectl apply -f whoami-hpa.yaml
  ```

- Verify the HPA status:

  ```bash
  kubectl get hpa
  kubectl describe hpa whoami-hpa
  ```
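The `autoscaling/v2` API also accepts memory as a resource metric. An additional entry that could be appended to the `metrics` list above (the 80% target is illustrative). Note that any resource metric requires a working metrics pipeline (metrics-server) in the cluster: if `kubectl get hpa` shows `<unknown>` targets, check that `kubectl top pods` returns data.

```yaml
- type: Resource
  resource:
    name: memory
    target:
      type: Utilization
      averageUtilization: 80
```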
Test the application
- Get the Ingress controller's external IP:

  ```bash
  INGRESS_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  ```

- Test the application by sending an HTTP request (replace `whoami.example.com` with your domain, or use the IP directly):

  ```bash
  curl -H "Host: whoami.example.com" http://$INGRESS_IP
  ```

- Simulate load to trigger auto-scaling (optional):

  ```bash
  kubectl run -i --tty load-generator --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://whoami-service.default.svc.cluster.local; done"
  ```

- Open another terminal and monitor pod scaling:

  ```bash
  kubectl get pods -w
  kubectl get hpa -w
  ```
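Each whoami reply begins with a `Hostname:` line naming the pod that served it, so repeated requests show whether traffic is spread across replicas. A sketch with a hypothetical `hostname_of` helper (assumes `INGRESS_IP` from the first step):

```bash
# hostname_of: extract the serving pod's name from a whoami response body.
hostname_of() {
  printf '%s\n' "$1" | awk -F': ' '/^Hostname:/ {print $2}'
}

# Send five requests and tally which pod answered each one.
for i in 1 2 3 4 5; do
  reply="$(curl -s -H "Host: whoami.example.com" "http://$INGRESS_IP" 2>/dev/null)"
  hostname_of "$reply"
done | sort | uniq -c
```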
Clean up
Delete the cluster to avoid unnecessary costs.
- Delete the cluster:

  ```bash
  scw k8s cluster delete $CLUSTER_ID
  ```

  Note that resources created on your behalf, such as the Load Balancer provisioned for the ingress controller, may need to be deleted separately.

- Confirm the cluster is deleted:

  ```bash
  scw k8s cluster list
  ```
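Once the cluster is gone, the kubeconfig downloaded earlier is stale and can be removed as well:

```bash
# Delete the now-stale kubeconfig and stop pointing KUBECONFIG at it.
rm -f ~/.kube/demo-cluster-config
unset KUBECONFIG
```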
Conclusion
This tutorial has guided you through the full lifecycle of a Kubernetes deployment: creating a cluster, deploying an application, configuring ingress, enabling autoscaling, generating load to observe scaling, and cleaning up resources. These are the first steps toward effectively managing cloud-native applications on Scaleway, combining manual resource control with automated scaling to build resilient, efficient, and scalable systems.