How to monitor your Kubernetes cluster with Cockpit
Scaleway provides the k8s-monitoring repository with three ready-to-use installation methods to monitor your Kubernetes cluster with Cockpit. Each method deploys Grafana Alloy collectors on your Kapsule cluster to scrape Prometheus metrics and collect logs, then forwards everything to Cockpit's Prometheus- and Loki-compatible endpoints for visualization through Grafana dashboards.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
What gets collected
The monitoring stack collects the following data from your cluster:
- Cluster metrics — nodes, pods, deployments, and volumes
- Node metrics — CPU, memory, disk, and network statistics via node-exporter
- Kubernetes state metrics — resource states via kube-state-metrics
- Pod logs — application logs with annotation-based autodiscovery
- Node logs — systemd journal logs
- Cluster events — Kubernetes event logs
- Custom application metrics — through Prometheus annotations
- Prometheus Operator CRDs — ServiceMonitor, PodMonitor, and Probe support
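Annotation-based autodiscovery means a pod opts itself into scraping through metadata annotations. As an illustrative sketch, the annotation keys below follow the general shape used by the Grafana k8s-monitoring chart; the exact keys and defaults depend on the chart version, so check them against the repository's configuration:

```yaml
# Illustrative pod manifest; annotation keys are assumptions to verify
# against your deployed k8s-monitoring chart version.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    k8s.grafana.com/scrape: "true"              # opt this pod into metric scraping
    k8s.grafana.com/metrics.portNumber: "8080"  # container port serving /metrics
spec:
  containers:
    - name: my-app
      image: registry.example/my-app:latest     # hypothetical image
      ports:
        - containerPort: 8080
```

Pods without the opt-in annotation are left alone, which keeps the collectors from scraping workloads that expose no metrics endpoint.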
The deployment also includes seven preconfigured Grafana dashboards: four for Kubernetes metrics (cluster, namespace, node, and pod views) and three for logs (cluster events, node logs, and pod logs).
Choose your installation method
Scaleway provides three methods to deploy the monitoring stack, depending on your existing infrastructure and preferred workflow.
Terraform - Complete setup
This method creates everything from scratch, including a new Scaleway Project, VPC, Kapsule cluster, and the full monitoring stack. It is best suited for new environments where you do not yet have a running cluster.
Prerequisites: Terraform >= 1.0, a Scaleway account with API credentials
Refer to the terraform-complete guide for step-by-step instructions.
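To give a sense of what this method provisions, the sketch below shows the kind of Scaleway provider resources involved. Resource names, the node type, and the Kubernetes version are illustrative assumptions, not the repository's exact configuration:

```hcl
# Illustrative sketch of the complete setup; see the terraform-complete
# guide for the repository's actual configuration.
terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = ">= 2.0"
    }
  }
}

resource "scaleway_account_project" "monitoring" {
  name = "k8s-monitoring" # hypothetical Project name
}

resource "scaleway_vpc_private_network" "main" {
  name       = "kapsule-network"
  project_id = scaleway_account_project.monitoring.id
}

resource "scaleway_k8s_cluster" "main" {
  name                        = "monitored-cluster"
  version                     = "1.29" # pick a currently supported Kapsule version
  cni                         = "cilium"
  private_network_id          = scaleway_vpc_private_network.main.id
  project_id                  = scaleway_account_project.monitoring.id
  delete_additional_resources = true
}

resource "scaleway_k8s_pool" "default" {
  cluster_id = scaleway_k8s_cluster.main.id
  name       = "default"
  node_type  = "DEV1-M" # example node type
  size       = 2
}
```

On top of this infrastructure, the method also provisions the Cockpit resources, Helm releases, and dashboards that the other two methods deploy.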
Terraform - Existing cluster
This method deploys the monitoring stack to an already-running Kapsule cluster using Terraform. It does not create any cluster infrastructure — it only provisions Cockpit resources, Helm charts, and dashboards. It is best suited for teams who already have a cluster and want to manage monitoring as Infrastructure as Code.
Prerequisites: Terraform >= 1.0, a running Kapsule cluster, a valid kubeconfig file
Refer to the terraform-existing-cluster guide for step-by-step instructions.
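Since this method only layers monitoring onto a cluster you already run, its core is a Helm release managed by Terraform. The sketch below assumes a kubeconfig at the default path and uses the upstream grafana/k8s-monitoring chart; the release name, namespace, and values are illustrative:

```hcl
# Illustrative sketch; the terraform-existing-cluster guide defines the
# real provider wiring and chart values.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumes your Kapsule kubeconfig is here
  }
}

resource "helm_release" "k8s_monitoring" {
  name             = "k8s-monitoring"
  repository       = "https://grafana.github.io/helm-charts"
  chart            = "k8s-monitoring"
  namespace        = "monitoring"
  create_namespace = true

  # Illustrative value; the real chart exposes many more options,
  # including the Cockpit endpoints and credentials.
  set {
    name  = "cluster.name"
    value = "my-kapsule-cluster" # hypothetical cluster name
  }
}
```

Keeping the release in Terraform state means chart upgrades and value changes go through the same plan/apply review as the rest of your infrastructure.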
Helm only
This method uses the Scaleway CLI and Helm to manually provision Cockpit resources and install the monitoring stack. Dashboards are imported through the Grafana UI. It is best suited for users who prefer direct CLI control without Terraform.
Prerequisites: Helm >= 3.0, Scaleway CLI (scw), kubectl, jq
Refer to the helm-only guide for step-by-step instructions.
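As a rough sketch, the Helm-only flow pairs a values file with a `helm install`. The value keys below follow the general shape of the grafana/k8s-monitoring chart, and the endpoint URLs are example placeholders; take the real keys and your region's endpoints from the helm-only guide and your Cockpit data source pages:

```yaml
# values.yaml — illustrative only; verify key names against the chart
# version the guide pins, and fill in your own Cockpit endpoints/token.
cluster:
  name: my-kapsule-cluster  # hypothetical cluster name
externalServices:
  prometheus:
    host: https://<your-metrics-endpoint>  # Cockpit metrics data source URL
    basicAuth:
      username: <your-token-id>
      password: <your-token-secret>
  loki:
    host: https://<your-logs-endpoint>     # Cockpit logs data source URL
    basicAuth:
      username: <your-token-id>
      password: <your-token-secret>
```

You would then run `helm repo add grafana https://grafana.github.io/helm-charts` followed by `helm install k8s-monitoring grafana/k8s-monitoring -f values.yaml`, and import the dashboards through the Grafana UI as the guide describes.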
Going further
If you need more granular control over what data you send to Cockpit, refer to our dedicated guides:
- Send logs from your Kubernetes cluster to Cockpit — configure log forwarding using the k8s-monitoring Helm chart directly
- Send metrics from your Kubernetes cluster to Cockpit — configure metric collection with annotation-based autodiscovery