Monitor your Kubernetes cluster with Grafana

Overview

When using a managed Kubernetes Kapsule cluster, you may want to know what is going on inside it. This tutorial explains how to monitor your Kapsule cluster. The applications used in this how-to are well-known open-source tools that are widely used and fit very well in a Kubernetes environment. The stack we are going to deploy is based on Prometheus, Grafana, kube-state-metrics and node-exporter. We will use Helm to deploy the whole stack.

  • Prometheus: Prometheus is an application used for monitoring and alerting. It records real-time metrics in a time series database. It is based on a pull model and relies on HTTP for scraping the metrics.
  • Grafana: Grafana is used for visualizing the metrics scraped by Prometheus and stored in its time-series database.
  • kube-state-metrics: kube-state-metrics listens to the Kubernetes API server and generates metrics about the state of the objects. The list of exported metrics is available here. For instance, kube-state-metrics can report the number of ready pods (kube_pod_status_ready), or the number of unschedulable pods (kube_pod_status_unschedulable).
  • node-exporter: node-exporter is a Prometheus exporter for hardware and OS metrics exposed by the Linux kernel. It allows you to get metrics about CPU, memory, and filesystem usage for each Kubernetes node.
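Before installing anything, Helm needs to know where to find the charts used below. A minimal setup, assuming the "stable" chart repository as it was published at the time this tutorial was written:

```shell
# Add the "stable" Helm chart repository (repository URL as used when
# this tutorial was written; newer Helm setups may point elsewhere)
# and refresh the local chart index:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
```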


Installing Prometheus

We are first going to deploy the Prometheus stack in a dedicated Kubernetes namespace called “monitoring”. We will set the retention time to 30 days and create a persistent volume (based on Scaleway block storage) to store the Prometheus data.

1 . Start by passing the following options to helm:

  • server.persistentVolume.size: 100Gi
  • server.retention: 30d
$ helm install stable/prometheus -n prometheus --namespace monitoring  --set server.persistentVolume.size=100Gi,server.retention=30d
NAME:   prometheus
LAST DEPLOYED: Wed Nov 13 11:38:07 2019
NAMESPACE: monitoring

2 . Once the stack is deployed, verify that all pods are correctly running the applications mentioned above. Also check that the 100Gi block volume was created correctly:

$ kubectl get pods,pv,pvc -n monitoring
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/prometheus-alertmanager-df7d48c78-5m5n7          2/2     Running   0          44s
pod/prometheus-kube-state-metrics-6cd8cdc7b7-k2dx8   1/1     Running   0          44s
pod/prometheus-node-exporter-d7td7                   1/1     Running   0          44s
pod/prometheus-node-exporter-kq7tf                   1/1     Running   0          44s
pod/prometheus-node-exporter-qgfhw                   1/1     Running   0          44s
pod/prometheus-pushgateway-655f59475-btjc7           1/1     Running   0          43s
pod/prometheus-server-68fcc4b79c-8hhdj               1/2     Running   0          43s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
persistentvolume/pvc-646b5df7-4fe3-408c-9300-c1671b100db6   100Gi      RWO            Delete           Bound    monitoring/prometheus-server         scw-bssd                43s
persistentvolume/pvc-caa2e7b0-cb3d-48d0-8afa-e1157b576158   2Gi        RWO            Delete           Bound    monitoring/prometheus-alertmanager   scw-bssd                43s

NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/prometheus-alertmanager   Bound    pvc-caa2e7b0-cb3d-48d0-8afa-e1157b576158   2Gi        RWO            scw-bssd       44s
persistentvolumeclaim/prometheus-server         Bound    pvc-646b5df7-4fe3-408c-9300-c1671b100db6   100Gi      RWO            scw-bssd       44s
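Rather than re-running `kubectl get` until everything settles, you can also block until the pods report ready; a sketch:

```shell
# Wait until every pod in the monitoring namespace reports Ready
# (the prometheus-server pod may take a moment while its block
# volume is attached); gives up after two minutes:
kubectl wait --namespace monitoring --for=condition=Ready pods --all --timeout=120s
```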

3 . To access Prometheus, we use Kubernetes port forwarding. Run the following commands to configure it:

$ export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace monitoring port-forward $POD_NAME 9090
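With the port-forward running, you can confirm that Prometheus answers before opening the browser, using its built-in health endpoint:

```shell
# Prometheus exposes /-/healthy (and /-/ready) on its web port;
# an HTTP 200 response means the server is up and serving:
curl -s http://localhost:9090/-/healthy
```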

4 . Access the Prometheus dashboard now using this URL: http://localhost:9090. The Prometheus dashboard displays:

Prometheus Dashboard

5 . Verify that both the node-exporter and kube-state-metrics metrics are correctly scraped by Prometheus:

  • The node-exporter metrics begin with “node_”

Prometheus Dashboard

  • The kube-state-metrics metrics begin with “kube_”

Prometheus Dashboard

6 . Start playing around with Prometheus to graph some metrics.

Prometheus Dashboard
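The same queries can also be run outside the dashboard through the Prometheus HTTP API, which is handy for scripting. For example, the fraction of non-idle CPU per node, computed from the node-exporter metrics (still through the port-forward):

```shell
# Instant query against the Prometheus HTTP API; returns JSON.
# 1 - idle rate = fraction of CPU in use, averaged per instance
# over the last 5 minutes:
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))'
```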

Deploying Grafana

We are going to deploy Grafana to display the Prometheus metrics in some pre-defined dashboards. To do so, we are, as always, using helm.

Once again we deploy it in the monitoring namespace and enable the persistence:

  • persistence.enabled : true
  • persistence.type : pvc
  • persistence.size : 10Gi

Please refer to the Loki tutorial for additional information about Grafana.

1 . Run the following command to deploy Grafana with the specifications mentioned above:

$  helm install stable/grafana -n grafana \
                              --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \
                              --namespace monitoring

2 . Once Grafana is installed, retrieve the admin password and once again use port forwarding to access the Grafana web UI. It will be available at the following URL: http://localhost:3000:

$ kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
$ kubectl port-forward --namespace monitoring service/grafana 3000:80
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
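Before logging in, you can check that Grafana is up through its unauthenticated health endpoint:

```shell
# Grafana's /api/health endpoint needs no authentication and
# reports the state of its database connection as JSON:
curl -s http://localhost:3000/api/health
```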

Setting up a Data Source

Grafana itself is just a display tool, and without data it will only produce empty graphs. The data and metrics collection is done by Prometheus; now we need to connect the two together.

So let’s add Prometheus as the default data source of Grafana:

1 . Click on the Grafana logo to open the sidebar menu.
2 . Click on Data Sources in the sidebar.
> It is also possible to go directly to http://localhost:3000/datasources
3 . Click on Add New.
4 . Select Prometheus as the type.
5 . Set the appropriate Prometheus server URL (in our case, http://localhost:9090/ from the port-forwarding).
6 . Adjust other data source settings as desired (for example, turning the proxy access off).
7 . Check the Default box.
8 . Click Save & Test to save the new data source.
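If you prefer automation over the web UI, the same data source can be created through Grafana's HTTP API; a sketch, where `<admin-password>` stands for the password retrieved earlier (the URL and access mode below are assumptions matching the port-forward setup used in this tutorial):

```shell
# Create a default Prometheus data source via the Grafana API.
# Access mode "direct" (proxy off) matches the UI steps above,
# so the URL is resolved by the browser, not by the Grafana pod:
curl -s -X POST http://admin:<admin-password>@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name": "Prometheus", "type": "prometheus", "url": "http://localhost:9090", "access": "direct", "isDefault": true}'
```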

Adding a new Prometheus data source

Setting up the Dashboard

Now that the data source is configured, you can start to create dashboards to visualize it. It is possible to build your own dashboard, but there are a few already publicly available on Grafana's website.

You can find the one used in this tutorial here. If you want to use it, follow these steps:

1 . On the left menu, click on + then Import.
2 . In the Dashboard field, paste the ID of the dashboard (in our case 1860):

Adding a new Prometheus data source

3 . Click on Load.
4 . Give it a name, a uid and select the data source.

Grafana GUI with custom dashboard

5 . Click on Import.
6 . You can now access the dashboard with system metrics for each Kubernetes node:

Grafana GUI with custom dashboard

Optionally: If you want to create a dashboard that uses kube-state-metrics, import dashboard number 8588 to get information about your Deployments, StatefulSets and DaemonSets.

Grafana dashboard 8588


You now have all the basic knowledge needed to monitor your Kubernetes cluster. In a further tutorial, you will learn how to use the Prometheus Alertmanager to create alerts based on the metrics gathered by Prometheus.
