Monitoring a Kubernetes Kapsule Cluster

Grafana and Prometheus Overview

Kubernetes Kapsule provides a managed Kubernetes environment to create, configure and run a cluster of preconfigured machines for containerized applications.

This tutorial explains how to monitor your Kubernetes Kapsule cluster.
The stack we are going to deploy is based on Prometheus, Grafana, kube-state-metrics and node-exporter, and we will use Helm to deploy it.
All applications used in this tutorial are well-known, widely used open-source software that fit well in a Kubernetes environment.

  • Prometheus: Prometheus is an application used for monitoring and alerting. It records real-time metrics in a time series database. It is based on a pull model and relies on HTTP for scraping the metrics.
  • Grafana: Grafana is used for visualizing the metrics scraped by Prometheus and stored in the time series database.
  • kube-state-metrics: kube-state-metrics listens to the Kubernetes API server and generates metrics about the state of the objects. The list of exported metrics is available in the kube-state-metrics documentation. For instance, kube-state-metrics can report the number of pods ready (kube_pod_status_ready), or the number of unschedulable pods (kube_pod_status_unschedulable).
  • node-exporter: The node-exporter is a Prometheus exporter for hardware and OS metrics exposed by the Linux kernel. It allows you to get metrics about CPU, memory, and filesystem usage for each Kubernetes node.

Requirements:

  • A Kubernetes Kapsule cluster
  • kubectl and helm installed on your local machine

Preparing the Kubernetes Kapsule Cluster

1 . Ensure you are connected to your cluster and that kubectl and helm are installed on your local machine.
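
You can quickly check both the cluster connection and the local tools with the commands below (the exact output depends on your kubectl and helm versions; this tutorial uses helm 2, so Tiller is set up in the next step):

$ kubectl get nodes
$ kubectl version --short
$ helm version --client --short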

2 . Configure the RBAC authorization on your cluster and configure helm:

$ kubectl --namespace kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
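
Once helm init has completed, you can verify that the Tiller pod is running in the kube-system namespace before going further (the labels match those set in the command above):

$ kubectl get pods --namespace kube-system -l app=helm,name=tiller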

Deploying Prometheus

We are first going to deploy the Prometheus stack in a dedicated Kubernetes namespace called “monitoring”. We will set the retention time to 30 days and create a persistent volume (based on Scaleway Block Storage) to store the Prometheus data.

1 . Use the helm package manager to install the stable release of Prometheus. Pass the following parameters to helm:

  • server.persistentVolume.size: 100Gi
  • server.retention: 30d
$ helm install stable/prometheus -n prometheus --namespace monitoring  --set server.persistentVolume.size=100Gi,server.retention=30d
NAME:   prometheus
LAST DEPLOYED: Wed Mar 18 15:04:25 2020
NAMESPACE: monitoring
STATUS: DEPLOYED
[..]
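
These values can be adjusted later without reinstalling the stack. A minimal sketch, assuming you keep the release name prometheus and want, for example, to extend the retention to 45 days:

$ helm upgrade prometheus stable/prometheus --namespace monitoring \
       --set server.persistentVolume.size=100Gi,server.retention=45d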

2 . Once the stack is deployed, verify that the pods described above are running. It is also possible to check that the 100Gi block volume was created:

$ kubectl get pods,pv,pvc -n monitoring
NAME                                                READY   STATUS    RESTARTS   AGE
pod/prometheus-alertmanager-6565668c85-5vdxc        2/2     Running   0          67s
pod/prometheus-kube-state-metrics-6756bbbb8-6qs9r   1/1     Running   0          67s
pod/prometheus-node-exporter-fbg6s                  1/1     Running   0          67s
pod/prometheus-pushgateway-6d75c59b7b-6knfd         1/1     Running   0          67s
pod/prometheus-server-556dbfdfb5-rx6nl              1/2     Running   0          67s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
persistentvolume/pvc-5a9def3b-22a1-4545-9adb-72823b899c36   100Gi      RWO            Delete           Bound    monitoring/prometheus-server         scw-bssd                67s
persistentvolume/pvc-c5e24d9b-3a69-46c1-9120-b16b7adf73e9   2Gi        RWO            Delete           Bound    monitoring/prometheus-alertmanager   scw-bssd                67s

NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/prometheus-alertmanager   Bound    pvc-c5e24d9b-3a69-46c1-9120-b16b7adf73e9   2Gi        RWO            scw-bssd       68s
persistentvolumeclaim/prometheus-server         Bound    pvc-5a9def3b-22a1-4545-9adb-72823b899c36   100Gi      RWO            scw-bssd       68s
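
If one of the pods stays in a non-Running state, the usual kubectl troubleshooting commands apply. For example, using the server pod name from the output above (replace it with your own):

$ kubectl --namespace monitoring describe pod prometheus-server-556dbfdfb5-rx6nl
$ kubectl --namespace monitoring get events --sort-by=.metadata.creationTimestamp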

3 . To access Prometheus, use the Kubernetes port forwarding feature:

$ export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace monitoring port-forward $POD_NAME 9090

4 . Access the Prometheus dashboard using the following URL: http://localhost:9090
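
From the command line, you can also confirm that Prometheus answers on the forwarded port by querying its built-in health endpoints:

$ curl http://localhost:9090/-/healthy
$ curl http://localhost:9090/-/ready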

5 . Verify that both node-exporter and kube-state-metrics metrics are correctly scraped by Prometheus:

  • The node-exporter metrics begin with “node_”

  • The kube-state-metrics begin with “kube_”

6 . Prometheus is capable of generating graphs on its own, and you can try graphing some metrics directly in the application, for example with the queries shown below:
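
As a quick example, and assuming the port forwarding from step 3 is still running, you can also evaluate expressions through the Prometheus HTTP API. The first query below counts the pods reported ready by kube-state-metrics; the second returns the available memory of each node, as exposed by node-exporter:

$ curl -s -G 'http://localhost:9090/api/v1/query' \
       --data-urlencode 'query=sum(kube_pod_status_ready{condition="true"})'
$ curl -s -G 'http://localhost:9090/api/v1/query' \
       --data-urlencode 'query=node_memory_MemAvailable_bytes'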

Deploying Grafana

We are going to deploy Grafana to display the Prometheus metrics in some pre-defined dashboards. To do so, we once again use helm, deploying it in the monitoring namespace and enabling persistence with the following parameters:

  • persistence.enabled : true
  • persistence.type : pvc
  • persistence.size : 10Gi

Please refer to the Loki tutorial for additional information about Grafana.

1 . Install Grafana using helm with the following command:

$  helm install stable/grafana -n grafana \
                              --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \
                              --namespace=monitoring
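
As with Prometheus, you can check that the Grafana pod is running and that its persistent volume claim was created:

$ kubectl get pods --namespace monitoring -l app=grafana
$ kubectl get pvc --namespace monitoring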

2 . Once Grafana is installed retrieve the admin password:

$ kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

3 . Configure port forwarding to access the Grafana web interface at http://localhost:3000:

$ kubectl port-forward --namespace monitoring service/grafana 3000:80
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

4 . Open the Grafana Web Interface in a web browser at: http://localhost:3000. The login screen displays. Enter the user admin and the password recovered in step 2:

5 . The welcome screen displays and invites you to complete the configuration of Grafana. Click Add data source to configure a new data source:

6 . Choose Prometheus as data source from the list of available options.

7 . Enter the details of the data source. You can leave the default settings and set the URL to http://prometheus-server. Click Save & Test to validate the connection to Prometheus and to save the settings:
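
If you prefer to script this step, the same data source can also be created through the Grafana HTTP API. A minimal sketch, assuming the port forwarding from step 3 is still running and reusing the admin password from step 2:

$ GRAFANA_PASSWORD=$(kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode)
$ curl -u admin:$GRAFANA_PASSWORD -H 'Content-Type: application/json' \
       -X POST http://localhost:3000/api/datasources \
       -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus-server","access":"proxy","isDefault":true}'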

8 . Click + > Import to import a ready-to-use dashboard from the Grafana website. To create a dashboard that uses kube-state-metrics, import the dashboard number 8588 and get information about your Deployments, StatefulSets, and DaemonSets:

9 . Choose Prometheus as data source:

10 . Access the dashboard with metrics for Deployments, StatefulSets, and DaemonSets:

11 . You can also configure additional dashboards, for example the Node Exporter Full dashboard (1860), to display a dashboard with system metrics for each Kubernetes node:

You now have basic monitoring for your Kubernetes Kapsule cluster. For more information about how to configure your cluster, refer to the official Kubernetes documentation.
