Use Loki to Manage k8s Application Logs

Loki - Overview

Scaleway Elements Kubernetes Kapsule is not delivered with an embedded logging feature.
In this tutorial, you will learn how to collect your Kubernetes logs using Loki and Grafana. Loki is a log aggregation system inspired by Prometheus.
We believe that it is easy to operate, especially in a Kubernetes environment, as it does not index the content of the logs but sets labels on log streams.
In a cloud-native environment, Prometheus is one of the most common monitoring solutions, and you can reuse the labels you have already defined for it. For instance, in Kubernetes, the metadata you are already using (object labels) can be reused in Loki for scraping logs. If you use Grafana for metrics, using Loki gives you a single point of management for both logging and monitoring.

Requirements:

  • You have a Scaleway Elements Kubernetes Kapsule cluster up and running
  • You have downloaded the corresponding kubeconfig file and kubectl is configured to use it
  • You have installed helm (version 3.2 or later, as the commands below use --create-namespace) on your local machine

Installing Loki

The Loki application is not included in the default Helm repositories. Add the Loki repository to Helm and update it:

$ helm repo add loki https://grafana.github.io/loki/charts
"loki" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "loki" chart repository
[..]
$ helm search repo loki
NAME            CHART VERSION   APP VERSION     DESCRIPTION
loki/loki       0.31.1          v1.6.0          Loki: like Prometheus, but for logs.
loki/loki-stack 0.41.1          v1.6.0          Loki: like Prometheus, but for logs.
loki/fluent-bit 0.3.1           v1.6.0          Uses fluent-bit Loki go plugin for gathering lo...
loki/promtail   0.25.1          v1.6.0          Responsible for gathering logs and sending them...

Install the Loki stack with Helm. We want to enable persistence (allow Helm to create a Scaleway block device and attach it to the Loki pod to store its data) using a Kubernetes Persistent Volume, so that the data survives a pod re-schedule. Do not forget to set these parameters when running the helm install command:

  • loki.persistence.enabled: true
  • loki.persistence.size: 100Gi

It will use Kapsule’s default storage class, scw-bssd, to create block volumes backed by Scaleway Block Storage.
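
You can confirm which storage class is the cluster default (it is flagged (default) in the kubectl output) before installing:

$ kubectl get storageclass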

We install the whole stack in a dedicated Kubernetes namespace named loki-stack.

$ helm install loki-stack loki/loki-stack \
                               --create-namespace \
                               --namespace loki-stack \
                               --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi

If you plan to use Loki on a production system, be sure to set up a retention period to avoid filling up the filesystem. For instance, use these parameters if you want to enable a 30-day retention (logs older than 30 days will be deleted); a full command is sketched after this list. Note that you have to choose a persistent volume size that fits the amount of logs your deployment will generate.

  • config.table_manager.retention_deletes_enabled: true
  • config.table_manager.retention_period: 720h
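
As an illustration, a production-oriented variant of the install command above could look like this. It is only a sketch: it assumes the loki-stack chart forwards these values to the Loki sub-chart under the loki. prefix, so verify the chart's values (helm show values loki/loki-stack) before using it on a production cluster.

$ helm install loki-stack loki/loki-stack \
                               --create-namespace \
                               --namespace loki-stack \
                               --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi \
                               --set loki.config.table_manager.retention_deletes_enabled=true \
                               --set loki.config.table_manager.retention_period=720h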

Install Grafana in the loki-stack namespace with Helm. We also want Grafana to survive a re-schedule, so we enable persistence too:

  • persistence.enabled: true
  • persistence.type: pvc
  • persistence.size: 10Gi

$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm install loki-grafana grafana/grafana \
                              --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \
                              --namespace=loki-stack

You can check if the block devices were correctly created by Kubernetes:

$ kubectl get pv,pvc -n loki-stack
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
persistentvolume/pvc-53a60678-f132-4e62-b568-3518a20d6bd3   10Gi       RWO            Delete           Bound    loki-stack/loki-grafana              scw-bssd                16s
persistentvolume/pvc-8d605c6a-154c-4a4f-9dc0-7edce1825106   100Gi      RWO            Delete           Bound    loki-stack/storage-loki-stack-0      scw-bssd                5m29s

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/loki-grafana           Bound    pvc-53a60678-f132-4e62-b568-3518a20d6bd3   10Gi       RWO            scw-bssd       17s
persistentvolumeclaim/storage-loki-stack-0   Bound    pvc-8d605c6a-154c-4a4f-9dc0-7edce1825106   100Gi      RWO            scw-bssd       5m30s

Now that both Loki and Grafana are installed in the cluster, check if the pods are correctly running:

$ kubectl get pods -n loki-stack

NAME                               READY   STATUS    RESTARTS   AGE
loki-grafana-75f788cb85-xqchj      1/1     Running   0          4m32s
loki-stack-0                       1/1     Running   0          17m
[..]
loki-stack-promtail-4nnkd          1/1     Running   0          17m
[..]

To connect to Grafana, you first have to retrieve the admin password. Then open Grafana in a web browser using a port-forward:

$ kubectl get secret --namespace loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
8DaJk0m1gmS4t1OgXkpgZXs46PaUQn5iydvsZS7g
$ kubectl port-forward --namespace loki-stack service/loki-grafana 3000:80
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Access http://localhost:3000 to reach the Grafana interface. Log in using the admin user and the password you retrieved above.

[Screenshot: accessing the Grafana login page]

Add Loki as a data source in Grafana, using http://loki-stack.loki-stack:3100 as the URL.

[Screenshot: adding Loki as a Grafana data source]
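
This URL is the in-cluster DNS name of the Loki service: loki-stack is the service name, the second loki-stack is the namespace, and 3100 is Loki's HTTP port. You can double-check the service name and port created by the chart with:

$ kubectl get service -n loki-stack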

Check that you can access your logs using the Explore tab in Grafana:

[Screenshot: Loki logs displayed in Grafana]
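
Either of the following LogQL queries is a reasonable first test. This is a minimal sketch: the namespace label is part of the Kubernetes metadata Promtail attaches by default, but the exact set of labels depends on the Promtail configuration shipped with the chart, so use the label browser in the Explore view to see what your installation exposes.

{namespace="loki-stack"}
{namespace="loki-stack"} |= "error"

The first query streams every log line from the loki-stack namespace; the second keeps only the lines containing the word error.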

You now have a Loki stack up and running. All your pods' logs will be stored in Loki, and you will be able to view and query your applications' logs in Grafana. Please refer to the Grafana documentation if you want to learn more about querying the Loki data source.
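
If you want to see application logs flow in end to end, one quick test (an illustrative example with an arbitrary pod name) is to start a throwaway pod that writes to its standard output:

$ kubectl run log-generator --image=busybox --restart=Never -- /bin/sh -c 'while true; do echo "hello from log-generator"; sleep 2; done'

After a few seconds its lines should appear in the Explore view when you filter on that pod. Remember to delete the pod afterwards with kubectl delete pod log-generator.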
