Using Loki to Manage k8s Application Logs

Reviewed on 10 May 2021 • Published on 06 November 2019

Scaleway Elements Kubernetes Kapsule is not delivered with an embedded logging feature.
In this tutorial, you will learn how to collect your Kubernetes logs using Loki and Grafana. Loki is a log aggregation system inspired by Prometheus.
We believe that it is easy to operate, especially in a Kubernetes environment, as it does not index the content of the logs but sets labels on log streams.
In a cloud-native environment, Prometheus is one of the most common monitoring solutions, and you can reuse the labels you have already set for it. For instance, in Kubernetes, the metadata you already use (object labels) can be reused in Loki for scraping logs. If you use Grafana for metrics, adding Loki gives you a single point of management for both logging and monitoring.

Requirements:

Installing Loki with Helm

The Loki application is not included in the default Helm repositories.

Important:

Since December 2020, Loki’s Helm charts have been moved from their initial location within the Loki repository (hosted at https://grafana.github.io/loki/charts) to the https://github.com/grafana/helm-charts repository, which is hosted at https://grafana.github.io/helm-charts.
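
If you previously added the deprecated chart repository under the name loki, you can remove it before adding the new one (an optional cleanup step):

    helm repo remove loki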

  1. Add the Grafana repository to Helm and update it.

    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update

    Which returns

    "grafana" has been added to your repositories
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "loki" chart repository
    ...Successfully got an update from the "grafana" chart repository
    Update Complete. ⎈Happy Helming!⎈
  2. Install the loki-stack with Helm. We install the whole stack in a dedicated Kubernetes namespace named loki-stack. We deploy it to the cluster and enable persistence (allowing Helm to create a Scaleway block volume and attach it to the Loki pod to store its data) using a Kubernetes Persistent Volume, so that the data survives a pod re-schedule:

    $ helm install loki-stack grafana/loki-stack \
    --create-namespace \
    --namespace loki-stack \
    --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi

    It will use Kapsule’s default storage class, which is scw-bssd, to create block volumes using Scaleway block storage.
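
    You can check which storage class is set as the default on your cluster (an optional verification step):

    $ kubectl get storageclass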

    If you plan to use Loki on a production system, make sure you set up a retention period to avoid filling up the filesystem. For instance, use the following parameters if you want to enable a 30-day retention (logs older than 30 days will be deleted). Note that you have to choose a persistent volume size that fits the amount of logs your deployment will generate (see the example after the parameters below).

  • config.table_manager.retention_deletes_enabled : true

  • config.table_manager.retention_period: 720h
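
    As a sketch, these retention parameters can be passed at installation time alongside the persistence settings. The loki.config prefix below assumes the loki-stack chart forwards these values to Loki; adjust the keys to the chart version you are using:

    $ helm install loki-stack grafana/loki-stack \
    --create-namespace \
    --namespace loki-stack \
    --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi \
    --set loki.config.table_manager.retention_deletes_enabled=true \
    --set loki.config.table_manager.retention_period=720h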

  3. Install Grafana in the loki-stack namespace with Helm. We also want Grafana to survive a re-schedule, so we enable persistence too:

  • persistence.enabled: true

  • persistence.type: pvc

  • persistence.size: 10Gi

    $ helm install loki-grafana grafana/grafana \
    --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \
    --namespace=loki-stack

  4. You can check if the block devices were correctly created by Kubernetes:

    $ kubectl get pv,pvc -n loki-stack
    NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    persistentvolume/pvc-88038939-24a5-4383-abe8-f3aab97b7ce7 10Gi RWO Delete Bound loki-stack/loki-grafana scw-bssd 18s
    persistentvolume/pvc-c6fce993-a73d-4423-9464-7c10ab009062 100Gi RWO Delete Bound loki-stack/storage-loki-stack-0 scw-bssd 4m30s

    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    persistentvolumeclaim/loki-grafana Bound pvc-88038939-24a5-4383-abe8-f3aab97b7ce7 10Gi RWO scw-bssd 19s
    persistentvolumeclaim/storage-loki-stack-0 Bound pvc-c6fce993-a73d-4423-9464-7c10ab009062 100Gi RWO scw-bssd 5m3s

  5. Now that both Loki and Grafana are installed in the cluster, check if the pods are correctly running:

    $ kubectl get pods -n loki-stack

    NAME READY STATUS RESTARTS AGE
    loki-grafana-67994589cc-7jq4t 0/1 Running 0 74s
    loki-stack-0 1/1 Running 0 5m58s
    loki-stack-promtail-dtf5v 1/1 Running 0 5m42s

  6. To be able to connect to Grafana, you first have to get the admin password:

    $ kubectl get secret --namespace loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

  7. Configure a port-forward to reach Grafana from your web browser:

    $ kubectl port-forward --namespace loki-stack service/loki-grafana 3000:80
    Forwarding from 127.0.0.1:3000 -> 3000
    Forwarding from [::1]:3000 -> 3000

  8. Access http://localhost:3000 to reach the Grafana interface. Log in using the admin user and the password you retrieved above.

  9. Add Loki as a data source in Grafana, using http://loki-stack.loki-stack:3100 as the URL.
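
    Alternatively, while the port-forward is still running, you can register the data source through the Grafana HTTP API instead of the web interface. This is a sketch that assumes the default admin user and the password retrieved earlier:

    $ curl -u admin:<your-admin-password> \
      -H "Content-Type: application/json" \
      -X POST http://localhost:3000/api/datasources \
      -d '{"name":"Loki","type":"loki","url":"http://loki-stack.loki-stack:3100","access":"proxy"}'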

  10. Check that you can access your logs using the Explore tab in Grafana.
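
    For example, the following basic LogQL queries can be run from the Explore view. The namespace label below comes from promtail's default Kubernetes scrape configuration and may differ if you have customized it:

    {namespace="loki-stack"}
    {namespace="kube-system"} |= "error"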

    You now have a Loki stack up and running. All your pods’ logs will be stored in Loki, and you will be able to view and query your application logs in Grafana. Refer to the Grafana documentation if you want to learn more about querying the Loki data source.