
Using Grafana and Loki to Manage Kubernetes Logs

Reviewed on 20 June 2024 | Published on 06 November 2019
  • Grafana
  • Loki
  • Kubernetes
  • logs
Important

Kubernetes Kapsule is fully integrated with Scaleway’s Observability Cockpit. You can monitor your cluster directly from the cluster’s dashboard, eliminating the need to set up your own monitoring solution. The following content is provided for informational purposes only.

In this tutorial, you will learn how to use Loki and Grafana to collect your Kubernetes logs on a Kapsule cluster.

Loki is a log aggregation system inspired by Prometheus. It is easy to operate because it does not index the content of your logs; instead, it indexes the labels attached to each log stream. The metadata of your Kubernetes objects (such as pod labels) can be reused by Loki as stream labels when scraping logs. If you already use Grafana for metrics, Loki gives you a single point of management for both logging and monitoring.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • Created a Kapsule cluster
  • Configured kubectl on your machine
  • Installed helm (version 3.2+), the Kubernetes package manager, on your local machine
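
You can check that your local Helm installation meets the version requirement:

    helm version --short
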
Important

The loki application is not included in the default Helm repositories. In December 2020, Loki’s Helm charts were moved from the Loki repository to their new location at https://github.com/grafana/helm-charts.

  1. Add the Grafana repository to Helm and update it.

    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update

    Which returns:

    "grafana" has been added to your repositories
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "loki" chart repository
    ...Successfully got an update from the "grafana" chart repository
    Update Complete. ⎈Happy Helming!⎈
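
    You can optionally verify that the chart is now available:

    helm search repo grafana/loki-stack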
  2. Install the whole stack in a dedicated Kubernetes namespace named loki-stack, using Helm. Deploy it with persistence enabled, so that Helm creates a Scaleway block device and attaches it to the Loki pod as a Kubernetes Persistent Volume, allowing the data to survive a pod re-schedule:

    helm install loki-stack grafana/loki-stack \
    --create-namespace \
    --namespace loki-stack \
    --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi

    It uses Kapsule’s default storage class, scw-bssd, to create block volumes on Scaleway Block Storage.
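
    You can list the storage classes available in your cluster and check which one is the default:

    kubectl get storageclass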

    Important

    You must set a persistent volume size that fits the amount of log data your deployment will generate.

    Note

    If you plan to use Loki on a production system, make sure you set up a retention period to avoid filling up the file system. For example, the following parameters enable a 30-day retention (logs older than 30 days are deleted); a sample command is shown after this list.

    • config.table_manager.retention_deletes_enabled: true
    • config.table_manager.retention_period: 720h
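
    A minimal sketch of the corresponding install command, assuming the table_manager settings are passed to the Loki subchart through the umbrella chart’s loki. prefix:

    helm upgrade loki-stack grafana/loki-stack \
    --namespace loki-stack \
    --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi \
    --set loki.config.table_manager.retention_deletes_enabled=true \
    --set loki.config.table_manager.retention_period=720h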
  3. Install Grafana in the loki-stack namespace with Helm. Enable persistence to ensure that Grafana’s data survives a re-schedule, using the following parameters:

    • persistence.enabled: true
    • persistence.type: pvc
    • persistence.size: 10Gi

    helm install loki-grafana grafana/grafana \
    --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \
    --namespace=loki-stack

    You can check if the block devices were correctly created by Kubernetes:

    kubectl get pv,pvc -n loki-stack
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
    persistentvolume/pvc-88038939-24a5-4383-abe8-f3aab97b7ce7   10Gi       RWO            Delete           Bound    loki-stack/loki-grafana           scw-bssd                18s
    persistentvolume/pvc-c6fce993-a73d-4423-9464-7c10ab009062   100Gi      RWO            Delete           Bound    loki-stack/storage-loki-stack-0   scw-bssd                4m30s

    NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/loki-grafana           Bound    pvc-88038939-24a5-4383-abe8-f3aab97b7ce7   10Gi       RWO            scw-bssd       19s
    persistentvolumeclaim/storage-loki-stack-0   Bound    pvc-c6fce993-a73d-4423-9464-7c10ab009062   100Gi      RWO            scw-bssd       5m3s
  4. Check that the pods are running correctly.

    kubectl get pods -n loki-stack
    NAME                            READY   STATUS    RESTARTS   AGE
    loki-grafana-67994589cc-7jq4t   0/1     Running   0          74s
    loki-stack-0                    1/1     Running   0          5m58s
    loki-stack-promtail-dtf5v       1/1     Running   0          5m42s
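
    The Grafana pod may show 0/1 READY while it finishes starting. You can wait for it to become ready, assuming the chart’s standard app.kubernetes.io/name=grafana label:

    kubectl wait --namespace loki-stack --for=condition=ready pod \
    --selector app.kubernetes.io/name=grafana --timeout=120s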
  5. Get the admin password.

    kubectl get secret --namespace loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
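
    If you prefer, you can store the password in a shell variable for later use:

    GRAFANA_PASSWORD=$(kubectl get secret --namespace loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode)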
  6. Configure a port-forward to reach Grafana from your web browser:

    kubectl port-forward --namespace loki-stack service/loki-grafana 3000:80
    Forwarding from 127.0.0.1:3000 -> 3000
    Forwarding from [::1]:3000 -> 3000
  7. Access http://localhost:3000 to reach the Grafana interface. Log in using the admin user and the password you retrieved in step 5.

  8. Click Configuration > Data Sources in the side menu.

  9. Click + Add Data Source.

  10. Select Loki.

  11. Set the URL of the Loki data source to http://loki-stack.loki-stack:3100 (the loki-stack service in the loki-stack namespace), then save the data source.
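
    If Grafana cannot reach Loki, you can check the endpoint from inside the cluster with a temporary test pod (a quick sketch, assuming the public curlimages/curl image):

    # Loki answers "ready" on its /ready endpoint once it is up
    kubectl run curl-test --rm -it --restart=Never --namespace loki-stack \
    --image=curlimages/curl -- curl -s http://loki-stack.loki-stack:3100/ready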

  12. Check that you can access your logs using the Explore tab in Grafana. Because Loki indexes labels rather than log content, a LogQL query first selects streams by their labels and then filters the content, as in the example below.
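
    A minimal example you can paste into Explore (the namespace value is illustrative; pick one that exists in your cluster):

    {namespace="kube-system"} |= "error"

    This query selects all log streams labeled with the kube-system namespace and keeps only the lines containing "error".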

You now have a Loki stack up and running. All your pods’ logs are stored in Loki, and you can view and query your Kubernetes logs in Grafana. Refer to the Grafana documentation if you want to learn more about querying the Loki data source.
