Use Loki to Manage k8s Application Logs

Loki - Overview

Scaleway Elements Kubernetes Kapsule is not delivered with an embedded logging feature.
In this tutorial, you will learn how to collect your Kubernetes logs using Loki and Grafana. Loki is a log aggregation system inspired by Prometheus.
We believe it is easy to operate, especially in a Kubernetes environment, as it does not index the content of the logs but instead attaches labels to log streams.
In cloud-native environments, Prometheus is one of the most common monitoring solutions, and you can reuse the labels you have already set for it. For instance, in Kubernetes, the metadata you are already using (object labels) can be reused in Loki to select log streams. If you use Grafana for metrics, adding Loki gives you a single point of management for both logging and monitoring.
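
As a quick illustration, metrics and logs can then be selected with the same label set. The queries below are a sketch with made-up labels (namespace, app); substitute the labels your own objects carry:

# Prometheus (metrics) and Loki (logs) queries selecting the same workload:
rate(http_requests_total{namespace="production", app="nginx"}[5m])
{namespace="production", app="nginx"} |= "error"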

Getting Started

You first have to deploy a Kapsule cluster running version 1.16.x. In the example below, a 10-node Kubernetes cluster is already running. Follow our quick start guide if you need more information on how to get started.

$ kubectl get nodes
NAME                                             STATUS   ROLES    AGE    VERSION
scw-k8s-objective-yonath-default-10f47ee6c83c4   Ready    <none>   97s    v1.16.1
scw-k8s-objective-yonath-default-2c1c18e59d1a4   Ready    <none>   2m9s   v1.16.1
scw-k8s-objective-yonath-default-3a301b7a8b0c4   Ready    <none>   117s   v1.16.1
scw-k8s-objective-yonath-default-3bf169ff3f7f4   Ready    <none>   117s   v1.16.1
scw-k8s-objective-yonath-default-43e84dcf9e504   Ready    <none>   81s    v1.16.1
scw-k8s-objective-yonath-default-4df0dee607b54   Ready    <none>   64s    v1.16.1
scw-k8s-objective-yonath-default-5ba95ac086564   Ready    <none>   105s   v1.16.1
scw-k8s-objective-yonath-default-b63f5f4753e94   Ready    <none>   97s    v1.16.1
scw-k8s-objective-yonath-default-c0c509b2e1f24   Ready    <none>   99s    v1.16.1
scw-k8s-objective-yonath-default-c6461c0fa2cc4   Ready    <none>   94s    v1.16.1

Installing Helm

We are going to deploy Loki and Grafana using Helm, so the first step is to install Helm. As there is a bug in Helm (v2) with Kubernetes 1.16, we will also have to modify the Tiller deployment during the installation.

Kapsule clusters are created with RBAC enabled. A Kubernetes service account and its associated cluster role binding must be created before deploying Helm.

Create a file named rbac-config.yaml with the content below. This single YAML file holds the definitions of both the ServiceAccount and the ClusterRoleBinding needed by Helm.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Once the file is created, apply it using the kubectl command:

$ kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
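
Helm v2 consists of a local client (helm) and the in-cluster Tiller component. If you do not have the helm client yet, one way to get the v2.14.3 release used in this tutorial is to download the official binary from the Helm release archive (adjust the platform in the URL if you are not on Linux amd64):

$ curl -LO https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
$ tar -zxvf helm-v2.14.3-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/helm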

Once the helm client is installed on your local machine, you can install Tiller in the cluster. Please refer to the Helm documentation for more information. The sed in the pipeline below is the workaround for the Kubernetes 1.16 bug mentioned above: the manifest generated by helm init still uses the extensions/v1beta1 API version, which was removed in Kubernetes 1.16, so we rewrite it to apps/v1 and use --override to set the selector field that apps/v1 Deployments require:

$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
$ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
deployment.apps/tiller-deploy created
service/tiller-deploy created
$ kubectl get all --all-namespaces -l app=helm
[..]
NAME                                 READY   STATUS    RESTARTS   AGE
pod/tiller-deploy-77855d9dcf-k46fg   1/1     Running   0          2m27s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/tiller-deploy   ClusterIP   10.46.72.238   <none>        44134/TCP   2m26s
[..]

You can see in the output above that a Tiller pod is running. You are now ready to use Helm to deploy Loki and Grafana.

Installing Loki

The Loki application is not included in the default Helm repositories. Add the Loki repository to Helm and update it:

$ helm repo add loki https://grafana.github.io/loki/charts
"loki" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "loki" chart repository
[..]
$ helm search loki
NAME           	CHART VERSION	APP VERSION	DESCRIPTION                                                 
loki/loki      	0.16.0       	v0.3.0     	Loki: like Prometheus, but for logs.                        
loki/loki-stack	0.17.1       	v0.3.0     	Loki: like Prometheus, but for logs.                        
loki/fluent-bit	0.0.1        	v0.0.1     	Uses fluent-bit Loki go plugin for gathering logs and sen...
loki/promtail  	0.12.3       	v0.3.0     	Responsible for gathering logs and sending them to Loki     

Install the Loki stack with Helm. We want to enable persistence (allowing Helm to create a Scaleway block device and attach it to the Loki pod to store its data) using a Kubernetes Persistent Volume, so that the data survives a pod re-schedule. Do not forget to set these parameters when running the helm install command:

  • loki.persistence.enabled: true
  • loki.persistence.size: 100Gi

It will use Kapsule's default storage class, scw-bssd, to create block volumes using Scaleway Block Storage.
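
You can check which storage class is the default on your cluster before installing; scw-bssd should be listed and marked as the default:

$ kubectl get storageclass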

We install the whole stack in a dedicated Kubernetes namespace named loki-stack.

$ helm install loki/loki-stack -n loki-stack \
                               --set fluent-bit.enabled=true,promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=100Gi \
                               --namespace=loki-stack
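
You can then check that the release was deployed correctly; its status should read DEPLOYED:

$ helm ls loki-stack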

If you plan to use Loki on a production system, make sure that you set up a retention period to avoid filling up the filesystem. For instance, use these parameters if you want to enable a 30-day retention (logs older than 30 days will be deleted); a full example command follows the list. Please note that you have to choose a persistent volume size that fits the amount of logs your deployment will create.

  • config.table_manager.retention_deletes_enabled: true
  • config.table_manager.retention_period: 720h
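
As an illustration, the install command with retention enabled could look like the sketch below. Note that when going through the loki-stack umbrella chart these values are most likely nested under the loki. key; double-check the chart's values file for the exact paths:

$ helm install loki/loki-stack -n loki-stack \
                               --set loki.persistence.enabled=true,loki.persistence.size=100Gi \
                               --set loki.config.table_manager.retention_deletes_enabled=true \
                               --set loki.config.table_manager.retention_period=720h \
                               --namespace=loki-stack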

Install Grafana in the loki-stack namespace with Helm. We also want Grafana to survive a re-schedule, so we are enabling persistence too:

  • persistence.enabled: true
  • persistence.type: pvc
  • persistence.size: 10Gi

$ helm install stable/grafana -n loki-grafana \
                              --set persistence.enabled=true,persistence.type=pvc,persistence.size=10Gi \
                              --namespace=loki-stack

You can check if the block devices were correctly created by Kubernetes:

$ kubectl get pv,pvc --all-namespaces
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
persistentvolume/pvc-1805f8d1-946a-45bb-a968-08a49c81acd4   100Gi      RWO            Delete           Bound    loki-stack/storage-loki-stack-0   scw-bssd                15m
persistentvolume/pvc-65fb2f05-aa74-49ba-b3f0-2b5dab4077fb   10Gi       RWO            Delete           Bound    loki-stack/loki-grafana           scw-bssd                2m48s

NAMESPACE    NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
loki-stack   persistentvolumeclaim/loki-grafana           Bound    pvc-65fb2f05-aa74-49ba-b3f0-2b5dab4077fb   10Gi       RWO            scw-bssd       2m48s
loki-stack   persistentvolumeclaim/storage-loki-stack-0   Bound    pvc-1805f8d1-946a-45bb-a968-08a49c81acd4   100Gi      RWO            scw-bssd       15m

Now that both Loki and Grafana are installed in the cluster, check if the pods are correctly running:

$ kubectl get pods -n loki-stack
NAME                               READY   STATUS    RESTARTS   AGE
loki-grafana-75f788cb85-xqchj      1/1     Running   0          4m32s
loki-stack-0                       1/1     Running   0          17m
loki-stack-fluent-bit-loki-g8rw8   1/1     Running   0          17m
[..]
loki-stack-promtail-4nnkd          1/1     Running   0          17m
[..]

To be able to connect to Grafana, you first have to retrieve the admin password. Then open Grafana in a web browser using a port-forward:

$ kubectl get secret --namespace loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
8DaJk0m1gmS4t1OgXkpgZXs46PaUQn5iydvsZS7g
$ kubectl port-forward --namespace loki-stack service/loki-grafana 3000:80
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Access http://localhost:3000 to reach the Grafana interface. Log in using the admin user and the password you retrieved above.

[Image: Grafana login page]

Add Loki as a data source in Grafana. The URL of the Loki server, as seen from inside the cluster, is http://loki-stack.loki-stack:3100.
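
This hostname follows the usual Kubernetes service DNS scheme (<service>.<namespace>); you can confirm the service name and port before saving the data source:

$ kubectl get service --namespace loki-stack loki-stack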

[Image: adding the Loki data source]

Check that you can access your logs using the Explore tab in Grafana:

[Image: Loki logs in the Grafana Explore tab]

You now have a Loki stack up and running. All your pods' logs will be stored in Loki, and you will be able to view and query your applications' logs in Grafana. Please refer to the Grafana documentation if you want to learn more about querying the Loki data source.
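
For a first query in the Explore tab, a stream selector plus a text filter is usually enough. The namespace label below is typically set by Promtail, but the exact label set depends on your log collector configuration:

{namespace="loki-stack"} |= "error"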
