
Send Kapsule logs and metrics to the Observability Cockpit with Fluent Bit

Reviewed on 13 December 2023 • Published on 01 June 2023
  • fluentbit
  • grafana
  • kubernetes
  • metrics
  • logs

In this tutorial, you will learn how to forward the application logs and usage metrics of your Kubernetes Kapsule containers to the Observability Cockpit.

This is done using Fluent Bit, a lightweight logs and metrics processor that, when deployed in a Kubernetes cluster, acts as a gateway between your containers and the Cockpit endpoints.

Cockpit dashboard updates

Starting April 2024, a new version of Cockpit will be released.

In this version, the concept of regionalization will be introduced to offer you more flexibility and resilience for seamless monitoring. If you have created customized dashboards with data for your Scaleway resources before April 2024, you will need to update your queries in Grafana, with the new regionalized data sources.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • Retrieved your Grafana credentials
  • Created a Kapsule cluster
  • Set up kubectl on your machine
  • Installed helm, the Kubernetes package manager, on your local machine (version 3.2+)
Important
  • Keeping the default configuration on your agents may cause more of your resources’ metrics to be sent than necessary, leading to high consumption and a high bill at the end of the month.
  • Sending metrics and logs for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the product pricing for more information.

Configuring the Fluent Bit service

Fluent Bit will be installed as a Helm package, configured with your Kubernetes resources as inputs and your Observability Cockpit as an output.

  1. Add the Helm repository for Fluent Bit to your machine:

    helm repo add fluent https://fluent.github.io/helm-charts
    helm repo update
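    You can check that the chart is now available from the newly added repository:

    helm search repo fluent/fluent-bit
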
  2. Create a Helm values file named values.yaml; we will use it to configure Fluent Bit.

  3. Create a first section config.service in the values.yaml file to configure the Fluent Bit master process:

    config:
      service: |
        [SERVICE]
            Flush 1
            Log_level info
            Daemon off
            Parsers_File custom_parsers.conf
            HTTP_Server on
            HTTP_Listen 0.0.0.0
            HTTP_PORT 2020
  • Flush 1: Flushes buffered records to the outputs every second.
  • Log_level info: Displays informational logs in the Fluent Bit pods.
  • Daemon off: Runs Fluent Bit as the foreground process in its pods.
  • Parsers_File custom_parsers.conf: Loads additional log parsers that we will define later on.
  • HTTP_Server on: Enables Fluent Bit’s built-in HTTP server.
  • HTTP_Listen 0.0.0.0: Listens on all interfaces exposed by the pod.
  • HTTP_PORT 2020: Listens on port 2020.
Note

You need to enable Fluent Bit’s HTTP server in order for it to communicate with your Cockpit.
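
Once Fluent Bit is installed (see Installing Fluent Bit below), you can check that the HTTP server responds. This is a quick sanity check; it assumes the chart's default app.kubernetes.io/name=fluent-bit pod label, which is what the Helm installation in this tutorial produces:

    # Forward the HTTP port of one fluent-bit pod to your machine
    kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=fluent-bit -o name | head -n 1) 2020:2020

    # In another terminal: the root endpoint returns build information as JSON
    curl http://127.0.0.1:2020/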

Configuring observability inputs

We will configure Fluent Bit to retrieve metrics (e.g., CPU, memory, and disk usage) from your Kubernetes nodes, and application logs from your running pods.

Create a new section config.inputs in the values.yaml file:

  inputs: |
    [INPUT]
        Name node_exporter_metrics
        Tag node_metrics
        Scrape_interval 60

    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag logs.*

The first subsection adds an input to Fluent Bit to retrieve usage metrics from your nodes:

  • Name node_exporter_metrics: This input plugin is used to collect various system-level metrics from your nodes.
  • Tag node_metrics: The Tag parameter assigns a tag to the incoming data from the node_exporter_metrics plugin. In this case, the tag node_metrics is assigned to the collected metrics.
  • Scrape_interval 60: The frequency at which metrics are retrieved. Metrics are collected every 60 seconds.
Important

Increasing the scrape interval lets you push fewer metric samples per minute to your Cockpit, and therefore pay less. For instance, if your application exposes 100 metrics and the scrape interval is 60 seconds, those 100 samples are collected and pushed once per minute. With a scrape interval of 1 second, you would push 6,000 samples per minute instead.

The second subsection adds an input to Fluent Bit to retrieve the logs from your containers:

  • Name tail: The tail input plugin is used to read logs from files.
  • Path /var/log/containers/*.log: The tail plugin reads the log files under /var/log/containers/, where the container runtime writes the logs of your containers.
  • Parser docker: The Parser parameter specifies the parser used to parse log records. The docker parser is a custom parser that will be defined below.
  • Tag logs.*: The Tag parameter assigns a tag to the incoming data from the tail plugin. The tag logs.* means that collected logs are tagged with the prefix logs followed by a subtag derived from the source file, as shown below.
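
With Tag logs.*, the wildcard is expanded with the path of the file each record was read from, with slashes replaced by dots. For a hypothetical pod, a log file and its resulting tag would look like:

    /var/log/containers/my-app-7d4f9b-k8xq2_default_my-app-0123abc.log
    -> logs.var.log.containers.my-app-7d4f9b-k8xq2_default_my-app-0123abc.log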

Configuring logs processing

The inputs collected by Fluent Bit should be structured before they are sent to the Cockpit, to enable filtering and better visualization.

  1. Create a config.customParsers section to define the docker parser which is referenced by the log parsing input:

    customParsers: |
        [PARSER]
            Name docker
            Format json
            Time_Key time
            Time_Format %Y-%m-%dT%H:%M:%S.%L

    This parser expects log records in JSON format. It assumes that the timestamp is stored under the time key of the JSON record, in ISO 8601 date format, as illustrated below.
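
    For reference, here is the kind of record this parser handles. This line is illustrative, in the JSON format written by the container runtime, not actual output from your cluster:

    {"log":"GET /healthz 200\n","stream":"stdout","time":"2023-06-01T12:34:56.789Z"}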

  2. Define a section named config.filters in order to filter incoming log files from the containers:

    filters: |
        [FILTER]
            Name kubernetes
            Match logs.*
            Merge_Log on
            Keep_Log off
            K8S-Logging.Parser on
            K8S-Logging.Exclude on

    This sets up the kubernetes filter plugin, applied to all records whose tag starts with logs.. It merges parsed log content into the record (Merge_Log on), drops the original raw log line once merged (Keep_Log off), and lets pods override the parser or exclude their own logs through annotations (K8S-Logging.Parser and K8S-Logging.Exclude). An example of an enriched record is shown below.
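
    A simplified example of a record after enrichment (all field values are hypothetical):

    {
      "log": "GET /healthz 200",
      "stream": "stdout",
      "kubernetes": {
        "pod_name": "my-app-7d4f9b-k8xq2",
        "namespace_name": "default",
        "container_name": "my-app",
        "host": "node-1",
        "labels": { "app": "my-app" }
      }
    }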

  3. Define a section named config.extraFiles.'labelmap.json':

    extraFiles:
      labelmap.json: |
        {
          "kubernetes": {
            "container_name": "container",
            "host": "node",
            "labels": {
              "app": "app",
              "release": "release"
            },
            "namespace_name": "namespace",
            "pod_name": "instance"
          },
          "stream": "stream"
        }

    This maps the Kubernetes metadata fields of each log record to the Loki label names under which the logs will be indexed, so that they arrive parsed and structured.
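
    Applied to the enriched record shown in the previous step, this mapping would produce the following Loki label set (values hypothetical):

    {container="my-app", node="node-1", app="my-app", namespace="default", instance="my-app-7d4f9b-k8xq2", stream="stdout"}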

Configuring observability outputs

The last step in the Fluent Bit configuration is to define where the logs and metrics will be pushed.

  1. Create a token and select push permissions for both logs and metrics.

  2. Create a section named config.outputs in the values.yaml file:

    outputs: |
        [OUTPUT]
            Name prometheus_remote_write
            Match node_metrics
            Host <...>
            Port 443
            Uri /api/v1/push
            Header Authorization Bearer <...>
            Log_response_payload false
            Tls on
            Tls.verify on
            Add_label job kapsule-metrics

        [OUTPUT]
            Name loki
            Match logs.*
            Host <...>
            Port 443
            Tls on
            Tls.verify on
            Label_map_path /fluent-bit/etc/labelmap.json
            Auto_kubernetes_labels on
            Http_user nologin
            Http_passwd <...>
  3. Replace the <...> placeholders as follows:

  • Host (first subsection): paste the Metrics API URL shown in the API and Tokens tab of your Cockpit, without the https:// prefix.
  • Header: after Bearer, paste the token generated in the previous step.
  • Host (second subsection): paste the Logs API URL shown in the API and Tokens tab of your Cockpit, without the https:// prefix.
  • Http_passwd: paste the token generated in the previous step.

In the first subsection, the prometheus_remote_write plugin is used to send metrics to the Prometheus server of your Cockpit using the remote write protocol. In the second subsection, the loki plugin is used to send logs to the Loki server of your Cockpit, using the field mapping from labelmap.json defined above.
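
Before installing, it can help to check the overall shape of the file. Your values.yaml should now look like the following skeleton, with each ... standing for the contents defined in the previous sections:

    config:
      service: |
        [SERVICE]
            ...
      inputs: |
        [INPUT]
            ...
      filters: |
        [FILTER]
            ...
      customParsers: |
        [PARSER]
            ...
      outputs: |
        [OUTPUT]
            ...
      extraFiles:
        labelmap.json: |
          { ... }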

Installing Fluent Bit

Run the following command in the same directory as your values.yaml file to install Fluent Bit:

helm upgrade --install fluent-bit fluent/fluent-bit -f ./values.yaml

You should see a DaemonSet named fluent-bit with running pods on all of your nodes.
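
You can verify the rollout and inspect Fluent Bit's own logs for authentication or TLS errors; the label selector below is the one set by the official Helm chart:

    # One fluent-bit pod should be running per node
    kubectl get daemonset fluent-bit
    kubectl get pods -l app.kubernetes.io/name=fluent-bit

    # Tail the shipper's logs to spot delivery errors
    kubectl logs daemonset/fluent-bit --tail=20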

Visualizing Kapsule logs and metrics

You can find the logs and metrics from your Kubernetes cluster in your Cockpit’s dashboard in Grafana.

Exploring metrics

Grafana has a built-in dashboard for visualizing node metrics.

  1. Go to Dashboards in your Grafana instance.
  2. Click New, Folder and name it Kapsule.
  3. Click New, Import and paste the following URL in the Import via grafana.com field:
    https://grafana.com/grafana/dashboards/1860-node-exporter-full/
  4. Click Load, select your metrics data source, and click Import to access the new dashboard, named Node Exporter Full.

Exploring logs

Your Kapsule logs index can be queried in the Explore section of your Cockpit’s dashboard in Grafana. In the data source selector, pick the Logs index. The Kubernetes labels are already mapped and can be used as filters in queries.
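
For example, the following LogQL queries use the labels mapped above; the namespace and app values are placeholders for your own workloads:

    {namespace="default"}              # all container logs from the default namespace
    {app="my-app", stream="stderr"}    # error output of pods labeled app=my-app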
