Send Kapsule logs and metrics to the Observability Cockpit with Fluent Bit
Reviewed on 13 December 2023 • Published on 01 June 2023
In this tutorial, you will learn how to forward the application logs and usage metrics of your Kubernetes Kapsule containers to the Observability Cockpit.
This is done using Fluent Bit, a lightweight logs and metrics processor that, once configured in a Kubernetes cluster, acts as a gateway between your containers and the Cockpit endpoints.
Cockpit dashboard updates
In April 2024, a new version of Cockpit will be released.
This version introduces regionalization, offering you more flexibility and resilience for seamless monitoring. If you created customized dashboards with data for your Scaleway resources before April 2024, you will need to update your Grafana queries to use the new regionalized data sources.
Installed helm, the Kubernetes package manager, on your local machine (version 3.2+)
Important
Keeping the default configuration on your agents might lead to more of your resources' metrics being sent than needed, resulting in higher consumption and a higher bill at the end of the month.
Sending metrics and logs for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the product pricing for more information.
We will configure Fluent Bit to retrieve metrics (e.g., CPU, memory, and disk usage) from your Kubernetes nodes and the application logs from your running pods.
Create a new section config.inputs in the values.yaml file:
inputs: |
    [INPUT]
        Name            node_exporter_metrics
        Tag             node_metrics
        Scrape_interval 60

    [INPUT]
        Name   tail
        Path   /var/log/containers/*.log
        Parser docker
        Tag    logs.*
The first subsection adds an input to Fluent Bit to retrieve the usage metrics from your containers:
Name node_exporter_metrics: This input plugin is used to collect various system-level metrics from your nodes.
Tag node_metrics: The Tag parameter assigns a tag to the incoming data from the node_exporter_metrics plugin. In this case, the tag node_metrics is assigned to the collected metrics.
Scrape_interval 60: The frequency at which metrics are retrieved. Metrics are collected every 60 seconds.
Important
Increasing the scrape interval lets you push fewer metric samples per minute to your Cockpit and thus pay less.
For instance, if your application exposes 100 metrics and the scrape interval is 60 seconds, 100 samples are pushed per minute. With a scrape interval of 1 second, the same 100 metrics produce 6,000 samples per minute.
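The cost impact of the scrape interval can be estimated with a quick back-of-the-envelope calculation (illustrative figures, not a billing formula):

```python
def samples_per_minute(num_metrics: int, scrape_interval_s: int) -> int:
    """Samples pushed per minute: one sample per metric per scrape."""
    return num_metrics * (60 // scrape_interval_s)

# 100 metrics scraped every 60 seconds -> 100 samples per minute
print(samples_per_minute(100, 60))  # 100

# The same 100 metrics scraped every second -> 6000 samples per minute
print(samples_per_minute(100, 1))   # 6000
```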
The second subsection adds an input to Fluent Bit to retrieve the logs from your containers:
Name tail: The tail input plugin is used to read logs from files.
Path /var/log/containers/*.log: The tail plugin reads logs from /var/log/containers/*.log, which are the log files written by your containers.
Parser docker: The Parser parameter specifies the parser to be used for parsing log records. The docker parser is a custom parser that will be defined below.
Tag logs.*: The Tag parameter assigns a tag to the incoming data from the tail plugin. The tag “logs.*” indicates that the collected logs will have a tag prefix of “logs” followed by any additional subtag.
The inputs collected by Fluent Bit should be structured before sending them to the Cockpit to enable further filtering and better visualisation.
Create a config.customParsers section to define the docker parser which is referenced by the log parsing input:
customParsers: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
This parser expects log records in JSON format. It assumes that the timestamp is located under the key "time" in the JSON log record, in ISO 8601 format with fractional seconds (%L).
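To illustrate the format string, here is how an equivalent timestamp would be parsed in Python; Fluent Bit's %L (fractional seconds) roughly corresponds to Python's %f. The timestamp value is a made-up example:

```python
from datetime import datetime

# Hypothetical "time" value from a Docker-style JSON log record
raw = "2023-06-01T12:34:56.789"

# Fluent Bit: %Y-%m-%dT%H:%M:%S.%L  <->  Python: %Y-%m-%dT%H:%M:%S.%f
ts = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%f")
print(ts)  # 2023-06-01 12:34:56.789000
```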
Define a section named config.filters in order to filter incoming log files from the containers:
filters: |
    [FILTER]
        Name                kubernetes
        Match               logs.*
        Merge_Log           on
        Keep_Log            off
        K8S-Logging.Parser  on
        K8S-Logging.Exclude on
This sets up a filter plugin which will be applied to log records with tags starting with logs.. It enables log merging, extracts and parses Kubernetes log metadata, and allows log exclusion based on Kubernetes log metadata filters.
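To make the Merge_Log behavior concrete, here is a simplified sketch (not the plugin's actual implementation): when a container writes JSON to stdout, the raw record carries it as an escaped string under the log key, and merging lifts those fields into the record itself. The record values are hypothetical:

```python
import json

def merge_log(record: dict) -> dict:
    """Sketch of Merge_Log on / Keep_Log off: parse the JSON payload in
    'log' and merge its keys into the record, dropping the raw string."""
    try:
        payload = json.loads(record["log"])
        if not isinstance(payload, dict):
            return record
    except (KeyError, ValueError):
        return record  # not JSON: leave the record unchanged
    merged = {k: v for k, v in record.items() if k != "log"}  # Keep_Log off
    merged.update(payload)
    return merged

# Hypothetical record as the tail input might produce it
record = {"log": '{"level": "info", "msg": "ready"}', "stream": "stdout"}
print(merge_log(record))  # {'stream': 'stdout', 'level': 'info', 'msg': 'ready'}
```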
Define a section named config.extraFiles.'labelmap.json':
extraFiles:
  labelmap.json: |
    {
      "kubernetes": {
        "container_name": "container",
        "host": "node",
        "labels": {
          "app": "app",
          "release": "release"
        },
        "namespace_name": "namespace",
        "pod_name": "instance"
      },
      "stream": "stream"
    }
This maps Kubernetes metadata fields from the log records to label names, so the logs can be structured and filtered in the Cockpit.
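As a rough sketch of how this mapping is applied (the real logic lives in the Loki output plugin), each path in labelmap.json selects a field from the record and emits it under the mapped label name. The record values below are hypothetical:

```python
# The labelmap from above, and a hypothetical enriched log record
labelmap = {
    "kubernetes": {
        "container_name": "container",
        "host": "node",
        "labels": {"app": "app", "release": "release"},
        "namespace_name": "namespace",
        "pod_name": "instance",
    },
    "stream": "stream",
}

record = {
    "kubernetes": {
        "container_name": "api",
        "host": "node-1",
        "labels": {"app": "shop", "release": "v2"},
        "namespace_name": "prod",
        "pod_name": "api-6d4f",
    },
    "stream": "stdout",
    "log": "request handled",
}

def extract_labels(labelmap: dict, record: dict) -> dict:
    """Walk the labelmap and record in parallel, collecting label -> value."""
    labels = {}
    for key, target in labelmap.items():
        if key not in record:
            continue
        if isinstance(target, dict):
            labels.update(extract_labels(target, record[key]))
        else:
            labels[target] = record[key]
    return labels

print(extract_labels(labelmap, record))
```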
The last step in the Fluent Bit configuration is to define where the logs and metrics will be pushed.
Create a token and select push permissions for both logs and metrics.
Create a section named config.outputs in the values.yaml file:
outputs: |
    [OUTPUT]
        Name                 prometheus_remote_write
        Match                node_metrics
        Host                 <...>
        Port                 443
        Uri                  /api/v1/push
        Header               Authorization Bearer <...>
        Log_response_payload false
        Tls                  on
        Tls.verify           on
        Add_label            job kapsule-metrics

    [OUTPUT]
        Name                   loki
        Match                  logs.*
        Host                   <...>
        Port                   443
        Tls                    on
        Tls.verify             on
        Label_map_path         /fluent-bit/etc/labelmap.json
        Auto_kubernetes_labels on
        Http_user              nologin
        Http_passwd            <...>
Fill in the blanks as follows:
Host in the first [OUTPUT] subsection: paste your Metrics API URL, shown in the API and Tokens tab of your Cockpit. Remove the https:// protocol prefix.
Header: after Bearer, paste the token generated in the previous step.
Host in the second [OUTPUT] subsection: paste your Logs API URL, shown in the API and Tokens tab of your Cockpit. Remove the https:// protocol prefix.
Http_passwd: paste the token generated in the previous step.
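If you want to check your token and endpoint independently of Fluent Bit, you can push a test line to Loki's push API (POST /loki/api/v1/push). The sketch below only builds the JSON body; the host and token are deliberately left out:

```python
import json
import time

def loki_push_payload(labels: dict, line: str) -> str:
    """Build the JSON body expected by Loki's /loki/api/v1/push endpoint:
    a list of streams, each with a label set and [timestamp_ns, line] pairs."""
    ts_ns = str(time.time_ns())
    return json.dumps({"streams": [{"stream": labels, "values": [[ts_ns, line]]}]})

body = loki_push_payload({"job": "smoke-test"}, "hello from kapsule")
# POST this body to https://<your Logs API URL>/loki/api/v1/push with basic
# auth (user "nologin", password = your token) -- both elided here on purpose.
print(body)
```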
In the first subsection, the prometheus_remote_write plugin is used to send metrics to the Prometheus server of your Cockpit using the remote write protocol.
In the second subsection, the loki plugin is used to send logs to the Loki server of your Cockpit, using the field mapping from labelmap.json defined above.
Your Kapsule logs index can be queried in the Explore section of your Cockpit’s dashboard in Grafana. In the data source selector, pick the Logs index. The Kubernetes labels are already mapped and can be used as filters in queries.