
Collecting and visualizing your logs with the Elastic stack (Elasticsearch, Logstash, Kibana - ELK Stack)
ELK is a bundle of three open-source software projects maintained by Elastic. Elastic has since added a family of lightweight log shippers called Beats and renamed the bundle the Elastic Stack. The solution is flexible and is mostly used to centralize logging requirements.
The ELK stack consists of:
- Elasticsearch, a NoSQL database based on the Lucene search engine.
- Logstash, a server-side data processing pipeline that accepts data from various sources simultaneously, transforms it, and exports the data to various targets.
- Kibana, a visualization layer that works on top of Elasticsearch.
You may need certain IAM permissions to carry out some actions described on this page. This means that:
- you are the Owner of the Scaleway Organization in which the actions will be carried out, or
- you are an IAM user of the Organization, with a policy granting you the necessary permission sets.

Before you start, make sure that:
- you have an account and are logged into the Scaleway console,
- you have configured your SSH key,
- you have created an Instance or an Elastic Metal server with at least 4 GB of RAM.
Installing the ELK stack
- Install Java. In this tutorial, we use the OpenJDK package:
  ```
  apt install -y openjdk-8-jdk
  ```
- Install the Elastic GPG key to validate the packages. The key is stored in a dedicated keyring so that APT can verify the repository configured in the next step:
  ```
  wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
  ```
- Install HTTPS transport to download the packages over a secure connection:
  ```
  apt install -y apt-transport-https
  ```
- Add the Elastic repository to the APT configuration:
  ```
  echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
  ```
- Update APT and install Elasticsearch:
  ```
  apt update && apt install -y elasticsearch
  ```
- Uncomment and edit the following lines in the /etc/elasticsearch/elasticsearch.yml file to reflect the example below. This limits the Elasticsearch connection to localhost:
  ```
  network.host: 127.0.0.1
  http.port: 9200
  ```
- Install Logstash and rsyslog:
  ```
  apt install -y logstash rsyslog
  ```
- Install Filebeat:
  ```
  apt install -y filebeat
  ```
- Install Kibana:
  ```
  apt install -y kibana
  ```
- Start Elasticsearch and enable the service at boot:
  ```
  systemctl start elasticsearch.service
  systemctl enable elasticsearch.service
  ```
- Run the following command to verify whether Elasticsearch is running:
  ```
  curl -X GET "localhost:9200"
  ```
  The output should be similar to this example:
  ```
  {
    "name" : "elastic-stack",
    "cluster_name" : "elasticsearch",
    "cluster_uuid" : "LiIyk5P1TMuR6MqOWcs_DQ",
    "version" : {
      "number" : "7.8.0",
      "build_flavor" : "default",
      "build_type" : "deb",
      "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
      "build_date" : "2020-06-14T19:35:50.234439Z",
      "build_snapshot" : false,
      "lucene_version" : "8.5.1",
      "minimum_wire_compatibility_version" : "6.8.0",
      "minimum_index_compatibility_version" : "6.0.0-beta1"
    },
    "tagline" : "You Know, for Search"
  }
  ```
- Open the /etc/kibana/kibana.yml file and uncomment the following lines:
  ```
  server.port: 5601
  server.host: "localhost"
  elasticsearch.hosts: ["http://localhost:9200"]
  ```
  Save and exit.
- Enable and start the Kibana service in systemd:
  ```
  systemctl enable kibana.service
  systemctl start kibana.service
  ```
- Install nginx as a proxy to Kibana:
  ```
  apt install -y nginx
  ```
- Use OpenSSL to create a user and password for the Elastic Stack interface. This command generates an htpasswd file containing the user kibana and a password you are prompted to create:
  ```
  echo "kibana:`openssl passwd -apr1`" | tee -a /etc/nginx/htpasswd.users
  ```
- Edit the /etc/nginx/sites-available/elastic.local file and paste the following content to create a proxy to Kibana. Important: replace elastic.local with the domain name of your Instance:
  ```
  server {
      listen 80;
      server_name elastic.local;

      auth_basic "Restricted Access";
      auth_basic_user_file /etc/nginx/htpasswd.users;

      location / {
          proxy_pass http://localhost:5601;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
      }
  }
  ```
- Create a symbolic link to enable the site in nginx:
  ```
  ln -s /etc/nginx/sites-available/elastic.local /etc/nginx/sites-enabled/elastic.local
  ```
- Reload the nginx configuration to activate the proxy:
  ```
  systemctl restart nginx.service
  ```
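If nginx fails to restart, you can check the configuration for syntax errors first; this is nginx's standard self-test and reports the offending file and line:
```
nginx -t
```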
You can now access your Elastic Dashboard using your domain name, for example http://elastic.local.
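If the dashboard does not display, you can verify that Kibana itself answers locally, which bypasses the nginx proxy (Kibana can take a minute to initialize after starting):
```
curl -I http://localhost:5601
```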
You can either start with an empty stack and begin collecting your metrics, or load sample data.
You can request a free SSL certificate from Let’s Encrypt to secure the connection between your browser and the Kibana Dashboard.
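As a minimal sketch, assuming your domain already points to the Instance and you use certbot's nginx plugin, the certificate can be requested and the proxy configuration updated in one step (replace elastic.local with your own domain):
```
apt install -y certbot python3-certbot-nginx
certbot --nginx -d elastic.local
```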
Configuring rsyslog
Edit the /etc/rsyslog.conf file, uncomment the following lines, and save:
```
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
```
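Restart rsyslog so that it starts listening on UDP port 514:
```
systemctl restart rsyslog.service
```
Other machines can now forward their logs to this Instance. As an illustrative example using standard rsyslog forwarding syntax, add a line like the following to /etc/rsyslog.conf on the remote machine (a single @ means UDP; replace elastic.local with your server's address):
```
*.* @elastic.local:514
```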
Configuring Logstash
Logstash allows you to collect data from different sources, transform it into a common format, and export it to a defined destination.
Logstash pipeline configuration files use a JSON-like syntax and can be found in the /etc/logstash/conf.d directory.
- Configure a Filebeat input in the configuration file 02-beats-input.conf:
  ```
  nano /etc/logstash/conf.d/02-beats-input.conf
  ```
  Copy the following information into the file, then save and close it. This configuration allows the beats input to listen on port 5044:
  ```
  input {
    beats {
      port => 5044
    }
  }
  ```
- Create a file named /etc/logstash/conf.d/10-syslog-filter.conf and paste the following contents:
  ```
  filter {
    if [type] == "syslog" {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
        add_field => [ "received_at", "%{@timestamp}" ]
        add_field => [ "received_from", "%{host}" ]
      }
      syslog_pri { }
      date {
        match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
  ```
  Save and close the file.
  Note: This filter parses incoming system logs to make them structured and usable by the Kibana dashboards. For example, a raw line such as `Jul 22 11:17:44 elastic-stack sshd[1042]: message` is split into syslog_timestamp, syslog_hostname, syslog_program, syslog_pid, and syslog_message fields. For more information, refer to the official Elastic documentation.
- Create another file named /etc/logstash/conf.d/30-elasticsearch-output.conf, copy the following content into it, then save and exit:
  ```
  output {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  ```
  The output rules defined in this file send the data to Elasticsearch, which runs on port 9200 on localhost, and store it in an index named after the Beat used.
  Note: If you want to add filters that use the Filebeat input, make sure these filters are numbered between the input and output configuration (between 02 and 30).
- Start and enable the Logstash service:
  ```
  systemctl start logstash.service
  systemctl enable logstash.service
  ```
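If Logstash fails to start, you can check the pipeline configuration files for syntax errors before digging into the service logs. This is a quick sketch assuming the default package layout:
```
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```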
Configuring Filebeat
The Elastic Stack uses lightweight data shippers, called Beats, to collect data from various sources and transport it to Logstash or Elasticsearch. In this tutorial, we show how to integrate your machine with Elastic using the Filebeat client.
Other Beats are available, for example: Metricbeat to collect metrics of systems and services, Packetbeat to analyze network traffic, or Heartbeat to monitor the availability of services.
- Open the Filebeat configuration file:
  ```
  nano /etc/filebeat/filebeat.yml
  ```
  Note: The file is written in YAML format, and it is important that you respect the formatting rules when you edit it.
- Add the following configuration for syslog in the filebeat.inputs section of the file:
  ```
  - type: syslog
    protocol.udp:
      host: "localhost:514"
  ```
- Search for output.elasticsearch and comment out the lines as follows:
  ```
  [...]
  #output.elasticsearch:
    # Array of hosts to connect to.
    #hosts: ["localhost:9200"]
  [...]
  ```
- Search for output.logstash and uncomment the lines as follows:
  ```
  [...]
  output.logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]
  [...]
  ```
- Enable the Filebeat system module to handle generic system log files:
  ```
  filebeat modules enable system
  ```
  Note: Filebeat uses different modules to parse different log files. You can keep the module's default configuration for this tutorial. If you want to learn more about the parsing rules applied, you can check the module configuration located at /etc/filebeat/modules.d/system.yml.
- Load the index template into Elasticsearch:
  ```
  filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
  ```
- Load the Kibana dashboards. As Filebeat is configured to send its data to Logstash, you have to temporarily disable the Logstash output and enable the Elasticsearch output for this step:
  ```
  filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601
  ```
  An output similar to the following displays:
  ```
  [...]
  Loaded machine learning job configurations
  2020-07-22T11:48:00.660Z INFO eslegclient/connection.go:97 elasticsearch url: http://localhost:9200
  2020-07-22T11:48:00.667Z INFO [esclientleg] eslegclient/connection.go:306 Attempting to connect to Elasticsearch version 7.8.0
  2020-07-22T11:48:00.670Z INFO eslegclient/connection.go:97 elasticsearch url: http://localhost:9200
  2020-07-22T11:48:00.674Z INFO [esclientleg] eslegclient/connection.go:306 Attempting to connect to Elasticsearch version 7.8.0
  2020-07-22T11:48:01.405Z INFO fileset/pipelines.go:134 Elasticsearch pipeline with ID 'filebeat-7.8.0-system-auth-pipeline' loaded
  2020-07-22T11:48:01.637Z INFO fileset/pipelines.go:134 Elasticsearch pipeline with ID 'filebeat-7.8.0-system-syslog-pipeline' loaded
  2020-07-22T11:48:01.637Z INFO cfgfile/reload.go:262 Loading of config files completed.
  2020-07-22T11:48:01.637Z INFO [load] cfgfile/list.go:118 Stopping 1 runners ...
  Loaded Ingest pipelines
  ```
- You can now start and enable the Filebeat service:
  ```
  systemctl start filebeat.service
  systemctl enable filebeat.service
  ```
- Run the following command to verify that Filebeat is shipping data to Elasticsearch:
  ```
  curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
  ```
  The output should look like the following example:
  ```
  {
    "took" : 11,
    "timed_out" : false,
    "_shards" : {
      "total" : 2,
      "successful" : 2,
      "skipped" : 0,
      "failed" : 0
    },
    "hits" : {
      "total" : {
        "value" : 2058,
        "relation" : "eq"
      },
      "max_score" : 1.0,
      "hits" : [
        {
          "_index" : "filebeat-7.8.0-2020.07.22",
          "_type" : "_doc",
          "_id" : "ZIpbdnMBenM2E5SX9FAi",
          "_score" : 1.0,
          "_source" : {
            "message" : "Jul 22 11:17:44 eleastic-stack kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-5.4.0-1018-kvm root=PARTUUID=fc220e13-bb33-43c7-a49a-90d85d9edc7f ro console=tty1 console=ttyS0 panic=-1",
            "@version" : "1",
            "fileset" : {
              "name" : "syslog"
            },
            "host" : {
              "containerized" : false,
              "mac" : [ "de:1c:4c:3e:90:3a" ],
              "hostname" : "eleastic-stack",
              "name" : "eleastic-stack",
              "os" : {
                "codename" : "focal",
                "version" : "20.04 LTS (Focal Fossa)",
                "kernel" : "5.4.0-1018-kvm",
                "family" : "debian",
                "name" : "Ubuntu",
                "platform" : "ubuntu"
              },
              "architecture" : "x86_64",
              "id" : "a4d9477d47d14c4fb551f52adb5eb810",
              "ip" : [ "10.65.100.115", "2001:bc8:47b0:1239::1", "fe80::dc1c:4cff:fe3e:903a" ]
            },
            "ecs" : {
              "version" : "1.5.0"
            },
            "service" : {
              "type" : "system"
            },
            "log" : {
              "offset" : 344,
              "file" : {
                "path" : "/var/log/syslog"
              }
            },
            "input" : {
              "type" : "log"
            },
            "@timestamp" : "2020-07-22T11:49:57.578Z",
            "agent" : {
              "ephemeral_id" : "348ca814-7f48-408e-9956-d8650d74420b",
              "version" : "7.8.0",
              "type" : "filebeat",
              "hostname" : "eleastic-stack",
              "name" : "eleastic-stack",
              "id" : "76a478aa-6c78-47c8-a045-a962e89a1046"
            },
            "tags" : [ "beats_input_codec_plain_applied" ],
            "event" : {
              "dataset" : "system.syslog",
              "module" : "system",
              "timezone" : "+00:00"
            }
          }
        },
        [...]
  ```
Exploring Kibana
The data collected by your setup is now available in Kibana.
Use the menu on the left to navigate to the Dashboard page and search for Filebeat System dashboards.
You can browse the sample dashboards included with Kibana, or create your own based on the metrics you want to monitor.
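If you prefer to inspect the raw data before building dashboards, you can also query Elasticsearch directly. As a small sketch using a standard terms aggregation (the field process.name is an assumption here, following the ECS schema used by the Filebeat system module), this counts log entries per program:
```
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty' -H 'Content-Type: application/json' -d '{
  "size": 0,
  "aggs": {
    "programs": {
      "terms": { "field": "process.name" }
    }
  }
}'
```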
For more information about how to use the Elastic Stack, refer to the official documentation.