Documentation & Tutorials

How to collect and visualize your logs with the ELK stack (Elasticsearch, Logstash, Kibana)

Elastic Stack Overview

Historically, ELK is a bundle of three open-source software projects: Elasticsearch, Logstash, and Kibana. All three products are maintained by the company Elastic. The bundle consists of:

  • Elasticsearch, a NoSQL database based on the Lucene search engine.
  • Logstash, a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and exports it to various targets.
  • Kibana, a visualization layer that works on top of Elasticsearch.

Elastic has since added a family of lightweight log shippers called Beats and renamed the bundle the Elastic Stack. The solution is flexible and is mostly used to centralize the logging of an infrastructure.


Requirements

  • You have an account and are logged into your cloud provider's console
  • You have configured your SSH key
  • You have an instance with at least 4GB of RAM

Installing Elastic Stack

1 . The software requires Oracle Java to be installed on the machine. Start by installing the JRE:

add-apt-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java8-set-default

You have to accept Oracle’s software license during the installation of Java.

2 . Install the Elastic GPG key to validate the packages before installing them:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

3 . Install HTTPS transport to download the packages over a secure connection:

apt-get install apt-transport-https

4 . Add the Elastic repository to the APT configuration:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list

5 . Update APT and install Elasticsearch:

apt-get update
apt-get install elasticsearch

6 . Edit the file /etc/elasticsearch/elasticsearch.yml and restrict connections to Elasticsearch to the local machine by adding the following line:

network.host: "localhost"
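If you prefer not to open an editor, a sed one-liner can apply the same change. The sketch below operates on a temporary copy with sample contents, standing in for the real elasticsearch.yml:

```shell
# Sketch: switch Elasticsearch to listen on localhost only, using sed.
# A temp file with illustrative contents stands in for
# /etc/elasticsearch/elasticsearch.yml here.
cfg=$(mktemp)
printf '#network.host: 192.168.0.1\nhttp.port: 9200\n' > "$cfg"
sed -i 's|^#network.host:.*|network.host: "localhost"|' "$cfg"
grep '^network.host' "$cfg"    # prints: network.host: "localhost"
```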

7 . Install Logstash and rsyslog

apt-get install logstash rsyslog

8 . Install Filebeat

apt-get install filebeat

9 . Install Kibana

apt-get install kibana

10 . Start Elasticsearch:

systemctl start elasticsearch

To verify that Elasticsearch is running, run curl -X GET "localhost:9200". The output should look similar to this example:

{
  "name" : "xt7_7qL",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_zupO8KGQkekjkwV1lCbkQ",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

11 . Open the file /etc/kibana/kibana.yml and uncomment the following lines:

server.port: 5601
elasticsearch.url: "http://localhost:9200"
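As with the Elasticsearch configuration, the two settings can be uncommented with a sed one-liner. The sketch below works on a temporary copy standing in for /etc/kibana/kibana.yml:

```shell
# Sketch: uncomment the two Kibana settings with sed.
# A temp file stands in for /etc/kibana/kibana.yml here.
cfg=$(mktemp)
printf '#server.port: 5601\n#elasticsearch.url: "http://localhost:9200"\n' > "$cfg"
sed -i -E 's/^#(server\.port|elasticsearch\.url)/\1/' "$cfg"
cat "$cfg"
```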

Enable and start the Kibana service in systemd:

systemctl enable kibana
systemctl start kibana

12 . Install nginx as a proxy to Kibana:

apt-get install nginx

13 . Use OpenSSL to create a user and password for Kibana. The command prompts for a password and appends an htpasswd entry for the user kibana to the file:

echo "kibana:`openssl passwd -apr1`" | tee -a /etc/nginx/htpasswd.users
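openssl passwd prompts for the password interactively. To see what the resulting entry looks like, you can pass a test password and a fixed salt on the command line; both values below are illustrative only and should not be reused:

```shell
# Sketch: generate an htpasswd-style entry non-interactively.
# Password "s3cret" and salt "abcdefgh" are illustrative only.
entry="kibana:$(openssl passwd -apr1 -salt abcdefgh s3cret)"
echo "$entry"    # -> kibana:$apr1$abcdefgh$<hash>
```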

14 . Edit the file /etc/nginx/sites-available/kibana.local and paste the following content to create a proxy to Kibana. Make sure to replace kibana.local with the DNS name of your instance:

    server {
      listen 80;
      server_name kibana.local;

      auth_basic "Restricted Access";
      auth_basic_user_file /etc/nginx/htpasswd.users;

      location / {
        proxy_pass         http://localhost:5601;
        proxy_redirect     off;

        proxy_set_header   Host              $host;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
      }
    }


15 . Create a symbolic link to enable the site in nginx:

ln -s /etc/nginx/sites-available/kibana.local /etc/nginx/sites-enabled/kibana.local

16 . Reload the nginx configuration to activate the proxy:

systemctl restart nginx

17 . You can now access your Kibana Dashboard at http://kibana.local:

Kibana Interface

Recommended: You can request a free SSL certificate from Let’s Encrypt to secure the connection between your browser and the Kibana Dashboard.

Configuring rsyslog

Edit the file /etc/rsyslog.conf, uncomment the following lines, then save the file and restart the service with systemctl restart rsyslog:

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
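To check that imudp accepts messages, you can hand-craft a BSD-syslog style datagram and send it to the listener with bash's /dev/udp redirection (the tag test-script is arbitrary):

```shell
# Sketch: build an RFC 3164-style message (PRI 13 = user.notice)
# and send it to the local UDP listener on port 514.
msg="<13>$(date '+%b %e %H:%M:%S') $(hostname) test-script: hello syslog"
echo "$msg" > /dev/udp/127.0.0.1/514 || true   # needs bash, not plain sh
echo "$msg"
```

If rsyslog is running, the message appears in /var/log/syslog.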

Configuring Logstash

Logstash allows you to collect data from different sources, transform it into a common format, and export it to a defined destination.

Logstash configuration files are written in a JSON-like syntax and can be found in the /etc/logstash/conf.d directory.

1 . Configure a Filebeat input in the configuration file 02-beats-input.conf:

nano /etc/logstash/conf.d/02-beats-input.conf

Copy the following information into the file, save and close it. This configuration lets the beats input listen on port 5044:

input {
  beats {
    port => 5044
  }
}

2 . Create a file /etc/logstash/conf.d/10-syslog-filter.conf and paste the following contents. This filter parses incoming system logs to make them structured and usable by the Kibana Dashboards. For more information you may refer to the official documentation. Save and close the file once edited.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
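To get a feel for what the grok pattern extracts, here is a rough shell equivalent applied to a sample line. The sed regex only approximates grok's named patterns, and the sample line is made up:

```shell
# Sketch: approximate the grok pattern with a sed regex to show the
# fields it pulls out of a typical syslog line (illustrative only).
line='Nov 21 09:58:09 scw-63c9a9 sshd[1234]: Accepted publickey for root'
echo "$line" | sed -E 's/^([A-Z][a-z]{2} +[0-9]+ [0-9:]+) ([^ ]+) ([^ :[]+)(\[([0-9]+)\])?: (.*)$/timestamp=\1 host=\2 program=\3 pid=\5 message=\6/'
```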

3 . Create a file /etc/logstash/conf.d/30-elasticsearch-output.conf and put the following content into it. Save and exit the file once edited. The output rules defined in this file will send the data to Elasticsearch, running at port 9200 on localhost. It will also store the data in an index named after the Beat used.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
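The index pattern is expanded per event. For an event shipped by Filebeat 6.4.3 today, it would resolve to something like the following (beat name and version are illustrative):

```shell
# Sketch: how %{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}
# resolves for one event (beat name and version are illustrative).
beat="filebeat"; version="6.4.3"
echo "${beat}-${version}-$(date -u +%Y.%m.%d)"
```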

If you want to add filters that use the Filebeat input, name the files so that they sort between the input and the output configuration (i.e. between 02 and 30).
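The numeric prefixes matter because Logstash loads the files in /etc/logstash/conf.d in lexical order, the same order ls shows:

```shell
# Sketch: the numeric prefixes keep input -> filter -> output ordering,
# because conf.d files are read sorted lexically (like ls output).
d=$(mktemp -d)
touch "$d"/02-beats-input.conf "$d"/10-syslog-filter.conf "$d"/30-elasticsearch-output.conf
ls "$d"
```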

4 . Start and enable the Logstash service:

systemctl start logstash
systemctl enable logstash

Configuring Filebeat

The Elastic Stack uses lightweight data shippers, called Beats, to collect data from various sources and transport it to Logstash or Elasticsearch. This tutorial uses Filebeat to process log files.

Other Beats are available, for example: Metricbeat to collect metrics from systems and services, Packetbeat to analyze network traffic, or Heartbeat to monitor the availability of services.

1 . Open the Filebeat configuration:

nano /etc/filebeat/filebeat.yml

Note: The file is written in YAML format and it is important that you respect the formatting rules when you edit the file.

2 . Add the following configuration for syslog in the filebeat.inputs section of the file:

- type: syslog
  host: "localhost:514"

3 . Search for the output.elasticsearch section and comment out the lines as follows:

  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

4 . Search for output.logstash and uncomment the lines as follows:

  # The Logstash hosts
  hosts: ["localhost:5044"]

5 . Filebeat uses different modules to parse different log files. Enable the system module to handle generic system log files with Filebeat:

filebeat modules enable system

You can keep the default configuration of the module for this tutorial. If you want to learn more about the parsing rules applied, you may check the configuration of the module located at /etc/filebeat/modules.d/system.yml.

6 . Load the index template into Elasticsearch:

filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

7 . Load the sample Kibana dashboards. Because the dashboards are loaded through Elasticsearch and Kibana directly, temporarily disable the Logstash output and enable the Elasticsearch output:

filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

You will see an output similar to the following:

2018-11-13T14:31:49.695Z	INFO	instance/beat.go:680	Kibana dashboards successfully loaded.
Loaded dashboards
2018-11-13T14:31:49.695Z	INFO	elasticsearch/client.go:163	Elasticsearch url: http://localhost:9200
2018-11-13T14:31:49.703Z	INFO	elasticsearch/client.go:712	Connected to Elasticsearch version 6.4.3
2018-11-13T14:31:49.703Z	INFO	kibana/client.go:113	Kibana url: http://localhost:5601
2018-11-13T14:31:49.771Z	WARN	fileset/modules.go:388	X-Pack Machine Learning is not enabled
2018-11-13T14:31:49.827Z	WARN	fileset/modules.go:388	X-Pack Machine Learning is not enabled
Loaded machine learning job configurations

8 . You can now start and enable the Filebeat service:

systemctl start filebeat
systemctl enable filebeat

9 . To verify that Filebeat is shipping your log data to Elasticsearch, query the Filebeat index:

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

The output should look like the following example:

{
  "took" : 11,
  "timed_out" : false,
  "_shards" : {
    "total" : 3,
    "successful" : 3,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 5603,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-6.5.1-2018.11.21",
        "_type" : "doc",
        "_id" : "_My0NWcBPY-4NB5pqDHz",
        "_score" : 1.0,
        "_source" : {
          "prospector" : {
            "type" : "log"
          },
          "offset" : 7623,
          "@timestamp" : "2018-11-21T09:58:09.981Z",
          "tags" : [ ... ],
          "source" : "/var/log/syslog",
          "host" : {
            "id" : "8b55e0cf58da4373bc5ba7533f0f3415",
            "containerized" : false,
            "name" : "scw-63c9a9",
            "architecture" : "x86_64",
            "os" : {
              "codename" : "bionic",
              "version" : "18.04.1 LTS (Bionic Beaver)",
              "platform" : "ubuntu",
              "family" : "debian"
            }
          }
        }
      }
    ]
  }
}
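Once real results come back, a short pipeline can summarize which log files are being shipped. The inline JSON below is a hand-written stand-in for the curl response; pipe the real output instead:

```shell
# Sketch: count indexed documents per source log file.
# The echoed JSON stands in for the real curl response.
echo '{"hits":{"hits":[{"_source":{"source":"/var/log/syslog"}},{"_source":{"source":"/var/log/auth.log"}},{"_source":{"source":"/var/log/syslog"}}]}}' \
  | grep -o '"source":"[^"]*"' | sort | uniq -c | sort -rn
```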

Exploring Kibana

The data collected by your setup is now available in Kibana:

Kibana Logs

Use the menu on the left to navigate to the Dashboard page and search for Filebeat System dashboards. You can browse the sample dashboards included with Kibana or create your own dashboards based on the metrics you want to monitor.
