This tutorial explains how to use a Prometheus Monitoring server with Grafana Dashboard.
Prometheus is a flexible monitoring solution that has been in development since 2012. The software stores all of its data in a time series database and offers a multi-dimensional data model and a powerful query language to generate reports about the monitored resources.
There are five steps to use Prometheus with Grafana:
In this tutorial, we use an instance running on Ubuntu Xenial (16.04).
1 . To run Prometheus safely on our server, we have to create users for Prometheus and Node Exporter without the possibility to log in. To achieve this, we use the parameter
--no-create-home, which skips the creation of a home directory, and disable the shell with
--shell /bin/false:
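A minimal sketch of these commands, assuming the user names prometheus and node_exporter that are used throughout this tutorial:

```shell
# Create system users that have no home directory and cannot log in
sudo useradd --no-create-home --shell /bin/false prometheus
sudo useradd --no-create-home --shell /bin/false node_exporter
```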
2 . Create the folders required to store the binaries of Prometheus and its configuration files:
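For example, assuming the standard locations for Prometheus configuration and data:

```shell
# /etc/prometheus holds the configuration, /var/lib/prometheus the data
sudo mkdir /etc/prometheus
sudo mkdir /var/lib/prometheus
```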
3 . Set the ownership of these directories to our
prometheus user, to make sure that Prometheus can access these folders:
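Assuming the directories created in the previous step:

```shell
sudo chown prometheus:prometheus /etc/prometheus
sudo chown prometheus:prometheus /var/lib/prometheus
```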
As Prometheus by itself only scrapes metrics that are exposed to it, we extend its capabilities by adding Node Exporter, a tool that collects information about the system (including CPU, disk, and memory usage) and exposes it for scraping.
1 . Download the latest version of Node Exporter:
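A sketch of the download, assuming the archive name follows the usual GitHub release scheme for version 0.16.0 (the version referenced below):

```shell
cd ~
wget https://github.com/prometheus/node_exporter/releases/download/v0.16.0/node_exporter-0.16.0.linux-amd64.tar.gz
```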
2 . Unpack the downloaded archive. This will create a directory
node_exporter-0.16.0.linux-amd64, containing the executable, a readme and license file:
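For example:

```shell
tar xvf node_exporter-0.16.0.linux-amd64.tar.gz
```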
3 . Copy the binary file into the directory
/usr/local/bin and set the ownership to the user you created in the previous step:
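A sketch of these commands, assuming the node_exporter user created earlier:

```shell
sudo cp node_exporter-0.16.0.linux-amd64/node_exporter /usr/local/bin
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
```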
4 . Remove the leftover files of Node Exporter, as they are not needed any longer:
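For example:

```shell
rm -rf node_exporter-0.16.0.linux-amd64 node_exporter-0.16.0.linux-amd64.tar.gz
```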
5 . To run Node Exporter automatically on each boot, a Systemd service file is required. Create the following file by opening it in Nano:
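Assuming the conventional location for custom unit files:

```shell
sudo nano /etc/systemd/system/node_exporter.service
```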
6 . Copy the following information in the service file, save it and exit Nano:
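A sketch of such a unit file, assuming the node_exporter user and the binary location from the previous steps:

```ini
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```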
7 . Collectors are used to gather information about the system. By default, a set of collectors is activated; you can find the details about this set in the README file. If you want to use a specific set of collectors, you can define them in the
ExecStart section of the service. Collectors are enabled by providing a
--collector.&lt;name&gt; flag. Collectors that are enabled by default can be disabled by providing a
--no-collector.&lt;name&gt; flag.
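A hypothetical example of such an ExecStart line (the collector names here are illustrations only; check the README of your Node Exporter version for the collectors it actually ships):

```ini
# Enable the non-default systemd collector, disable the default arp collector
ExecStart=/usr/local/bin/node_exporter --collector.systemd --no-collector.arp
```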
8 . Reload Systemd to use the newly defined service:
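For example:

```shell
sudo systemctl daemon-reload
```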
9 . Run Node Exporter by typing the following command:
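Assuming the service name defined above:

```shell
sudo systemctl start node_exporter
```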
10 . Verify that the software has been started successfully:
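For example:

```shell
sudo systemctl status node_exporter
```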
You will see an output like this, showing you the status
active (running) as well as the main PID of the application:
11 . If everything is working, enable Node Exporter to be started on each boot of the server:
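For example:

```shell
sudo systemctl enable node_exporter
```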
1 . Download and unpack the latest release of Prometheus. In this example, the version is 2.2.1:
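A sketch of these steps, assuming the usual GitHub release naming scheme for version 2.2.1:

```shell
cd ~
wget https://github.com/prometheus/prometheus/releases/download/v2.2.1/prometheus-2.2.1.linux-amd64.tar.gz
tar xvf prometheus-2.2.1.linux-amd64.tar.gz
```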
The directory contains two binaries, prometheus and promtool, as well as two folders, consoles and console_libraries, which contain the web interface, example configuration files and the license.
2 . Copy the binary files into the /usr/local/bin directory:
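For example:

```shell
sudo cp prometheus-2.2.1.linux-amd64/prometheus /usr/local/bin/
sudo cp prometheus-2.2.1.linux-amd64/promtool /usr/local/bin/
```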
3 . Set the ownership of these files to the
prometheus user previously created:
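For example:

```shell
sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool
```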
4 . Copy the
consoles and
console_libraries directories to
/etc/prometheus:
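For example:

```shell
sudo cp -r prometheus-2.2.1.linux-amd64/consoles /etc/prometheus
sudo cp -r prometheus-2.2.1.linux-amd64/console_libraries /etc/prometheus
```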
5 . Set the ownership of the two folders, as well as of all files that they contain, to our
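Assuming the folders copied in the previous step:

```shell
sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
```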
6 . In our home folder, remove the source files that are not needed anymore:
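For example:

```shell
rm -rf prometheus-2.2.1.linux-amd64 prometheus-2.2.1.linux-amd64.tar.gz
```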
Prior to using Prometheus, it needs some basic configuration. Thus, we need to create a configuration file named
prometheus.yml.
Note: The configuration file of Prometheus is written in YAML, which strictly forbids the use of tabs. If your file is incorrectly formatted, Prometheus will not start. Be careful when you edit it.
1 . Open the file
prometheus.yml in a text editor:
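Assuming the configuration directory created earlier:

```shell
sudo nano /etc/prometheus/prometheus.yml
```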
Prometheus’ configuration file is divided into three parts: global, rule_files, and scrape_configs.
In the
global part we can find the general configuration of Prometheus:
scrape_interval defines how often Prometheus scrapes targets, while
evaluation_interval controls how often the software evaluates rules. Rules are used to create new time series and to generate alerts.
The
rule_files block contains information about the location of any rules we want the Prometheus server to load.
The last block of the configuration file is named
scrape_configs and contains the information about which resources Prometheus monitors.
Our file should look like this example:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
The
scrape_interval is set to 15 seconds, which is enough for most use cases.
We do not have any
rule_files yet, so the lines are commented out and start with a
#.
In the
scrape_configs part we have defined our first exporter: Prometheus monitoring itself. As we want more precise information about the state of our Prometheus server, we reduced the
scrape_interval to 5 seconds for this job. The
targets parameter determines where the exporter is running. In our case it is the same server, so we use
localhost and the port
9090.
As Prometheus scrapes only exporters that are defined in the
scrape_configs part of the configuration file, we have to add Node Exporter to the file, as we did for Prometheus itself.
We add the following part below the configuration for scraping Prometheus:
  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
We override the global scrape interval again and set it to 5 seconds. As we are scraping the data from the same server that Prometheus is running on, we can use
localhost with the default port of Node Exporter, 9100.
If you want to scrape data from a remote host, you have to replace
localhost with the IP address of the remote server.
For all information about the configuration of Prometheus, you may check the configuration documentation.
2 . Set the ownership of the file to our
prometheus user:
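For example:

```shell
sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
```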
Our Prometheus server is ready to run for the first time.
1 . Start Prometheus directly from the command line with the following command, which executes the binary file as our
prometheus user:
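A sketch of the launch command, assuming the paths used in the earlier steps and the Prometheus 2.x flag names:

```shell
sudo -u prometheus /usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries
```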
The server starts displaying multiple status messages and the information that the server has started:
2 . Open your browser and go to
http://IP.OF.YOUR.SERVER:9090 to access the Prometheus interface. If everything is working, we stop the server by pressing
CTRL + C on our keyboard.
Note: If you get an error message when you start the server, double check your configuration file for possible YAML syntax errors. The error message will tell you what to check.
3 . The server is working now, but it cannot yet be launched automatically at boot. To achieve this, we have to create a new
systemd configuration file that tells your OS which services it should launch automatically during the boot process.
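Assuming the conventional location for custom unit files:

```shell
sudo nano /etc/systemd/system/prometheus.service
```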
The service file tells
systemd to run Prometheus as
prometheus and specifies the path of the configuration files.
4 . Copy the following information in the file and save it, then exit the editor:
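A sketch of such a unit file, assuming the prometheus user, binary, and paths from the earlier steps:

```ini
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
```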
5 . To use the new service, reload
systemd. We also enable the service so that it will be loaded automatically during boot:
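For example:

```shell
sudo systemctl daemon-reload
sudo systemctl enable prometheus
```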
6 . Start Prometheus:
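For example:

```shell
sudo systemctl start prometheus
```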
Your Prometheus server is ready to be used.
We have now installed Prometheus to monitor your instance.
Prometheus provides a basic web server, running on
http://your.server.ip:9090, that provides access to the data collected by the software.
We can verify the status of our Prometheus server from the interface:
You can also run queries on the data that has been collected.
The interface is very lightweight, and the Prometheus team recommends using a tool like Grafana if you want to do more than testing and debugging the installation.
1 . Install Grafana on our instance; it will query our Prometheus server.
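One way to install it on Ubuntu, assuming Grafana's APT repository (check the Grafana documentation for the current installation instructions):

```shell
sudo apt-get install -y apt-transport-https software-properties-common
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install -y grafana
```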
2 . Enable the automatic start of Grafana via
systemd:
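Assuming the grafana-server service name used by the Grafana package:

```shell
sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl enable grafana-server
```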
Grafana is running now, and we can connect to it at
http://your.server.ip:3000. The default user and password is admin / admin.
Now you have to create a Prometheus data source:
Your settings should look like this:
You are now ready to create your first dashboard from the information collected by Prometheus. You can also import dashboards from Grafana's collection of shared dashboards.
Here is an example of a Dashboard that uses the CPU usage of our node and presents it in Grafana:
In this tutorial, we configured a Prometheus server with two data collectors, and used the data scraped by Prometheus to build dashboards with Grafana. Don't hesitate to consult the official documentation of Prometheus and Grafana.