Monitoring OpenBSD with Grafana and Prometheus

· 5min · Dan F.

With any deployment of OpenBSD, it is always advisable to have some sort of monitoring enabled. In the past, I used Zabbix as the monitoring solution for both the public findelabs servers and my personal OpenBSD servers. I was going to write an article about installing and configuring its web frontend and Postgres backend, but I kept putting it off, as the configuration was rather clunky. Last week, I ended up moving over to a Grafana dashboard with Prometheus as the monitoring system.

This approach turned out to be less complicated than Zabbix. If you want to view the playbooks for setting up the various Zabbix components, you can find the backend here and the frontend here. While Zabbix does work as intended, I always felt that the configuration, even within the frontend, was rather complicated. Having had experience with Grafana and Prometheus in the past, I figured it should be fairly simple to get both running on OpenBSD.

As it turns out, it was quite simple to get everything set up and monitoring. The first step is to get Prometheus up and running. For my setups, I typically run Prometheus and Grafana on separate servers; however, I was able to run both on one fairly small server during my testing.

First, install prometheus and node_exporter. node_exporter is a small application that exports server metrics on port 9100 of whatever server it is installed on.

doas pkg_add prometheus node_exporter

Next, edit /etc/prometheus/prometheus.yml. Here is a simple example configuration:

# my global config
global:
  scrape_interval:     5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
# rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
    - targets: ['0.0.0.0:9090']

  # This is where you can add new servers to monitor
  - job_name: 'node'
    file_sd_configs:
    - files:
      - '/etc/prometheus/targets.json'

This configuration gets you off and running, though we still need to create /etc/prometheus/targets.json, as that is where Prometheus will look for targets to monitor in this configuration. The file can be updated on the fly, without restarting Prometheus:

[
  {
    "labels": {
      "job": "node"
    },

    "targets": [
      "server1:9100",
      "server2:9100",
      "server3:9100"
    ]
  }
]
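
As an aside, the prometheus package should also install promtool, the sanity checker that ships with upstream Prometheus (assuming the OpenBSD package includes it). It can validate the main configuration before you start or reload the daemon:

# Validate prometheus.yml before (re)starting the daemon
promtool check config /etc/prometheus/prometheus.yml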

With this targets.json file, Prometheus will attempt to collect metrics from server1, server2, and server3. Targets can also be specified by IP address; however, Prometheus stores all metrics for each instance under whatever label is given in targets.json, so I find it much easier to specify each server by hostname and update /etc/hosts to point at the correct IPs, as sketched below, unless you have working DNS for each server.
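
For example, a hypothetical /etc/hosts mapping for the three targets above might look like this (the addresses are placeholders):

# /etc/hosts entries for the prometheus targets
192.0.2.10 server1
192.0.2.11 server2
192.0.2.12 server3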

Next up, you can optionally modify the /etc/rc.d/prometheus script to set the metric retention period. The default retention is only 15 days, which works, but I wanted a longer window in my case, so I usually add --storage.tsdb.retention.time=90d to the daemon_flags, as shown below. Also keep in mind that Prometheus listens on all interfaces by default, which is not always ideal; you can restrict it to localhost, for example, by adding --web.listen-address="localhost:9090" to the daemon_flags.

#!/bin/sh
#
# $OpenBSD: prometheus.rc,v 1.1.1.1 2018/01/10 16:26:19 claudio Exp $

daemon="/usr/local/bin/prometheus"
daemon_flags="--config.file /etc/prometheus/prometheus.yml"
daemon_flags="${daemon_flags} --storage.tsdb.path '/var/prometheus' --storage.tsdb.retention.time=90d"
daemon_user=_prometheus

. /etc/rc.d/rc.subr

pexp="${daemon}.*"
rc_bg=YES
rc_reload=NO

rc_start() {
        ${rcexec} "${daemon} ${daemon_flags} < /dev/null 2>&1 | \
                logger -p daemon.info -t prometheus"
}

rc_cmd $1
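
As a side note, instead of editing the rc.d script directly, the same flags can be set with rcctl, which writes them to /etc/rc.conf.local so they survive package updates. A sketch using the same retention flag as above:

doas rcctl set prometheus flags "--config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/prometheus --storage.tsdb.retention.time=90d"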

Now we should be able to enable and start the prometheus and node_exporter services. Watch /var/log/daemon to ensure that Prometheus comes online properly:

doas rcctl enable prometheus node_exporter
doas rcctl start prometheus node_exporter
tail -f /var/log/daemon

# Test to make sure the frontend is accessible
curl localhost:9090/graph

# Ensure node_exporter is working
curl localhost:9100/metrics
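
Once a few scrapes have run, you can also ask Prometheus directly which targets are up via its HTTP query API; every healthy target should report a value of 1:

# Query the built-in 'up' metric for scrape status
curl -s 'http://localhost:9090/api/v1/query?query=up'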

Now we can move on to installing and starting Grafana.

doas pkg_add grafana
doas rcctl enable grafana
doas rcctl start grafana
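
As with Prometheus, it is worth confirming that the web interface responds before opening a browser; Grafana listens on port 3000 by default:

# Grafana should answer, typically with a redirect to /login
curl -I localhost:3000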

Once the service is started, navigate to the dashboard, which listens on port 3000 by default, and authenticate as the admin user with the default password of admin. Once logged in, Grafana will immediately prompt you to change the admin password. From the home dashboard, click "Add data source" and select Prometheus.

To configure access to Prometheus, simply enter "http://localhost:9090" as the URL and click "Save & Test". If Prometheus is up and running on the same server, Grafana should be able to reach the data source.
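
Alternatively, if you prefer configuration files over the UI, Grafana can provision the data source at startup from a YAML file in its provisioning/datasources directory (the exact path depends on how the OpenBSD package sets paths in grafana.ini). A minimal sketch:

# datasource.yml - provision the local Prometheus data source
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true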

At this point, you can begin creating dashboards to monitor your OpenBSD servers. I have published one of the simple dashboards I use to monitor my small collection of servers here. You can import it under Dashboards/Manage by clicking Import. Once the dashboard loads, you should quickly begin to see metrics from your servers.
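
If you would rather build panels from scratch, a typical node_exporter query for overall CPU usage looks something like this (metric names changed around node_exporter 0.16, so older packages may expose node_cpu instead):

# Percent CPU busy per instance, averaged over 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)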

To add more servers to monitor, all that is required is installing the node_exporter package on the remote server, then updating /etc/prometheus/targets.json so Prometheus scrapes the new target.
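
For example, bringing a hypothetical server4 into the mix looks like this:

# On server4 itself
doas pkg_add node_exporter
doas rcctl enable node_exporter
doas rcctl start node_exporter

# Then add "server4:9100" to the targets array in
# /etc/prometheus/targets.json; no prometheus restart is needed.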

All in all, I have been pleased with the simplicity of Grafana and Prometheus.


Tested on OpenBSD 6.6.