This post continues from Part 7: Proxies and Caching

As this series comes to an end there is one last thing I wanted to cover: monitoring. This is another one of those not-necessary-but-nice-to-have features. If you don't really care about pretty graphs of network traffic then just stop here and call it a day. Personally though I like being able to see and track my network's performance. We rely on our home internet connections so heavily, why not make sure they're running like a well-oiled machine?

To do the monitoring we'll need a few different pieces of software, each of which fulfils one role of a complete monitoring system. We'll be using Prometheus for the database, Prometheus' node exporter for metrics collection, and Grafana for dashboarding and alerting.

Prometheus is what's known as a time series database (TSDB). It pulls stats down from different places at regular intervals so the metric values can be looked at over time. The metrics it pulls are exposed by different types of "exporters". An exporter is just a piece of software that collects data from all kinds of different sources and aggregates it into a format Prometheus can understand. In this case we'll be using the node exporter to export statistics like CPU usage, memory usage, and of course networking stats. There are plenty of other exporters available though for things like DNS, Apache, and Nginx. You can even instrument your own code and make it into an exporter. Once all of the data is in Prometheus we'll need a way to look at it. This is where Grafana comes in. It has native support for querying Prometheus data sources and can transform that data into pretty graphs. The graphs can be arranged into neat dashboards and it can even alert you when something goes haywire.
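To make the exporter idea concrete, here's a toy sketch in Python (standard library only, with made-up metric names) that renders a couple of gauges in Prometheus' plain-text exposition format, the same format the node exporter speaks:

```python
# A toy "exporter" using only the standard library. It renders a couple of
# metrics in Prometheus' plain-text exposition format and can serve them
# over HTTP on /metrics. The metric names here are hypothetical.
import os
import time
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_metrics():
    # The text format is just lines of "name value", with optional
    # "# HELP" and "# TYPE" comment lines describing each metric.
    load1 = os.getloadavg()[0] if hasattr(os, "getloadavg") else 0.0
    lines = [
        "# HELP toy_load1 1-minute load average.",
        "# TYPE toy_load1 gauge",
        f"toy_load1 {load1}",
        "# HELP toy_time_seconds Current Unix time.",
        "# TYPE toy_time_seconds gauge",
        f"toy_time_seconds {time.time()}",
    ]
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To actually expose it, you would run:
#   HTTPServer(("", 9200), MetricsHandler).serve_forever()
# and point a Prometheus scrape config at port 9200.
```

A real exporter is just a more thorough version of this: read values from somewhere, print them in this format, serve them over HTTP.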

So let's just get to it. Before starting you'll need to get Docker installed on your router. While all of the applications we'll be using can be run on bare metal, setup and maintenance is far easier using Docker containers. If you need some help check out the installation docs from Docker or just Google around a bit.

Running Prometheus

Once you've got Docker up and running the first step will be to get a Prometheus container running. There are two things you'll need to get ready before running the container: a place to store data and a configuration file.

Make a directory for data storage in your home directory and within it add a directory for prometheus:

$> mkdir ~/data
$> mkdir ~/data/prometheus

Next create a basic configuration file for Prometheus like the one below called prometheus.yml. I've included comments inline to explain what it is doing:

# prometheus.yml
global:
  # By default, scrape targets every 15 seconds.
  scrape_interval: 15s

  external_labels:
    # This label will be added to all stats scraped by this instance of Prometheus
    monitor: 'pine-monitor'

# A list of configurations for where to pull stats from
scrape_configs:
  # Name for stats pulled by this configuration
  - job_name: 'prometheus'

    static_configs:
      # Pull stats from router.home:9090. This is where
      # Prometheus will be running. By default it exports some
      # stats about itself.
      - targets: ['router.home:9090']

Now that you have the prereqs met, you can actually run the container. If you're using an ARM-based system like I am make sure to use the braingamer/prometheus-arm Docker image. If you're running on a regular Intel or AMD architecture though just use the official prom/prometheus image.

docker run -d \
    -p 9090:9090 \
    -v ~/data/prometheus:/prometheus \
    -v ~/data/prometheus.yml:/etc/prometheus/prometheus.yml \
    --name prometheus \
    prom/prometheus # use braingamer/prometheus-arm on ARM boards

This creates the container, maps port 9090 on the host machine to port 9090 (Prometheus) in the container, maps the data directory on the host to the data directory in the container, and mounts the config file into the container. Try navigating to http://router.home:9090/ in your browser. If all is well you should see Prometheus' console. You can open the Status → Targets page to see the targets it is capturing stats from. You should see the one target you configured and it should be up. Once it's all working the last step is to create a service for it so it will run on boot.

Stop the container so that the service can be used to start it:

$> docker stop prometheus

Add a standard Docker container service to your /lib/systemd/system/ directory. For example:

# /lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a prometheus
ExecStop=/usr/bin/docker stop prometheus

[Install]
WantedBy=multi-user.target


And finally start and enable the service:

$> sudo systemctl start prometheus
$> sudo systemctl enable prometheus

Alrighty, step one done. Let's move on to the node exporter that will let us collect things like CPU, memory, and network statistics.

Running Node Exporter

Node exporter is the one piece of this puzzle that will run on bare metal instead of as a Docker container. It is possible to run it in a container, but it needs access to so many parts of the host system for proper metrics gathering that it isn't really worth it.

It is written in Go, so you'll need the Go toolchain to build it. Install it first with:

$> sudo apt-get install golang

You'll also need to set up a GOPATH for it. This is where Go will download packages to. Make a directory and set it as the GOPATH:

$> mkdir -p ~/src/go
$> export GOPATH=~/src/go

You can then use go get to download the node exporter package from Github:

$> go get github.com/prometheus/node_exporter

Go into the newly downloaded package and build it:

$> cd ~/src/go/src/github.com/prometheus/node_exporter
$> make

That should build you a node_exporter binary in the same directory. Next you'll need a systemd service to actually run that binary. Add one to the same /lib/systemd/system/ path called node-exporter.service:

# /lib/systemd/system/node-exporter.service
[Unit]
Description=Node Exporter
After=network.target

[Service]
Restart=always
ExecStart=/home/<your user>/src/go/src/github.com/prometheus/node_exporter/node_exporter

[Install]
WantedBy=multi-user.target

Fire it up and make sure it is enabled on boot:

$> sudo systemctl start node-exporter
$> sudo systemctl enable node-exporter

To make sure it's running try navigating to http://router.home:9100/metrics. You should be served a page with a bunch of metrics and their values. This is what we need Prometheus to start scraping. Once you've got the page loading properly add another static config to your prometheus.yml configuration:
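If you're curious what Prometheus actually sees there, the page is just plain text in the same exposition format described earlier. A few example lines (exact metric names vary a bit between node exporter versions):

```
# HELP node_memory_MemAvailable_bytes Memory information field MemAvailable_bytes.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 1.073741824e+09
node_cpu_seconds_total{cpu="0",mode="idle"} 360245.71
node_network_receive_bytes_total{device="eth0"} 3.243e+09
```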

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['router.home:9090']

  - job_name: 'router'
    static_configs:
      - targets: ['router.home:9100']

Restart Prometheus to pick up the new config:

$> docker restart prometheus

Now go check out your targets page in the Prometheus console again. You should see a new target for router.home:9100 and it should be up. If it is then Prometheus is successfully pulling stats from the exporter. Woot woot. Now let's make some pretty graphs.
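While you're in the console you can already try a few queries in the expression browser. Some examples (assuming a recent node exporter; older versions named the CPU metric node_cpu instead of node_cpu_seconds_total):

```promql
# Per-core CPU usage, as the fraction of time not spent idle
1 - rate(node_cpu_seconds_total{mode="idle"}[5m])

# Inbound network traffic in bytes per second, per interface
rate(node_network_receive_bytes_total[5m])

# Memory actually available to applications
node_memory_MemAvailable_bytes
```

These are the same queries you'll end up putting behind Grafana panels in the next step.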

Running Grafana

The steps for getting Grafana going are fairly similar to the steps for Prometheus: make directories for the data, write a config, and run the container. So to begin let's make the data directories. Grafana needs two directories, one for data and one for the configs:

$> mkdir -p ~/data/grafana/data
$> mkdir ~/data/grafana/config

For the configuration the default values will be fine for most things. You will want to configure the server name though so the links point to the right places. I've set up my DNS with a domain called monitoring.home that I will use to point to the instance, but as long as it points to the right place it doesn't matter if you use a domain or just the IP address. Create a config called grafana.ini in the ~/data/grafana/config directory with these lines:

; grafana.ini
instance_name = monitoring.home

[server]
domain = monitoring.home
root_url = http://monitoring.home:3000

The config is pretty self-explanatory. Just change the values to match your setup. If you want to set up email-based alerting then add another section with your email server settings:

[smtp]
enabled = true
host = <your email server>
user = <your email username>
password = <your email password>
from_address = <your from address>

OK, it should be ready to go. Fire up a container like so:

docker run -d \
  -p 3000:3000 \
  --name=grafana \
  -v ~/data/grafana/data:/var/lib/grafana \
  -v ~/data/grafana/config:/etc/grafana \
  fg2it/grafana-armhf # use the official grafana/grafana image on x86
Again, if you aren't using an ARM-based system then don't use the ARM-based Docker image. Navigate to http://monitoring.home:3000 (or whatever you called yours) to make sure it is working. You should see a Grafana login page where you can use the default user admin and default password admin to get in. If you can log in successfully then move on to creating a systemd service for it.

Stop the container so systemd can take over control:

$> docker stop grafana

And create another standard Docker systemd service as /lib/systemd/system/grafana.service:

# /lib/systemd/system/grafana.service
[Unit]
Description=Grafana container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a grafana
ExecStop=/usr/bin/docker stop grafana

[Install]
WantedBy=multi-user.target

Start the service and enable it at boot:

$> sudo systemctl start grafana
$> sudo systemctl enable grafana

That should take care of keeping Grafana going. You'll still need to set it up though. Log in from your browser again with the default user and password. Follow the onboarding steps to set up Prometheus as a data source and for the love of god change the default password. From there you can create a dashboard out of your Prometheus metrics, or just do what I did and steal one from someone else. There are a bunch of dashboard templates available on Grafana's dashboards site and you can filter by Prometheus/node exporter support. The one I selected though was called Node Exporter Server Metrics. Download the template JSON and import it into Grafana as a new dashboard. If all goes well it should find all the metrics and give you a pretty dashboard like this:

Grafana dashboard with Prometheus metrics

Ain't it a beaut. And with that your complete monitoring solution should be good to go. There are a bunch more things you can do with it if you have the patience. I've put mine behind an Nginx proxy so you don't always have to type in the port number in your browser. It also adds TLS and basic auth so I'm the only one that can get into it. You can always add more exporters to Prometheus and more dashboards and alerts to Grafana too. I'll leave that up to you though if it is something you care about.
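In case you want to do something similar, a minimal sketch of that kind of Nginx server block might look like the one below. The monitoring.home name, certificate paths, and htpasswd path are all placeholders from my setup; adjust them to yours, and generate the password file with the htpasswd tool:

```nginx
server {
    listen 443 ssl;
    server_name monitoring.home;

    # TLS certificate and key for the monitoring domain
    ssl_certificate     /etc/ssl/certs/monitoring.home.crt;
    ssl_certificate_key /etc/ssl/private/monitoring.home.key;

    # Basic auth so only you can reach the dashboards
    auth_basic           "Monitoring";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        # Forward everything to the Grafana container
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```

If you go this route you'd also update root_url in grafana.ini to https://monitoring.home so Grafana generates links without the port.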

And with that the router build is finished!

Other Posts in This Series