OpenBSD - relayd load balancer with httpd

· 13min · Dan F.

In order to ensure that a website is highly available, OpenBSD offers two solutions that work very well together: relayd and httpd. Until this point, this site has been running on a single small VM in the cloud, but recently we have adopted a more highly available configuration.

Brief history of relayd and httpd

Relayd is an open source load balancer first shipped by OpenBSD in release 4.1. The daemon was originally named hoststated, but was renamed to relayd a year after its initial release, in 2007. A very good article about the history of httpd can be found here.

Httpd is OpenBSD's officially supported and developed web server. About 20 years ago, OpenBSD first imported a heavily modified Apache httpd into the base release. Then, as nginx became popular, it eventually became the web server shipped at release time, though also patched to abide by OpenBSD's stricter security guidelines. Finally, back in 2015, OpenBSD released their own httpd, based on relayd's codebase, as the default web server.

Relayd and httpd are able to work together to provide a simple but highly available solution for web hosting, and much more. This article will highlight how to deploy two OpenBSD httpd VMs running behind an OpenBSD VM using relayd as a load balancer. The two httpd servers should only be accessible through the load balancer, and the configuration will be able to handle one of the two httpd servers becoming inaccessible during patching or rebooting. The configuration should also be rated A on ssllabs.com.

Setup and design

For this article, we will be using VMs deployed on vultr.com. You can use whichever VPS provider you like, though the steps could be slightly different. We chose Vultr simply because we've been happily hosted on their services for the past year without issues.

You will also need to have one (or two) domains registered and properly configured in DNS to point to your load balancer's IPs. In this article, we have two public IPs attached to the load balancer, as OpenBSD 6.5's relayd does not support TLS for multiple domains on a single IP. This feature is coming in 6.6, however: in the next release, you can specify which certs you want associated with each domain, eliminating the need for a public IP per TLS-backed site. Some more info below:

  1. Neither httpd server will be accessible via a world-accessible IP
  2. Both httpd servers will be on a private VLAN, with their gateway being the load balancer
  3. The load balancer will have pf set up to route internal traffic to the internet with NAT and IP forwarding
  4. Each httpd server will host the same two websites
  5. The load balancer will host both http and https for both websites
  6. Both websites will have TLS set up through acme-client

Here is the best depiction of the load balancer setup that I could come up with:

            - www.example.com ----                                                            --- httpd_server_1 (10.0.0.2)
internet --|                      |--> (55.55.55.55/55.55.55.60) load_balancer (10.0.0.1) ---| 
            - devel.example.com --                                                            --- httpd_server_2 (10.0.0.3)

Deploying the VM's and network setup

We need to create the private network on Vultr that the VMs will use for inter-communication. Simply log in to Vultr, click on the networks tab, and create a new network in the region where you will be deploying your VMs. The network range does not have to match that of this tutorial.

After your private network has been created, go ahead and create three VMs in the same region, all with the new private network attached. When the servers come online, log in to the console, and you should see that there are two network interfaces, vio0 and vio1. For extra security, disable the vio0 interface on the httpd servers with echo down > /etc/hostname.vio0 on each server, and then enable the private network on vio1 with the following files, which Vultr does not create automatically:

We will also need to ensure that the default route on the two httpd servers points to the load balancer's private IP, which is done via /etc/mygate as shown below.

# httpd_server_1:/etc/hostname.vio1
up 10.0.0.2 255.255.255.0

# httpd_server_2:/etc/hostname.vio1
up 10.0.0.3 255.255.255.0

# load_balancer:/etc/hostname.vio1
up 10.0.0.1 255.255.255.0

# On httpd_server_1 and httpd_server_2
echo "10.0.0.1" > /etc/mygate
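With the files in place, the new settings can be applied without a full reboot. A minimal sketch, run on each httpd server (the IPs match the example files above):

```shell
# Apply the interface configuration from /etc/hostname.vio*
sh /etc/netstart vio0 vio1

# Set the default route for the running system (/etc/mygate persists it)
route add default 10.0.0.1

# Verify the interface address and the default route
ifconfig vio1
route -n show -inet | grep default
```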

Next, we need to ensure that the load balancer's pf config is set up to provide NAT'd network access to the two httpd servers, so that they can update packages, pull git repos, etc. Below is the only required line for NAT to function on the internal network, but we have also included a full example pf.conf. If you already have your own pf set up, I would recommend only adding the required line. After your configs have been modified, run pfctl -f /etc/pf.conf.

# Required line for nat on internal network
pass out on vio0 inet from !(vio0) to any nat-to (vio0)
# Full pf.conf example:

##############
### Tables ###
##############

table <ssh_brutes> persist counters
table <http_brutes> persist

#################
### Variables ###
#################

external="vio0"
internal="vio1"

tcp_pass_in= "{ 22 80 443 }"

############################
### recommended settings ###
############################

set skip on lo

set loginterface egress

match in all scrub (no-df random-id max-mss 1440)

antispoof quick for egress

block log all       # block stateless traffic

#############
### rules ###
#############

# pass out traffic on external from internal
match out on $external inet from !($external) to any nat-to ($external)

# allow traffic on internal network
pass on $internal all

# pass out these ports and keep-state
pass out on $external keep state

# allow tcp traffic in through external
pass in on $external proto tcp from any to any port $tcp_pass_in

# set up ssh block (fail to ban)
block in log quick proto tcp from <ssh_brutes> to any label SSH_BRUTES
pass in on $external proto tcp to any port 22 flags S/SA keep state \
(max-src-conn 5, max-src-conn-rate 5/60, overload <ssh_brutes> flush global)

# http max connection filtering
block in log quick from <http_brutes> to any label HTTP_BRUTES
pass in on $external proto tcp to any port { 80 443 } flags S/SA keep state \
(max-src-conn 100, max-src-conn-rate 15/5, overload <http_brutes> flush)
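One prerequisite the pf rules above do not cover: the load balancer must have IP forwarding enabled in the kernel, or it will not route packets for the internal network at all. On OpenBSD this is a sysctl:

```shell
# Enable IP forwarding for the running system
sysctl net.inet.ip.forwarding=1

# Persist the setting across reboots
echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf
```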

You may also want to ensure that resolv.conf on the two httpd servers is set up to use your preferred DNS servers instead of the Vultr DNS servers, but that is optional. At this point, go ahead and reboot the two httpd servers. Once you are able to log in via the console, you should be able to ping the outside world through the load balancer gateway.

At this point, no further networking modifications should be necessary. All network traffic out from the httpd servers should be currently handled by the load balancer just deployed.
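A quick sanity check from either httpd server confirms the routing and NAT are working (9.9.9.9 is just an arbitrary public address used here as a test target):

```shell
# Confirm the default route points at the load balancer
route -n show -inet | grep default

# Confirm outbound connectivity through the load balancer's NAT
ping -c 3 9.9.9.9
```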

httpd.conf configuration

Now that all the VMs are able to communicate, let's create a simple http server on each httpd VM. Each VM will serve the same two sites, www.example.com and devel.example.com. Your configuration may not require two, in which case you can just replace the example domains with your own domain. Be sure to save your site files under /var/www/htdocs/<domain name>, as shown below.

# httpd_server_1:/etc/httpd.conf

chroot "/var/www"
ext_addr="10.0.0.2"

prefork 2

types {
  include "/usr/share/misc/mime.types"
}

server "www.example.com" {
    listen on $ext_addr port 80
    alias "example.com"
    root "/htdocs/www.example.com"

    # Required for acme-client
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
}

# Optional second site
server "devel.example.com" {
    listen on $ext_addr port 80
    root "/htdocs/devel.example.com"

    # Required for acme-client
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
}

# httpd_server_2:/etc/httpd.conf

chroot "/var/www"
ext_addr="10.0.0.3"

prefork 2

types {
  include "/usr/share/misc/mime.types"
}

server "www.example.com" {
    listen on $ext_addr port 80
    alias "example.com"
    root "/htdocs/www.example.com"

    # Required for acme-client
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
}

# Optional second site
server "devel.example.com" {
    listen on $ext_addr port 80
    root "/htdocs/devel.example.com"

    # Required for acme-client
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
}
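With the configs in place, check the syntax, then enable and start httpd on both servers. The curl tests are run from the load balancer, since the backends are only reachable on the private network; the Host header selects which server block answers:

```shell
# On each httpd server: validate the config, then enable and start the daemon
httpd -n -f /etc/httpd.conf
rcctl enable httpd
rcctl start httpd

# From the load balancer: test each backend directly over the private network
curl -H 'Host: www.example.com' http://10.0.0.2/
curl -H 'Host: www.example.com' http://10.0.0.3/
```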

relayd.conf config

Next, let's configure the relayd service on the load balancer. The code blocks at the end should stay commented out for now, until we can configure acme-client completely.

# load_balancer:/etc/relayd.conf

table <webservers> { 10.0.0.2, 10.0.0.3 }

log state changes
log connection

http protocol "http" {

    # Let's log various extra things to the log
    match header log "Host"
    match header log "X-Forwarded-For"
    match header log "User-Agent"
    match header log "Referer"
    match url log

    # Update headers passed to the httpd servers
    match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match request header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"

    # Set recommended tcp options
    tcp { nodelay, socket buffer 65536, backlog 100 }

    block path "/cgi-bin/index.cgi" value "*command=*"
}

http protocol "https" {

    # Let's log various extra things to the log
    match header log "Host"
    match header log "X-Forwarded-For"
    match header log "User-Agent"
    match header log "Referer"
    match url log

    # Update headers passed to the httpd servers
    match header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match header set "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
    match header set "Keep-Alive" value "$TIMEOUT"

    # Set recommended tcp options
    tcp { nodelay, socket buffer 65536, backlog 100 }

    tls { no tlsv1.0, ciphers "HIGH:!aNULL" }

    block path "/cgi-bin/index.cgi" value "*command=*"
}

relay "http_www.example.com" {
    listen on 55.55.55.55 port 80
    protocol "http"
    forward to <webservers> port 80 mode loadbalance check tcp
}

relay "http_devel.example.com" {
    listen on 55.55.55.60 port 80
    protocol "http"
    forward to <webservers> port 80 mode loadbalance check tcp
}

# Keep these commented out for now
#
# relay "https_www.example.com" {
#     listen on 55.55.55.55 port 443 tls
#     protocol "https"
#     forward to <webservers> port 443 mode loadbalance check tcp
# }
# 
# relay "https_devel.example.com" {
#     listen on 55.55.55.60 port 443 tls
#     protocol "https"
#     forward to <webservers> port 443 mode loadbalance check tcp
# }

After creating the relayd.conf file, go ahead and run the following commands to check the config and start the service.

# Test the config
relayd -n -f /etc/relayd.conf

# Enable the service 
rcctl enable relayd

# Start the service
rcctl start relayd

At this point, you should be able to curl the http URLs for your two sites: curl http://www.example.com and curl http://devel.example.com.
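relayctl can confirm that both backends in the <webservers> table are up and that the relays are active:

```shell
# Show the state of each host in the <webservers> table
relayctl show hosts

# Show a summary of relays, tables, and hosts
relayctl show summary
```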

Prepare acme-client

We would like our sites to be accessible over https, which requires TLS certs. In this tutorial, we will be using the OpenBSD acme-client tool to acquire those certificates.

The acme-client tool uses the "http-01" challenge type to verify that you indeed control the site that you are requesting certs for. When acme-client runs, it creates a file under /var/www/acme and sends the filename to Let's Encrypt, which then verifies that the file is reachable at the URL the certs are being requested for. This works fine on servers where httpd runs locally on the site's public IP. In our case, however, the load balancer will be requesting the certs, since the relayd service running there is what needs them. That is problematic, because the challenge file must be accessible via http, and no httpd service will be running on the load balancer.

Since acme-client will be run from the load balancer, the temporary file it creates has to be served by the httpd servers, because Let's Encrypt's validation requests will be relayed through to them. Luckily, there is an easy solution. We will create an NFS share on the load balancer, exporting load_balancer:/var/www/acme to the two httpd servers. This lets the web servers serve out the challenge files created by acme-client on the load balancer.

Create /etc/exports, then enable and start the required NFS services:

# load_balancer:/etc/exports
echo '/var/www/acme -alldirs -ro -network=10.0.0.0 -mask=255.255.255.0' > /etc/exports

# Start and enable services
rcctl enable portmap mountd nfsd
rcctl start portmap mountd nfsd

Next, we will need to mount up the NFS share on the two httpd servers. Add a new entry to /etc/fstab on httpd_server_1 and httpd_server_2, then mount up the share.

# on httpd_server_1 and httpd_server_2:
echo '10.0.0.1:/var/www/acme /var/www/acme nfs ro,nodev,nosuid 0 0' >> /etc/fstab

# Mount up share on both servers
mount /var/www/acme
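To confirm the export and the mounts before moving on:

```shell
# On the load balancer: list what is being exported
showmount -e localhost

# On each httpd server: confirm the share is mounted
mount | grep /var/www/acme
```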

Now that the NFS shares have been mounted on the two httpd servers, let's configure acme-client on the load balancer so that we can create certs for the two websites.

Once the acme-client configuration has been created, go ahead and run the following commands to request and create the TLS certs for your sites. These commands should create two certs under /etc/ssl, and two private keys under /etc/ssl/private. Relayd will look for these to establish TLS connections, looking specifically for <ip>:<port>.crt for certs, and <ip>:<port>.key for the keys. Again, in future releases, relayd will not be tied to this naming convention, as you will be able to specify a cert and key name for each domain.

# /etc/acme-client.conf

authority letsencrypt {
  api url "https://acme-v01.api.letsencrypt.org/directory"
  account key "/etc/acme/letsencrypt-privkey.pem"
}

domain www.example.com {
    alternative names { example.com }
    domain key "/etc/ssl/private/55.55.55.55:443.key"
    domain full chain certificate "/etc/ssl/55.55.55.55:443.crt"
    sign with letsencrypt
}

domain devel.example.com {
    domain key "/etc/ssl/private/55.55.55.60:443.key"
    domain full chain certificate "/etc/ssl/55.55.55.60:443.crt"
    sign with letsencrypt
}
# On the load balancer, run both:
acme-client -ADv www.example.com
acme-client -ADv devel.example.com
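Before wiring the certs into relayd, it is worth confirming what was issued (paths as assumed in the acme-client config above):

```shell
# Inspect the subject and validity window of each new cert
openssl x509 -in /etc/ssl/55.55.55.55:443.crt -noout -subject -dates
openssl x509 -in /etc/ssl/55.55.55.60:443.crt -noout -subject -dates
```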

After the acme-client commands have executed successfully, you will need to go back into relayd.conf and uncomment the last code block so that https connections can be served. Run rcctl reload relayd to bring these settings into the running relayd.

# load_balancer:/etc/relayd.conf

...

# These lines should now be uncommented
relay "https_www.example.com" {
    listen on 55.55.55.55 port 443 tls
    protocol "https"
    forward to <webservers> port 443 mode loadbalance check tcp
}

relay "https_devel.example.com" {
    listen on 55.55.55.60 port 443 tls
    protocol "https"
    forward to <webservers> port 443 mode loadbalance check tcp
}

The last step of this setup is to ensure that the TLS certs for your sites are renewed on a timely basis. Certs from Let's Encrypt last 90 days, and acme-client will renew a cert once it is within 30 days of expiring. Create two cron entries on the load balancer to facilitate cert renewal.

0 0 * * * /usr/sbin/acme-client www.example.com && rcctl reload relayd && logger 'Updated www.example.com certs'
0 0 * * * /usr/sbin/acme-client devel.example.com && rcctl reload relayd && logger 'Updated devel.example.com certs'

Conclusion

At this point, your load balancer is set up to receive http and https connections, relaying requests to the back-end httpd servers. You should be able to patch and reboot either one of the httpd servers safely without your clients losing access to either site. One side note though: it would be prudent to lock down pf on the load balancer according to your own preferences, as it is also acting as a firewall for your two httpd servers. That is beyond the scope of this article.
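A simple way to confirm the failover behavior is to take one backend down and watch relayd route around it (hostnames and table as in the configs above):

```shell
# On httpd_server_1: simulate a maintenance window
rcctl stop httpd

# On the load balancer: the host should be marked down after the next check
relayctl show hosts

# From anywhere: the site should still answer, served by httpd_server_2
curl -I https://www.example.com/

# On httpd_server_1: bring the backend back into rotation
rcctl start httpd
```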


Has been tested on OpenBSD 6.5