Design + Computation

Serving My Home

Matthew Borger ‐ November 22, 2017

After my last move I had an opportunity to revisit my server needs. I prefer to host my own data for privacy and performance. Most of the time when I want to work with my documents and email, I’m sitting at home, so why move bits around the world? And after reading an Ars Technica article on how a homebrew router can outperform typical consumer grade equipment, I was intrigued to see this for myself.

I plan on keeping this post updated with what practices I employ for my home server, mostly as a reference for myself and for those who might find this information useful. Feedback is welcome!

I do not recommend this setup for scenarios requiring high availability. By combining all of this functionality into one device, I have created a single point of failure. When you make a mistake, you may stop your wife from watching her favorite TV show. You have been warned!

Functional Requirements

Here’s the list of what I want to accomplish with this server.

  • Fault tolerant data storage

  • Email hosting

  • Website hosting

  • Ability to share photos with family

  • Serve as the router, because why not do it yourself!

    • Traffic prioritization for tasks such as video streaming

Build Requirements

Below are my ideal build requirements to support the functions above.

  • Multiple hot swappable SATA ports, preferably 3+

  • Low power consumption, preferably less than 15 watts

  • At least 2 NICs, preferably 3, for use as my home router

  • 1 Power over Ethernet connection for my PoE enabled access point

  • Backup power, just needs to run long enough to spin down the hard drives

  • Passively cooled components to avoid fan noise

These requirements were surprisingly hard to satisfy. I found plenty of passively cooled embedded computers, but none offered multiple SATA connections. I did come across a motherboard from Habey that checked off everything, but I couldn’t find one for sale with the PoE module. I tried emailing them but didn’t get any response, probably because I wasn’t offering to buy a large quantity.

Here’s what I settled on.

  • ASRock IMB-150N Mini ITX motherboard with 2 NICs and 2 SATA

  • Will Jaya 2-Bay NAS Enclosure

  • 8GB RAM

  • 2 × 2TB hard drives

  • APC UPS BE600M1

All of this set me back about $560 from Newegg.

I should also mention I already have my own cable modem and a Ubiquiti wifi access point. I highly recommend the Ubiquiti products; their main market is the enterprise, but they have some "lite" models for less than $100 that perform far better than the usual consumer level gear.

Setting the Bits Up

Now that I have the hardware, I need to get all the software in order. For this I chose to use the latest long term supported release of Ubuntu, which at the moment was 16.04 LTS.

Data Redundancy

For storage I chose ZFS in a mirrored setup. The big gain of ZFS over running ext4 over LVM is its online snapshot capabilities and streaming backups. One minor hitch is that Ubuntu 16.04 doesn’t have all the pieces in place to run root on ZFS. I set aside 50GB for the operating system files on a RAID1 LVM mirror and used the rest of the space for a ZFS mirror.
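For illustration, creating such a mirror can look like the following sketch; tank is an arbitrary pool name and the disk ids are placeholders, not my actual devices:

```
$ sudo zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1-part3 /dev/disk/by-id/ata-DISK2-part3
$ sudo zpool status tank
```

Referencing drives by /dev/disk/by-id keeps the pool assembled correctly even if the kernel enumerates the drives in a different order.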

To achieve true hard drive independence I had to set up an EFI partition on both drives and ensure GRUB was installed to both. I also had to comment out the /boot/efi line in /etc/fstab so the system doesn’t require any one particular drive to be present.
I tested removing each hard drive to determine the drive ids and made a note of them. With this information you will know which drive bay to yank out when one of the hard drives fails, and one will fail within a few years.

Keeping Track of Time

It’s useful for a server to synchronize its clock. Clock synchronization can be set up just by installing the network time daemon.

$ sudo apt install ntp

Kernel Live Patching

Canonical publishes live kernel patches for their LTS releases. This allows security issues in the kernel to be patched immediately without requiring a restart. It is a paid service, but Canonical lets you use it for free on a few machines; you just need to register for a token and install the snap package.
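Concretely, that amounts to installing the snap and enabling it with the token from your Livepatch account (the token below is a placeholder):

```
$ sudo snap install canonical-livepatch
$ sudo canonical-livepatch enable <token>
$ canonical-livepatch status
```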

Networking & Wifi

Connecting your box to the rest of the world is no trivial task, especially when some standard ports are blocked by your ISP (more on that below). The routing configuration connects your box and all of your intranet devices to the Internet. DHCP and DNS are necessary to assign addresses to your devices and provide consistent name resolution between your intranet and the Internet.


Configuring the routing was heavily inspired by this Ars Technica article. They explain how the routing and firewall rules work in iptables. In their instructions they placed a script in the /etc/network/if-pre-up.d directory, which is executed just before any interface is brought up. The issue with this is that systemd brings up the interfaces in parallel, and the iptables-restore provided with Ubuntu 16.04 doesn’t acquire an exclusive kernel lock, so you’ll get inconsistent failures on commit. You could modify the script to execute for only one of the interfaces, but there’s another package which maintains your iptables rulesets and ensures they are inserted exactly once before any interface is brought up.

$ sudo apt install iptables-persistent

Then all of your iptables configuration should go in /etc/iptables/rules.v4.
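As a sketch of what that file can contain, assuming eth0 faces the modem and eth1 faces the LAN (the interface names are assumptions; adjust to your hardware):

```
*nat
# Rewrite outbound traffic to the WAN interface's address.
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Accept loopback, LAN, and established return traffic.
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Forward LAN traffic out, and replies back in.
-A FORWARD -i eth1 -o eth0 -j ACCEPT
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

Remember that forwarding also has to be enabled in the kernel with net.ipv4.ip_forward=1 in /etc/sysctl.conf.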


DHCP is needed to dynamically assign IP addresses when new devices join the network. The server’s configuration lives in /etc/dhcp/dhcpd.conf.

$ sudo apt install isc-dhcp-server
subnet 192.168.7.0 netmask 255.255.255.0 {
    option routers 192.168.7.1;
    option domain-name-servers 192.168.7.1;
    option broadcast-address 192.168.7.255;
    option domain-search "";
}
$ sudo systemctl reload isc-dhcp-server


Just installing bind9 gives you a fully functional caching DNS server, which together with the above DHCP configuration provides your devices Internet access.

$ sudo apt install bind9

The rest of this section is about configuring endpoints for various services and will make more sense after reading about the proxy host below.

The domain is registered through DNSExit. I originally chose them because they offered dynamic DNS updates at no extra cost. Since I have to use a proxy host to get around port restrictions, the DNS host records all point to the proxy which has a static IP address. Besides the host record I also created some aliases for mail, cloud, and www. This allows devices outside my home to be routed into my home.

The devices inside my home should not have to send their bits outside of the local network for any locally hosted service. Bind is already set up as a caching DNS provider, and it can be configured to handle its own zones.

First let’s create the zone file that describes how the domain resolves.

; BIND data file for
$TTL    604800 ; 1 week
@       IN      SOA     @ (
                              8         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
        IN      NS      ns
        IN      A       192.168.7.1
        IN      MX      10      mail
www     IN      CNAME   @
cloud   IN      CNAME   @
ns      IN      A       192.168.7.1
mail    IN      A       192.168.7.1
vandyke IN      CNAME   @

Bind’s file format is rather terse. Semicolons introduce comments within the file. For my home network I decided to use the 192.168.7.0/24 subnet, and vandyke has address 192.168.7.1. @ is a shortcut for the domain root.

Every time you modify this file, the serial number must be incremented. This is how Bind knows to invalidate any cached records.
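Bumping the serial can also be scripted. Here is a small sketch run against a throwaway copy rather than the live zone file:

```shell
# Work on a temporary file standing in for the real zone file.
zone=$(mktemp)
echo '                              8         ; Serial' > "$zone"

# Read the current serial, then rewrite that line with serial + 1.
serial=$(awk '/; Serial/ { print $1 }' "$zone")
sed -i "s/$serial\(.*; Serial\)/$((serial + 1))\1/" "$zone"

updated=$(awk '/; Serial/ { print $1 }' "$zone")
rm -f "$zone"
echo "$updated"
```

A date-based serial (e.g. YYYYMMDDnn) works just as well, as long as it only ever increases.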

With that file in place, Bind needs to be configured to load it. Append the following to Bind’s local configuration in /etc/bind/named.conf.local.

zone "" {
    type master;
    file "/etc/bind/";
};
$ sudo systemctl reload bind9

This is all that is needed for the domain to resolve to a local address instead of an external IP address.

Access Point

The Unifi access point requires a controller to manage it. The controller software can be installed and kept up to date through their Debian repository.

deb stable ubiquiti
$ sudo apt-key adv --keyserver --recv C0A52C50
$ sudo apt update
$ sudo apt install unifi

Proxy Host

The ISP at my new home is Cox. So far I’ve had a good experience with their service. Installation went well, and the first service rep I talked to even knew how to hook up my own modem to their network. That’s a leg up over Time Warner. One complaint is that Cox blocks ports 80, 25 and 143, which complicates operating a few services. To deal with that I set up a cloud server to act as a proxy. My domain registration points to the proxy host, and my home server updates the proxy with the IP address Cox dynamically assigns me. The proxy rewrites all HTTP traffic to HTTPS to get around the port 80 block and redirects all SMTP traffic on port 25 to port 587.

To keep track of what computer I’m referring to I gave them distinct hostnames. The computer sitting in my home is called vandyke and the proxy is called portal. These names are important to keep in mind when maintaining the mapping in /etc/hosts.

Currently my proxy is hosted on HiFormance. I found a deal for a year’s worth of hosting for $7.50. There are sites that maintain large listings of VPS providers.

Keeping Track of my Home IP

In order for portal to know where to forward incoming traffic, it must keep track of the IP address that Cox dynamically assigns my home. This is usually solved with Dynamic DNS.

I created my own dynamic DNS update system by hooking into the DHCP client on vandyke to trigger a remote update of the hosts file on portal. Vandyke, the computer name, is not a registered domain name in public DNS, so to map it to an IP address an entry needs to be managed in /etc/hosts on portal.

Get your home’s current ip address.

$ curl
  "ip": "",
  "hostname": "",
  "city": "San Diego",
  "region": "California",
  "country": "US",
  "loc": "32.6958,-117.1189",
  "org": "AS22773 Cox Communications Inc.",
  "postal": "92113"

Manually set the entry in the proxy hosts file.

/etc/hosts

127.0.0.1    localhost
127.0.1.1    portal
x.x.x.x      vandyke    # the home's current address from above

I created a script which updates entries in the hosts file.

#!/bin/sh
if [ $# != 2 ]; then
    echo Usage: ${0##*/} host address
    exit 1
fi
logger -t update-proxy Updating host $1 address to $2
sed -ri -e "/$1$/s/^\S+/$2/" /etc/hosts
The ${0##*/} is bash parameter expansion that strips everything up to the last slash, leaving just the script name.
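To see the sed substitution work without touching the real /etc/hosts, here is a dry run against a throwaway file (both addresses are documentation placeholders):

```shell
# Throwaway stand-in for /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n198.51.100.9 vandyke\n' > "$hosts"

# Same substitution the script performs: replace the leading
# address on the line whose hostname is vandyke.
sed -ri -e "/vandyke$/s/^\S+/203.0.113.7/" "$hosts"

new_entry=$(grep 'vandyke$' "$hosts")
rm -f "$hosts"
echo "$new_entry"
```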

This script has to be run as root to modify the hosts file. I don’t want to automate a remote root login so I created a user specifically for this purpose called update-proxy.

$ sudo adduser --system --home /tmp --disabled-password update-proxy

In order for the update-proxy user to modify the hosts file, it has to gain root permissions, which can be done through sudo. But I only want to grant root permissions for the update-proxy script above, with no password required, especially since the user has no password. This can be done by adding a sudoers file.

In order to edit a sudoers file you have to use visudo.

$ sudo visudo -f /etc/sudoers.d/90-update-proxy
# Allow update-proxy user to execute update-proxy
update-proxy ALL=(root) NOPASSWD:/usr/local/bin/update-proxy

Now I want to be able to log in remotely to the proxy host using the update-proxy account from vandyke with no password. This can be done with SSH by copying the host’s public key over to the authorized_keys file for the update-proxy user. The host public key is located at /etc/ssh/ssh_host_rsa_key.pub.

Since the update-proxy user has no home directory, I configured ssh to look for authorized keys in an additional location.

AuthorizedKeysFile /etc/ssh/authorized_keys.d/%u %h/.ssh/authorized_keys

Create the directory and the key file, set its permissions, and reload the SSH service configuration.

$ sudo mkdir /etc/ssh/authorized_keys.d
$ sudo touch /etc/ssh/authorized_keys.d/update-proxy
$ sudo chown update-proxy:nogroup /etc/ssh/authorized_keys.d/update-proxy
$ sudo chmod 600 /etc/ssh/authorized_keys.d/update-proxy
$ sudo systemctl reload ssh

Copy the host public key from vandyke into /etc/ssh/authorized_keys.d/update-proxy on the portal host.

Now we just need the remote update-proxy script to be executed whenever a new IP address is assigned. The DHCP client will execute any scripts placed in its hooks directory. Some variables are provided, such as the reason for the state change and the new IP address. Check out the dhclient-script manpage for all the details.

logger -t update-proxy dhclient hook emitted $reason ip: $new_ip_address
if [ "$reason" = "BOUND" ]; then
        logger -t update-proxy Updating portal with $new_ip_address
        ssh -i /etc/ssh/ssh_host_rsa_key update-proxy@portal sudo update-proxy vandyke $new_ip_address
fi
The logger command writes messages to /var/log/syslog.

With that all in place, when Cox assigns my home a new IP address the proxy server will be updated, ensuring that the world can still see this amazing site.

Proxying SMTP traffic

Setting up the proxy consists of installing a proxy program and creating a service to manage it.

$ sudo apt install simpleproxy
/etc/systemd/system/smtp-proxy.service

[Unit]
Description=SMTP Proxy to Vandyke

[Service]
ExecStart=/usr/bin/simpleproxy -L 25 -R vandyke:587

[Install]
WantedBy=multi-user.target

The service can then be loaded and started.

$ sudo systemctl daemon-reload
$ sudo systemctl enable smtp-proxy --now

Proxying IMAPS traffic

I set this up the same as the SMTP proxy, from portal port 993 to vandyke 993.
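The corresponding unit is nearly identical; a sketch, assuming the file is named /etc/systemd/system/imaps-proxy.service:

```
[Unit]
Description=IMAPS Proxy to Vandyke

[Service]
ExecStart=/usr/bin/simpleproxy -L 993 -R vandyke:993

[Install]
WantedBy=multi-user.target
```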

I have had issues with IMAP push notifications not working. Simpleproxy has a hard coded 5 second connection timeout, which defeats IMAP’s IDLE command. My workaround for now has been to poll for new email.

Proxying HTTP traffic

As above, I set up a simpleproxy service which forwards port 443. That takes care of anyone who accesses my website over HTTPS, but most people won’t specify that, and their HTTP requests will arrive at port 80. As I mentioned above, my ISP blocks port 80, so I can’t just forward the port, but HTTPS is fine. To remedy this I set up an Apache web server on the proxy which redirects all HTTP requests to HTTPS. At that point the client’s request will return to the proxy at port 443 and be forwarded to the real web server on vandyke.

Besides installing Apache, the rewrite module needs to be enabled. Debian splits up the Apache configuration using symbolically linked configuration files and provides some commands to manage them.

$ sudo apt install apache2
$ sudo a2enmod rewrite
$ sudo cp /etc/apache2/sites-available/{000-default,RewriteToHttps}.conf

I made a copy of the default site configuration. The following configuration can go anywhere inside the VirtualHost element.

<VirtualHost *:80>
    RewriteEngine On
    RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [redirect,last]
</VirtualHost>

The Apache documentation provides more details about what this configuration does. I also removed the DocumentRoot directive since no files need to be served. With that in place, the configuration can be enabled and Apache reloaded.

$ sudo a2ensite RewriteToHttps
$ sudo systemctl reload apache2

TLS with Let’s Encrypt

Let’s Encrypt is a free, automated, and open certificate authority. It lets you get a TLS certificate and automate its maintenance, which is just plain awesome. Their automation tool is called Certbot, and they provide some great documentation on setting it up. Besides the main domain, you will want to add any subdomain you want to serve TLS content over, such as www, mail or cloud.
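As a sketch, the Certbot flow with the Apache plugin can look like this; example.com stands in for the real domain, and the package name varies by Ubuntu release:

```
$ sudo apt install python-certbot-apache
$ sudo certbot --apache -d example.com -d www.example.com -d mail.example.com
$ sudo certbot renew --dry-run
```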


Email

Handling email involves setting up two pieces. The mail transfer agent (MTA) handles sending and receiving email, and the mail user agent (MUA) handles reading and management of it.

Postfix MTA

For an MTA I chose Postfix. It’s one of the most configurable pieces of software I have encountered. The sheer number of options may seem daunting, and Postfix has evolved enough that many online tutorials still refer to older options. I would keep the postconf manpage handy to work through the details.

$ sudo apt install postfix

The first configuration step is enabling the submission protocol on port 587. Remember, the proxy host actually communicates over the submission port to get around port 25 being blocked. The submission line in /etc/postfix/master.cf just needs to be uncommented.

The following are the options I modified from their defaults in /etc/postfix/main.cf. The order of the options does not matter, so you can organize them however you want.

smtpd_banner = $myhostname ESMTP $mail_name

smtpd_tls_cert_file = /etc/letsencrypt/live/
smtpd_tls_key_file = /etc/letsencrypt/live/

smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_tls_security_level = may
smtpd_tls_auth_only = yes

myhostname = vandyke

mydestination = $myhostname,, localhost.localdomain, localhost

relayhost = []:submission
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = texthash:/etc/postfix/sasl/auth
smtp_sasl_security_options = noanonymous

home_mailbox = mail/
smtp_dns_support_level = enabled
smtp_host_lookup = dns, native

Sending email from a dynamically allocated IP address is problematic because you don’t know whether a previous owner abused it, landing your address on a blacklist. Many ISPs also just block outgoing email to help reduce spam. Your ISP should provide an email relay for you to send email through, using your account credentials to authenticate. Those credentials are stored in a separate file configured by the smtp_sasl_password_maps option. Postfix supports many kinds of lookup tables for configuration; a simple one is texthash. Just point it at a text file where each line contains a space separated key value pair.

/etc/postfix/sasl/auth

[]:submission    username:password

After a restart postfix should be good to go.

$ sudo systemctl restart postfix

Dovecot MUA

The MUA scene has many great options, including Dovecot and Cyrus. I chose Dovecot for its simpler configuration.

$ sudo apt install dovecot-imapd

Dovecot’s configuration is split into separate files under /etc/dovecot/conf.d for organization. Below are the options I modified from the defaults.

disable_plaintext_auth = yes
auth_mechanisms = plain login
mail_location = maildir:~/mail

service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
  }
}

ssl = required
ssl_cert = </etc/letsencrypt/live/
ssl_key = </etc/letsencrypt/live/

Restart the service and hook up your favorite email client.

$ sudo systemctl restart dovecot

Documents & Photos

NextCloud. Check it out. I’ll write up another post about how I use it to organize my data and share photos with family.


Website

I wasn’t happy with any of the static site generators out there. At some point I’ll write that up in its own post.

Losing Power, Gracefully

I chose the APC UPS because they have great Linux support, even providing their own management daemon, apcupsd. My main goal is to ensure filesystem consistency by shutting down the computer before complete power loss.

$ sudo apt install apcupsd

The default configuration shuts the computer down when the battery reaches 5% capacity or an estimated 3 minutes of remaining runtime. The configuration file, /etc/apcupsd/apcupsd.conf, is well commented. Below are the lines I changed to enable USB support for my UPS.

UPSCABLE usb
UPSTYPE usb
DEVICE
You also need to indicate that you’ve configured the daemon in /etc/default/apcupsd.

ISCONFIGURED=yes
And restart the daemon.

$ sudo systemctl restart apcupsd