April 23, 2016
After covering the history, design and build of my 2016 home server and NAS, I am ready to talk about the software configuration.
The first part is easy, but things get interesting when configuring bleeding-edge software packages on Linux.
In theory I could have bought a Proxmox or Unraid license and been done with it, but my distribution of choice is Ubuntu, and when it comes to Linux it is better to stick to the environment you know. The current release of Ubuntu is 15.10, which ships quite old versions of some critical software, and that is where the problems begin. Ubuntu 16.04 will fix most of the problems I had, but I wasn't patient enough to wait two months.
First, I decided to set up Open vSwitch rather than plain Linux bridges for the best performance. Recent releases have made substantial performance gains, so my VMs' networking has minimal impact on the CPU, and luckily Ubuntu bundles a relatively recent version.
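The basic setup is only a few commands. This is a minimal sketch; the bridge name `ovsbr0` and the physical NIC name `eth0` are assumptions, so substitute your own interface names.

```shell
# Install Open vSwitch from the Ubuntu repositories
sudo apt-get install openvswitch-switch

# Create a bridge and attach the physical NIC to it
# (bridge and interface names here are placeholders)
sudo ovs-vsctl add-br ovsbr0
sudo ovs-vsctl add-port ovsbr0 eth0

# Confirm the bridge and port were created
sudo ovs-vsctl show
```

Each VM then gets a tap device attached as another port on the bridge, which libvirt can manage automatically when the network is defined with a `virtualport type='openvswitch'` element.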
Second, I had to choose a VM environment. I had originally wanted to use Xen, but after some research I discovered that KVM was now mature, was used by both Proxmox and Unraid, and supported GPU passthrough. This is where my trouble began, and I eventually solved all my problems by upgrading to a PPA with a later version of KVM.
My KVM configuration for GPU passthrough ended up simpler than most tutorials: after realising I needed to upgrade KVM/QEMU, it was smooth sailing. The Xeon D processor has Intel VT-d for directed I/O, which first needed to be enabled. Then Ubuntu had to be instructed to ignore the GPU, and KVM had to hide from the VM that the GPU is being passed through, which is required to get NVIDIA cards working. The Xeon D does not have an integrated GPU, which means the Intel iGPU hack can safely be ignored; you still use the ASPEED GPU to boot the hypervisor.
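The first two steps above boil down to two small config fragments: enabling the IOMMU on the kernel command line, and binding the GPU to the vfio-pci stub driver so the host never touches it. The PCI vendor:device IDs below are placeholders for a GTX 950 and its HDMI audio function; check yours with `lspci -nn`.

```shell
# /etc/default/grub -- enable Intel VT-d / the IOMMU
# (run `sudo update-grub` afterwards and reboot)
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

# /etc/modprobe.d/vfio.conf -- have vfio-pci claim the GPU and its
# HDMI audio function before the nvidia/nouveau driver can bind them.
# The IDs are placeholders; substitute the output of `lspci -nn`.
options vfio-pci ids=10de:1402,10de:0fba

# Rebuild the initramfs so the vfio options apply at boot
sudo update-initramfs -u
```

Both functions of the card (video and audio) must be passed through together, since they sit in the same IOMMU group.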
In setting up GPU passthrough, I used the tutorial from the VFIO blogspot page. I only got it to work once I had switched the guest to the TianoCore UEFI firmware.
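Putting it together, the relevant parts of the libvirt domain XML look roughly like this. This is a sketch under assumptions: the OVMF firmware path is where Ubuntu's `ovmf` package installs it on my system, and the PCI address is a placeholder for wherever the GPU sits on your bus.

```xml
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <!-- Boot the guest with TianoCore/OVMF UEFI firmware (path is an assumption) -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
</os>
<features>
  <!-- Hide the KVM signature so the NVIDIA driver doesn't refuse to load -->
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
<!-- Pass the GPU through; the PCI address is a placeholder -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

The `<kvm><hidden state='on'/></kvm>` element is the piece that stops the NVIDIA driver from detecting the hypervisor and failing with the infamous Code 43 error.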
The version of ZFS bundled with Ubuntu immediately detected my existing pool, and I was able to add the new disks to it. I reconfigured the pool to have two mirror vdevs, each pairing one new and one old disk. I then added a 32 GB partition of the Crucial MX200 drive as a SLOG (a dedicated device for the ZIL), and another 32 GB partition as L2ARC. I formatted the remaining 186 GB for general use.
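The commands for growing the pool are short. This is a minimal sketch: the pool name `tank` and the `/dev/disk/by-id/...` device names are placeholders for my actual disks and the MX200 partitions.

```shell
# Add a second mirror vdev to the pool, pairing a new disk with an old one
# (pool and device names are placeholders)
sudo zpool add tank mirror /dev/disk/by-id/ata-NEWDISK2 /dev/disk/by-id/ata-OLDDISK2

# Add the 32 GB SSD partition as a dedicated SLOG device for the ZIL
sudo zpool add tank log /dev/disk/by-id/ata-MX200-part1

# Add the other 32 GB SSD partition as L2ARC read cache
sudo zpool add tank cache /dev/disk/by-id/ata-MX200-part2

# Verify the new layout
sudo zpool status tank
```

Using `/dev/disk/by-id/` paths rather than `/dev/sdX` names keeps the pool stable across reboots when device enumeration order changes.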
So far I have configured the following VMs:
- CrashPlan - for my offsite backups (important!)
- Plex - for accessing my media library on my Xbox One
- Cacti - for SNMP monitoring
And I will be adding more:
- Tvheadend - connected to Plex for live TV
- Several for web development
My server has been running for over a month now and has been stable. My only issue is the NVIDIA drivers for the GTX 950, which require semi-frequent reboots of the Windows VM; I believe it has to do with the audio-over-HDMI functionality, which will randomly slow down video playback. I had to downgrade from the most recent driver, as it would constantly crash. Because this all happens inside a VM, my file server has no downtime. Many people seem to have problems with these drivers, and I can't say I've had much luck with display drivers either.