31 July 2016

Homelab part 3: Management

In my opinion, a homelab should be volatile. Most of my lab is used in a simple cycle:
  1. Think of something to try, test or wreck.
  2. Build it as fast as possible, cutting as few corners as you can so the results stay valid.
  3. Execute your test plan.
  4. Evaluate the results to see if they are as expected. Troubleshoot or repeat the tests as necessary.
  5. Decide whether you need the setup for more tests. If yes, shut it down or save it. If in doubt, or you no longer need the setup, delete all the bits.
This cycle does not mean I try one thing at a time. What it does mean is that I try to remove clutter as much as possible.
To set up a lab as fast as possible, there is one part I don't rebuild with every test: the management part. To make sure the management stuff never gets wiped, I set up a separate physical server just to host the management roles. So what are these roles?
  • vCenter Server for deploying templates and tracking performance over longer periods of time. I also use the vSphere Web Client to manage the lifecycle of the virtual machines.
  • SexiLog to collect logs and alerts and display them on a dashboard (pointing the hosts at it takes only a few lines; see the sketch after this list).
  • A Windows virtual machine that serves as an RDP jump box into the lab when I'm not at home. This is also the Windows server that runs all the PowerCLI/PowerShell scripts in the homelab.
  • A virtual NAS to store ISOs, templates and random bits of data. This is also the primary data storage device in the house, containing all the photos, documents and other important data.
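A minimal sketch of the SexiLog wiring, assuming a PowerCLI session and an appliance reachable at sexilog.lab.local (both names here are placeholders for your own):

    # Point every ESXi host at the SexiLog appliance and open the syslog firewall rule.
    # 'vcsa.lab.local' and 'sexilog.lab.local' are placeholder names.
    Connect-VIServer -Server vcsa.lab.local

    foreach ($esx in Get-VMHost) {
        # Send logs over UDP port 514, the default SexiLog listener
        Set-VMHostSysLogServer -VMHost $esx -SysLogServer 'udp://sexilog.lab.local:514'
        Get-VMHostFirewallException -VMHost $esx -Name 'syslog' |
            Set-VMHostFirewallException -Enabled:$true
    }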
I specifically chose not to run a permanent domain controller, because I prefer to deploy a fresh one for every test (a simple PowerCLI/PowerShell workflow, sketched below, makes this really easy). This way I know for certain that settings I use for one setup don't interfere with another.
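A rough sketch of that workflow, assuming a prepared Windows Server template; the template, host, domain and password values are placeholders, not my actual setup:

    # Deploy a throwaway domain controller from a template; all names are placeholders.
    Connect-VIServer -Server vcsa.lab.local
    $vm = New-VM -Name 'DC01' -Template 'Win2012R2-Template' -VMHost 'mgmt.lab.local'
    Start-VM -VM $vm | Out-Null
    Wait-Tools -VM $vm        # wait for VMware Tools before using Invoke-VMScript

    # Promote the guest to a brand-new forest so nothing lingers between tests
    $cred = Get-Credential -Message 'Local administrator of the template'
    $dcScript = 'Install-WindowsFeature AD-Domain-Services -IncludeManagementTools; ' +
        'Install-ADDSForest -DomainName lab.test -SafeModeAdministratorPassword ' +
        '(ConvertTo-SecureString "Placeh0lder!" -AsPlainText -Force) -Force'
    Invoke-VMScript -VM $vm -GuestCredential $cred -ScriptText $dcScript

Delete the virtual machine afterwards and the next test starts from a clean slate again.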
The resources these workloads need are quite modest, which let me look at affordable low-power options (the nice low-power servers with some grunt, like the Supermicro SYS-E200-8D, are usually costly). After a lot of contemplation I decided to try a very low-cost option and see if I could make it work. I had 16GB of DDR3 SO-DIMM and a PicoPSU lying around; that should be enough to run my management workloads. I went looking for a low-power motherboard with a few CPU cores so I wouldn't have to worry about CPU contention. For that reason, a quad-core CPU was preferable to a dual-core one.
I found an Asrock N3700-ITX and decided to give it a shot. It looked a bit underpowered, with a quad-core Atom (Braswell) processor, passive cooling and four SATA ports. The N3700 offers a slightly higher turbo speed than the N3150; no idea if that really helps, but the price difference is small enough to try. If I didn't have a PicoPSU, I would have bought the Asrock N3150DC-ITX instead, because it has a 12V DC input and comes with the appropriate 65W adapter.

The first attempts to get ESXi to run on the system were unsuccessful; many thanks to Antonio Jorba for solving the problem. Deploying the vCenter appliance was simple enough once I figured out how to connect to the ESXi Host Client and such. An SSD stores all of the virtual machines, and two attached 5TB disks provide the storage for the virtual NAS. Running just ESXi 6.0 idle with a single SSD connected draws 11W (balanced power management); the complete system, with the disks and the virtual machines running, draws around 25W. That works out to about 50 euros a year in power if I leave it running 24/7, so it meets the requirements.
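The cost figure is simple arithmetic; assuming a rate of around €0.23 per kWh (your tariff will differ):

    # Back-of-the-envelope yearly power cost; the €0.23/kWh rate is an assumption.
    $watts = 25
    $kwhPerYear = $watts * 24 * 365 / 1000   # = 219 kWh
    $costPerYear = $kwhPerYear * 0.23        # ≈ €50
    "{0:N0} kWh/year, roughly EUR {1:N0}/year" -f $kwhPerYear, $costPerYear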

Shopping list:

  • Asrock N3700-ITX
  • PicoPSU 90W
  • 80W 12V Leike adapter
  • 2x 8GB Corsair Vengeance
  • Samsung SM843T - 480GB
  • Random shoe box I dug out of the waste paper bin

I'm still looking for a nice case to put the board, SSD and two HDDs into. Something the size of an average shoe box would be perfect. If you have a good suggestion, let me know!
