30 August 2016

In Win MS04 home server case

Out with the old
For some time I've been looking for a nice computer case to house my small homelab server. Finding a case that fits a mini-ITX motherboard, two 3.5" HDDs and a 2.5" SSD while omitting the PSU has been hard. So hard that I gave up. During my search I put the home server into a shoe box, and for the past few months the server has been running happily in its bright red "case".

In with the new
One of the Dutch one-day offer sites offered an In Win MS04 with a 265W PSU at a 30% discount. I first came across this case when I was shopping for the Supermicro Xeon-D based home server and thought it looked nice but a bit expensive. The discount, and the fact that my colleagues were making fun of my makeshift bright red case, made me decide to go for this offer. It arrived in a plain beige box with some tape and some foam to keep it safe.

In case you don't know this case, here are some highlights:
  • 4 hot-swap drive bays
  • Slim ODD bay
  • Internal 2.5" HDD bay
  • Mini-ITX motherboard tray
  • 265W 80 Plus Bronze PSU
  • 120mm PWM fan
  • One low-profile PCI-E slot
  • Power button with blue LED
The metal case is about as big as an HP MicroServer Gen8 but fits a standard-sized mini-ITX motherboard. Hooray for choice!

Installation
Since I'm only replacing the case, the contents are still the old homelab management server. The removable motherboard tray made installing the motherboard a breeze, and the screws for it were in a clearly labeled plastic bag. After installing the motherboard, connecting all the wires for the front panel was easy. Only the front USB 3.0 connector was a bit finicky, since it required nimble fingers to reach underneath the drive bays and push the connector onto the board right-side up.

Minor issue with the cabling: the Pico-PSU molex connector that feeds power to the backplane is a very, very tight fit. There's not enough slack in the cable to unplug the molex. A motherboard with a 24-pin connector on the "north end" won't have this issue, though.

I like the drive trays, which let you mount either a 3.5" or a 2.5" drive in a caddy (not both simultaneously). One houses a 2.5" SSD and two others each house a 3.5" HDD.

In the end I think it's a nice case that will serve its purpose well. For now I'm not using the included PSU because my 80W Pico-PSU delivers enough power with greater efficiency. Maybe I'll use the included PSU when I upgrade to another motherboard, or maybe I won't.

Here you can see it humming along next to its big brother. The 25W usage is just the management server running ESXi 6.0 U2 and three VMs (VCSA, Xpenology and Server 2016).

23 August 2016

Homelab part 4: Suspending the lab

Since my homelab is just a playground to try and test things, there's no point in keeping it running when I'm not actively using it. I've decided to shut down or suspend the running virtual machines and write a script that saves the state of the lab, so it all comes back when it's time to play again. I use a Windows virtual machine as a jump host and ESXi as the hypervisor, so I've chosen PowerCLI as the glue that sticks it all together.

The first script is used to start the homelab.
#variables
$ipmiCred = Get-Credential ADMIN
$Ip = '192.168.2.229'
$vcenterCred = Get-Credential root
$vcenterserver = '192.168.2.200'
$vmhost = '192.168.2.230'
$PoweredOnVMspath = 'c:\temp\PoweredOnVMs.csv'

#boot the host using IPMI
try {
    Get-PcsvDevice -TargetAddress $Ip -ManagementProtocol IPMI -Credential $ipmiCred -ErrorAction Stop | Start-PcsvDevice -ErrorAction Stop
}
catch {
    $ErrorMessage = $_.Exception.Message
    $FailedItem = $_.Exception.ItemName
    Write-Host "Error connecting to IPMI: $FailedItem $ErrorMessage" -ForegroundColor Red
}

#load PowerCLI
. 'C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\Scripts\Initialize-PowerCLIEnvironment.ps1'

#connect to the vCenter server
try {
    Connect-VIServer -Server $vcenterserver -Credential $vcenterCred -ErrorAction Stop
}
catch {
    Write-Host 'Connection to vCenter server failed' -ForegroundColor Red
}

#wait for the host to start up
do {
    Write-Host "$vmhost is still booting"
    Start-Sleep 10
    $ServerState = (Get-VMHost $vmhost).ConnectionState
}
while ($ServerState -eq 'NotResponding')

#load the list of VMs that were powered on last time
try {
    $PoweredOnVMs = Import-Csv -Path $PoweredOnVMspath -ErrorAction Stop
}
catch {
    Write-Host 'Import of the powered-on VMs CSV failed' -ForegroundColor Red
}

#start the VMs that were powered on last time
try { Start-VM $PoweredOnVMs.Name -ErrorAction Stop }
catch { Write-Host 'VM power on failed' -ForegroundColor Red }

So this script starts the ESXi host using IPMI, loads the CSV file with the virtual machines that were running last time, and powers them on as soon as the ESXi host is connected to vCenter.
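
Since both scripts prompt for credentials with Get-Credential, running them from a shortcut is a bit clunky. A possible tweak (not part of the original scripts, just a sketch with example paths) is to save the credentials once with Export-Clixml, which encrypts the password for the current Windows user, and have the scripts read them back with Import-Clixml:
#one-time setup on the jump host: store the IPMI and vCenter credentials
Get-Credential ADMIN | Export-Clixml -Path 'c:\temp\ipmiCred.xml'
Get-Credential root | Export-Clixml -Path 'c:\temp\vcenterCred.xml'

#in the scripts, replace the Get-Credential lines with these imports
$ipmiCred = Import-Clixml -Path 'c:\temp\ipmiCred.xml'
$vcenterCred = Import-Clixml -Path 'c:\temp\vcenterCred.xml'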

The second script is used to store the running virtual machines into the CSV file and shut down the ESXi host.

#variables
$vcenterCred = Get-Credential root
$vcenterserver = '192.168.2.200'
$vmhost = '192.168.2.230'
$PoweredOnVMspath = 'c:\temp\PoweredOnVMs.csv'

#load PowerCLI
. 'C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\Scripts\Initialize-PowerCLIEnvironment.ps1'

#connect to the vCenter server
try {
    Connect-VIServer -Server $vcenterserver -Credential $vcenterCred -ErrorAction Stop
}
catch {
    Write-Host 'Connection to vCenter server failed' -ForegroundColor Red
}

#find the powered-on VMs
try {
    $PoweredOnVMs = Get-VMHost $vmhost | Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' }
}
catch {
    Write-Host 'Failed to find powered-on VMs' -ForegroundColor Red
}

#export the powered-on VMs to CSV
try {
    $PoweredOnVMs | Export-Csv -Path $PoweredOnVMspath -NoTypeInformation -ErrorAction Stop
}
catch {
    Write-Host 'Failed to export CSV' -ForegroundColor Red
}

#shut it all down: try a guest shutdown first (needs VMware Tools), suspend the VM if that fails
foreach ($PoweredOnVM in $PoweredOnVMs) {
    try { Stop-VMGuest $PoweredOnVM.Name -Confirm:$false -ErrorAction Stop }
    catch { Suspend-VM $PoweredOnVM.Name -Confirm:$false }
}
Write-Host 'Shutting down VMs and waiting a few minutes' -ForegroundColor Green

#wait for the VMs to shut down or suspend
do {
    Start-Sleep 10
    $VMState = Get-VMHost $vmhost | Get-VM
}
while ($VMState.PowerState -eq 'PoweredOn')

#wait a few more seconds
Start-Sleep 15

#shut down the ESXi host
Write-Host 'Shutting down ESXi host'
try { Stop-VMHost $vmhost -Force -Confirm:$false -ErrorAction Stop | Out-Null }
catch { Write-Host 'ESXi host shutdown failed' -ForegroundColor Red }

As with all things in a homelab, these scripts are subject to change as soon as I think of something new. Suggestions are welcome!

17 August 2016

AHCI controller passthrough with a Supermicro Xeon-D motherboard

So someone asked me if it is possible to pass through the onboard AHCI SATA controller of a Supermicro Xeon-D motherboard to a VM. Since I use ESXi, that's what I'll use to demonstrate it. Hyper-V 2016 features discrete device assignment in the latest Technical Preview builds, so you can try this with Hyper-V too. Maybe I'll do a blog post on that some other time.

[10 Feb 2021 - Update] This still works in ESXi 7.0.1, but passthru.map has been moved to the subfolder /etc/vmware/.

I started by having a look at the list of devices that are available for passthrough. The Lynx Point AHCI controller was not on the list, since SATA controllers are unsupported for passthrough by default. Let's fix that!

Log in to your host using SSH or use the DCUI locally.
Find out the PCI ID using the following command:
esxcli storage core adapter list

Your onboard SATA controller is usually listed as vmhba0. The ID we're looking for is in the Description column; in this case, it's 0000:00:1f.2.

Enter the next command to find the PID. Substitute the PCI ID with your own ID if needed.
lspci -n | grep 0000:00:1f.2
In this case, the PID we're looking for is 8c02. Add this PID at the bottom of /etc/passthru.map like so.
Edit: In ESXi 7.0.1, this file can be found at /etc/vmware/passthru.map.
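For reference, a typical entry looks something like this (8086 is Intel's vendor ID; the d3d0 reset method and the false flag in the last column are the values commonly used for Intel AHCI controllers, so treat them as an assumption and adjust if your situation differs):
# Intel Lynx Point AHCI controller
8086  8c02  d3d0  false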
Save your changes and reboot the ESXi host.

After the reboot, you'll be able to select the Lynx Point AHCI controller.

Mark the device for passthrough and reboot the host.

When the host is up again, edit the VM settings to add the PCI device.
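
If you'd rather script this step, PowerCLI has cmdlets for it too. A rough, untested sketch (the VM name is a placeholder and the host address is just an example, reusing the one from the scripts above):
#pick the AHCI controller on the host and attach it to the VM
$vm = Get-VM -Name 'StorageVM'
$ahci = Get-PassthroughDevice -VMHost '192.168.2.230' -Type Pci | Where-Object { $_.Name -like '*AHCI*' }
Add-PassthroughDevice -VM $vm -PassthroughDevice $ahci
Keep in mind that a VM with a passthrough device needs a full memory reservation.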


After adding the PCI device to the VM, boot it to see the result.

In my case, the VM I added the controller to is a Windows 7 virtual machine. To show the controller and the connected disks, I added a screenshot of Device Manager. I have two Samsung 850 Pro SSDs connected to the onboard SATA controller.
Reminder: since the files for the VM itself have to be stored somewhere, you'll need another storage device. In my case, all the VM files are stored on an NVMe SSD. This method can be used to create a dedicated storage VM with a number of hard drives. It's a great way to give a FreeNAS VM direct access to a set of SATA drives.

Many thanks to Hilko for showing me the commands in the "Energy efficient ESXi server" thread on got.tweakers.net. Most of this blog post is a shameless copy of his post.