Back for 2020

It’s been a hot minute since I’ve concentrated on writing blog posts; after getting vExpert recognition I let the content slide into non-existence.

A busy work and family schedule meant I didn’t have the time, but I’m hitting 2020 with renewed vigour and I hope these pages will again be filled with useful content.

Since the last posts (the majority of which were written at the beginning of 2017!!) a lot has changed. I’ve moved from the realm of infrastructure into infrastructure engineering, I’ve been working exclusively on vRealize Automation and Orchestrator, and I’ve clocked up a ton of enterprise miles using the product on multiple projects.

I had a whole blog series written around how to “cloudify” vRealize 7.*, including how to work CI/CD pipelines into your environment, coding practices, VM mobility and all that lovely stuff, but with the release of vRA8 and its completely new design and extensibility engine I thought I’d start afresh. The further I dive into infrastructure engineering the more I enjoy it, and the more I drift into other product sets that are not VMware-centric.

So going forward this blog will have a wider scope. It will likely have VMware products at its core but will branch off to look at alternatives. As the industry changes I want to fully transfer from being an “Infrastructure/VMware engineer” into being an automation engineer (DevOps engineer if you like).

With vRA8’s new extensibility engine and native integrations with Ansible, Azure, AWS etc., this seems like the perfect time to do that.

With the above in mind, the first post will be about my new (ish) lab.

Create a custom ESXi ISO for Intel NUC install

Add the VMware depot index to the software depot

Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Add the USB 3.0 Network Adapter VIB (William Lam’s offline bundle, downloaded to c:\image\) to the software depot. You’ve vSuperstar William Lam to thank for this!!

Add-EsxSoftwareDepot c:\image\

Clone an ESXi image (the one specified below is 6.5 U1), give it a name and set a vendor

New-EsxImageProfile -CloneProfile "ESXi-6.5.0-20170702001-standard" -name "ESXi-6.5.0-u1-NUC7" -Vendor ""

Remove the current e1000e driver from the newly created ESXi image

Remove-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-u1-NUC7" -SoftwarePackage "net-e1000e"

Remove the current ne1000 driver from the newly created ESXi image

Remove-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-u1-NUC7" -SoftwarePackage "ne1000"

Remove the current USB NIC driver from the newly created ESXi image

Remove-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-u1-NUC7" -SoftwarePackage "vmkusb"

Add the USB NIC VIB we added to the software depot earlier, to the newly created ESXi image

Add-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-u1-NUC7" -SoftwarePackage "vghetto-ax88179-esxi65"

Add a working version of the e1000e driver, to the newly created ESXi image

Add-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-u1-NUC7" -SoftwarePackage "net-e1000e"

Export the new image to an ISO

Export-EsxImageProfile -NoSignatureCheck -ImageProfile "ESXi-6.5.0-u1-NUC7" -ExportToISO -FilePath c:\image\ESXi-6.5.0-u1-NUC7.iso

Once you have the exported ISO, download Rufus and use it to build a bootable USB drive from the image.

GUI Based VSAN Bootstrap VCSA Deployment

When deploying my 3-node NUC VSAN lab I got to try out the new bootstrap VCSA GUI installer blogged about here…

Once you download and unpack the VCSA you’ll find the installer at %filepath%\vcsa-ui-installer\win32\installer.exe (there are Mac and Linux options too).

You need to have installed ESXi on your target host first. Also, if your networking is trunked to the ESXi host, make sure you tag the VM port group where the VCSA will reside before deploying.
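If you need to do that tagging from the host’s console rather than the UI, a single esxcli command does the job (the port group name and VLAN ID below are examples, substitute your own):

```
# tag the "VM Network" port group with VLAN 10 (both values are examples)
esxcli network vswitch standard portgroup set -p "VM Network" -v 10
```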

By default this installer will enable the MGMT VMK for VSAN. This was fine with me, as once the VCSA was deployed I retrospectively changed all the VSAN-related host settings.

You will also need the VCSA to be resolvable by the DNS server you specify during the setup. I already had a Domain Controller deployed within VMware Workstation on my laptop, which was routable from my lab network, so I used that.

The GUI is relatively straightforward; ensure you select the option to install on a new Virtual SAN cluster.




Select the disks on the host for the appropriate tier. If you want to enable dedup, do it now, as the disk group will need to be evacuated to enable dedup and compression at a later date!


Do the traditional next, next, finish and it’ll start deploying. The VCSA deployment is now two-step; once this stage is complete you need to log into the VCSA management page to complete the remainder of the setup.


Here is the proof of the pudding: my incredibly annoyingly named disks are now claimed by VSAN.


The host is also now a member of a VSAN cluster.
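If you’d rather confirm that from the host’s shell than the UI, a couple of esxcli commands over SSH show the same information:

```
# show the host's vSAN cluster membership
esxcli vsan cluster get
# list the disks claimed by vSAN
esxcli vsan storage list
```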


A single host vsanDatastore


You can now complete the configuration through vCenter.

The Intel NUC\VSAN Home Lab

Having a home lab is a vital part of self-progression. I have always re-invested in myself, committing to self-study and home learning, and it has helped me with career progression, so after my pizza box mistake I was in need of some toys to play with to continue that progression.

I looked into a number of options, including Supermicro servers, HP Microservers and even building my own hosts in micro-ATX cases, but ultimately I don’t think anything gives you the same bang for your buck (or £ in my case) as the Intel NUC.

The Intel NUC is the “official” unofficial VMware home lab!! There are posts all over the tinterweb from vSuperstars about why these bite-size computers are ideal for a home lab; the downside being that they are not that cheap.

My BOM (bill of materials) doesn’t really differ from anyone else’s, however if you’ve managed to stumble your way to my blog first, it goes like this:






  • 3 x 5GB USB 3.0 thumb drives (to install ESXi on)

That lot cost me over £2400 (don’t tell the missus!!)

The above will build you a 3-node all-flash vSAN cluster (you will need a switch).


The only thing that probably differs in my lab from others who have chosen NUCs is that I’m using an M.2 for the capacity tier and an SSD for the caching tier. The only reason for this was I still had the 60GB SSDs from my pizza box mistake, so I put them to good use!

The NUC only has one onboard NIC, so the USB to dual-port Gigabit adapter will give me three NICs per NUC (try saying that fast with a mouth full of Doritos!)

Two NICs for vSAN and NFS traffic (my “storage NICs”) and one for everything else (vMotion, data, MGMT).
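For anyone sketching this out themselves: with William Lam’s driver the USB NICs show up as vusb devices, so wiring the storage vSwitch together looks roughly like this (the vSwitch and NIC names are assumptions; check yours with esxcli network nic list):

```
# create a dedicated vSwitch for vSAN/NFS traffic
esxcli network vswitch standard add -v vSwitch1
# attach the two USB NICs as uplinks (names assumed)
esxcli network vswitch standard uplink add -u vusb0 -v vSwitch1
esxcli network vswitch standard uplink add -u vusb1 -v vSwitch1
```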

I already have a Buffalo Terastation and a Cisco 3750, which I used to complete the lab setup. The 3750 is used as layer 2 only; I created non-routable subnets for vMotion, vSAN, NFS & VXLAN (NSX is in the lab, more on that later).

Since all devices are patched into the same 3750 switch, I didn’t have to worry about routing those subnets outside the switch, but carving them up into their own VLANs helps limit broadcast traffic.
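On the 3750 side that carving-up amounts to little more than defining the VLANs and trunking them down to the NUC-facing ports; a rough sketch (the VLAN IDs and port range are made up):

```
vlan 20
 name vMotion
vlan 30
 name vSAN
vlan 40
 name NFS
vlan 50
 name VXLAN
interface range GigabitEthernet1/0/1 - 9
 switchport trunk encapsulation dot1q
 switchport mode trunk
```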

Mgmt and VM data sit on the same subnet; it’s not ideal… but it’s a lab.

The Terastation is used to house ISOs/OVAs and maybe eventually the occasional low-I/O VM. It has two NICs: one on the unroutable NFS subnet, the other on my mgmt/data subnet so I can manage it!

All NUC NICs (USB & onboard) support an MTU of more than 1600, which means I can have NSX in my lab; unfortunately they don’t quite support jumbo MTU. Again, it’s a lab, but in real-world scenarios VSAN, NFS and probably vMotion too should run across jumbo-frame-enabled NICs. vSAN should also ideally be backed by a 10Gb switch when using all flash, but the 1Gb switch in my case works just fine… it’s a lab… just tell yourself “I won’t buy a 10Gb switch… I don’t need a 10Gb switch…”
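Bumping the MTU past 1600 for NSX is a two-liner per host; a sketch assuming the storage vSwitch is vSwitch1 and the relevant VMkernel port is vmk1 (substitute your own names):

```
# raise the vSwitch MTU to accommodate VXLAN encapsulation
esxcli network vswitch standard set -v vSwitch1 -m 1600
# and match it on the VMkernel interface
esxcli network ip interface set -i vmk1 -m 1600
```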

So “back of a fag packet” logical & physical designs…





There are a few things to consider when building the NUC lab.

  1. Some versions of ESXi don’t support the NUC/USB NICs, so you’ll need to create a custom ISO image with the required VIBs. That MAY have changed with versions later than 6.5 U1 (I installed 6.5 U1); I’ve not kept in the loop with what does and doesn’t work. William Lam’s blog may be able to help you on that front.

My details to create a custom ISO for 6.5 U1 can be found here

  2. If you’re planning to use VSAN as your sole storage resource, then you’ll need to “bootstrap” VSAN to deploy your vCenter, as VSAN configuration is predominantly done in vCenter, and since you can’t deploy a vCenter without storage you get stuck in a chicken-and-egg scenario. William Lam has some great tech blogs on how to do this manually, but since vCenter 6.5 U1 there’s an option within the VCSA deployment GUI to configure VSAN as you deploy the VCSA. I’ve briefly blogged about that here.
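For reference, the manual bootstrap William Lam describes boils down to a couple of esxcli commands on the first host (the disk device names below are placeholders):

```
# create a single-node vSAN cluster on this host
esxcli vsan cluster new
# claim disks: -s is the cache SSD, -d the capacity device (placeholder names)
esxcli vsan storage add -s naa.cache_device_id -d naa.capacity_device_id
```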

The lab performs really well. I’ll throw up a benchmarking blog at some point, but for now I hope you found the above useful!

Pizza box mistake…

Since leaving EMC, and subsequently losing access to a lab, I had been mulling over a home lab for some time. Ultimately, what I wanted to do was keep upfront costs down. I built a 3-node DL360 G6 vSAN lab with 3 x 10K SAS HDs and a 60GB SATA SSD per node, 24GB RAM per node and 2 x Intel Xeon procs. It didn’t perform too shabbily, and all for a few hundred quid; a pretty good buy, or so I thought!

I’m not stupid (honest); I’d read horror stories about large electricity bills and thought “meh, how bad can it be?”. Luckily Scottish Power (my energy provider) are on hand with their year-on-year electricity comparison graph to show just how bad it can be: from 15.05kWh on average (over 92 days) to 27.03kWh on average (over a shorter 78 days).
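Out of curiosity I turned those figures into a yearly number; a quick back-of-the-envelope calculation (the unit price is an assumption, plug in your own tariff):

```python
# Rough annual cost of the lab's extra electricity, from the averages above.
baseline_kwh_per_day = 15.05  # daily average before the lab (over 92 days)
lab_kwh_per_day = 27.03       # daily average with the lab running (over 78 days)
price_per_kwh = 0.15          # GBP per kWh -- an assumed tariff, not my real one

extra_kwh_per_day = lab_kwh_per_day - baseline_kwh_per_day
annual_cost = extra_kwh_per_day * 365 * price_per_kwh

print(f"Extra usage: {extra_kwh_per_day:.2f} kWh/day")
print(f"Estimated annual cost: £{annual_cost:.2f}")
```

At an assumed 15p/kWh that’s in the region of £650 a year, which rather makes the point of this post.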

My capex saving is now an opex cost…

Take my advice, avoid the pizza boxes… learn from my mistake, like UK Top Gear, I have been ambitious… but rubbish!

Now to start planning my next lab…

By the way, I have three DL360 G6s going if anyone wants to buy them?