In a previous post (https://enterpriseadmins.org/blog/virtualization/nested-vmware-workstation-lab-using-linked-clones/) I mentioned a nested ‘smash lab’ using VMware Workstation. This post will focus on a couple of the component VMs: lab-esxi-02 and lab-esxi-03, which are nested ESXi 7.0.3 and ESXi 8.0.0 hosts, each containing a corresponding vCenter Server Appliance.
These two nested ESXi hosts differ only in the version of ESXi installed. Each has 2 vCPU, 20GB RAM, a 16GB SCSI 0:0 device (for the ESXi install) and a 100GB SCSI 1:0 device (for a VMFS datastore). I decided to install ESXi manually so that I could choose specific builds: I installed one patch prior to the latest available, so that if I ever needed to attempt an upgrade, one would be available. Other than the default next > next > finish installs, I made only two changes to these hosts:
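For reference, the relevant portion of each nested host's configuration might look like the following .vmx fragment. This is a sketch, not a dump from my lab: the file names are illustrative, and the sizes come from the specs above. The `vhv.enable` flag is the setting that exposes hardware virtualization to the guest, which nested ESXi requires (more on that below).

```
numvcpus = "2"
memsize = "20480"                        # 20GB RAM
vhv.enable = "TRUE"                      # expose VT-x/EPT (or AMD-V/RVI) to the guest
scsi0:0.fileName = "lab-esxi-03.vmdk"    # 16GB disk for the ESXi install
scsi1:0.fileName = "lab-esxi-03_1.vmdk"  # 100GB disk for the VMFS datastore
```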
- Configured networking from the DCUI. This involved setting the IP address (e.g. 172.16.10.3, where the last octet matches the host name), as well as setting the default gateway and DNS server IP to 172.16.10.1, which is the lab side of the domain controller.
- Created a VMFS datastore named `local-hdd` using the 100GB SCSI 1:0 device. I could have automated this, but since it was a super simple task I decided to just knock it out in the UI.
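If I ever decide to script these two changes instead of using the DCUI and host client, something like the following sketch should cover them. These commands run on the ESXi host itself; the `vmk0` interface name, the netmask, and the disk device path are assumptions for this lab, not values from my actual hosts.

```shell
# Static management network settings (addresses from this lab's scheme)
esxcli network ip interface ipv4 set -i vmk0 -I 172.16.10.3 -N 255.255.255.0 -t static
esxcli network ip route ipv4 add -g 172.16.10.1 -n default
esxcli network ip dns server add --server 172.16.10.1

# Create the local-hdd VMFS datastore on the second disk.
# DISK is a placeholder -- find the real device with 'esxcli storage core device list',
# and note that a VMFS partition (here partition 1) must exist first (see partedUtil).
DISK=/vmfs/devices/disks/mpx.vmhba0:C0:T1:L0
vmkfstools -C vmfs6 -S local-hdd "${DISK}:1"
```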
Once ESXi was installed, I deployed a corresponding vCenter Server Appliance to each local datastore. For this I first created DNS records for the appliances with their associated IP addresses, made a copy of the `<cd-rom>\vcsa-cli-installer\templates\install\embedded_vCSA_on_ESXi.json` file, specified values for hostname, datastore name, etc., then deployed from the command line with `.\vcsa-deploy install C:\tmp\lab-vcsa-13.json --accept-eula --acknowledge-ceip --no-ssl-certificate-verification` and waited for the process to complete. I ran into two challenges with this. First, running a nested 64-bit guest requires that “Virtualize Intel VT-x/EPT or AMD-V/RVI” be selected in the Workstation VM's processor configuration; Credential Guard was enabled on my system and had to be disabled to allow the VCSA to start. The other challenge I encountered was that by default
`ntp_servers` is defined as `time.nist.gov` in the JSON configuration file. I didn't change this, but deployed the VCSA while my laptop could not reach the internet. The VCSA startup failed, and reviewing the log files showed all entries with a 1970-01-01 (epoch) timestamp. I remembered that NTP was set to an internet address, so I tried the deployment again after updating the JSON file to point at time.example.org, a CNAME for the lab DNS server lab-mgmt-01.example.org, and this time it completed without error.
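That template fix can be scripted as part of the deployment prep. A minimal sketch follows, using a trimmed stand-in for the real template (the actual embedded_vCSA_on_ESXi.json has many more keys; time.example.org is the lab CNAME described above):

```shell
# Trimmed stand-in for the vcsa-cli-installer template (illustrative structure)
cat > lab-vcsa-13.json <<'EOF'
{
  "new_vcsa": {
    "appliance": { "name": "lab-vcsa-13" },
    "os": { "ntp_servers": "time.nist.gov" }
  }
}
EOF

# Point NTP at a source the appliance can actually reach from the lab network,
# so first boot does not stall with an unsynchronized (1970-01-01) clock.
sed -i 's/time\.nist\.gov/time.example.org/' lab-vcsa-13.json
```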
Once the VCSA was running, I debated whether I should create inventory (such as a new Datacenter object containing a cluster with my nested ESXi host) but decided to leave the VCSA completely unconfigured. This lets me address the configuration each time the environment is deployed. In the past I've created a minimal config, so time will tell which route is best; if a minimal configuration proves more practical, I can make that change and create a new snapshot.
With my ESXi host and VCSA deployed, I powered down the VCSA and then the ESXi host. Once the Workstation ESXi VM was powered off, I created a new snapshot so this could be used as a parent virtual machine for future linked clones.