My completed (for now) Home Office improvements

I’ve recently made several improvements to my home office / desk setup. Several people have asked for details on the parts/pieces I used to complete this, so I wanted to take a moment and document them here.

I started with an Autonomous SmartDesk 2 – Home Office motorized sit/stand desk. There is a premium version of this desk that appears to have more range (can go lower/higher) than the home version, but for my needs the home office version seemed to have very similar specs and was slightly less expensive.

I already had a very good desk chair (the Autonomous ErgoChair 2), but also added a Topo Comfort Mat for when I’m standing. This was a touch more than I planned to spend, but came highly recommended and initial impressions are very good.

I’m using two different monitor arms, one from my previous setup and a new ErGear Adjustable Gas Spring mount which was very inexpensive (only $25) and supported the weight required by my monitor (the specs for the monitor arm show it supports up to 26.5 pounds).

Lighting was the toughest problem to solve. I wanted to have a primary light source that was centered in front of me, to reduce shade cast from one side of my face for video calls, but also desired more light than most round / camera lights provide. I also preferred an indirect light source that would bounce off the ceiling first, but not be blocked by my monitor when in a standing position. My first thought was a very tall floor lamp, but I didn’t really want a base that I would be kicking right in front of my feet, and couldn’t find anything tall enough anyway. I found some desk mounted lamps that looked like they might work, but I couldn’t find any with the right height to be able to reach over my monitor. What I settled on was a fairly custom solution. I purchased a Twinkle Star Floor Lamp with Remote Control, but disassembled / rewired part of it and mounted it to an unused monitor arm I had lying around using some bolts and rubber wire clamps. The end result is a desk mounted lamp that’s the perfect height (and adjustable), but also has a remote control that can adjust light temperature and brightness.

I hope you find this info useful!

Clone Template to new vCenter

On a recent TAM Lab session (TAM Lab 082) I covered several methods for managing vSphere templates. At the end of the presentation we had a brief bonus method that showed using PowerCLI to clone a template to a different vCenter. This could be used once you’ve updated your primary copy of a template in vCenter1 and want to make that template available in a different environment. The script used in the demo is available below.

$creds = Get-Credential

# Connect to both vCenter Servers (add -Server <vCenter FQDN> for each environment)
$sourceVC = Connect-VIServer -Credential $creds
$destVC   = Connect-VIServer -Credential $creds

$destTemplateName = 'template-tinycore11.1_tamlabtest'
$splatNewVM = @{
  Name     = $destTemplateName
  Template = 'template-tinycore11.1'
  VMHost   = ''
}
$vm = New-VM @splatNewVM -Server $sourceVC

$splatMoveVM = @{
  VM                = $vm
  NetworkAdapter    = (Get-NetworkAdapter -VM $vm -Server $sourceVC)
  PortGroup         = (Get-VirtualPortGroup -Name 'VLAN10' -Server $destVC)
  Destination       = (Get-VMHost '' -Server $destVC)
  Datastore         = (Get-Datastore 'test-esx-33_nvme' -Server $destVC)
  InventoryLocation = (Get-Folder 'vc1_TAMLab082' -Server $destVC)
}
Move-VM @splatMoveVM

Get-VM $destTemplateName -Server $destVC | 
Set-VM -Name 'template-tinycore11.1-bonus' -ToTemplate -Confirm:$false

This script uses splatting to improve readability; you can read more in the PowerShell about_Splatting help topic. There are a couple of basic components. First we connect to both vCenters, in this case using the same credentials. We then create a new VM from template on the source side, move that VM to the destination side, and finally rename the destination VM and convert it to a template. We did this in multiple steps because Move-VM has additional parameters to assist with changing the destination network adapter to the correct portgroups and such.

Skyline Health Diagnostics custom SSL certificates

I recently read about the new Skyline Health Diagnostics tool. Here is a short quote that captures what this tool is all about.

VMware Skyline Health Diagnostics for vSphere is a self-service tool to detect issues using log bundles and suggest the KB remediate the issue. vSphere administrators can use this tool for troubleshooting issues before contacting the VMware Support. The tool is available to you free of cost.

Installation was very straightforward and is captured in the official documentation here. I deviated slightly, and opted to use the Photon 3.0 OVA instead of installing Photon from the ISO image. If you go down this route, you may want to grow your root filesystem beyond the 16GB provided by the initial OVA. If you need help with that, check out this blog post.

After setting up the appliance, I decided to replace the SSL certificates for the web site. This is described in the documentation as well, but I made a couple of changes that I’ll outline below.

The first change I made was to the SSL conf file. The official documentation has you edit the entries in the [req_distinguished_name] section so that the entries for commonName and DNS.1 are correct. In addition to doing this, I added a DNS.2 and DNS.3 option to capture the short name of the server as well as a friendly / alias name.
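As an illustration, the relevant portion of the SSL conf might end up looking like the following. The names here (shd.lab.local, shd, diagnostics.lab.local) are hypothetical placeholders, not values from the original post, and the DNS.n entries typically live in an [alt_names] section:

```
# Hypothetical example values -- substitute your own server names
[req_distinguished_name]
commonName = shd.lab.local

[alt_names]
DNS.1 = shd.lab.local          # FQDN of the server
DNS.2 = shd                    # short name of the server
DNS.3 = diagnostics.lab.local  # friendly / alias name
```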

After applying the certificate, I was able to access the interface with the FQDN of the server, but not with the alias information or IP address. After doing a bit of research I found that adding a server_name entry to the nginx configuration would allow me to specify additional names/alias information. I also noticed that there wasn’t an automatic redirect from http to https, which I wanted to add. To do this I edited the nginx configuration with vi /etc/nginx/nginx.conf and added the following information:

server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
}

The section above tells nginx to listen on port 80, accept any server name that comes in, then issue an HTTP 301 redirect to the HTTPS port, preserving the original host name and URL that were provided.

I then edited the existing server 8443/ssl section and added the following below the listen 8443 ssl; line: server_name servername;

The above line instructs nginx to listen for incoming requests for my FQDN, short servername, IP address, or friendly alias. I could have used the wildcard symbol as shown in the http example, but I wanted to limit SSL requests to only values which were included in the SAN certificate to prevent possible certificate warnings.
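With hypothetical values filled in, that server_name line might look like the example below. These names are placeholders, not values from the original setup; each one would need to appear in the certificate’s SAN list:

```
# Hypothetical names -- use the values from your SAN certificate
server_name shd.lab.local shd 192.168.10.50 diagnostics.lab.local;
```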

The above screenshot shows the relevant section of the /etc/nginx/nginx.conf file. The green lines to the left denote the entries which were added.

Additionally, I had to update firewall rules to allow incoming HTTP requests. I used sample entries described here to make this happen, specifically running the following two commands:

iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT

With these minor changes I’m able to access Skyline Health Diagnostics using FQDN, IP Address, and Alias using HTTPS. If I forget and accidentally try to connect over HTTP, nginx will help me out and do the necessary redirect.

Photon OS 3.0 Grow Filesystem

I recently had a need to grow the root filesystem on a Photon OS 3.0 virtual machine. This is something I’ve done before with Ubuntu Linux (blog post here), but Photon OS does not include the growpart command out of the box, so that process did not work.

After resizing the virtual machine disk, I rescanned from within the running OS (I could have also rebooted, but who wants to do that?) by running this command:

echo 1 > /sys/class/block/sda/device/rescan

I then installed the parted utility with the command tdnf install parted and started it by typing parted at the shell prompt. Once at the parted prompt, I typed print to show the disk information. A warning appeared letting me know there was extra space available for use and asking if I wanted to fix the GPT to use all the space, which I accepted by entering Fix.

From here I could see that disk /dev/sda has a size of 26.8GB, but the last partition (#3) ends at 21.5GB. The last partition happens to be the one I want to grow, so I typed resizepart 3. I was notified that the partition was in use, which I confirmed with Yes. When asked where the new partition should end, I entered 26.8GB, the value previously returned by the print command.

With the partition table issues resolved, I entered quit to exit the parted utility.
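Put together, the interactive session looks roughly like this (device, partition number, and sizes taken from the example above; the exact prompt wording may differ between parted versions):

```
# parted /dev/sda
(parted) print
Fix/Ignore? Fix
(parted) resizepart 3
Yes/No? Yes
End?  [21.5GB]? 26.8GB
(parted) quit
```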

Back at the shell, I ran resize2fs /dev/sda3 to resize the filesystem to consume all of the space. I confirmed that the filesystem has the extra free space with the command df -h.

After testing this out, I realized the environment where I really needed to make the change did not have internet access, even through a proxy. Because of this I was unable to install parted with tdnf. Not to worry, we have two different workarounds for that.

First, we can install the parted RPM manually without tdnf. To do this we browse to the Photon repo here. Looking through the list of packages, we find the one starting with parted-*, in this case parted-3.2-7.ph3.x86_64.rpm. We download the file and copy it to the Photon VM (using WinSCP or the like). From the shell prompt we then run rpm --install /tmp/parted-3.2-7.ph3.x86_64.rpm to install the manually downloaded RPM. We can then run the rest of the steps as outlined previously.

Alternatively, if we don’t want to install parted in the guest and are okay with a brief outage, we could use a bootable CD image which contains partition management tools. One such tool that I’ve had good luck with is gparted. You can download the image here. This tool takes care of all the steps, both growing the partition and extending the filesystem.

Setting up Pi-hole DNS

At home I attempt to maintain two networks — a reliable home network with WiFi and internet access and a separate home lab where services may not be as reliable. These networks overlap at times (for example, the home lab still needs internet access) but I don’t want to end up in a situation where the lab needs to be working for watching something on Netflix.

One place where these two requirements sometimes conflict is around name resolution. I would like everything on my home network to be able to resolve my lab resources, but I don’t want to point everything at my lab DNS server.

The best solution I’ve found for this has been Pi-hole. This is a service that can run on a variety of platforms and provide network-wide ad blocking — and more.

I have two instances of this running in my lab — one runs on a Raspberry Pi (to be separate from my lab infrastructure) and another backup instance runs in an Ubuntu 20.04 VM.

Deploying Pi-hole in a VM

This couldn’t be easier. I started with an Ubuntu Server 20.04 VM (using the template from here & the deploy process from here). I then ran the easy installer method by piping the install script to bash using:

curl -sSL | bash

I followed the wizard and accepted the defaults. At the end of the process I changed the autogenerated admin password to something easier to remember. You can do this with the command:

pihole -a -p

Once this is complete, you can update DHCP scopes to use your Pi-hole IP addresses as the DNS server(s) for your network.

Customizing to resolve Lab domain names

Once clients are using Pi-hole to resolve DNS names, we can enable conditional forwarding to handle lab specific domains. In the Pi-hole web interface there is an option to enable this (under Settings > DNS > Advanced DNS settings), but it only supports a single domain name and target DNS server. For my lab, I have a couple additional names that need to be resolved, but this is still possible using configuration files.

From an SSH session to the Pi-hole DNS server, we can create a file using a command/path such as:

nano /etc/dnsmasq.d/05-custom.conf

In this file we will add server entries for any domain name we want to forward. Multiple entries should provide redundancy in the event one of our lab DNS servers is unavailable.
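For example, assuming a hypothetical lab domain of lab.local with two lab DNS servers at 192.168.10.5 and 192.168.10.6 (these values are placeholders, not from the original setup), the file might contain:

```
# Forward queries for the lab domain to the lab DNS servers.
# Listing both servers provides redundancy if one is unavailable.
server=/lab.local/192.168.10.5
server=/lab.local/192.168.10.6
```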


NTP while we are at it?

One other service that I run on this Pi-hole VM (and Raspberry Pi) is NTP. While not the main purpose of this post, I’ll list the commands below to enable an NTP Server on Ubuntu 20.04.

apt install chrony

Once chrony is installed, we can configure it by editing the configuration file:

nano /etc/chrony/chrony.conf

We can review which upstream NTP servers are used, but more importantly we need to define which systems can use this host as an NTP server. We do that by adding the following lines to the end of the configuration file:

# Define the subnets that can use this host as an NTP server

This will allow any device in the listed subnet to query us for NTP. You can list additional ‘allow’ lines as needed.
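For example, ‘allow’ directives for a hypothetical home network and lab network (placeholder subnets, substitute your own) would look like:

```
# Define the subnets that can use this host as an NTP server
allow 192.168.10.0/24
allow 192.168.20.0/24
```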

Once the configuration file is updated, we can restart the service to apply changes using:

systemctl restart chrony.service

We can now query the NTP server using a variety of tools. On Windows I prefer ntptool.exe. This should show output confirming that we were successfully able to query the new time server.

From the NTP server we can list recently checked in clients using the command:

sudo chronyc clients


Now we have an appliance that can provide DNS and NTP for our home network, and will forward any lab specific DNS queries to lab DNS servers. If those lab DNS servers are unavailable it doesn’t impact our home network.