Simplified Home Lab Domain Name Resolution

In previous posts we discussed considerations for building a Home Lab and for deciding on network ranges to use.  Once we know the networks we will be using, it's time to start thinking about the services that will be required.  For me the first service is always Domain Name System (DNS).  This is a foundational building block for many systems that expect forward and reverse lookups to be working before you deploy them (vCenter Server, for example).
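
As a quick sanity check before deploying something like vCenter Server, it is worth confirming that both directions resolve against the lab DNS server. This is a minimal sketch; the names and addresses below are hypothetical placeholders:

# forward lookup: the name should return the planned IP address
nslookup vcenter.lab.example.com 192.168.45.5

# reverse lookup: the IP address should return the same name
nslookup 192.168.45.10 192.168.45.5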

It's somewhat common to see folks use a name in the reserved .local top-level domain, like corp.local.  I'm not a fan of this practice, as RFC 6762 spells out special considerations for this domain and states that it should be used for link-local names only.

I've also seen folks use domains with a common top-level domain, like bwuchlab.com.  As of this writing that domain is available for sale, so there wouldn't be any immediate conflict if I started using it internally without registering it.  I still feel this is a bad practice: someone else could buy that domain, I'd have no control over it, and I might have documentation or screenshots that include that name.  I prefer to use a name that I own.  For example, my internal DNS domain is lab.enterpriseadmins.org.  I already own the domain enterpriseadmins.org and control who gets to use the subdomain LAB.  If I wanted to create an external DNS host name like something.lab.enterpriseadmins.org, I could do that with no problem and could even get an SSL certificate issued for that name if needed.  I wouldn't be able to do this with a domain that I don't own.
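
The original post doesn't say which DNS server software is in use, but as a minimal sketch, carving out an internal-only subdomain on a BIND server might look like the excerpt below (the file path is an illustrative assumption):

// named.conf excerpt: serve lab.enterpriseadmins.org internally
zone "lab.enterpriseadmins.org" {
    type master;
    file "/etc/bind/db.lab.enterpriseadmins.org";   // zone data file (illustrative path)
};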

If you don't need or want to own a domain name, another good option is to use a reserved name.  For example, RFC 2606 reserves the names example.com, example.net, and example.org for documentation purposes.  You could use them in your internal network without prior coordination with the Internet Assigned Numbers Authority (IANA).  This works well because you can safely include those names in screenshots and documentation, for example:

Screenshot of command prompt, ping results for the name syslog.example.com returning a valid response from IP 192.168.45.80.

Once we have selected the DNS names that we are going to use, we also need to figure out a way to get our clients to point at our DNS server for resolution. I don't like pointing clients directly at my lab infrastructure for name resolution; just like I mentioned in a previous post about Home Lab Networking, I don't want a lab problem to prevent the TV from working. There are a couple of options in this space, and I've included a few below with links to more information.
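
As one illustration of the general idea (not necessarily one of the options linked from the original post), a home router running dnsmasq can conditionally forward only lab-domain queries to the lab DNS server, so everything else keeps working even if the lab is down. The lab DNS server address below is a hypothetical placeholder:

# /etc/dnsmasq.conf excerpt: send only lab zones to the lab DNS server
server=/lab.enterpriseadmins.org/192.168.10.5
# matching reverse zone for 192.168.10.0/24
server=/10.168.192.in-addr.arpa/192.168.10.5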

Home Lab Networking

When building a lab network, it is helpful to put in a little time upfront and weigh your learning objectives against the complexity you are willing to add to the system.  If we are looking to gain a bit of experience with products like vSphere or vRealize Operations, it's possible we could deploy a handful of components into our existing home network, assign them static IP addresses and be done, with no complexity added at all.  However, if we plan to dive deep into NSX-T, build overlay networks, configure routing, and BGP peer with our physical network, we are going to be adding a fair bit of complexity.  I like to start somewhere in the middle, where there is some flexibility to get complicated if needed, but things stay simple enough that I don't spend all my time managing networking.  I also like to logically separate the 'home' network from the 'lab' network.  The last thing I want is to make a DHCP or name resolution change in my lab and leave the family unable to watch TV.

I've seen a lot of folks build logically separated networks for their labs.  I've also seen others do things I'd consider crazy.  Sure, you could make your internal network 1.1.1.0/24 so there are very few characters to type, but then don't expect to be able to reach Cloudflare's DNS service at 1.1.1.1.  Instead of re-using internet-routable address blocks, RFC 1918 dedicates three ranges of IP addresses for private/internal use that you can pick from.  They are:

  • 10.0.0.0/8 [10.0.0.0 through 10.255.255.255]
  • 172.16.0.0/12 [172.16.0.0 through 172.31.255.255]
  • 192.168.0.0/16 [192.168.0.0 through 192.168.255.255]

If you connect to a VPN for a large enterprise, it is very common for them to assign an IP from the 10.0.0.0/8 or 172.16.0.0/12 blocks.  When this happens, the VPN-provided route statements may prevent you from accessing those same IP ranges if they are in use in your home network.  Because of this, I've historically leaned towards the 192.168.0.0/16 range for home lab purposes.  This still gives you the ability to segment/subnet into smaller networks.  For example, I have a VLAN 10 that maps to 192.168.10.0/24 and is only used for temporary lab VMs.  I have another VLAN 32 that maps to 192.168.32.0/24 and is used for a handful of separate lab gear representing a disaster recovery site.  All in, I have about a dozen networks with 24-bit masks for various purposes, some routed and others not.  Some of these exist for very valid reasons, like logically separating storage traffic from guest traffic.  In other cases there is a bit of unnecessary separation, for example between the production management workload VMs (things like vROps and Log Insight) and virtual desktops.  At the scale of my lab this separation is not really required; I maintain the extra complexity simply to mirror what you'd find in a real production environment.
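
Keeping the VLAN ID aligned with the third octet makes this kind of plan easy to remember. A sketch of such a plan is shown below; only VLAN 10 and VLAN 32 come from my environment, the other rows are illustrative placeholders:

VLAN 10  ->  192.168.10.0/24   temporary lab VMs
VLAN 20  ->  192.168.20.0/24   storage traffic (illustrative)
VLAN 30  ->  192.168.30.0/24   management workloads (illustrative)
VLAN 32  ->  192.168.32.0/24   disaster recovery site lab gear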

Once we have selected the IP address range(s) we will be using, it's time to start considering name resolution.  I have some thoughts on that as well, so be on the lookout for a future post where we will dive into that.

Home Lab Design Considerations

I've recently had several discussions with a friend about building a home lab. We covered various aspects, including defining requirements, creating networks, and figuring out which DNS domain names to use.  In the next few blog posts, we will document lessons learned.

To get started we had to articulate our goal: to learn VMware SDDC products, gain hands-on experience, and prepare for certification exams.  The second consideration was how the lab was going to be used.  For example, did we want an environment that could be quickly recreated for a specific purpose, or did we need an environment at the other end of the spectrum, with a long life and persistence?  This longer-lived environment would likely require that we perform upgrades, troubleshoot when things broke, and build up sample data for systems like vRealize Operations or Log Insight.  For the first use case, an ephemeral, non-persistent environment, a solution like the VMware Hands On Labs would be perfect.  It would have no cost, be easy to instantiate, and could be torn down and recreated with just a few clicks.  For a more persistent environment, we would likely need to acquire our own hardware, which would add cost and force us to understand the system requirements of all the solutions we would like to deploy in advance and go through a capacity planning exercise.  Through our discussions we decided that a more persistent environment would be necessary.

The next step was to document hardware requirements.  I would recommend starting by creating a list of all of the workloads that you would like to run in the lab.  Interesting data points to capture for each workload are the number of virtual CPUs required, the amount of RAM needed, and the total disk capacity.  In my home lab experience the first place you'll run low is RAM, so I would take the total you think you are going to need, double that number, and start there.  With rough hardware requirements in hand, the next step is to find a solution that meets budget constraints.  There are a lot of options in this space, from large, loud, and power-hungry refurbished server-grade hardware on one end down to small, quiet, and power-saving micro devices like the Intel NUC.  There are plenty of sites where you can review the hardware other folks are using, and it's worth checking out the collection of HomeLab BOMs at https://github.com/lamw/homelab.
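
As a worked example of that sizing exercise (the workloads and numbers below are illustrative placeholders, not official sizing guidance):

Workload                   vCPU   RAM      Disk
vCenter Server (tiny)        2    12 GB    500 GB
Nested ESXi hosts (x2)       8    32 GB    200 GB
vRealize Operations          4    16 GB    300 GB
DNS / utility VM             1     2 GB     20 GB
------------------------------------------------
Total                       15    62 GB   1020 GB

Doubling the RAM estimate gives roughly 124 GB, so hardware with 128 GB of RAM would be a comfortable starting point in this hypothetical case.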

With hardware ordered and on the way, the next step will be to talk about networking. Please look for the next post where we will discuss getting started with home lab networking.

My completed (for now) Home Office improvements

I’ve recently made several improvements to my home office / desk setup. Several people have asked for details on the parts/pieces I used to complete this, so I wanted to take a moment and document them here.

I started with an Autonomous SmartDesk 2 – Home Office motorized sit/stand desk. There is a premium version of this desk that appears to have more range (can go lower/higher) than the home version, but for my needs the home office version seemed to have very similar specs and was slightly less expensive.

I already had a very good desk chair (the Autonomous ErgoChair 2), but also added a Topo Comfort Mat for when I’m standing. This was a touch more than I planned to spend, but came highly recommended and initial impressions are very good.

I’m using two different monitor arms, one from my previous setup and a new ErGear Adjustable Gas Spring mount which was very inexpensive (only $25) and supported the weight required by my monitor (the specs for the monitor arm show it supports up to 26.5 pounds).

Lighting was the toughest problem to solve. I wanted to have a primary light source that was centered in front of me, to reduce the shadow cast on one side of my face during video calls, but I also wanted more light than most round camera lights provide. I also preferred an indirect light source that would bounce off the ceiling first, yet not be blocked by my monitor when in a standing position. My first thought was a very tall floor lamp, but I didn't really want a base that I would be kicking right in front of my feet, and I couldn't find anything tall enough anyway. I found some desk-mounted lamps that looked like they might work, but I couldn't find any with the right height to reach over my monitor. What I settled on was a fairly custom solution. I purchased a Twinkle Star Floor lamp with Remote Control, but disassembled/rewired part of it and mounted it to an unused monitor arm I had lying around using some bolts and rubber wire clamps. The end result is a desk-mounted lamp that's the perfect height (and adjustable), and it also has a remote control that can adjust light temperature and brightness.

I hope you find this info useful!

Skyline Health Diagnostics custom SSL certificates

I recently read about the new Skyline Health Diagnostics tool. Here is a short quote that captures what this tool is all about.

VMware Skyline Health Diagnostics for vSphere is a self-service tool to detect issues using log bundles and suggest the KB remediate the issue. vSphere administrators can use this tool for troubleshooting issues before contacting the VMware Support. The tool is available to you free of cost.

https://blogs.vmware.com/vsphere/2020/09/introducing-vmware-skyline-health-diagnostic-tool.html

Installation was very straightforward and is captured in the official documentation here. I deviated slightly and opted to use the Photon 3.0 OVA instead of installing Photon from the ISO image. If you go down this route, you may want to grow your root filesystem beyond the 16GB provided by the initial OVA; if you need help with that, check out this blog post.
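
The general shape of that resize (a rough sketch only; the device and partition names are assumptions, so verify with lsblk and fdisk -l before changing anything) is to grow the virtual disk in vSphere, rescan it inside the guest, grow the root partition, and then grow the ext4 filesystem:

# make the guest see the larger virtual disk
echo 1 > /sys/class/block/sda/device/rescan
# confirm the new disk size and identify the root partition (assumed /dev/sda3 here)
fdisk -l /dev/sda
# grow the root partition with fdisk or parted, then grow the ext4 filesystem to match
resize2fs /dev/sda3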

After setting up the appliance, I decided to replace the SSL certificates for the web site. This is described in the documentation as well, but I made a couple of changes that I’ll outline below.

The first change I made was to the SSL conf file. The official documentation has you edit the entries in the [req_distinguished_name] section so that the entries for commonName and DNS.1 are correct. In addition to doing this, I added DNS.2 and DNS.3 entries to capture the short name of the server as well as a friendly alias name.
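
As a sketch of what that ends up looking like (follow the layout of the conf file shipped with the tool for the exact section; the host names below reuse the values referenced later in this post):

# subject alternative name entries in the SSL conf file (illustrative excerpt)
DNS.1 = servername.lab.enterpriseadmins.org   # fully qualified name
DNS.2 = servername                            # short host name
DNS.3 = skylinehealth.example.com             # friendly alias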

After applying the certificate, I was able to access the interface with the FQDN of the server, but not with the alias information or IP address. After doing a bit of research I found that adding a server_name entry to the nginx configuration would allow me to specify additional names/alias information. I also noticed that there wasn’t an automatic redirect from http to https, which I wanted to add. To do this I edited the nginx configuration with vi /etc/nginx/nginx.conf and added the following information:

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

The above section tells nginx to listen on port 80, accept any server name that comes in (the underscore acts as a catch-all), then issue an HTTP 301 redirect to HTTPS using the original host name and URL that were provided.

I then edited the existing server 8443/ssl section and added the following below the listen 8443 ssl; line:

server_name servername.lab.enterpriseadmins.org servername 192.168.10.100 skylinehealth.example.com;

The above line instructs nginx to listen for incoming requests for my FQDN, short server name, IP address, or friendly alias. I could have used the catch-all underscore as shown in the HTTP example, but I wanted to limit SSL requests to only values included in the SAN certificate to prevent possible certificate warnings.
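
For context, the placement within the ssl server block ends up roughly like this (a sketch only; the other directives from the shipped configuration are omitted and remain unchanged):

server {
    listen 8443 ssl;
    server_name servername.lab.enterpriseadmins.org servername 192.168.10.100 skylinehealth.example.com;
    # ... existing ssl_certificate, ssl_certificate_key, and location directives stay as shipped ...
}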

The screenshot in the original post shows the relevant section of the /etc/nginx/nginx.conf file, with the added entries highlighted in green to the left.

Additionally, I had to update firewall rules to allow incoming HTTP requests. I used sample entries described here: https://www.digitalocean.com/community/tutorials/iptables-essentials-common-firewall-rules-and-commands to make this happen, specifically running the following two commands:

# allow new and established inbound connections to TCP port 80 (HTTP)
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# allow established outbound responses from TCP port 80
iptables -A OUTPUT -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT

With these minor changes I’m able to access Skyline Health Diagnostics using FQDN, IP Address, and Alias using HTTPS. If I forget and accidentally try to connect over HTTP, nginx will help me out and do the necessary redirect.
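
A quick way to confirm the redirect is working is a HEAD request with curl (the host name below reuses the example from above); given the return 301 directive shown earlier, the response should be a 301 with a Location header pointing at the HTTPS URL:

curl -I http://servername.lab.enterpriseadmins.org
# expected: HTTP/1.1 301 Moved Permanently
# expected: Location: https://servername.lab.enterpriseadmins.org/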