Horizon Virtual Desktop – Non Persistent Ubuntu 20.04

I recently built a new Ubuntu Desktop 20.04 machine to be used as an instant clone golden image for a Horizon 7.13 environment. I kept some notes on the steps I followed on my desktop, but after seeing a funny tweet this week (https://twitter.com/DennisCode/status/1475343774695886849), I decided I should share this document. This blog post will outline those steps to create a Ubuntu 20.04 Desktop for use as a Horizon 7.13 non persistent pool.

The first step was creating a virtual machine.

  • VM Name: vdi_g01_ubuntu-2004 — this is my naming convention for VDI golden images. The VDI is pretty obvious, the g01 is for golden image #1 and then the last part is the OS name.
  • Compatible with: ESXi 7.0 U2 and later (vmx-19) — my test hosts for VDI are running the latest available 7.0 release. I typically only visit compatibility levels when VMs are initially created. Since I’m able to run this version, and knowing 6.5/6.7 reach end of support in just a few months, I decided to go with the latest available. You’ll want to specify something that is capable of running on the versions of ESXi available in your environment.
  • Guest OS Family: Linux
  • Guest OS Version: Ubuntu Linux (64-bit)
  • Virtual Hardware Configuration
    • vCPU: 1
    • Memory: 2GB
    • New Hard disk: 24GB, thin provisioned
    • SCSI Controller: LSI Logic Parallel
    • New Network: — this is the port group I use for VDI desktops
    • Video Card > Number of displays: 2
    • Video Card > Total video memory: 32 MB
  • VM Options

I then selected ‘launch remote console’ to open the VMware Remote Console. I find this is the easiest way to install operating systems from an ISO image. During the install I skipped the file check (just to save time), selected ‘Install Ubuntu’, and accepted defaults. For my name I entered template-admin and for computer name I entered vdig01ubu2004.lab.enterpriseadmins.org. I entered a good password, selected ‘require password to log in’, and didn’t enable Active Directory (we’ll do that later).

After the final reboot, we’ll log in as our template-admin user, launch the terminal, and install sshd so that we can use it for the rest of the configuration. We’ll do this by entering sudo apt install openssh-server -y. With that complete we can find our IP address, either from the vCenter Server > VM > Summary tab or by typing ip addr in that terminal session. We can then ssh into the system for the majority of the remaining configuration items.

The first few changes I made were generic system wide changes.

# apply any system updates & remove any obsolete packages
sudo apt update && sudo apt upgrade -y
sudo apt clean && sudo apt -y autoremove --purge

# Prevent ctrl-alt-del from causing a reboot
sudo systemctl mask ctrl-alt-del.target

# Disable auto-suspend by masking the sleep/suspend targets
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

# refresh systemctl
sudo systemctl daemon-reload

# Disable auto-updates
sudo sed -i 's/APT::Periodic::Update-Package-Lists "1"/APT::Periodic::Update-Package-Lists "0"/' /etc/apt/apt.conf.d/20auto-upgrades
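When editing config files in place with sed -i like this, I find it helpful to rehearse the expression against a scratch copy first. A minimal sketch (the scratch file below is just for illustration):

```shell
# rehearse the sed expression on a scratch copy before touching the real file
scratch=$(mktemp)
echo 'APT::Periodic::Update-Package-Lists "1";' > "$scratch"
sed -i 's/APT::Periodic::Update-Package-Lists "1"/APT::Periodic::Update-Package-Lists "0"/' "$scratch"
cat "$scratch"   # prints: APT::Periodic::Update-Package-Lists "0";
rm -f "$scratch"
```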

# Disable LTS Upgrade MOTD
sudo sed -i 's/^Prompt=.*/Prompt=never/' /etc/update-manager/release-upgrades

# Remove some of the initial setup packages
sudo apt remove --purge gnome-initial-setup gnome-online-accounts update-manager-core -y

# setup script to generate new ssh keys at boot
cat << 'EOL' | sudo tee /etc/rc.local
#!/bin/sh -e
# rc.local
# regenerate ssh host keys if missing; modern releases no longer create a
# DSA key, so test for the RSA key
test -f /etc/ssh/ssh_host_rsa_key || dpkg-reconfigure openssh-server
exit 0
EOL

# make sure the script is executable
sudo chmod +x /etc/rc.local
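The test -f … || pattern in that script only runs the regeneration when the key file is missing. The same guard, demonstrated against a scratch file instead of the real host key:

```shell
# the "run only when the file is missing" guard from rc.local, shown
# against a scratch file instead of the real ssh host key
tmpdir=$(mktemp -d)
marker="$tmpdir/ssh_host_rsa_key"
regen() { echo "regenerating keys"; touch "$marker"; }
test -f "$marker" || regen   # first boot: file missing, regen runs
test -f "$marker" || regen   # later boots: file present, regen skipped
rm -rf "$tmpdir"
```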

I then made a few changes related to authentication and Active Directory membership. This is likely a questionable security practice, as we are letting all members of Domain Users not only log in but also run commands with sudo. For this lab desktop it works the way I like, but you may want to make a few changes for your environment. In the past, I’ve used pbis-open for Active Directory authentication, but it appears that project has been unmaintained since 2019. For this example, I will switch to sssd, as it is called out as supported by the Horizon documentation and is recommended for ease of deployment (https://docs.vmware.com/en/VMware-Horizon/2111/linux-desktops-setup/GUID-D8E3A4AA-83E9-46A4-8BBA-824027146E93.html).

# install required components
sudo apt install sssd sssd-tools realmd libnss-sss adcli samba-common-bin -y

# join the domain
echo 'VMware1!' | sudo realm join lab.enterpriseadmins.org --user svc-windowsjoin --computer-ou='OU=EUC VDI Linux Parents,OU=LAB Computers,DC=lab,DC=enterpriseadmins,DC=org' --os-name='other' --verbose

# Tell pam.d to create the home directory at login (append with tee -a;
# overwriting common-session would wipe the existing PAM session configuration)
echo "session required pam_mkhomedir.so skel=/etc/skel umask=0022" | sudo tee -a /etc/pam.d/common-session

# Since we only have the one domain, lets use short names for most things
sudo sed -i 's/use_fully_qualified_names = True/use_fully_qualified_names = False/g' /etc/sssd/sssd.conf

echo '%domain\ users  ALL=(ALL) ALL,!ROOTONLY' | sudo tee -a /etc/sudoers
# Note that '\ ' escapes the space between 'domain' and 'users'. In PBIS-Open,
# spaces in group names were denoted with ^; with sssd the space is simply
# backslash-escaped (example: '\ ' without the quotes).

# Note: I did need to delegate extra permissions to this service account compared to PBIS Open.  I followed this guide:
# https://www.computertechblog.com/active-directory-permissions-required-to-join-linux-and-windows-computers-to-a-domain/

I also like to suppress some of the initial popup and configuration steps. I use these desktops for testing purposes and they are disposed of at logoff; there isn’t much value in getting comfy and setting everything up perfectly as a user, since those settings are lost at logoff. There were a couple of different options listed at https://askubuntu.com/questions/1028822/disable-the-new-ubuntu-18-04-welcome-screen and I applied them all, as they seemed somewhat hit or miss in my testing.

sudo mkdir /etc/skel/.config
sudo touch /etc/skel/.config/gnome-initial-setup-done
sudo sed -i '/\[daemon\]/a InitialSetupEnable=false' /etc/gdm3/custom.conf

I like to install a few packages to do administrative type activities. These include:

  • firefox — a web browser (installed by default)
  • net-tools — the legacy network tools like ifconfig
  • powershell / powercli — a scripting language & module I use often
  • remmina — a remote desktop client
  • zenmap — a GUI for the nmap scanner

# install net-tools
sudo apt install net-tools -y

# install Powershell & PowerCLI
# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb

# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb

# Update the list of products
sudo apt-get update

# Install PowerShell
sudo apt-get install -y powershell

# Start PowerShell as sudo so we can install PowerCLI for all users
sudo pwsh
Install-Module vmware.powercli -scope:allusers -Confirm:$false
Set-PowerCLIConfiguration -ParticipateInCeip $true -Scope:AllUsers -Confirm:$false

# Confirm that VMware* modules are available in the /usr/local/share/powershell/Modules path:
ls /usr/local/share/powershell/Modules
exit # this is to exit pwsh

# Install remmina
sudo apt install remmina -y

# Install zenmap
sudo apt install curl ndiff -y

wget http://archive.ubuntu.com/ubuntu/pool/universe/p/pygtk/python-gtk2_2.24.0-5.1ubuntu2_amd64.deb
sudo apt install ./python-gtk2_2.24.0-5.1ubuntu2_amd64.deb -y
wget http://archive.ubuntu.com/ubuntu/pool/universe/n/nmap/zenmap_7.60-1ubuntu5_all.deb
sudo apt install ./zenmap_7.60-1ubuntu5_all.deb -y

# cleanup & remove installers
rm python-gtk2_2.24.0-5.1ubuntu2_amd64.deb packages-microsoft-prod.deb zenmap_7.60-1ubuntu5_all.deb

There are a few edits I made to keep these apps pinned to the menu on the desktop. The following lines should make that happen.

echo "user-db:user" | sudo tee /etc/dconf/profile/user
echo "system-db:local" | sudo tee -a /etc/dconf/profile/user
sudo mkdir /etc/dconf/db/local.d
echo "[org/gnome/shell]" | sudo tee /etc/dconf/db/local.d/00-favorite-apps
echo "favorite-apps = ['firefox.desktop', 'org.remmina.Remmina.desktop', 'nautilus.desktop', 'gedit.desktop', 'gnome-terminal.desktop']" | sudo tee -a /etc/dconf/db/local.d/00-favorite-apps
sudo dconf update

I also like to leave this desktop open for long periods of time and like to come back to it without needing to reauthenticate. Again, for production purposes this is a questionable practice, but this is a lab. You can leave this out if you’d like, but I wanted to leave the example in case. The gsettings commands are per user only, so to make this setting available to other users that may use the pool, we’ll write them to a profile script that is executed at login.

echo 'gsettings set org.gnome.desktop.lockdown disable-lock-screen true' | sudo tee /etc/profile.d/02-weak-security.sh
echo 'gsettings set org.gnome.desktop.screensaver lock-delay 3600' | sudo tee -a /etc/profile.d/02-weak-security.sh
echo 'gsettings set org.gnome.desktop.screensaver lock-enabled false' | sudo tee -a /etc/profile.d/02-weak-security.sh
echo 'gsettings set org.gnome.desktop.screensaver idle-activation-enabled false' | sudo tee -a /etc/profile.d/02-weak-security.sh

Now to install the Horizon Agent. I could download the installer from inside the desktop, but I try to launch the fewest apps possible in my golden image & don’t like logging into websites from that template. Instead, I download the agent installer and post it to an internal web server, then download that copy with wget as shown below.

cd /tmp
wget http://www.example.com/VMware-horizonagent-linux-x86_64-7.13.1-19066964.tar.gz
tar -xzf VMware-horizonagent-linux-x86_64-7.13.1-19066964.tar.gz
cd VMware-horizonagent-linux-x86_64-7.13.1-19066964/
sudo ./install_viewagent.sh -A yes -G yes

# Enable clipboard redirection system wide, in both directions. The default is 2.
# 0 = disabled; 1 = both directions; 2 = client to agent only; 3 = agent to client only.
sudo sed -i 's/#Clipboard.Direction=1/Clipboard.Direction=1/g' /etc/vmware/config

# Update the config to use sssd instead of pbis-open (https://communities.vmware.com/t5/Horizon-for-Linux/True-SSO-for-CentOs-7-Instant-Clone-agent-initialization-state/td-p/509274)
echo "OfflineJoinDomain=sssd" | sudo tee -a /etc/vmware/viewagent-custom.conf

I also like to have my Certificate Authority root cert installed as a trusted authority for both the system and Firefox. I previously wrote a post on that here: https://enterpriseadmins.org/blog/lab-infrastructure/installing-windows-ca-root-certificate-on-linux-and-firefox/. I’m going to follow those steps in this golden image as well.

To shut things down (now, and anytime we make changes to our template) I like to run the following:

# clean up temp files, ssh host keys, and the template-admin user's command history
sudo rm -rf /tmp/*
sudo rm -rf /var/tmp/*
sudo rm -f /etc/ssh/ssh_host_*
history -c
sudo shutdown -h now

I then like to edit the ‘Notes’ property of the template with the date & a note about what was changed. This field is copied each time the VM is cloned, so I have a bit of an idea of what was included. For example, I would enter something like 2021-12-29: Initial creation of Ubuntu 20.04 Non Persistent VDI Desktop. Once the note is added, I would create a snapshot with the same message. From here, I was able to create a Horizon Instant Clone Pool, selecting our new VM and snapshot.

I hope you find this post helpful, feel free to drop any suggestions or comments below.

Getting Started with SaltStackConfig PowerShell Module

In the previous post (here), we looked into Getting Started with SaltStack Config. We created and kicked off a few tasks from the web interface. Occasionally we’ll need to report on some data as well. The web interface offers the ability to download many result/output tables as CSV or JSON, but what if we wanted to do something with that data programmatically? Fortunately there is an API available (documentation here: https://developer.vmware.com/apis/1179/saltstack-config-raas). Unfortunately, I couldn’t find many examples of consuming this API with PowerShell and ran into an issue as I was getting started (related to credentials). Once I got those sorted out, I was able to create a quick inventory script that I wanted (to simply return minion names & a few “grains” like the Operating System & OS Version). However, with the bit of info I picked up along the way, I decided to try and wrap things up into a PowerShell Module for future needs. This module is available on GitHub (https://github.com/vmware/PowerCLI-Example-Scripts/tree/master/Modules/SaltStackConfig) and the following post will focus on how to get started using that module.

The first step to using this SaltStackConfig module is to get the required files onto the system where you run scripts. The easiest way I know to do this is to download the full project repo (there is a Code > Download ZIP button at https://github.com/vmware/PowerCLI-Example-Scripts). With the zip file downloaded, I like to right-click it, open Properties, and see if the ‘Unblock’ checkbox appears in the bottom right; if so, I check it. Doing this prior to unzipping the file saves some time, as we don’t need to recursively run Unblock-File on everything that was extracted.

I then extract the files I need, in this case the folder Modules\SaltStackConfig, and place them in one of the PowerShell module paths (to find where these are, you can open a PowerShell window and type $env:PSModulePath).

With the module copied into one of the correct paths, it will load automatically the next time we start PowerShell. Once we have that new PowerShell session, with the module now available, we can connect to the SaltStack Config environment. This cmdlet will connect to the RaaS API and create a global variable that we can reference for future API calls (for this PowerShell session only).

C:\> Connect-SscServer 'salt.example.com' -User 'root' -Password 'VMware1!'

Here is the sample inventory task I was interested in that started everything:

C:\> (Get-SscMinionCache).grains |Select-Object host, osfullname, osrelease |Sort-Object host

host             osfullname                               osrelease
----             ----------                               ---------
cm-vrssc-01      VMware Photon OS                         3.0
core-control-21  Microsoft Windows Server 2022 Standard   2022Server
dr-control-01    Microsoft Windows Server 2016 Standard   2016Server
raspberrypi      Raspbian                                 10
svcs-sql-01      Microsoft Windows Server 2016 Standard   2016Server
t147-ubuntu-01   Ubuntu                                   20.04
t147-ubuntu-02   Ubuntu                                   20.04
t147-ubuntu18-01 Ubuntu                                   18.04
t147-win22-01    Microsoft Windows Server 2022 Standard   2022Server

I liked how the osfullname property looked for Windows machines, but for the Ubuntu and Photon releases I wanted to combine the osfullname and osrelease columns, so I used a slightly modified Select-Object statement with some if/else logic to pull the output together exactly how I wanted it displayed:

C:\> (Get-SscMinionCache).grains | Select-Object host, @{Name='FriendlyOSName';Expression={ if ($_.osfullname -match 'Windows' ) { $_.osfullname } else { "$($_.osfullName) $($_.osrelease)"}}} | Sort-Object host

host             FriendlyOSName
----             --------------
cm-vrssc-01      VMware Photon OS 3.0
core-control-21  Microsoft Windows Server 2022 Standard
dr-control-01    Microsoft Windows Server 2016 Standard
raspberrypi      Raspbian 10
svcs-sql-01      Microsoft Windows Server 2016 Standard
t147-ubuntu-01   Ubuntu 20.04
t147-ubuntu-02   Ubuntu 20.04
t147-ubuntu18-01 Ubuntu 18.04
t147-win22-01    Microsoft Windows Server 2022 Standard
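For comparison, the same combine-unless-Windows logic can be sketched outside of PowerShell, for example with awk over CSV data (the inline records below are made-up samples mirroring the table above, not output from the API):

```shell
# mirror the calculated property: print "osfullname osrelease" unless the
# OS name already contains Windows; fields are host,osfullname,osrelease
awk -F',' '{
  if ($2 ~ /Windows/) print $1": "$2
  else print $1": "$2" "$3
}' <<'EOF'
cm-vrssc-01,VMware Photon OS,3.0
t147-win22-01,Microsoft Windows Server 2022 Standard,2022Server
EOF
```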

I decided to write a couple other wrapper functions for some other API methods that I thought I might end up using. In the next few sections I’ll show how to find a specific job that was run, the activity around that job, and the specific results from the execution.

In Task 2 of the previous post (https://enterpriseadmins.org/blog/scripting/getting-started-with-saltstack-config), we created a job to push BgInfo to our test servers. This function will return all jobs, but we’ll filter the output to just entries that contain bginfo. The syntax will be Get-SscJob | Where-Object {$_.name -match 'bginfo'} and sample output would look like this:

uuid     : b39de5cb-d01c-4cc7-a886-c746ae2b4150
name     : EnterpriseAdmins BGInfo Test
desc     :
cmd      : local
tgt_uuid : e98739a9-a058-42a3-b3e4-73450de38ced
fun      : state.apply
arg      : @{arg=System.Object[]; kwarg=; hiddenArgsObj=}
masters  : {}
metadata : @{auth=}
tgt_name : zCustomWinServerT147

When we ran that job, it generated some activity on our SSC appliance. We’ll find that specific activity by looking for only the entries where the job_uuid matches the output from the above command, and since we may have run the task multiple times, we’ll also filter for only instances started in the last couple of days. The syntax will be Get-SscActivity | Where-Object {$_.job_uuid -eq 'b39de5cb-d01c-4cc7-a886-c746ae2b4150' -AND $_.start_time -gt '2021-12-20'}

jid             : 20211222185741967000
state           : completed_all_successful
cmd             : local
user            : bwuchner
user_uuid       : 6fe029b6-9e2e-4501-8c57-1776084bd3a8
job_uuid        : b39de5cb-d01c-4cc7-a886-c746ae2b4150
job_name        : EnterpriseAdmins BGInfo Test
job_desc        :
tgt_uuid        : e98739a9-a058-42a3-b3e4-73450de38ced
tgt_name        : zCustomWinServerT147
tgt_desc        :
tgt_type        : compound
tgt             : G@os:Windows and G@nodename:t147-win22-01
sched_uuid      :
sched_name      :
fun             : state.apply
is_highstate    : False
job_source      : raas
expected        : 1
returned        : 1
not_returned    : 0
returned_good   : 1
returned_failed : 0
duration        :
masters_to      : {salt}
masters_done    : {salt}
create_time     : 2021-12-22T18:58:02.307191
origination     : Ad-Hoc
start_time      : 2021-12-22T18:57:41.96700Z

And finally, we’ll want to find the status of all the data returned from that job. We’ll get the JID value from above and include it in a filter to the last function we’ll be covering. The final example syntax is: (Get-SscReturn -jid 20211222185741967000).full_ret | Select-Object id, success

id                                     success
--                                     -------
t147-win22-01.lab.enterpriseadmins.org    True

These are just a few examples, but each function includes some help, so feel free to use PowerShell help to get any usage examples for the other functions. For reference, here is a short list of the initial wrapper functions available:

C:\> Get-Command -Module SaltStackConfig

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Connect-SscServer                                  0.0.5      SaltStackConfig
Function        Disconnect-SscServer                               0.0.5      SaltStackConfig
Function        Get-SscActivity                                    0.0.5      SaltStackConfig
Function        Get-SscData                                        0.0.5      SaltStackConfig
Function        Get-SscJob                                         0.0.5      SaltStackConfig
Function        Get-SscMaster                                      0.0.5      SaltStackConfig
Function        Get-SscMinionCache                                 0.0.5      SaltStackConfig
Function        Get-SscReturn                                      0.0.5      SaltStackConfig
Function        Get-SscSchedule                                    0.0.5      SaltStackConfig

If you run into any issues, or think of another function that would be helpful to have, please feel free to submit an issue on the Github repo at https://github.com/vmware/PowerCLI-Example-Scripts.

Getting Started with SaltStack Config

I’ve recently started looking at vRealize Automation SaltStack Config in a lab. In this post I’ll step through a rough outline of the lab environment / setup and cover a couple simple tasks that I was able to complete using SaltStack Config (SSC).

In the lab environment I have a vRealize Automation SaltStack Config instance that was deployed using vRealize Suite Lifecycle Manager (vRSLCM). The vRSLCM platform made this very simple — I downloaded the right installer, answered a few simple questions like IP address, DNS servers, default password, etc and when I came back the appliance was running and ready to use.

With my SSC Appliance running, I needed a few Minions to manage. Having a Windows background, I decided to use the Windows minion on a couple of test servers. I ran a silent installer on test systems, using the following syntax:

\\fileserver\mgmt\_agents\Salt-Minion-3004-Py3-AMD64-Setup.exe /master=cm-vrssc-01.lab.enterpriseadmins.org /S

Once the minion was installed, I browsed to Minion Keys > Pending in the appliance web interface and accepted the pending key requests. This allows encrypted communication between the appliance and minions. With the lab setup background out of the way, let’s get to the tasks we want to solve.

Task 1: Deploy a custom PowerShell profile

For the first example I wanted to do a single file copy from the SSC appliance to my minions. This could be a configuration file or such, but for demo purposes I decided I would copy a standard PowerShell profile to the machine for all users.

Browse to Config > File Server. In the top left there is a dropdown list that says base or sse. I selected sse, entered enterpriseadmins\powershell\profile.ps1 in the path name text box, changed the file type dropdown from SLS to TXT, pasted in my PowerShell profile (here is an example if you need one: https://github.com/vmware/PowerCLI-Example-Scripts/blob/master/Scripts/At_Your_Fingertips/NewProfile.ps1), and clicked save. This is the file content that I’d like to copy to all of my minions.

In the same Config > File Server area we will create a new file. This time I left the file type dropdown as SLS and entered enterpriseadmins\powershell\profile.sls. This is the ‘state’ file that contains the instructions for what to copy, where to place it on the minion filesystem, and whether to overwrite if the file already exists. For this file I entered the following:

    copy_powershell_profile:   # state ID -- any unique name works here
      file.managed:            # file.managed copies a file from the master to the minion
        - name: 'C:\windows\system32\WindowsPowerShell\v1.0\profile.ps1'
        - source: salt://{{ slspath }}/profile.ps1
        - replace: True

With this new state saved, we browse over to Config > Jobs. From here I created a new job with the following criteria:

  • Targets = Windows Servers
  • Function = state.apply
  • Environments = sse
  • States = enterpriseadmins.powershell.profile

I saved this new Job and then ran it (by selecting the three dots to the left of the job name & clicking run now). Looking at my minion I see the file was created & contains the expected contents. Launching PowerShell also results in my new profile loading correctly.

This is a pretty simple demo, but does show how we can manage a file on a possibly large group of systems & easily make changes if needed.

Task 2: Installing BgInfo on Windows servers

Many years ago I created a batch file to “install” a BgInfo configuration on servers in a lab. It did a couple of file copies and made a registry entry. This allowed me to have the hostname on the desktop so I knew what I was looking at. I don’t run the batch file much anymore, as I had this configuration baked into a template so it showed up each time a new VM was deployed. This worked well, but what if I ever wanted to swap out that configuration file? I’m not going to do this manually… and now that I have SSC, let’s see if we can recreate that wheel.

The first step is to get our bginfo.exe file (https://docs.microsoft.com/en-us/sysinternals/downloads/bginfo) copied to all our machines. I could have the minions download this from the internet, but I have a default firewall rule to deny everything, and I may not want clients going to the internet for security reasons. In the Task 1 example above we stored our PowerShell profile on the embedded Salt file server, but that was just text we could paste in through the web interface. We can get this working with binary files too; the process is just a bit different. The first thing we need to do is ssh to our SSC appliance. From there we make a directory using the command mkdir /srv/salt. In this example /srv already existed and we just created the salt subdirectory. The /srv/salt folder gets served up by the SSC file server. To keep things tidy, I’m going to create another subfolder for my stuff (using the command mkdir /srv/salt/enterpriseadmins). We can then copy our binary file here (using something like WinSCP).
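The two mkdir steps above can also be collapsed into one command with -p, which creates any missing parents and is harmless to re-run (demonstrated here against a scratch root so it can be tried anywhere without touching /srv):

```shell
# mkdir -p creates the whole srv/salt/enterpriseadmins chain in one shot
# and exits cleanly if the directories already exist (scratch root here)
root=$(mktemp -d)
mkdir -p "$root/srv/salt/enterpriseadmins"
mkdir -p "$root/srv/salt/enterpriseadmins"   # second run: no error
ls "$root/srv/salt"   # prints: enterpriseadmins
rm -rf "$root"
```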

We now need to decide where to store our BgInfo config file (.bgi extension) and state file. We could manage this from the web interface and store it in the sse path (like the example above), but I decided to keep all of my BgInfo bits in the same path. I don’t think there is a right or wrong answer here; this was just a personal preference. My BgInfo config file is named LAB-Server.bgi and I called my state file bginfo-config.sls; the contents of that file are below:

    copy_bginfo_exe:           # state ID -- any unique name works
      file.managed:
        - name: 'C:\sysmod\bginfo.exe'
        - source: salt://enterpriseadmins/Bginfo.exe
        - replace: True
        - makedirs: True

    copy_bginfo_config:
      file.managed:
        - name: 'C:\sysmod\LAB-Server.bgi'
        - source: salt://enterpriseadmins/LAB-Server.bgi
        - replace: True
        - makedirs: True

    # Update registry key so BgInfo runs at each login (the Run key is the
    # conventional location for per-login programs)
    update_bginfo_run_key:
      reg.present:
        - name: 'HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run'
        - vname: 'LAB-Server-BGInfo'
        - vdata: 'C:\sysmod\bginfo.exe C:\sysmod\lab-server.bgi /SILENT /NOLICPROMPT /timer:0'
        - vtype: REG_SZ

It’s important to note that all of these file names and paths are case sensitive. This is probably obvious, but I spent some time troubleshooting it, so I figured it was worth mentioning. Once we have all three files in place (Bginfo.exe, LAB-Server.bgi, and bginfo-config.sls) we can configure a job. For criteria on this I used:

  • Targets = Windows Servers
  • Function = state.apply
  • Environments = (leave blank)
  • States = enterpriseadmins.bginfo-config

I saved this new Job and then ran it. Looking at my minion I saw the files & registry key were created. The next login resulted in my updated BgInfo config being applied. Update complete — everywhere, all at once. Thanks SaltStack Config!

Installing Windows CA Root certificate on Linux and Firefox

In my lab I have a Windows domain controller which has certificate services installed and configured. For Windows systems, group policy properly delivers this certificate to clients. However, in my Linux system this certificate is not installed. Additionally, even after installing the certificate for system use, the Firefox web browser doesn’t immediately trust this root certificate. In the following article we will walk through the steps needed to configure this Windows CA root certificate on Linux (Ubuntu 20.04) and Firefox.

  1. Go to the Windows CA server, in my case https://ca.example.com/certsrv/.
  2. Select ‘Download a CA certificate, certificate chain, or CRL’.
  3. Select DER and ‘Download CA certificate’
  4. This will download a certnew.cer file
  5. Convert the certificate to the proper format with openssl. We can do this step on either Windows or Linux, in the sample below we will use our Windows system:

openssl x509 -inform DER -in "C:\Users\bwuchner\Downloads\certnew.cer" -out d:\tmp\ca-example-com.crt
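The same conversion works on Linux with identical syntax. As a self-contained sketch, using a throwaway self-signed certificate rather than the real certnew.cer:

```shell
# generate a throwaway DER-encoded certificate, then convert it to the
# PEM .crt format that update-ca-certificates expects
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/certnew.cer" -outform DER 2>/dev/null
openssl x509 -inform DER -in "$tmp/certnew.cer" -out "$tmp/ca-example-com.crt"
head -1 "$tmp/ca-example-com.crt"   # prints: -----BEGIN CERTIFICATE-----
rm -rf "$tmp"
```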

  6. We must now get the contents of this ca-example-com.crt file copied to our Linux VM. At this point the certificate is in a text format, so I chose to create a new file and paste in the contents. For example:

sudo nano /usr/local/share/ca-certificates/ca-example-com.crt

  7. We must now change the permissions of the file such that the owner has read/write and all other users can read. We will do this with the following command:

sudo chmod 644 /usr/local/share/ca-certificates/ca-example-com.crt
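To double-check that the mode took effect, stat can report the octal and symbolic permissions (shown against a scratch file; the -c format flag is GNU coreutils):

```shell
# 644 = owner read/write, group read, other read
f=$(mktemp)
chmod 644 "$f"
stat -c '%a %A' "$f"   # prints: 644 -rw-r--r--
rm -f "$f"
```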

  8. Now that the certificate is in the proper location, format, and permissions, we’ll run the update process:

sudo update-ca-certificates

From here we could test and confirm that our certificate is properly installed on the system by trying to access a site using this cert. For example: wget https://vc1.example.com

This should no longer return text similar to Unable to locally verify the issuer's authority.

Next we need to update Firefox to trust this root certificate as well. We will do this by creating a custom Firefox policy on the system. To begin we will create a policy file with a text editor, for example: sudo nano /usr/lib/firefox/distribution/policies.json

In this file we will add the following JSON formatted policy:

{
  "policies": {
    "Certificates": {
      "ImportEnterpriseRoots": true,
      "Install": ["/usr/local/share/ca-certificates/ca-example-com.crt"]
    }
  }
}
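If the JSON is malformed, Firefox will simply not apply the policy, so it's worth validating the syntax before restarting the browser. Any JSON parser will do; for example, python3's json.tool exits non-zero on a syntax error:

```shell
# validate the policy JSON parses cleanly before writing it to policies.json
echo '{ "policies": { "Certificates": { "ImportEnterpriseRoots": true, "Install": ["/usr/local/share/ca-certificates/ca-example-com.crt"] } } }' \
  | python3 -m json.tool >/dev/null && echo "valid JSON"
```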

The next time you start Firefox, this root certificate will be trusted and you should no longer receive warnings when browsing your internal sites.

Simplified Home Lab Domain Name Resolution

In previous posts we discussed considerations for building a Home Lab and for deciding on network ranges to use. Once we know the networks we will be using, it’s time to start thinking about the services that will be required. For me the first service is always Domain Name System (DNS). This is a foundational building block for many systems which expect forward and reverse lookups to be working before you deploy (like vCenter Server).

It’s somewhat common to see folks use a name under the reserved .local top-level domain, like corp.local. I’m not a fan of this practice, as RFC 6762 spells out special considerations for this domain and says it should be used for link-local names only.

I’ve also seen folks use domains with a common top-level domain, like bwuchlab.com. As of this writing that domain is available for sale, so there wouldn’t be any immediate conflict if I started using it internally without registering it. I still feel this is a bad practice: someone else could buy the domain, I’d have no control over it, and I might have documentation or screenshots that include that name. I prefer to use a name that I own. For example, my internal DNS domain is lab.enterpriseadmins.org. I already own the domain enterpriseadmins.org and control who gets to use the subdomain LAB. If I wanted to create an external DNS host name of something.lab.enterpriseadmins.org, I could do that with no problem and could even get an SSL certificate issued for that name if needed. I wouldn’t be able to do this with a domain that I don’t own.

If you don’t need or want to own a domain name, another good option is to use a reserved name. For example, RFC 2606 reserves the names example.com, example.net, and example.org for documentation purposes. You can use them in your internal network without prior coordination with the Internet Assigned Numbers Authority (IANA). This works well because the example names are safe to show in screenshots and documentation, for example:

[Screenshot: command prompt ping results for the name syslog.example.com returning a valid response from an internal IP]

Once we have selected the DNS names we are going to use, we also need a way to get our clients to point at our DNS server for resolution. I don’t like pointing clients directly at my lab infrastructure for name resolution: just like I mentioned in a previous post about Home Lab Networking, I don’t want a lab problem to prevent the TV from working. There are a couple of options in this space; I’ve included a few below with links to more information.