Getting Started with SaltStack Config

I’ve recently started looking at vRealize Automation SaltStack Config in a lab. In this post I’ll step through a rough outline of the lab environment/setup and cover a couple of simple tasks that I was able to complete using SaltStack Config (SSC).

In the lab environment I have a vRealize Automation SaltStack Config instance that was deployed using vRealize Suite Lifecycle Manager (vRSLCM). The vRSLCM platform made this very simple: I downloaded the right installer, answered a few simple questions (IP address, DNS servers, default password, etc.), and when I came back the appliance was running and ready to use.

With my SSC appliance running, I needed a few minions to manage. Having a Windows background, I decided to use the Windows minion on a couple of test servers. I ran a silent install on the test systems using the following syntax:

\\fileserver\mgmt_agents\Salt-Minion-3004-Py3-AMD64-Setup.exe /master=cm-vrssc-01.lab.enterpriseadmins.org /S
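The installer supports a few other silent-install switches as well; for example, /minion-name= overrides the minion ID if you don't want it derived from the machine's host name (the server name below is hypothetical):

\\fileserver\mgmt_agents\Salt-Minion-3004-Py3-AMD64-Setup.exe /master=cm-vrssc-01.lab.enterpriseadmins.org /minion-name=testvm-01 /S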

Once the minion was installed, I browsed to Minion Keys > Pending in the appliance web interface and accepted the pending key requests. This enables encrypted communication between the appliance and minions. With the lab setup background out of the way, let's get to the tasks we want to solve.
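If you prefer the command line to the web interface, the same keys can be listed and accepted with salt-key on the master (a quick sketch; the minion ID below is hypothetical):

salt-key -L            # list accepted, denied, and unaccepted keys
salt-key -a testvm-01  # accept a specific pending key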

Task 1: Deploy a custom PowerShell profile

For the first example I wanted to do a single file copy from the SSC appliance to my minions. This could be any sort of configuration file, but for demo purposes I decided to copy a standard all-users PowerShell profile to each machine.

Browse to Config > File Server. In the top left there is a dropdown list that reads base or sse. I selected sse, entered enterpriseadmins/powershell/profile.ps1 in the path name text box, changed the file type dropdown from SLS to TXT, pasted in my PowerShell profile (here is an example if you need one: https://github.com/vmware/PowerCLI-Example-Scripts/blob/master/Scripts/At_Your_Fingertips/NewProfile.ps1), and clicked Save. This file holds the contents that I'd like to copy to all of my minions.

In the same Config > File Server area we will create a new file. This time I left the file type dropdown as SLS and entered enterpriseadmins/powershell/profile.sls. This is the 'state' file that contains the instructions: what to copy, where to place it on the minion filesystem, and whether to overwrite the file if it already exists. For this file I entered the following:

MyFileCopyTask:
  file.managed:
    - name: 'C:\windows\system32\WindowsPowerShell\v1.0\profile.ps1'
    - source: salt://{{ slspath }}/profile.ps1
    - replace: True
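Before wiring this state into a job, it can optionally be dry-run from the master's command line. This sketch assumes a hypothetical target named testvm-01; test=True reports what would change without changing anything:

salt 'testvm-01' state.apply enterpriseadmins.powershell.profile saltenv=sse test=True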

With this new state saved, browse over to Config > Jobs. From here I created a new job with the following criteria:

  • Targets = Windows Servers
  • Function = state.apply
  • Environments = sse
  • States = enterpriseadmins.powershell.profile

I saved this new Job and then ran it (by selecting the three dots to the left of the job name & clicking run now). Looking at my minion I see the file was created & contains the expected contents. Launching PowerShell also results in my new profile loading correctly.
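For a larger fleet, you can also spot-check the result from the master rather than logging into each minion; the file.file_exists execution module works for this (same hypothetical target as above):

salt 'testvm-01' file.file_exists 'C:\windows\system32\WindowsPowerShell\v1.0\profile.ps1'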

This is a pretty simple demo, but it does show how we can manage a file across a potentially large group of systems & easily make changes if needed.

Task 2: Installing BgInfo on Windows servers

Many years ago I created a batch file to "install" a BgInfo configuration on servers in a lab. It did a couple of file copies and made a registry entry. This allowed me to have the hostname on the desktop so I knew what I was looking at. I don't run the batch file much anymore, as I just had this configuration baked into a template so it showed up each time a new VM was deployed. This worked well, but what if I ever wanted to swap out that configuration file? I'm not going to do this manually… and now that I have SSC, let's see if we can recreate that wheel.

The first step is to get our bginfo.exe file (https://docs.microsoft.com/en-us/sysinternals/downloads/bginfo) copied to all our machines. I could have the minions download this from the internet, but I have a default firewall rule to deny everything, and I may not want clients going to the internet for security reasons. In the Task 1 example above we stored our PowerShell profile on the embedded Salt file server, but it was just text and we could paste it in through the web interface. We can get this working for binary files too; the process is just a bit different. The first thing we need to do is SSH to our SSC appliance. From there we make a directory using the command mkdir /srv/salt. In this example /srv already existed and we just created the salt subdirectory. The /srv/salt folder gets served up by the SSC file server. To keep things tidy, I'm going to create another subfolder for my files (using the command mkdir /srv/salt/enterpriseadmins). We can then copy our binary file there (using something like WinSCP), as sketched below.
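Putting those pieces together, the appliance-side setup is just two commands, and the copy can come from any machine that has the binary (a sketch, assuming root SSH access to the appliance):

mkdir -p /srv/salt/enterpriseadmins
# then, from the machine holding the download:
scp Bginfo.exe root@cm-vrssc-01.lab.enterpriseadmins.org:/srv/salt/enterpriseadmins/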

We now need to decide where to store our BgInfo config file (.bgi extension) and state file. We could manage these from the web interface and store them in the sse path (like the above example), but I decided to keep all of my BgInfo bits in the same path. I don't think there is a right or wrong answer here; this was just a personal preference. My BgInfo config file is named LAB-Server.bgi and I called my state file bginfo-config.sls; the contents of that state file are below:

BGInfo-Executable:
  file.managed:
    - name: 'C:\sysmod\bginfo.exe'
    - source: salt://enterpriseadmins/Bginfo.exe
    - replace: True
    - makedirs: True

BGInfo-Config:
  file.managed:
    - name: 'C:\sysmod\LAB-Server.bgi'
    - source: salt://enterpriseadmins/LAB-Server.bgi
    - replace: True
    - makedirs: True

# Update the Run registry key so BgInfo launches at each logon
'HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run':
  reg.present:
    - vname: 'LAB-Server-BGInfo'
    - vdata: 'C:\sysmod\bginfo.exe C:\sysmod\LAB-Server.bgi /SILENT /NOLICPROMPT /timer:0'
    - vtype: REG_SZ

It's important to note that all of these file names and paths are case-sensitive when the minion requests them from the Salt file server. This is probably obvious, but I spent some time troubleshooting it, so I figured it was worth mentioning. Once we have all three files in place (Bginfo.exe, LAB-Server.bgi, and bginfo-config.sls) we can configure a job. For the criteria on this one I used:

  • Targets = Windows Servers
  • Function = state.apply
  • Environments = (leave blank)
  • States = enterpriseadmins.bginfo-config

I saved this new Job and then ran it. Looking at my minion, I saw the files & registry key were created. The next login resulted in my updated BgInfo config being applied. Update complete, everywhere, all at once. Thanks SaltStack Config!
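One troubleshooting tip related to the case sensitivity noted above: if a state fails with a 'source file not found' style error, you can SSH to the SSC appliance and ask the master exactly which file names its file server is publishing (a sketch using the built-in runner):

salt-run fileserver.file_list saltenv=base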

Installing a Windows CA Root Certificate on Linux and Firefox

In my lab I have a Windows domain controller with certificate services installed and configured. For Windows systems, group policy delivers this certificate to clients. However, on my Linux systems this certificate is not installed. Additionally, even after installing the certificate for system use, the Firefox web browser doesn't immediately trust this root certificate. In the following article we will walk through the steps needed to trust this Windows CA root certificate on Linux (Ubuntu 20.04) and in Firefox.

  1. Go to the Windows CA server, in my case https://ca.example.com/certsrv/.
  2. Select ‘Download a CA certificate, certificate chain, or CRL’.
  3. Select DER and ‘Download CA certificate’.
  4. This will download a certnew.cer file.
  5. Convert the certificate to the proper format with openssl. We can do this step on either Windows or Linux; in the sample below we will use our Windows system:

openssl x509 -inform DER -in "C:\Users\bwuchner\Downloads\certnew.cer" -out d:\tmp\ca-example-com.crt

  6. We must now get the contents of this ca-example-com.crt file copied to our Linux VM. At this point the certificate is in a text format, so I chose to create a new file and paste in the contents. For example:

sudo nano /usr/local/share/ca-certificates/ca-example-com.crt

  7. We must now change the permissions of the file such that the owner has read/write access and all other users can read. We will do this with the following command:

sudo chmod 644 /usr/local/share/ca-certificates/ca-example-com.crt

  8. Now that the certificate is in the proper location and format and has the proper permissions, we'll run the update process:

sudo update-ca-certificates
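The update-ca-certificates output will report how many certificates were added. As an extra check, the new certificate should now be linked into the system certificate directory (the filename matches the example above):

ls -l /etc/ssl/certs | grep ca-example-com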

From here we can test and confirm that our certificate is properly installed by trying to access a site that uses a certificate issued by this CA. For example:

wget https://vc1.example.com

This should no longer return text similar to 'Unable to locally verify the issuer's authority'.

Next we need to update Firefox to trust this root certificate as well. We will do this by creating a custom Firefox policy on the system. To begin, create the policy file with a text editor, for example:

sudo nano /usr/lib/firefox/distribution/policies.json

In this file we will add the following JSON-formatted policy:

{
  "policies": {
    "Certificates": {
      "ImportEnterpriseRoots": true,
      "Install": ["/usr/local/share/ca-certificates/ca-example-com.crt"]
    }
  }
}
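If the JSON is malformed the policy simply won't apply, so it's worth validating the file before restarting the browser (a quick check using Python's built-in validator):

python3 -m json.tool /usr/lib/firefox/distribution/policies.json

Once Firefox is restarted, browsing to about:policies will show whether the policy was picked up.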

The next time you start Firefox, this root certificate will be trusted and you should no longer receive warnings when browsing your internal sites.

Simplified Home Lab Domain Name Resolution

In previous posts we discussed considerations for building a home lab and for deciding on network ranges to use. Once we know the networks we will be using, it's time to start thinking about the services that will be required. For me the first service is always the Domain Name System (DNS). This is a foundational building block for many systems (like vCenter Server) that expect forward and reverse lookups to be working before you deploy.
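Both lookup directions are easy to spot-check from any Linux client once DNS is up; for example, using the syslog.example.com host shown later in this post:

dig +short syslog.example.com    # forward lookup: name to IP
dig +short -x 192.168.45.80      # reverse lookup: IP to name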

It's somewhat common to see folks use a name ending in the reserved .local top-level domain, like corp.local. I'm not a fan of this practice, as RFC 6762 spells out special considerations for this domain and reserves it for link-local multicast DNS names.

I've also seen folks use domains with a common top-level domain, like bwuchlab.com. As of this writing that domain is available for sale, so there wouldn't be any immediate conflict if I started using it internally without registering it. I still feel this is a bad practice: someone else could buy that domain, I'd have no control over it, and I may have documentation or screenshots that include that name. I prefer to use a name that I own. For example, my internal DNS domain is lab.enterpriseadmins.org. I already own the domain enterpriseadmins.org and control who gets to use the subdomain LAB. If I wanted to create an external DNS host name like something.lab.enterpriseadmins.org, I could do that without issue and could even get an SSL certificate issued for that name if needed. I wouldn't be able to do this with a domain that I don't own.

If you don't need or want to own a domain name, another good option is to use a reserved name. For example, RFC 2606 reserves the names example.com, example.net, and example.org for documentation purposes. You could use them in your internal network without prior coordination with the Internet Assigned Numbers Authority (IANA). This works well for documentation, since the example names are safe to show in screenshots, for example:

[Screenshot: command prompt ping of syslog.example.com returning a valid response from 192.168.45.80]

Once we have selected the DNS names that we are going to use, we also need to figure out a way to get our clients to point at our DNS server for resolution. I don't like pointing clients directly at my lab infrastructure for name resolution; just as I mentioned in a previous post about Home Lab Networking, I don't want a lab problem to prevent the TV from working. There are a couple of options in this space; I've included a few below with links to more information.

Home Lab Networking

When building a lab network, it is helpful to put in a little time upfront and weigh your learning objectives against the complexity you are willing to add to the system. If we are looking to gain a bit of experience with products like vSphere or vRealize Operations, it's possible we could deploy a handful of components into our existing home network, assign them static IP addresses, and be done: no complexity added at all. However, if we plan to dive deep into NSX-T, build overlay networks, configure routing, and BGP peer with our physical network, we are going to be adding quite a bit of complexity. I like to start somewhere in the middle, where there is some flexibility to get complicated if needed, but things stay simple enough that I don't spend all my time managing networking. I also like to logically separate the 'home' network from the 'lab' network. The last thing I want is to make some DHCP or name resolution change in my lab and have the family no longer able to watch TV.

I've seen a lot of folks build logically separated networks for their labs. I've seen others do stuff that I'd consider crazy. Sure, you could make your internal network 1.1.1.0/24 so that it's very few characters to type, but don't expect to reach Cloudflare's DNS service at 1.1.1.1 afterwards. Instead of re-using internet-routable address blocks, pick from one of the three ranges of IP addresses that RFC 1918 dedicates for private/internal use. They are:

  • 10.0.0.0/8 [10.0.0.0 through 10.255.255.255]
  • 172.16.0.0/12 [172.16.0.0 through 172.31.255.255]
  • 192.168.0.0/16 [192.168.0.0 through 192.168.255.255]

If you connect to a VPN for a large enterprise, it is very common for them to assign an IP from the 10.0.0.0/8 or 172.16.0.0/12 blocks. When this happens, the VPN-provided route statements may prevent you from accessing those same IP ranges if they are in use in your home network. Because of this, I've historically leaned towards the 192.168.0.0/16 range for home lab purposes. This still gives you the ability to segment into smaller subnets. For example, I have a VLAN 10 that maps to 192.168.10.0/24 and is only used for temporary lab VMs. I have another VLAN 32 that maps to 192.168.32.0/24 and is used for a handful of separate lab gear representing a disaster recovery site. All in, I have about a dozen networks with 24-bit masks for various purposes, some routed and others not. Some of these have very valid reasons, like logically separating storage traffic from guest traffic. In other cases, like splitting the management workload VMs (vROps, Log Insight) from virtual desktops, the separation isn't strictly required at the scale of my lab; I maintain the extra complexity to mirror what you'd find in a real production environment.

Once we have selected the IP address range(s) we will be using, it's time to start considering name resolution. I have some thoughts on that as well, so be on the lookout for a future post where we will dive into that.

Home Lab Design Considerations

I've recently had several discussions with a friend about building a home lab. We covered various aspects including defining requirements, creating networks, and figuring out which DNS domain names to use. In the next few blog posts, we will document our lessons learned.

To get started we had to articulate our goal: to learn VMware SDDC products, gain hands-on experience, and prepare for certification exams. The second consideration was how the lab was going to be used. For example, did we want an environment that could be quickly recreated for a specific purpose, or did we need an environment on the other end of the spectrum with a long life and persistence? The longer-lived environment would likely require that we perform upgrades, troubleshoot when things broke, and retain sample data for systems like vRealize Operations or Log Insight. For the first use case, an ephemeral, non-persistent environment, a solution like the VMware Hands On Labs would be perfect. It would have no cost, be easy to instantiate, and could be torn down and recreated with just a few clicks. For a more persistent environment, we would likely need to acquire our own hardware, which would add costs and force us to understand the system requirements of all of the solutions we would like to deploy in advance and go through a capacity planning exercise. Through our discussions we decided that a more persistent environment would be necessary.

The next step was to document hardware requirements. I would recommend starting by creating a list of all of the workloads that you would like to run in the lab. Interesting data points to capture for each workload are the number of virtual CPUs required, the amount of RAM needed, and the total disk capacity. In my home lab experience the first place you'll run low is RAM, so I would take the total you think you are going to need, double that number, and start there. With rough hardware requirements in hand, the next step is to find a solution that meets budget constraints. There are a lot of options in this space, from large, loud, & power-hungry refurbished server-grade hardware on one end down to small, quiet, & power-saving micro devices like the Intel NUC on the other. There are plenty of sites to review the hardware other folks are using, and it's worth checking out the collection of HomeLab BOMs at https://github.com/lamw/homelab.

With hardware ordered and on the way, the next step will be to talk about networking. Please look for the next post, where we will discuss getting started with home lab networking.