Keeping Pi-hole up to date with Aria Automation Config

I’ve recently begun keeping components of my lab up to date using Aria Automation Config. I’ve scheduled a daily job to inventory Linux packages that need to be updated and a weekly task to update Linux VMs and reboot them if necessary. Both of these tasks leave a paper trail showing what updates were made, so I can refer back to them if needed.

I was recently checking the Pi-hole admin interface and noticed some text at the bottom of the page that said ‘Update available!’ This is an easy process to complete: just SSH into the appliance and run pihole -up. However, since I’m keeping other systems up to date automatically, I wanted to add this service into the mix.

I debated whether to tack this process onto the end of the current OS update state file or create a new state. I opted for the latter, but wrote the state so it could run on any system, executing the commands only if Pi-hole is present. I created a new state file named /updates/pihole.sls with the following contents:

{%- if salt['file.file_exists']('/usr/local/bin/pihole') %}
Update-pihole:
  cmd.run:
    - name: /usr/local/bin/pihole updatePihole
{%- endif %}

This is a pretty basic state: it checks for the presence of the pihole script file and, if found, runs the pihole command with the updatePihole argument.
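Before scheduling the job, the state can be verified with a dry run from the salt master against a Pi-hole minion (the minion ID below is a placeholder):

salt 'pihole-01' state.apply updates.pihole test=True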

Before running the state on a test system, the footer looked like:

Pi-hole v5.17.2 FTL v5.23 Web Interface v5.20.2 · Update available!

The stdout of the minion return showed:

 [✓] Update local cache of available packages
 [i] Existing PHP installation detected : PHP version 7.3.31-1~deb10u5
 [✓] Checking for git
 [✓] Checking for iproute2
 [✓] Checking for dialog
 [✓] Checking for ca-certificates

 [i] Checking for updates…
 [i] Pi-hole Core:     up to date
 [i] Web Interface:    update available
 [i] FTL:              up to date

 [i] Pi-hole Web Admin files out of date, updating local repo.
 [✓] Check for existing repository in /var/www/html/admin
 [i] Update repo in /var/www/html/admin…HEAD is now at be05b0f v5.21 (#2860)
 [✓] Update repo in /var/www/html/admin

 [i] If you had made any changes in '/var/www/html/admin/', they have been stashed using 'git stash'
 [i] Local version file information updated.

After the state.apply operation completed and the web interface was refreshed, the footer changed to:

Pi-hole v5.17.2 FTL v5.23 Web Interface v5.21

We can see that the web interface was updated from v5.20.2 to v5.21.

I created a job to apply this state file, then created two schedules to stagger the patching to different minions on different days. This was a pretty quick solution for keeping the Pi-hole software up to date on a schedule, using the centralized scheduling & reporting of Aria Automation Config.


Keeping Linux up to date with Aria Automation Config — part 2

In a recent post (available here), we created a simple Aria Automation Config (formerly SaltStack Config) state file which reported on and applied available Linux OS updates. In this post we’ll revisit that state file and make a minor change.

After creating the previous state, which applies available updates every Saturday morning, I noticed that logging into some Linux VMs would display the message *** System restart required ***. I found that this text comes from the file /var/run/reboot-required, which is created when a package requires a system restart.
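On Debian/Ubuntu systems there is also a companion file listing which packages triggered the flag, so both are easy to check by hand (paths are from a stock Ubuntu install):

cat /var/run/reboot-required
cat /var/run/reboot-required.pkgs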

I’ve modified the state file applied by my scheduled job to accommodate this reboot as shown below:

update_pkg:
{% if grains['os'] == 'VMware Photon OS' %}
  pkg.uptodate:
    - refresh: True
{% else %}
  pkg.uptodate:
    - refresh: True
    - dist_upgrade: True
{% endif %}

{# Check if the system requires a reboot, and if so schedule it to happen in the next 15 minutes, randomize to prevent boot storm #}
{%- if salt['file.file_exists']('/var/run/reboot-required') %}
Reboot-if-needed:
  module.run:
    - name: system.reboot
    - tgt: {{ grains.id }}
    - at_time: {{ range(1,15) | random }}
{%- endif %}

In this version, we continue to use pkg.uptodate to apply updates, but afterwards we check for the presence of /var/run/reboot-required. If found, we schedule a system reboot at least one minute in the future (to give the salt-minion time to report back). The delay is randomized to prevent a boot storm; range(1,15) | random picks a value between 1 and 14 minutes out.
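To confirm the reboot actually got scheduled, most systemd-based distributions record a pending shutdown in a file on the minion. This assumes system.reboot schedules the reboot via the shutdown command, which is how the Salt system module behaves on Linux:

cat /run/systemd/shutdown/scheduled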


vSphere Custom Images & How to Compare Image Profiles

Occasionally there is a need to create a custom ESXi image as either an installable ISO or a depot/zip bundle. For example, when setting up a new host, you may wish to automatically include specific drivers for a particular network card or storage adapter. There are a variety of ways to do this.

PowerCLI Image Builder Cmdlets

PowerCLI has been able to create custom images for many years. In this example, I plan to combine the ESXi 8.0 Update 2 image from VMware with the HPE Server addon (from https://www.hpe.com/us/en/servers/hpe-esxi.html). This specific image combination is already available directly from HPE, but the steps to manually combine the bundles should be the same if the combination is not available, for example if we wanted to include 8.0u2x (where x is a lettered patch release).

The first step is to get our two files, the stock VMware image (VMware-ESXi-8.0U2-22380479-depot.zip) and the HPE addon (HPE-802.0.0.11.5.0.6-Oct2023-Addon-depot.zip). We will add both of these depots to a PowerCLI session using the following:

Add-EsxSoftwareDepot -DepotUrl '.\VMware-ESXi-8.0U2-22380479-depot.zip','.\HPE-802.0.0.11.5.0.6-Oct2023-Addon-depot.zip'

When these depots are added, the depot URL will appear onscreen; it’s in the format zip:<localpath>depot.zip?index.xml. We’ll want to note the path listed for the HPE addon, as we will use it again shortly. With these depots added we can now query for image profiles. Only the ESXi image will have profiles, but there are likely multiple versions and we want to see what is available.

Get-EsxImageProfile

Name                           Vendor          Last Modified   Acceptance Level
----                           ------          -------------   ----------------
ESXi-8.0U2-22380479-no-tools   VMware, Inc.    9/4/2023 10:... PartnerSupported
ESXi-8.0U2-22380479-standard   VMware, Inc.    9/21/2023 12... PartnerSupported

As mentioned, multiple profiles are available: one has VMware Tools (standard) and the other does not (no-tools). We will make a copy of the standard profile:

$newProfile = New-EsxImageProfile -CloneProfile 'ESXi-8.0U2-22380479-standard' -Name 'ESXi-8.0U2-22380479_HPE-Oct2023' -Vendor 'HPE'

We will now add all of the packages from the HPE addon to the copy of our image profile. This is where we’ll need the local depot path mentioned above.

Add-EsxSoftwarePackage -ImageProfile $newProfile -SoftwarePackage (Get-EsxSoftwarePackage -SoftwareDepot zip:D:\tmp\custom-image\HPE-802.0.0.11.5.0.6-Oct2023-Addon-depot.zip?index.xml)

In this example we added all of the packages from the depot, but we could have included only a subset of specific VIBs by name if desired, as sketched below. We could have also included other VIBs from different depots (for example, from a compute vendor AND other VIBs from a storage vendor).
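For instance, to cherry-pick a couple of VIBs from the HPE depot rather than taking everything, something like the following should work (the VIB names ‘amsd’ and ‘sut’ are illustrative placeholders, not a recommendation):

$subset = Get-EsxSoftwarePackage -SoftwareDepot zip:D:\tmp\custom-image\HPE-802.0.0.11.5.0.6-Oct2023-Addon-depot.zip?index.xml | Where-Object { $_.Name -in 'amsd','sut' }
Add-EsxSoftwarePackage -ImageProfile $newProfile -SoftwarePackage $subset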

With our custom image created, combining the VMware and HPE bits, we can now export as ISO or Bundle (ZIP). In this example I’ll export both. The Bundle (ZIP) will be used for some comparisons later.

Export-EsxImageProfile -ImageProfile $newProfile -ExportToIso -FilePath 'PowerCLI_ESXi-8.0U2-22380479_HPE-Oct2023.iso'
Export-EsxImageProfile -ImageProfile $newProfile -ExportToBundle -FilePath 'PowerCLI_ESXi-8.0U2-22380479_HPE-Oct2023.zip'

vCenter Image Managed Clusters

Starting in vSphere 7, hosts in a cluster can be managed with a single image, and a custom image can be built right in the web interface. The screenshot below is from the workflow that comes up when creating a new cluster; we just need to pick the values from the provided drop-down lists.

Similar to the above PowerCLI example, we are going to create an image that combines the ESXi 8.0 U2 build with a specific HPE Vendor Add-on (802.0.0.11.5.0-6). Once the cluster creation is complete, the image can be exported from the UI. Select the ellipsis > Export, then select JSON (for a file showing the selections made), ISO (for an image that can be used for installation), or ZIP (for updating an existing installation). I’m going to download a ZIP to be used in the next step. This results in a file named OFFLINE_BUNDLE_52d9502b-7076-7cb2-49b9-cbee13c57f0a.zip.

Comparing Images

The above two processes attempted to create similar images with identical components (the same ESXi image & HPE addon). We may have a need to compare images like these, either comparing the two depot files to each other or comparing a depot file to a running ESXi host. This section will focus on those comparisons.

Since we have two ZIP archive files, the first inclination might be to simply compare the file size or MD5 checksum. However, if we look at the file size (the Length property below), we’ll notice that the files differ slightly in size. This difference can be explained by a number of things, such as the different strings used for various names.

Get-ChildItem PowerCLI*.zip,offline*.zip | Select-Object Name, Length

Name                                                       Length
----                                                       ------
PowerCLI_ESXi-8.0U2-22380479_HPE-Oct2023.zip            686582727
OFFLINE_BUNDLE_52d9502b-7076-7cb2-49b9-cbee13c57f0a.zip 686552303
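Likewise, hashing the files only proves they are not byte-for-byte identical; it says nothing about whether the VIB payload is equivalent:

Get-ChildItem PowerCLI*.zip,offline*.zip | Get-FileHash -Algorithm MD5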

What we really need to do is compare the VIB contents of these bundles to see if any files are missing or versions are inconsistent. This can be easily completed in PowerCLI. The first step is to import these depots into our session, which we can do as follows:

Add-EsxSoftwareDepot PowerCLI_ESXi-8.0U2-22380479_HPE-Oct2023.zip,OFFLINE_BUNDLE_52d9502b-7076-7cb2-49b9-cbee13c57f0a.zip

With both bundles imported, we can check and see what image profiles we have available. We should see two — one from Lifecycle Manager and the other using the name specified in our PowerCLI example. In this step we’ll also create a variable for each profile to be used later:

Get-EsxImageProfile

Name                           Vendor          Last Modified   Acceptance Level
----                           ------          -------------   ----------------
VMware Lifecycle Manager Ge... VMware, Inc.    11/20/2023 4... PartnerSupported
ESXi-8.0U2-22380479_HPE-Oct... HPE             11/20/2023 5... PartnerSupported


$ipLCM = Get-EsxImageProfile -Name 'VMware Lifecycle Manager*'
$ipPCLI = Get-EsxImageProfile -Name 'ESXi-8.0U2-2*'

If we dig into the image profiles, we’ll find that each has a VibList property that contains the included VIBs. Digging deeper, we’ll see that each VIB has a Guid that combines the VIB name and version (ex: $ipLCM.VibList.Guid will return the list for one profile; a sample row would look like VMware_bootbank_esx-base_8.0.2-0.0.22380479). Now that we have a field with details on the various VIBs, we can have PowerShell compare them. The first command below will likely return nothing; the second should return all VIBs from our bundle:

Compare-Object $ipLCM.VibList.Guid $ipPCLI.VibList.Guid

Compare-Object $ipLCM.VibList.Guid $ipPCLI.VibList.Guid -IncludeEqual

With the above, we can confirm that our two bundles (ZIP files) have the same contents.

Another question that I’ve heard: can we confirm that a running ESXi host matches this bundle, or whether any changes are required? One option is esxcli software profile update --dry-run (documented here: https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-esxi-upgrade/GUID-8F2DE2DB-5C14-4DCE-A1EB-1B08ACBC0781.html). However, that typically requires the new bundle to be copied to the host. Since we already have this bundle locally, and imported into a PowerCLI session, we can ask the ESXi host for a list of VIBs and do a comparison locally.

$esxcliVibs = (Get-EsxCli -VMHost 'test-vesx-71' -V2).software.vib.list.invoke()
Compare-Object $ipLCM.VibList.Guid $esxcliVibs.ID

The above example returns a list of VIBs from an ESXi host, then compares the ID value to the Guid from the imported image. If any discrepancies are identified, they’ll be listed. As with the earlier comparison of the two image files, we can add the -IncludeEqual switch to confirm the command is actually returning data (it will list all of the VIBs instead of nothing).
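That sanity check is the same one-liner with the extra switch:

Compare-Object $ipLCM.VibList.Guid $esxcliVibs.ID -IncludeEqual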


Keeping Linux up to date with Aria Automation Config

I have several persistent Windows & Linux VMs in my lab. The Windows VMs get their OS updates via Windows Server Update Services (WSUS) managed by group policy. This works pretty consistently and keeps everything current. The Linux VMs are a mix of Photon OS 4 and 5 as well as Ubuntu 20.04, and every time I ssh in I see a notification that updates are available. If I have a few minutes I’ll usually take the opportunity to get current… but these VMs could run for weeks without an interactive ssh login, leaving some security risk on the table.

In my lab I have an Aria Automation Config (formerly known as SaltStack Config) appliance that I’ll use occasionally. It has functionality to run scheduled jobs on managed minions, but I’d not taken the time to set that up — until today.

Inventory

The first step I wanted to tackle was an inventory of patches required by the various endpoints. Looking around at available out-of-box functions in Aria Automation Config, I found a command pkg.list_upgrades that looked promising. I ran it against all minions and then looked at the resulting output. The raw return is available as JSON, so I imported it into PowerShell (Get-Content timestamp-return.json | ConvertFrom-Json) and started looking at the details. There is an item for each minion, and each item contains a return property which looks similar to this:

$json[0].return |fl


libpq5          : 11.22-0+deb10u1
python-urllib3  : 1.24.1-1+deb10u2
python3-urllib3 : 1.24.1-1+deb10u2

For my purposes I’m mostly interested in the count of the return items… the number of packages that need to be updated. To find these counts, I came up with the following one-liner:

$json | Select-Object minion_id, has_errors, @{N='NumUpdates';E={($_.return | Get-Member -Type NoteProperty | Measure-Object).Count}} | ?{$_.NumUpdates -ne 0} |Sort-Object minion_id

minion_id                              has_errors NumUpdates
---------                              ---------- ----------
dr-extdns-21.lab.enterpriseadmins.org       False          8
h135-linux-01.lab.enterpriseadmins.org      False          7
h135-linux-02.lab.enterpriseadmins.org      False        105
net-cont-21.lab.enterpriseadmins.org        False         75
net-wanrtr-01.lab.enterpriseadmins.org      False         46
raspberrypi                                 False          3
saltmaster                                  False         34
svcs-cont-21.lab.enterpriseadmins.org       False         61

Looking at this list I can see that my systems are all over the place: some are fairly current, others are way off.

Apply the Updates

To apply updates to Linux, I first created a small state file called updates/os.sls and entered the following contents:

update_pkg:
  pkg.uptodate:
    - refresh: True

I applied this state to an Ubuntu box and, once complete, logged in to check that the system was fully patched. Running apt list --upgradable, I found some updates were still missing. I re-applied the state file, hoping that another round of updates would get me current, but the same updates were still missing. After doing some searching online, I found that I needed to include an extra line in my state file to enable dist_upgrade, so version 2 looked like this:

update_pkg:
  pkg.uptodate:
    - refresh: True
    - dist_upgrade: True

Applying this revised state to the Ubuntu VM got it all matched up with the apt list --upgradable results. I then expanded this task to a few more VMs, but ran into a problem when applying the revised version 2 of the state to a Photon OS VM. The Photon VMs returned an error message of No such option: --dist_upgrade, referring to the option I had just added to get better patching coverage.

I could have created two different state files, jobs, and associated schedules (one targeting Ubuntu and another targeting Photon, using the two versions of the state file above), but I wanted a more generic, shared update process for all Linux systems. Instead of creating duplicate states/jobs, I decided to add some if logic to my state file. The out-of-the-box sse/apache/init.sls showed a perfect example of how to accomplish this. In the final version of my state file, I check whether the OS is Photon; if so, I apply pkg.uptodate without the dist_upgrade flag, otherwise I include it. The syntax looks like:

update_pkg:
{% if grains['os'] == 'VMware Photon OS' %}
  pkg.uptodate:
    - refresh: True
{% else %}
  pkg.uptodate:
    - refresh: True
    - dist_upgrade: True
{% endif %}

Now when I apply this state to a Linux VM, it updates both system types without error. The sample apache state also has some elif examples if we need to get more specific in the future… for example, pulling in Windows patching with the same state, as sketched below. For now, the single if/else works just fine.
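If that day comes, the structure might look something like this. This is an untested sketch: it assumes the wua.uptodate state from Salt’s win_wua module is the right fit for Windows patching, so check the Salt documentation before relying on it.

update_pkg:
{% if grains['os'] == 'VMware Photon OS' %}
  pkg.uptodate:
    - refresh: True
{% elif grains['os_family'] == 'Windows' %}
  {# assumption: Windows Update handled via the win_wua state module #}
  wua.uptodate:
    - install: True
{% else %}
  pkg.uptodate:
    - refresh: True
    - dist_upgrade: True
{% endif %}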

Report on the Updates

The output from the above job which applies the updates/os.sls state returns a JSON body similar to our first reporting task. I tried importing this JSON into PowerShell to see what sort of interesting reporting was available. One set of data that we can see is the list of packages which were updated, including the old & new version numbers. Here is an example from one minion:

$json[0].return.'pkg_|-update_pkg_|-update_pkg_|-uptodate'.changes


apt                          : @{new=2.0.10; old=2.0.6}
ufw                          : @{new=0.36-6ubuntu1.1; old=0.36-6}
bolt                         : @{new=0.9.1-2~ubuntu20.04.2; old=0.8-4ubuntu1}
<list truncated for readability>

For comparison to the initial report of which updates were needed, I also wanted to summarize what was updated to see if my counts were similar. The following output is from an initial test run on a subset of minions:

$json | Select-Object Minion_id, has_errors, @{N='NumPkgs';E={ ($_.full_ret.return.'pkg_|-update_pkg_|-update_pkg_|-uptodate'.changes| Get-Member -Type NoteProperty | Measure-object).Count }} | Sort-Object minion_id

minion_id                              has_errors NumPkgs
---------                              ---------- -------
dr-extdns-21.lab.enterpriseadmins.org       False       8
h135-linux-01.lab.enterpriseadmins.org      False       7
h135-linux-02.lab.enterpriseadmins.org      False     105
net-wanrtr-01.lab.enterpriseadmins.org      False      46

Comparing the above output to the initial report we can see that all available updates were applied.

I’ve since created a schedule to run each of these jobs on a regular basis. For now, the inventory job will run daily so that I can see/track progress, and the update job will run every weekend. Hopefully the next time I ssh into a Linux VM I won’t be presented with a laundry list of required updates.


Easy wildcard certificate for home lab

In my home lab I have a container running Nginx Proxy Manager (discussed in this previous post). This proxy allows for friendlier host names and SSL for various services in the lab. Using a wildcard DNS record and wildcard SSL certificates makes for a super easy way to onboard new services.

To get started, I first needed to pick a parent domain name to use for services. I already have a DNS zone for example.com, so I decided to put these services under apps.example.com. To make this easy to manage, I created a new domain under example.com with the name apps. It has a single CNAME record of asterisk (*), and the FQDN points to the host name of my container host. Screenshot of the DNS record from my Windows DNS server below:
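The equivalent record can also be created with the DnsServer PowerShell module on the DNS server; a minimal sketch, where the container host name is a placeholder:

Add-DnsServerResourceRecordCName -ZoneName 'example.com' -Name '*.apps' -HostNameAlias 'containerhost.example.com'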

I then created a wildcard certificate for *.apps.example.com from my internal CA. There are many ways to create a certificate signing request (CSR), but since I have a lot of Aria Suite products in the lab, I like using the Aria Suite Lifecycle > Locker > Certificates > Generate CSR button. This gives me a UI to populate the fields and kicks out a single file with both the CSR & private key. I use the CSR to generate a web server certificate from my internal CA, then download the base64 certificate. I edit the resulting .cer file and append my CA’s public key to create a proper chain. Now that I have a certificate and private key, I can move into the Nginx Proxy Manager UI.
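Building that chain is just a matter of concatenating the server certificate and the CA certificate into one file, server certificate first. From PowerShell it could look like this (the file names are made up for the example):

Get-Content 'wildcard-apps.cer','lab-ca.cer' | Set-Content 'wildcard-apps-chain.cer'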

From the Nginx Proxy Manager UI, I select the SSL Certificates tab. I add a new SSL certificate and populate the required fields, screenshot below:

When I go to the Hosts > Proxy Hosts tab I can now very easily add hosts with SSL capabilities. I no longer need to make a certificate for each service or even manually create DNS records. For example, let’s say my internal IPAM solution needs a certificate. Instead of creating a ‘friendly’ DNS record and dedicated certificate, I can use this Nginx proxy and wildcard certificate. We simply add a new proxy host, enter a domain name such as ipam.apps.example.com, enter the correct host/port details, and select the correct certificate.

On the SSL tab of the new host we can pick our wildcard.apps.example.com certificate and select force SSL.

Now when I browse to http://ipam.apps.example.com/, I’m automatically redirected to the secure version of the site:
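The redirect is also easy to spot-check from a shell; with Force SSL enabled, a plain HTTP request should come back as a redirect (301) pointing at the https:// version of the URL:

curl -I http://ipam.apps.example.com/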

This does inject a new dependency — the Nginx Proxy Manager container needs to be running for me to reach these secure services — but in this case the container is running on a host that is typically online/working.
