When debugging inconsistencies between Photon OS systems, say one is failing and another is stable, it’s useful to compare their installed package versions. In one recent case, I needed a quick way to do just that from my admin workstation. Here’s how I solved it using PowerShell and the Posh-SSH module.
In this test case, both hosts have a user account with the same name/password, so only one credential was created.
# Prompt for SSH credentials
$creds = Get-Credential
# Connect to Host 1 and get package list as JSON
$host1 = '192.168.10.135'
$host1session = New-SSHSession -ComputerName $host1 -Credential $creds -AcceptKey
$host1json = (Invoke-SSHCommand -Command 'tdnf list installed --json' -SessionId $host1session.SessionId).Output | ConvertFrom-Json
# Connect to Host 2 and get package list as JSON
$host2 = '192.168.127.174'
$host2session = New-SSHSession -ComputerName $host2 -Credential $creds -AcceptKey
$host2json = (Invoke-SSHCommand -Command 'tdnf list installed --json' -SessionId $host2session.SessionId).Output | ConvertFrom-Json
# Compare the resulting package lists
$compared = Compare-Object -ReferenceObject $host1json -DifferenceObject $host2json -Property Name, Evr
# Group the results by package name and build tabular results for side-by-side compare
foreach ($thisPackage in ($compared | Group-Object -Property Name)) {
    [pscustomobject][ordered]@{
        Name   = $thisPackage.Name
        $host1 = ($thisPackage.Group | ?{$_.SideIndicator -eq '<='}).Evr
        $host2 = ($thisPackage.Group | ?{$_.SideIndicator -eq '=>'}).Evr
    }
}
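One housekeeping step not shown above: Posh-SSH sessions stay open until they are explicitly removed. A small optional addition, reusing the session variables created earlier, closes both connections once the package lists are collected:

```powershell
# Close both SSH sessions now that the package lists have been retrieved
Remove-SSHSession -SessionId $host1session.SessionId | Out-Null
Remove-SSHSession -SessionId $host2session.SessionId | Out-Null
```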
The script gets a list of all installed packages from each host as JSON (using tdnf list installed --json), then converts the JSON output to a PowerShell object. The two lists of installed packages are then compared using Compare-Object. Finally, we loop through each unique package and create a new object to compare the versions side by side.
I’ve included the first 10 rows of output below for reference.
Looking at this output, we can see which packages are different between our two hosts.
Conclusion
Comparing installed packages across Photon OS systems can be an invaluable troubleshooting and auditing tool – especially when dealing with configuration drift, unexpected behavior, or undocumented changes. By using PowerShell and the Posh-SSH module, you can quickly automate the comparison process without needing to log in to each system manually. Hopefully, this gives you a solid starting point for your own comparisons and debugging tasks.
vCenter Server 8.0 allows administrators to federate identity with Entra ID (formerly Azure AD), enabling seamless SSO and MFA. However, integrating this setup with automation tools like PowerCLI introduces a few challenges. This guide walks through enabling and using PowerCLI with federated logins.
After enabling this federated identity feature, a few additional considerations are required when connecting with PowerCLI. In most Entra ID environments, multifactor authentication is enforced, for example via a conditional access policy. As such, attempting to log in with just a username and password will fail. Here is a sample error response:
> Connect-VIServer vc3.example.com -User h163-user2@lab.enterpriseadmins.org -Password VMware1!
Connect-VIServer : 4/29/2025 6:29:26 PM Connect-VIServer Cannot complete login due to an incorrect user name or password.
At line:1 char:1
+ Connect-VIServer vc3.example.com -User h163-user2@lab.enterpriseadmin ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Connect-VIServer], InvalidLogin
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_SoapException,VMware.VimAutomation.ViCore.Cmdlets.Commands.ConnectVIServer
The good news is that, with a little setup, we can still allow clients/end users to log in with their federated identities.
Administrator / Grant access to PowerCLI User
As an administrator, we’ll create a new OAuth2 client. We will share these client details with the user who wishes to use PowerCLI. In the codeblock below, we’ll use splatting to make the code a bit more readable.
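The codeblock referenced here looks something like the following sketch. The ClientId value matches the one shared later in this post; the RedirectUrl (and its localhost port) is an example placeholder — check Get-Help New-VIOAuth2Client in your PowerCLI version for the exact parameter set.

```powershell
# Splat the parameters for readability
$clientParams = @{
    ClientId    = 'h366-powercli-native-Brian'
    RedirectUrl = 'http://localhost:8844/auth'   # example redirect; any free local port works
}

# No Secret parameter specified, so one will be generated automatically
$newClient = New-VIOAuth2Client @clientParams
```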
In the above example, we assigned the output of New-VIOAuth2Client to a variable and did not specify a Secret parameter. With this configuration, a secret will be automatically generated, but that value is not returned in the default output of the cmdlet. We’ll use $newClient.secret to view the new secret of: s1A9RxZ0FbBEGoMplD0HcbQITBODtX85. We’ll need to share the ClientID and Secret value with the person wishing to authenticate.
Client / PowerCLI User
In the step above, our administrator created a new OAuth2 client (via New-VIOAuth2Client) for us and shared the following details:
ClientID = h366-powercli-native-Brian
Secret = s1A9RxZ0FbBEGoMplD0HcbQITBODtX85
We’ll now use those values to login to our vCenter Server using PowerCLI.
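A sketch of that login step follows, using the OAuth2 security context cmdlet from PowerCLI's federated-login flow. Treat the cmdlet parameters as assumptions to verify against your PowerCLI version's documentation; the ClientID and Secret are the values shared by the administrator above.

```powershell
# Build an OAuth2 security context; this launches the default browser
# so the user can complete the Entra ID (including MFA) login flow
$oauthSecContext = New-OAuthSecurityContext `
    -ClientId 'h366-powercli-native-Brian' `
    -ClientSecret 's1A9RxZ0FbBEGoMplD0HcbQITBODtX85'
```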
This results in our default web browser opening to an Azure / Entra AD login page. After successfully entering our credentials, we are redirected to a page that looks like the following image:
The text states: PowerCLI authenticated successfully. Please continue in the PowerShell console. You can close this window now. If you look closely at the URL, you’ll note the page is the RedirectUrl we specified above.
We’ll now take the $oauthSecContext returned from the previous codeblock, use it to create a $samlSecContext, and use that to connect to our vCenter.
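A sketch of that exchange is below, assuming the New-VISamlSecurityContext cmdlet and using the vCenter name from the earlier error example; verify the cmdlet names against your PowerCLI documentation.

```powershell
# Exchange the OAuth2 token for a SAML token that vCenter understands
$samlSecContext = New-VISamlSecurityContext -VCenterServer 'vc3.example.com' -OAuthSecurityContext $oauthSecContext

# Connect using the SAML security context instead of a username/password
Connect-VIServer -Server 'vc3.example.com' -SamlSecurityContext $samlSecContext
```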
The above commands return output confirming a successful login:
Name Port User
---- ---- ----
test-vcsa-03.lab.enterprisead… 443 LAB.ENTERPRISEADMINS.ORG\h163…
We can now run PowerCLI cmdlets using our federated identity.
Conclusion
Using Entra ID federation with vCenter Server 8.0 is a great way to step up your security game, especially with MFA in the mix. As we’ve seen, it can trip up tools like PowerCLI if you’re expecting username and password logins.
Thankfully, with a little setup (like creating an OAuth client and using the right security contexts), we can still automate tasks and scripts using federated identity. If this is something your team will do often, it’s worth putting together a quick internal guide or template for setting up new PowerCLI clients. It’ll save time and keep everyone on the same page.
I powered down a nested host, edited the virtual machine settings, and added the advanced option ethernet0.linkspeed, setting its value to 25000 as pictured below.
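For reference, the resulting entry in the nested host's .vmx file looks like the line below (the value is in Mbps):

```
ethernet0.linkspeed = "25000"
```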
After powering the virtual machine back on, I checked the ESXi network settings in vCenter and confirmed that the updated link speed was reflected, pictured below.
Expanding the vmnic0 details, I could see the adapter was still of type vmxnet3, and the link speed was correctly set to the specified value.
Conclusion
Updating virtual machine link speeds works for nested ESXi hosts as well. The valid range for linkspeed is 10000 to 65000, which makes common speeds like 25 Gbps and 40 Gbps possible.
Two years ago, I documented the integration of Pi-Hole with VMware Aria Operations using the Aria Operations Management Pack Builder (MPB) and Pi-Hole’s API. Since then, both tools have undergone significant updates: MPB has a revamped 2.0 interface, and Pi-Hole 6 introduced a completely new API with formal documentation and session-based authentication. In this updated guide, I’ll walk you through building a Pi-Hole Management Pack using these modern tools, highlighting key changes and providing a detailed, hands-on tutorial for tech enthusiasts and system admins.
What’s New with the Pi-Hole API?
Formal Documentation and Accessibility
With the release of Pi-Hole 6, the API has been overhauled to be more robust and user-friendly. Official documentation is now available at http://pi.hole/api/docs, which resolves to your local Pi-Hole server to ensure version compatibility. If you haven’t installed Pi-Hole yet, you can explore the documentation for all branches online, such as the Pi-Hole API documentation (master branch). Additionally, Pi-Hole provides a built-in UI at https://YourPi-holeServerName/api/docs/ where you can test API requests directly—a fantastic feature for developers.
Session ID Authentication: A Shift from Tokens
One major change in Pi-Hole 6 is the move from token-based to session ID-based authentication. I discovered this shift when my existing Management Pack stopped collecting data after upgrading to version 6. The API authentication documentation outlines several authentication options; I opted for the App Password method. Here’s how to set it up:
In the Pi-Hole Admin UI, navigate to the Web Interface / API section of Settings, make sure the interface is toggled from Basic to Expert, and click Configure App Password.
The UI will display a one-time App Password—copy it immediately, as it won’t be shown again. Store it securely (e.g., in a password manager).
To verify the new authentication method, I used the API UI (https://YourPi-holeServerName/api/docs/) to test a request with my App Password. After clicking Try, I received a session.sid, confirming successful authentication.
I then tested the /stats/summary endpoint (used in my original article) with the session ID, ensuring it returned the expected data. I now had a basic enough understanding of the API to apply it to building a Management Pack.
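Outside the built-in API UI, the same checks can be scripted. Below is a sketch using curl and jq; the hostname and App Password are placeholders, and -k skips certificate validation for a self-signed Pi-Hole certificate.

```
# 1. Authenticate: POST the App Password to /auth and capture the session ID
SID=$(curl -sk -X POST https://pihole.example.com/api/auth \
  -H 'Content-Type: application/json' \
  -d '{"password":"YourAppPasswordHere"}' | jq -r '.session.sid')

# 2. Use the session ID (X-FTL-SID header) to call an authenticated endpoint
curl -sk https://pihole.example.com/api/stats/summary -H "X-FTL-SID: ${SID}"

# 3. Release the session when finished
curl -sk -X DELETE https://pihole.example.com/api/auth -H "X-FTL-SID: ${SID}"
```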
Released in October 2024, version 2.0 of the Aria Operations Management Pack Builder brings a refreshed UI and improved functionality. While MPs from version 1 can be imported, the Pi-Hole API changes require a fresh build for this example. I’ll assume you’ve already deployed the MPB appliance (refer to the official MPB deployment guide) and are ready to start.
Step 1: Configure the Connection in MPB
First, connect MPB to your Aria Operations server:
Access the MPB Web UI.
Navigate to the VMware Aria Operations Connection tab.
Create and test a connection to your Aria Operations server.
With the connection established, you’re ready to build your Management Pack.
Step 2: Define the Source
The Source defines the environment (your Pi-Hole server) that MPB will interact with. Here’s how to set it up:
In the Designs tab, click Create.
Name your Management Pack (e.g., “Pi-Hole MP”) and add a description, then click Save.
Configure the Source:
Enter the Hostname, Port, SSL Configuration, and Base API Path of your Pi-Hole server.
Click Next.
Set Up Authentication
Pi-Hole 6 uses session-based authentication, so we’ll configure MPB accordingly:
On the Authentication tab, change the dropdown from Basic to Custom.
Create a label called “App Password” and paste the App Password you generated earlier into the value field.
Check the Sensitive box to secure the password, then click Add Field. This creates a variable ${authentication.credentials.app_password} for use in the MP.
Enable the Session Authentication toggle and click Next.
Configure the Session API Request
From the Pi-Hole API documentation, we know the authentication endpoint is a POST request to /auth. Configure it on the Get Session API Request tab as follows:
Select POST from the HTTP Method dropdown list.
In the API Path field, enter auth.
We saw in the API documentation that we need to pass our App Password in the Body of our request in the JSON Format of:
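Per the /auth endpoint documentation, that body is simply a single password field:

```
{"password":"<app password>"}
```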
So on the Get Session Request Advanced tab we will use the variable created earlier to pass the App Password (note the use of double-quotes):
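With the MPB credential variable created earlier substituted in, the body entered on the Advanced tab looks like:

```
{"password":"${authentication.credentials.app_password}"}
```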
Click Request to test the configuration. If successful, you’ll see a session.sid in the response.
Select the session.sid field from the response body and copy the session ID variable for later use.
The Pi-Hole API documentation lists four ways to use the session ID when making requests.
We’ll use the X-FTL-SID header option.
In the Global Request Settings tab, add a header:
Header Name: X-FTL-SID
Header Value: ${session.sid}
This ensures the session ID is included in all API requests.
In the Release Session Request tab:
Enable the toggle to request a release
Choose an HTTP Method of DELETE
Set the API Path to auth
We do not add any additional values on the “Release Session Request Advanced” tab. Click Next.
On the Make Release Session Request tab click the Request button to verify a successful session release.
To verify the setup, test the /stats/summary endpoint:
On the Test Connection Request tab, set the API Path to stats/summary.
Click Request and confirm that summary data (e.g., total queries, blocked queries) is returned.
Click Save to complete the Source configuration.
Step 3: Create API Requests
With a Source section successfully tested against our reference environment, we move to the Requests section of the design. This is where we define the API requests that collect the data our MP uses. Since the point is just to demonstrate the concept, we’ll create two requests, one named “summary” and one named “hostname”.
From the previous blog, we determined that the /stats/summary API endpoint returns most of the data we’d use in our MP, and we used it in the Source section to test our reference environment. If you read the previous blog, you may recall that no endpoint returned the Pi-Hole server name. Thankfully, that has been resolved with the /info/host endpoint, so we’ll use it as our second request for this test.
Request 1: Summary Statistics
In the Requests tab, click Add API Request.
Skip chaining (click Next).
Set the HTTP Method to GET, the API Path to stats/summary, and keep the default Request Name.
No advanced options are needed—click Next.
On the Preview screen, click Request to confirm data is returned (e.g., total queries, blocked queries).
Request 2: Hostname
Repeat the steps above, setting the API Path to info/host.
Test the request to ensure data (e.g., host.uname.nodename) is returned.
Step 4: Define Objects
In the Objects section, select the data from your API requests to include in the Management Pack:
Click Add New Object.
Set the Object Type to “Pi-Hole Server” and click Next.
On the Attributes from the API Request tab:
Expand the stats/summary request and select key metrics like:
queries.total
queries.blocked
queries.percent_blocked
queries.cached
Expand the info/host request and select host.uname.nodename for the server name.
On the Properties and Metrics tab, adjust labels and data types as needed (e.g., use “Count” for total queries, “%” for percent blocked). For more on KPIs, check VMware’s KPI documentation.
Set the Object Instance Name and Object Identifiers to “Pi-Hole Server Name” for clarity.
Step 5: Configure and Build the Management Pack
With the design complete, move to the Configuration section to review settings like MP version and labels. Then:
In the Build section, click Perform Collection to run a test collection and verify the results.
Click Build to generate the .pak file.
New in MPB 2.0: Deploy the Management Pack directly to your Aria Operations server by selecting it from the list and clicking Deploy.
Step 6: Configure the Management Pack in Aria Operations
Install the .pak file in Aria Operations as you would any other Management Pack:
Navigate to Administration > Integrations > Repositories.
Enable the new Management Pack
Add an account for each Pi-Hole server, including the App Password as a credential in Aria Operations.
Within minutes, you’ll be tracking Pi-Hole performance metrics—like total queries and blocked queries—directly in Aria Operations.
Conclusion: Monitor Pi-Hole Like a Pro
By leveraging the updated Pi-Hole API and Aria Operations MPB 2.0, you can seamlessly integrate Pi-Hole metrics into your monitoring environment. This tutorial demonstrates a simple yet powerful Management Pack, but the possibilities are endless—what other applications would you like to monitor with Aria Operations?
I’d love to hear your thoughts! Drop a comment below with your experiences building Management Packs or any other apps you’d like to integrate with Aria Operations. If you found this guide helpful, share it with your network and stay tuned for more tech tutorials.
I have an older Intel NUC in my lab, and although it’s aging, it still serves a purpose, and I plan to hang on to it for a little while longer. This post will outline some issues I encountered while recently migrating from a USB boot device to a more permanent option. As described extensively in this knowledge base article: https://knowledge.broadcom.com/external/article/317631/sd-cardusb-boot-device-revised-guidance.html, USB devices are no longer recommended boot media due to endurance concerns. In addition, this host recently started throwing an error message:
Lost connectivity to the device mpx.vmhba32:C0:T0:L0 backing the boot filesystem /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0. As a result, host configuration changes will not be saved to persistent storage.
This message appeared on the host object in vCenter Server. I decided this would be a good time to move the boot device to a more durable media. The host had a local disk, which contained a single VMFS volume where I stored a VM containing some backups. I moved this VM to a shared datastore for safe keeping and proceeded to delete the VMFS volume. I wanted to re-install ESXi and specify this device, and not having a VMFS volume makes me more confident when selecting the disk during the ESXi install.
Creating the boot media
For this ESXi host, I knew that I would need the latest ESXi base image (8.0u3d), the Synology NFS Plug-in for VAAI, and the USB NIC Fling Driver. Instead of just installing ESXi and then adding packages, or using New-ImageBundle, I decided to turn to vCenter Server Lifecycle Manager for help. I first created a new, empty cluster object. I then selected the Updates tab, selected ‘manage with a single image’, and then ‘setup image manually’. I selected the required ESXi version and additional components, clicked Save, and finally ‘finish image setup’. Once complete, I was able to select the ‘…’ and ‘Export’ options, pictured below.
This allowed me to export the image as an ISO image, pictured below:
With the ISO image in hand, I used Rufus to write the ISO image to a USB drive to use as the installation media.
Installing ESXi
Since I only needed to install ESXi on a single host, I decided to do so manually / interactively. Knowing that this was an old host whose CPU was no longer supported on the HCL, I pressed SHIFT+O (the letter O, not the number zero) during bootup to add a couple of boot options:
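I didn’t capture the exact line, but based on the kernel option used later in this post to satisfy Lifecycle Manager, the appended boot option would look something like the following sketch (added to the end of the existing boot line shown at the SHIFT+O prompt):

```
runweasel allowLegacyCPU=true
```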
The install went well; I was able to select my empty local disk as the installation target, and the system booted up fine afterwards. I noticed I now had a datastore1 on this host which was 32GB smaller than the original VMFS volume.
Configuring USB NIC Fling Driver
My USB NIC was recognized immediately as well, since I had included the driver in the custom image. I added the host to a distributed virtual switch, and mapped uplinks to the appropriate physical NICs, but on reboot the vusb0 device was no longer in use by Uplink 2.
Some of my notes mentioned that I had previously added some lines to the /etc/rc.local.d/local.sh script to handle this, although I didn’t list which commands. Thankfully, I was able to get the system to boot from the failing USB device and review this file. I’ve included the code below:
# Wait (up to ~200 seconds) for the vusb0 NIC link to come up
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
    sleep 10
    count=$(( count + 1 ))
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done
# Re-attach vusb0 as an uplink on DVPort 308 of the distributed switch
esxcfg-vswitch -P vusb0 -V 308 30-Greenfield-DVS
/bin/vim-cmd internalsvc/refresh_network
The esxcfg-vswitch help states that the -P and -V options are used as follows:
-V|--dvp=dvport Specify a DVPort Id for the operation.
-P|--add-dvp-uplink=uplink Add an uplink to a DVPort on a DVSwitch.
Must specify DVPort Id.
The physical uplink I wanted to add was vusb0, and the DVPort Id for the operation was 308, which could be found on the distributed switch > Ports tab when filtering the ‘connectee’ column for the specific host in question, pictured below:
Now on system reboot, the vusb0 uplink correctly connects to the expected distributed switch.
Lifecycle Manager – Host is not compatible with the image
Once I had the host networking situated, I wanted to verify that vCenter Lifecycle Manager agreed that my host was up to date with the latest image. I was surprised to see that the system said The CPU on the host is not supported by the image. Please refer to KB 82794 for more details.
I knew these CPUs were unsupported, but had expected a less severe The CPU on this host may not be supported in future ESXi releases, which is what I had observed prior to the host rebuild. After some searching, I found this thread: https://community.broadcom.com/vmware-cloud-foundation/discussion/syntax-for-an-upgrade-cmd-to-ignore-cpu-requirements, which proposed edits to /bootbank/boot.cfg, specifically adding the allowLegacyCPU=true flag to the end of the kernelopt= line. This resolved my issue and allows me to continue using this older system.
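As a sketch, the edited line in /bootbank/boot.cfg ends up looking something like the following (the existing kernelopt values on your host will differ; just append the flag to whatever is already there):

```
kernelopt=<existing options> allowLegacyCPU=true
```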
Conclusion
This migration process highlights the challenges of maintaining older ESXi hosts while ensuring compatibility. Moving from USB-based boot devices to more durable storage is a critical step, especially as support is phased out for USB/SD boot devices. Leveraging vCenter Lifecycle Manager simplifies image management, though workarounds (such as allowLegacyCPU=true) may be needed for legacy hardware.