VMware Certificate Authority (VMCA) is used within vSphere to secure connections between vCenter Server and ESXi hosts, but what if we need certificates for other systems? In a previous post, I used group policy to add the VMCA Root CA from a vCenter Server to the trusted enterprise root CAs for all systems in my lab. This enables all lab workstations and jump servers to trust certificates issued by VMCA, for example when connecting to ESXi hosts. While creating that post, I noticed an ‘issue new leaf certificate’ option in the vCenter UI that I had not seen before:
So can we use the VMCA to issue certificates to non-vSphere components? This post will explore that use case.
Generate a Certificate Signing Request (CSR)
I created a CSR for an nginx web server. There are many ways to create a signing request, such as openssl at the command line or tools like https://csrgenerator.com/. For my test, I used the web interface in Aria Suite Lifecycle > Locker > Certificates to request a CSR. This created a single file containing both the signing request and the key (similar to the csrgenerator.com website). I then copied the appropriate pieces into separate files (one .key and one .cer).
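For reference, an equivalent request could be generated at the command line with something like the following (the key size and subject are examples only, not the values I used):

# Example only: generate a new 2048-bit key and CSR; adjust the subject for your host
openssl req -new -newkey rsa:2048 -nodes -keyout nginx.key -out nginx.csr -subj "/CN=nginx.example.com"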
Creating the Certificate
In vCenter Server > Administration > Certificates > Certificate Management > Trusted Root tab, I selected the ‘issue new leaf certificate’ link (pictured above). This presented a dialog box to Upload CSR.
I browsed to the CSR file created and selected Next. Completing this workflow provided two file downloads — 15679973-e9ec-4625-a6aa-5437dc0ef6a8.root.crt and 15679973-e9ec-4625-a6aa-5437dc0ef6a8.leaf.crt. The root certificate is the VMCA root certificate that was deployed via group policy in the previous article. The leaf certificate is the new certificate file created for an nginx webserver.
Applying the Certificate
In my nginx configuration, I provide the key file (created in conjunction with the CSR) and the leaf certificate (created from the vCenter Server interface). Accessing the nginx webserver, the browser shows that the connection is secure:
Digging into the certificate details, we can see that our webserver certificate was issued by VMCA.
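For reference, the relevant portion of the nginx configuration was along these lines (the server name and file paths are placeholders, not my actual lab values):

server {
    listen 443 ssl;
    server_name nginx.example.com;                 # placeholder hostname
    ssl_certificate     /etc/nginx/ssl/leaf.crt;   # leaf certificate downloaded from vCenter
    ssl_certificate_key /etc/nginx/ssl/nginx.key;  # key generated alongside the CSR
}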
Conclusion
I had not seen this ‘issue new leaf certificate’ link before and was surprised how easy it was to use VMCA for other non-VMware based services. I could see using this again in a lab environment where a certificate might be necessary, but other PKI solutions are not available.
In my lab I have a jump box that I typically use for management. It is joined to Active Directory and automatically trusts my primary CA root certificate. This CA issued the Machine SSL certificate for my vCenter Server, so when I go to that system I do not get a certificate warning. However, if I go to an ESXi host, which gets its cert from the self-signed VMCA, I do get a certificate warning. There is another lab I connect to that has a different certificate authority, and I get certificate warnings for all the systems in that environment. I decided to implement a group policy to add these additional root CAs to the systems in my lab.
Gathering the root certificates
For the separately managed lab I connect to, the certificate authority runs on Windows. If I navigate to https://ca.other.domain/certsrv/, I can use the “Download a CA certificate, certificate chain, or CRL” link to download the Base 64 encoded CA certificate. This downloads a .cer file that I can import as a trusted root authority to start trusting that lab’s systems.
For the VMCA certificate, there were a few additional steps. On the landing page for https://vc1.example.com, there is a “Download trusted root CA certificates” link.
This downloads a zip file, which in my environment contains a handful of certificate files (as well as some certificate revocation lists). I don’t want to import all of these certificates, so I need to figure out which one(s) are required.
In the vCenter Server > Administration > Certificates > Certificate Management > Trusted Root tab, I can see a similarly sized list of certificates. Some of these are external certificate authorities I’ve created over the years. There is an old entry for a decommissioned external PSC, and another for a vCenter that was once in an enhanced linked mode relationship. The certificate that I believe I need is the first one, named VMCA_ROOT_CERT. Its issue and expiration dates match those of some of the other certs in that list.
If I expand the details of the VMCA_ROOT_CERT cert, I can see a serial number which appears to be unique.
I now need to cross-reference the serial numbers of the certificate files I downloaded with this specific cert that I want. Since there are a handful of files, I turned to PowerShell to help me decode each of the certificate files.
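Something along these lines does the job (a reconstruction of what I ran; it assumes the downloaded zip was extracted to .\certs):

# Assumed: the downloaded zip was extracted to .\certs
Get-ChildItem -Path .\certs -Filter *.crt | ForEach-Object {
    # Load each certificate file and surface its subject and serial number
    $cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $_.FullName
    [pscustomobject]@{
        File         = $_.Name
        Subject      = $cert.Subject
        SerialNumber = $cert.SerialNumber
    }
}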
Unfortunately, I don’t see the desired serial number in that list. In fact, the numbers appear to be completely different, as if the UI is showing a decimal number (numbers only) while PowerShell decoded a hexadecimal value (based on the mix of numbers and letters).
To test that out, I converted the web interface value to hexadecimal and it was F0B6452CFEF6D1A6. I do have a *D1A6 serial number in file 8ff1896d.0.crt, which confirms this is the interesting file containing the proper root certificate.
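If you would rather not use a separate calculator, the same conversion can be done in PowerShell with something like this (the Read-Host prompt is just an illustration):

# Paste the decimal serial number shown in the vCenter UI when prompted
$decimalSerial = Read-Host 'Serial number from the vCenter UI'
[bigint]::Parse($decimalSerial).ToString('X')   # hex output; may include a leading 0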
Creating the Group Policy
It’s been years since I’ve worked with group policy. I created a new group policy object (GPO) named Trusted Root CAs. In the properties of this new GPO, I checked the box to ‘Disable User Configuration settings’. This policy will only include the desired trusted root CAs, which are computer-based settings, so I don’t need to include anything related to users in this policy.
I then browsed to the appropriate portion of the group policy to add these certificates:
Computer Configuration
Policies
Windows Settings
Security Settings
Public Key Policies
I then right-clicked on Trusted Root Certification Authorities, selected Import, and followed the wizard to import these as computer-based certificates in the ‘Trusted Root Certification Authorities’ store.
I did this for both the Windows-based CA certificate and the vCenter Server VMCA certificate.
Testing the policy
I then linked the new policy to a test organizational unit, which only contained a single test machine. To ensure that the current policies were applied correctly, I ran the following as an administrator:
gpupdate /force
I then opened the Microsoft Edge browser and attempted to connect to a vCenter Server in the separate / standalone lab environment and was not prompted with a certificate warning. I then attempted to connect to an ESXi host in my main lab, and again was not prompted with a certificate warning.
Applying the settings to all systems
After testing to confirm that I had the correct root CA certificates in the policy, I linked the policy to the root of the domain. I then tested connecting to systems signed by these other roots on my primary admin jump box and did not see certificate warnings there either.
Conclusion
By centrally managing trusted root CAs through Group Policy, I’ve eliminated the certificate warnings that were slowing me down when connecting to vCenter Servers and ESXi hosts across multiple labs. This approach also ensures consistency for all domain-joined management systems without needing to import certificates manually on each machine.
Manually collecting log bundles from vCenter and hosts can be repetitive and time-consuming. Here’s how I automated the process with PowerCLI. I wanted a short script that would let me pick a vCenter Server, then handle the relevant tasks:
Export the vCenter log bundle
Pick any single cluster & export a log bundle from the first three hosts in the cluster
Troubleshooting: Underlying connection was closed
When I attempted to use the PowerCLI Get-Log command to generate a bundle, I was met with the error: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. This led me to a knowledge base article that resolved the issue:
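The fix was a PowerCLI configuration change along the following lines (reconstructed from that article, so double-check it against the KB before relying on it):

# Raise the web operation timeout from the 300-second default to 2700 seconds (45 minutes)
Set-PowerCLIConfiguration -Scope User -WebOperationTimeoutSeconds 2700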
This sets the timeout for such operations from the default 5 minutes to 45 minutes (2700 seconds). After setting the timeout at the user scope, I needed to exit and relaunch my PowerShell window for the setting to take effect.
Sample Script
The following sample script assumes you are already connected to the vCenter Server from which you’d like to collect logs. It exports the vCenter log bundle, then selects a single cluster at random, takes the first three hosts from that cluster, and creates a log bundle for each of them. Note: if the selected cluster has fewer than three hosts, fewer bundles will be created.
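A minimal sketch of that script (the destination folder is an assumption; point it wherever you want the bundles to land):

# Assumed output folder for all bundles
$dest = 'C:\LogBundles'
New-Item -ItemType Directory -Path $dest -Force | Out-Null

# Export the vCenter Server log bundle from the connected server
Get-Log -Bundle -DestinationPath $dest

# Pick one cluster at random, take its first three hosts, and export a bundle from each
$cluster = Get-Cluster | Get-Random
$cluster | Get-VMHost | Select-Object -First 3 | ForEach-Object {
    Get-Log -VMHost $_ -Bundle -DestinationPath $dest
}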
Conclusion
By increasing the PowerCLI timeout and using just a few lines of script, what was once a repetitive monthly manual process can now be automated and kicked off in just a few minutes. The log bundles will still take some time to generate, but this will happen in the background without attention. This approach not only saves time but also ensures consistency in how log bundles are collected for troubleshooting or archival purposes.
I was recently testing an application and wanted to see how it would behave if its connection to vCenter Server was interrupted. Would the process auto-recover? Would I need to restart a service? To find out, I simulated a connection failure using the built-in firewall on Photon OS. This type of testing can be helpful when validating resiliency, troubleshooting connection handling, or preparing for real-world outages.
The application was running on a Photon OS appliance, so I checked to see if the native iptables firewall was enabled using the following command:
systemctl status iptables.service
This returned a confirmation that the status was loaded/active, pictured below.
Since the firewall was enabled, I checked its configuration using the command:
iptables --list --line-numbers
This lists all the rules and their associated line numbers in the configuration. At the end of the output, I could see the OUTPUT chain, which effectively allows all outbound traffic (based on rule 7).
Chain OUTPUT (policy DROP)
num target prot opt source destination
1 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
2 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
3 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
4 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
5 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
7 ACCEPT all -- anywhere anywhere
8 ACCEPT icmp -- anywhere anywhere icmp echo-reply
9 ACCEPT icmp -- anywhere anywhere icmp echo-reply
For my testing, I only needed to insert a rule above number 7 that would DROP requests to the specific vCenter Server the application was communicating with. I waited for the application to start, then added this firewall rule to drop requests to the vCenter Server, effectively simulating a network interruption:
iptables -I OUTPUT 7 -d 192.168.127.40 -j DROP
Caution: these changes are meant to be temporary and should only be used in test environments.
I then ran the same iptables --list --line-numbers command and confirmed that rule 7 was now my DROP entry and the previous rule 7 (that allowed all traffic) shifted down to rule number 8.
Finally, after testing, I could remove the rule:
iptables -D OUTPUT 7
Conclusion
Using iptables makes it easy to simulate a loss of connectivity to vCenter (or any other target system) without touching physical network infrastructure. This approach is lightweight, repeatable, and useful for testing application resiliency or recovery processes. Just remember that iptables changes made this way are not persistent across reboots, so they’re ideal for temporary testing in a lab or non-production environment.
While troubleshooting a vCenter issue, I needed to replicate an environment where the vCenter Server had no internet access. This scenario is often seen in production but can be tricky to reproduce in a lab. This post will outline how I solved the issue quickly using incorrect default gateways and static routes.
Configure vCenter for No Internet Access
I deployed a new vCenter Server, using normal processes & correct networking settings. The resulting vCenter could reach the internet.
From the virtual appliance management interface (VAMI, port 5480), I selected the Networking option in the left navigation. From there I clicked the ‘Edit’ button in the top right pane. This VCSA only has one physical adapter, so I selected ‘Next’ and changed the IPv4 gateway on the second page to 192.168.0.2. In my lab, this is an unused IP address — the default gateway is actually 192.168.10.1.
I then continued through the workflow and acknowledged that I was ready to continue.
After making this change, I could ping the vCenter from the local subnet, but not from my admin workstation, which was the expected behavior.
I then modified the /etc/systemd/network/10-eth0.network file to append the following text:
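The appended block looked roughly like this (it uses the real lab gateway of 192.168.10.1 noted earlier):

[Route]
# 192.168.10.1 is the lab's actual default gateway
Gateway=192.168.10.1
Destination=192.168.0.0/16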
This adds a static route so the vCenter Server knows how to reach devices in my lab’s 192.168.0.0/16 network, but nothing beyond it. To make this effective, I ran the following command:
systemctl restart systemd-networkd.service
After restarting networkd, I was able to ping the vCenter from both the local subnet and my admin workstation. However, from the vCenter I was unable to ping non-192.168.x.x addresses. This was the ideal configuration for my specific test.
Restricting Internet Access on a Windows Jump Host
Preventing the vCenter Server from reaching the internet was exactly what I needed. However, I decided to also set up a Windows Server-based jump host to connect to this vCenter, and I wanted it to be restricted from accessing the internet as well. I used the same process, but Windows allowed me to save the network configuration without providing a default gateway at all. To create a persistent static route, I used the following command:
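It was something along these lines (assuming the jump host sits behind the same 192.168.10.1 lab gateway; substitute the real next hop for your subnet):

rem 192.168.10.1 is an assumed gateway; use the jump host's actual gateway
route -p ADD 192.168.0.0 MASK 255.255.0.0 192.168.10.1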
With this route defined, the jump host was able to reach internal addresses but not external addresses.
Adding Selective Internet Access with a Proxy
After blocking all internet access, there were a few domains I still wanted to reach. To solve this, I deployed an Ubuntu Linux VM and turned it into a proxy server by installing one package:
sudo apt install tinyproxy
I selected tinyproxy as it is lightweight, simple, and requires minimal configuration. I edited the /etc/tinyproxy/tinyproxy.conf file, removing a few comments to enable the following settings:
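Roughly these directives (the Allow range reflects my 192.168.0.0/16 lab network; other defaults such as Port 8888 were left in place):

# Accept proxy requests from anything in the lab network (assumed range)
Allow 192.168.0.0/16
# Enable domain filtering with a default-deny posture
Filter "/etc/tinyproxy/filter"
FilterDefaultDeny Yes
# Log requests so attempted domains can be reviewed later
LogFile "/var/log/tinyproxy/tinyproxy.log"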
This allows the proxy server to accept requests from any device in my lab, enables domain-level filtering, and denies all requests by default. That lets me selectively enable specific domains as needed, with logging to see which domains are being attempted. I can allow a domain by adding it to the /etc/tinyproxy/filter file and then restarting the service with sudo service tinyproxy restart. To review which domains are being attempted, I just run the command:
sudo tail -f /var/log/tinyproxy/tinyproxy.log
I can configure the jump box or vCenter server to use this proxy by specifying its IP address and the default proxy port of 8888 (configurable in the tinyproxy.conf file).
Conclusion
This setup provided a flexible way to test how vCenter and supporting systems behave in restricted environments. By combining static routes with a filtered proxy, I could mimic a realistic enterprise scenario where internet access is tightly controlled—without losing the ability to selectively allow required domains.