Keep it secure: Automate vCenter Server SSL Certificate replacement

Validity periods for SSL certificates keep shrinking, requiring more frequent certificate replacements. A task that once only needed to be done every three or so years is now required every year. With this increased frequency, it is common to want to automate the task. This article will show how to request and replace a vCenter Server certificate using the vSphere API and PowerCLI.

Background

Before we start replacing certificates, it is important to understand the various options available for managing vSphere certificates. In this article I’ll focus on automating the steps for Hybrid Mode, where we replace only the certificate used by vCenter Server and let the default VMCA configuration handle all the ESXi hosts. The various options are well documented here: https://blogs.vmware.com/vsphere/2020/04/vsphere-7-certificate-management.html.

The GUI Way

Now that we know which certificate replacement method we want to use, we’ll first explore how to complete the replacement in the HTML5 UI. We’ll navigate to vSphere Client > Administration > Certificates > Certificate Management. From here we can see the existing Machine_Cert in use, which expires in November 2023.

vCenter Server HTML5 UI Machine_Cert

In the tile with our certificate detail, we see an Actions drop-down, which contains choices to Renew, Import and Replace Certificate, and Generate Certificate Signing Request (CSR). In the HTML5 UI this is typically a two-step process. First, I create a CSR, which prompts me to enter information about the certificate I’d like to request.

vCenter Server – Generate CSR form

Once this is complete, I’m provided with a long string of text (the CSR), which I need to send to my certificate management / crypto folks for processing. They will return a certificate to me. You may have noticed that I never see the private key during this process; that is because vCenter keeps this information private. Once I have my certificate, I can go back to the tile and provide the certificate files. The private key is already there.

Replace certificate screen, showing private key is embedded
Step 2, showing the Machine SSL Certificate and Trusted root certificates being provided.
UI showing success and that vCenter server services are restarting

It can take some time for services to restart, but I should have another year before I need to replace this certificate again.

This was very straightforward in the UI; the next sections will cover the same process using the vSphere API.

Automating the Certificate Signing Request

When browsing vSphere Client > Developer Center > API Explorer, under the vcenter API endpoint we can see an entry for certificate_management/vcenter/tls_csr. The description states “Generates a CSR with the given Spec.” If we connect to the vSphere Automation SDK server with Connect-CisServer, we can get this specific service and create the spec. From there, we can populate all the fields and create our new CSR. Here is a code block showing all of these steps with some dummy data.

Connect-CisServer vcenter.example.com

# Get the service used to generate a certificate signing request (CSR) and create its spec
$tlsCsr = Get-CisService -Name com.vmware.vcenter.certificate_management.vcenter.tls_csr
$tlsCsrSpec = $tlsCsr.Help.create.spec.Create()

# Populate the spec with details about the certificate we are requesting
$tlsCsrSpec.key_size = 2048
$tlsCsrSpec.common_name = 'vcenter.example.com'
$tlsCsrSpec.organization = 'Example'
$tlsCsrSpec.organization_unit = 'Testing'
$tlsCsrSpec.locality = 'Greenfield'
$tlsCsrSpec.state_or_province = 'Indiana'
$tlsCsrSpec.country = 'US'
$tlsCsrSpec.email_address = 'vi-admin@example.com'
$tlsCsrSpec.subject_alt_name = "vcenter"

# Generate the CSR and output the resulting text
$tlsCsr.create($tlsCsrSpec).csr
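
If you would rather capture the CSR text for hand-off instead of printing it to the screen, one approach is to pipe it to a file. A small sketch (the file name is arbitrary; I capture the output in a variable since each create() call may generate a fresh CSR/key pair):

# Capture the CSR once and write it to a file to send to the CA
$csr = $tlsCsr.create($tlsCsrSpec).csr
$csr | Out-File -FilePath .\vcenter-example.csr -Encoding ascii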

Requesting the Certificate

As in the GUI section above, we now need to provide our CSR to the certificate management / crypto team, who are typically responsible for requesting/creating certificates. Depending on the Certificate Authority in use, this step could also be automated, but that is outside the scope of this article.

Replacing the Certificate

Once we have our certificate, in my case a pair of cer files (one for my vCenter Server and one for the root cert of the CA), we can return to our PowerCLI window. I was able to request/approve a new certificate quickly and my PowerCLI session was still open, so I didn’t need to run Connect-CisServer again; if you need to reconnect days later, that’s no problem and will work fine. I used Get-Content to read in the two cer files, but by default that returns an array of lines, so I used -join to combine each file into a single string, and for good measure removed any leading/trailing whitespace with .trim(), as I’ve seen that cause issues with certificates in the past. I then used the certificate_management/vcenter/tls service and created the specification. This results in a new object where cert is required and the other properties (key and root_cert) are optional. In my case I specified just cert and root_cert, since my CSR (and its private key) was generated and stored by the vCenter Server Appliance already.

# Read each cer file, join the lines into a single string, and trim leading/trailing whitespace
$cert = ((Get-Content .\certnew-vcsa-api.cer) -join "`n").trim()
$rootcert = ((Get-Content .\certnew-ca.cer) -join "`n").trim()

# Get the service used to replace the TLS certificate and create its spec
$tls = Get-CisService -Name com.vmware.vcenter.certificate_management.vcenter.tls
$setTls = $tls.Help.set.spec.Create()

# cert is required; key is omitted since the appliance already holds the private key from the CSR
$setTls.cert = $cert
$setTls.root_cert = $rootcert

# Replace the certificate; vCenter services will restart automatically
$tls.set($setTls)

After running that $tls.set method on the last line of the code block above, vCenter services automatically restart, just like in the UI example.
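
Once services are back up, you can optionally confirm the replacement from the same session. A small sketch, assuming the tls service exposes a get method (check $tls.Help if your version differs):

# Return details of the machine SSL certificate currently in use, including its new validity dates
$tls.get()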

Conclusion

This article shows how closely the HTML5 UI workflow maps to these specific APIs for certificate replacement. I hope this helps you automate this repeatable task.


Keep it secure: Automate Skyline Collector admin password changes

Too frequently I log in to my Skyline Collector and am immediately required to change the password. Follow along as I explain how I used automation to reduce the frustration of this process.

The Skyline Collector admin password expires every 90 days. Because it isn’t necessary to log in to the collector frequently, it is common that when I do log in, I’m forced to immediately change the password. I began looking for a way to change this password programmatically, enabling a scheduled task that updates the password before it expires, preferably every 30 days or so. That way, when I go to log in, the password doesn’t need to be changed immediately and I can move along with my task.

Finding the API method

To find the API method being used, I opened the developer tools in my browser, switched to the Network tab, and watched the traffic while I changed the admin password for my Skyline Collector. When I clicked the button to change the password, the request URL on the Headers tab showed that the method called is /api/v1/auth/update?auto=false.

On the Payload tab I can see the JSON body that was posted to the /api/v1/auth/update method in the request URL above. Reconstructed from that payload (the same fields appear in the script below), the request body looks like this:
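
{
  "username": "admin",
  "oldPassword": "VMware1!",
  "newPassword": "VMware2!"
}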

Write a Script to Automate the Password Change

Knowing the API method called, as well as the details of the payload, gives us what we need to write some code. We could use any tool/language, but having a preference towards PowerShell, I chose that path. The example below does just that, and the results showed “Password updated successfully!”

$serverName = 'h027-skyline-01.lab.enterpriseadmins.org' # variable for Skyline Collector name/IP.
$changePassBody = @{'username'='admin'; 'oldPassword'='VMware1!'; 'newPassword'='VMware2!'} # JSON payload
# Following line will use variables above to POST the request
Invoke-RestMethod -method POST -Uri "https://$serverName/api/v1/auth/update?auto=false" -Body ($changePassBody | ConvertTo-Json) -ContentType "application/json"

# Output of Invoke-RestMethod from above
message
-------
Password updated successfully.

With this test successful, I tested the code against a collector appliance with an expired password and it worked there also. 

It’s outside the intent of this brief article, but to make this a complete solution, the remaining tasks to fully automate the process (sketched after the list below) would include:

  • Reading in a complete list of Skyline Collectors (either from a list in the script or a CMDB solution)
  • Retrieving the current password for each collector (from a privileged access management tool like CyberArk or Thycotic)
  • Auto-generating a new password for each collector
  • Storing the new password in the privileged access management vault for each collector
  • Scheduling this as a recurring task
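
As a rough sketch of how those pieces could fit together (collectors.csv and the Get-PamPassword/Set-PamPassword helpers are hypothetical stand-ins for your inventory file and vault tooling):

# Rotate the admin password on each collector listed in a hypothetical inventory file
$collectors = Import-Csv .\collectors.csv   # assumes a 'serverName' column
foreach ($collector in $collectors.serverName) {
    $oldPassword = Get-PamPassword -Server $collector   # hypothetical vault lookup
    # Generate a random 16-character alphanumeric password; adjust to meet your complexity rules
    $newPassword = -join ((48..57) + (65..90) + (97..122) | Get-Random -Count 16 | ForEach-Object { [char]$_ })
    $body = @{'username'='admin'; 'oldPassword'=$oldPassword; 'newPassword'=$newPassword}
    Invoke-RestMethod -Method POST -Uri "https://$collector/api/v1/auth/update?auto=false" -Body ($body | ConvertTo-Json) -ContentType 'application/json'
    Set-PamPassword -Server $collector -Password $newPassword   # hypothetical vault update
}

Scheduling this script as a recurring task (cron, Windows Task Scheduler, etc.) would cover the final item on the list.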

Hopefully this has given you a helpful example of using your browser’s Developer Tools to investigate APIs, as well as writing a sample script to use what you find.


Getting started with Steampipe.io to query VMware vSphere

I was recently listening to an episode of the Unexplored Territory podcast (episode #023 – Introducing Oracle Cloud VMware Solution with Richard Garsthagen). At the end of each episode, the hosts ask the guest about a technology that should be explored. The response to that question was the first time I had heard of steampipe.io. I made a note to take a look at this open source project and wanted to share my notes on getting started.

Steampipe provides a SQL-like query language for various plugin endpoints, such as AWS, Azure, CSV files, GCP, IMAP, LDAP, and VMware vSphere, among others. Plenty of other tools exist to query these endpoints, but this is the first I’ve seen where the exact same syntax can be used to query/join all of them in one result set.

To get started, I decided I would use the prebuilt container image (Using Containers | Documentation | Steampipe). The first step was to create a new directory to store some configuration files. Also, as the processes in the container image run as a non-root user, I made this user (uid 9193) the owner of the folder. Finally, as described in the documentation, I created an alias so that sp could be entered to interact with the container.

# create configuration folder
mkdir -p /data/container/steampipe/sp/config

# make the non-root steampipe user the owner of this configuration folder
chown 9193 /data/container/steampipe/sp/config

# alias the command
alias sp="docker run \
  -it \
  --rm \
  --name steampipe \
  --mount type=bind,source=/data/container/steampipe/sp/config,target=/home/steampipe/.steampipe/config  \
  --mount type=volume,source=steampipe_plugins,target=/home/steampipe/.steampipe/plugins   \
  turbot/steampipe"

Once the folder is defined and our alias is available, we are now able to install Steampipe plugins. For my example, I’m only using the VMware vSphere and CSV file plugins, but you should explore the Steampipe documentation for other available options.

root@lab-dock-14 [ /data/container/steampipe/sp ]# sp plugin install steampipe theapsgroup/vsphere csv

steampipe            [====================================================================] Done
theapsgroup/vsphere  [====================================================================] Done
csv                  [====================================================================] Done

Installed plugin: csv@latest v0.5.0
Documentation:    https://hub.steampipe.io/plugins/turbot/csv

Installed plugin: steampipe@latest v0.6.0
Documentation:    https://hub.steampipe.io/plugins/turbot/steampipe

Installed plugin: vsphere@latest v0.1.3
Documentation:    https://hub.steampipe.io/plugins/theapsgroup/vsphere

After installing the plugins, we should see a few spc files that were automatically configured.

root@lab-dock-14 [ ~ ]# cd /data/container/steampipe/sp/config
root@lab-dock-14 [ /data/container/steampipe/sp/config ]# ls -lh
total 16K
-rw-r--r-- 1 9193 root 1.8K Feb 12 20:34 csv.spc
-rwxr-xr-x 1 9193 root  971 Feb 12 20:34 default.spc
-rw-r--r-- 1 9193 root   50 Feb 12 20:34 steampipe.spc
-rw-r--r-- 1 9193 root  295 Feb 12 20:34 vsphere.spc

There is one spc file for each plugin. We can review these files to see the default configuration; by default they contain some of the available syntax, but for our simple example we are going to create our own files. To start, we will create a very simple CSV file in the /data/container/steampipe/sp/config directory, since that path already has a bind mount to /home/steampipe/.steampipe/config inside the container. Our example CSV file will contain only two columns, one for a server_name and another for the owner. The contents of this server-owner.csv file will look like this:

server_name,owner
core-control-21,Brian Wuchner
core-vcenter01,Brian Wuchner
beet-farm-01,Dwight Schrute

To tell Steampipe how to find the CSV file, we are going to create a new csv.spc file. I did this by renaming the default csv.spc (mv csv.spc csv.spc.old) and then creating a new file (vi csv.spc). Our new file contains only the following:

connection "csv" {
  plugin = "csv"
  paths = ["/home/steampipe/.steampipe/config/server-owner.csv"]
}

This is a very straightforward file: it shows that we are using the CSV plugin and specifically looking at the server-owner.csv file. Let’s investigate this CSV file to see how things work. First, we will enter interactive query mode using sp query and run a very basic select-all statement against the server-owner table. Then we will add a bit more, just to get the hang of SQL again, by adding a where clause. Finally, we will .quit the query editor. The output of these commands can be seen below.

root@lab-dock-14 [ /data/container/steampipe/sp/config ]# sp query
Welcome to Steampipe v0.18.5
For more information, type .help
> select * from "server-owner"
+-----------------+----------------+---------------------------+
| server_name     | owner          | _ctx                      |
+-----------------+----------------+---------------------------+
| beet-farm-01    | Dwight Schrute | {"connection_name":"csv"} |
| core-control-21 | Brian Wuchner  | {"connection_name":"csv"} |
| core-vcenter01  | Brian Wuchner  | {"connection_name":"csv"} |
+-----------------+----------------+---------------------------+
> select server_name from "server-owner" where owner ilike '%dwight%'
+--------------+
| server_name  |
+--------------+
| beet-farm-01 |
+--------------+
> .quit

Working with a CSV file is a basic example and we are only using it above to demonstrate how this would work. The article promised we’d cover VMware vSphere, so we will do that next.

To connect to vSphere, we previously installed the theapsgroup/vsphere plugin. We now need to define a spc file that tells the plugin where to find the vSphere environments, just like we did with the path to the CSV file before. I started by backing up the default file (mv vsphere.spc vsphere.spc.old) and then creating a new file (vi vsphere.spc). This new vsphere.spc file has a few more required attributes than the csv.spc from earlier, as we need to pass in the server name, username, password, etc. Here is the sample vsphere.spc file from my lab.

connection "vsphere_vc1" {
  plugin = "theapsgroup/vsphere"
  vsphere_server  = "vc1.example.com"
  user  = "svc-vspherero@lab.enterpriseadmins.org"
  password  = "Real-password-here!"
  allow_unverified_ssl = true
}

connection "vsphere_vc3" {
  plugin = "theapsgroup/vsphere"
  vsphere_server  = "vc3.example.com"
  user  = "administrator@vsphere.local"
  password  = "VMware1!"
  allow_unverified_ssl = true
}

connection "vsphere_t106" {
  plugin = "theapsgroup/vsphere"
  vsphere_server  = "t106-vcsa-01.lab.enterpriseadmins.org"
  user  = "administrator@vsphere.local"
  password  = "VMware1!"
  allow_unverified_ssl = true
}


connection "vmware_vsphere_all" {
  plugin = "theapsgroup/vsphere"
  type = "aggregator"
  connections = ["vsphere_vc1","vsphere_vc3","vsphere_t106"]
}

As you can see, I have three different VMware vSphere environments listed, with plugin name, server name, and other details. In addition, there is an aggregator connection that groups all of these environments together. I could define multiple aggregators, such as one for production and another for development instances.
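
As a sketch of that idea (assuming, for illustration, that vsphere_vc3 and vsphere_t106 were my development instances; the connection name is arbitrary):

connection "vmware_vsphere_dev" {
  plugin = "theapsgroup/vsphere"
  type = "aggregator"
  connections = ["vsphere_vc3","vsphere_t106"]
}

With the connections defined, let’s run a few test queries just like we did with the CSV example above.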

Once we enter the interactive query mode, we will see some autocompletion samples showing what tables are available for our vSphere data. The common stuff like datastore, host, network, and VMs are present.

Let’s try a couple of simple examples.

root@lab-dock-14 [ /data/container/steampipe/sp/config ]# sp query
Welcome to Steampipe v0.18.5
For more information, type .help
> select name, moref, product from vmware_vsphere_all.vsphere_host
+---------------------------------------+------------+----------------------------------+
| name                                  | moref      | product                          |
+---------------------------------------+------------+----------------------------------+
| euc-esx-22.lab.enterpriseadmins.org   | host-25601 | VMware ESXi 7.0.3 build-20328353 |
| dr-esx-31.lab.enterpriseadmins.org    | host-25572 | VMware ESXi 7.0.3 build-20328353 |
| core-esxi-33.lab.enterpriseadmins.org | host-34795 | VMware ESXi 7.0.3 build-21053776 |
| core-esxi-34.lab.enterpriseadmins.org | host-34781 | VMware ESXi 7.0.3 build-21053776 |
| euc-esx-21.lab.enterpriseadmins.org   | host-25598 | VMware ESXi 7.0.3 build-20328353 |
| test-vesx-71.lab.enterpriseadmins.org | host-8     | VMware ESXi 8.0.0 build-21203435 |
| test-vesx-72.lab.enterpriseadmins.org | host-12    | VMware ESXi 8.0.0 build-21203435 |
| t106-vesx-03.lab.enterpriseadmins.org | host-105   | VMware ESXi 6.7.0 build-19195723 |
| t106-vesx-01.lab.enterpriseadmins.org | host-303   | VMware ESXi 6.7.0 build-19195723 |
| t106-vesx-02.lab.enterpriseadmins.org | host-106   | VMware ESXi 6.7.0 build-19195723 |
+---------------------------------------+------------+----------------------------------+
> select count(*) from vmware_vsphere_all.vsphere_vm
+-------+
| count |
+-------+
| 212   |
+-------+
> .quit

OK, this was fun. We can now query CSV files and vSphere data as if we were using a SQL database. This is interesting on its own, but it becomes really powerful when we tie the sources together. Since our data is in the same place/format, we can write more complex SQL queries that join one or more data sources. In the next example, we will return a list of VM details, along with the owner information from our CSV file.

> select vsphere_vm.name, vsphere_vm.power, vsphere_vm.memory, vsphere_vm.hardware, "server-owner".owner
from vsphere_vm join "server-owner" on vsphere_vm.name = "server-owner".server_name
+-----------------+------------+--------+----------+----------------+
| name            | power      | memory | hardware | owner          |
+-----------------+------------+--------+----------+----------------+
| beet-farm-01    | poweredOff | 1024   | vmx-11   | Dwight Schrute |
| core-control-21 | poweredOn  | 4096   | vmx-15   | Brian Wuchner  |
| core-vcenter01  | poweredOn  | 16384  | vmx-10   | Brian Wuchner  |
+-----------------+------------+--------+----------+----------------+

These are just some examples to get started. If you check out the steampipe.io site, you’ll find more examples and additional plugins covering other technologies, such as net, which can query an SSL certificate (among many other things), ldap, which can query LDAP/Active Directory, and plugins for many cloud providers like AWS and Azure.


Helpful Docker Container Images for a Homelab

In a previous post, I described setting up a docker host for some containers that I need to run in a lab. This post will focus on a couple of those containers and how I use them.

One service that I need from time to time is a simple SMTP server. I could be configuring a vSphere Alarm Definition, creating a vROps Alert, or testing a script that sends email. I don’t really need a full-featured email solution; I just want a destination to send email and an easy place to view those messages to see how they look. The best container I’ve found for this is Inbucket, an SMTP target with a web interface to monitor incoming messages. Sending a test message takes only a short line of PowerShell, as shown below.
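
A minimal example of that one-liner (the SMTP host here is my lab docker host; substitute your own, and note that port 25 is published by the docker run command below):

# Send a test message to the Inbucket SMTP listener on the docker host
Send-MailMessage -SmtpServer 'lab-dock-14.example.org' -Port 25 -From 'test@example.org' -To 'admin@example.org' -Subject 'Inbucket test' -Body 'Hello from the lab'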

To get this container running I only need to type a single command (listed below on multiple lines for readability):

docker run --detach \
 --name inbucket \
 --publish 9000:9000 --publish 25:2500 \
 --restart always \
 --env INBUCKET_STORAGE_RETENTIONPERIOD="8h" \
 inbucket/inbucket

This command runs Inbucket in the background, listens on port 25 (for incoming email), displays the output on a web interface at port 9000, and deletes any messages after a super short 8 hour retention window. This is perfect for my requirements.

Another service that I find useful is RequestBin. Similar to the email example above, I have needed to inspect a webhook payload coming from something like vROps or Log Insight. I once incorrectly formatted a JSON message body; it reached my webhook endpoint, and then nothing happened. It took some time before I realized what I had done, but if I had been able to see the webhook body it would have been much easier. RequestBin has an online service that you can send your payload to, but it is also available as a container that can run locally. Again, we can start this container with one command:

docker run --detach \
 --name requestbin \
 --publish 8000:8000 \
 --restart always \
 weshigbee/requestbin

This creates a web server listening on port 8000 that we can post to and then view the output. It clearly displays the raw body of the post, which can be useful in troubleshooting.
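
As a quick test (the bin path /1abc2de1 is hypothetical; create a bin in the RequestBin web UI first and substitute its URL):

# Post a small JSON body to a bin, then inspect the raw request at the same URL in a browser
Invoke-RestMethod -Method POST -Uri 'http://lab-dock-14.example.org:8000/1abc2de1' -Body (@{'alertName'='test'} | ConvertTo-Json) -ContentType 'application/json'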

These are the two containers I find myself using most often in the lab. One other container I’ve set up recently is nginx-proxy-manager. I didn’t really _need_ this, but I wanted to test using friendlier names/aliases instead of the specific port numbers the above examples require. Again, it’s an easy one-line command to get this container running:

docker run --detach \
  --name nginx-proxy-manager \
  --publish 443:443 --publish 80:80 --publish 81:81 \
  --restart always \
  --volume /data/nginx-proxy-manager/data:/data \
  --volume /data/nginx-proxy-manager/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest

Once the web interface is up, you can use it to create nginx-type configurations. For example, I can create DNS records for mail.example.org and requestbin.example.org, then define a proxy host for mail.example.org that forwards to lab-dock-14.example.org:9000 and one for requestbin.example.org that forwards to lab-dock-14.example.org:8000. This way I can have multiple services listening on different ports, but let nginx deal with the port mapping so I only have to enter the friendlier host name. Most of the time this isn’t required for my testing, as I can remember the port numbers, or I likely only need one HTTP container at a time and can just use the default port.


Learn how to monitor Pi-hole with vROps using the Management Pack Builder

First of all, thank you to Brian for allowing me to make my first post of what will hopefully be more on EnterpriseAdmins. As a quick introduction, I am a Staff Technical Account Manager at VMware and live near Cleveland, OH.

I recently set out to learn about the VMware Aria Operations Management Pack Builder (abbreviated MPB from here on), and in this article I will bring you along on my learning journey. I found the Communities page to be a good starting point, as it links to the appliance download as well as documentation and other learning tools. And I’ll put it out there now: much of the “figuring out” that occurs through the article was greatly aided by Brian’s help.

The MPB is described in the documentation as a no-code, stand-alone appliance that enables the creation of custom management packs for vRealize (Aria) Operations Manager (henceforth referred to as vROps), allowing you to collect data from an external API to then create or extend resources in vROps with new Data, Relationships, and Events where VMware, or another vendor, has not released an official management pack.

With a basic understanding of what the MPB does, I started looking for an application already running in my lab that had both API functionality and interesting content, and settled on Pi-hole, an ad-blocking DNS server. If you don’t already have Pi-hole deployed, you can use Brian’s deployment instructions here. An Internet search showed me that Pi-hole uses “fqdn/admin/api.php” as the base for API calls and led me to this page, which gave enough examples of the Pi-hole API to get started. (As a side note, the API structure is not as far along as I would have guessed; we’ll see an example of a shortcoming later in this article.)

To explore the Pi-hole API, I started with the “type” request, as it does not require authentication. Since my Pi-hole server is at 192.168.55.3 and I do not have a TLS certificate enabled, I entered http://192.168.55.3/admin/api.php?type into my browser and got a returned value of {"type":"FTL"}. This was a good start. I then attempted both the summary and summaryRaw requests, which the documentation says do not require authorization, but found that not to be the case. I set out to determine how to authenticate to the API and found in the Pi-hole admin interface, under Settings > API/Web Interface, there is an option to “Show API token”, as shown here

Finding my Pi-hole API token

I recorded that token, which I’ll call myAPIToken throughout the rest of the article, and after a bit of experimenting found that a URL of http://192.168.55.3/admin/api.php?summary&auth=myAPIToken would return the correct dataset. An example of the data returned by the summary API in my lab is:
{"domains_being_blocked":173812,"dns_queries_today":93557,"ads_blocked_today":20121,"ads_percentage_today":21.506676,"unique_domains":14998,"queries_forwarded":52135,"queries_cached":20425,"clients_ever_seen":42,"unique_clients":26,"dns_queries_all_types":93557,"reply_UNKNOWN":1018,"reply_NODATA":19198,"reply_NXDOMAIN":7366,"reply_CNAME":29501,"reply_IP":35897,"reply_DOMAIN":149,"reply_RRNAME":1,"reply_SERVFAIL":10,"reply_REFUSED":0,"reply_NOTIMP":0,"reply_OTHER":0,"reply_DNSSEC":0,"reply_NONE":0,"reply_BLOB":417,"dns_queries_all_replies":93557,"privacy_level":0,"status":"enabled","gravity_last_updated":{"file_exists":true,"absolute":1674986048,"relative":{"days":1,"hours":9,"minutes":37}}}

If you are newer to exploring APIs, like I am, it is useful to break down the parts of the URL that we will need to understand when creating and testing/troubleshooting our Management Pack design (a sample call that puts the pieces back together follows the list).

  • http://192.168.55.3 – the key thing here is that we are using http and not https. This will drive our port and SSL configuration choices
  • /admin/api.php – Pi-hole uses this path to call an API
  • ? – an indicator that we are passing in a value
  • summary – the name of the API request that we are using
  • & – an indicator that there is a second value being passed
  • auth=myAPIToken – a key/value pair that Pi-hole is expecting for an API request that requires authentication
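
Putting those pieces back together, the same request can be made outside of a browser, for example with PowerShell (a small sketch using the lab IP and token placeholder from above):

# Call the summary endpoint; Invoke-RestMethod parses the JSON response into an object
$token = 'myAPIToken'
Invoke-RestMethod -Uri "http://192.168.55.3/admin/api.php?summary&auth=$token"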

With sufficient information about the Pi-hole API, I turned to the MPB communities page noted earlier for the download and documentation links to install the appliance into my lab. Following the steps in the documentation, I deployed the MPB in my lab, accessed it via a browser, and set the admin password with no issues to call out.

The documentation does a good job of explaining the main constructs of the Management Pack that we are going to build, defining the terms Design, Object, Requests, and Relationships. For my Pi-hole MP, I identified that the Pi-hole server was going to be my Object, as I have two of those to work with, and that the summary API request would provide enough properties and metrics to have meaningful data for this experiment. This proved mostly true, except for the hostname of my Pi-hole server, which will be detailed later.

Using the Creating a Design section of the documentation as a guide, I began a new design. One thing that would have made this easier had I understood it from the start: portions of the Source section of the design are for testing only, while others become content in the actual MP that you build. I will note which is which below:

  • Edit the name from “Untitled Design” to “PiHole Server”
    • This will be the name displayed in the Integrations/Repository section of vROps.
  • In the Reference Environment Settings section:
    • Hostname – 192.168.55.3
    • Port – 80
    • SSL Configuration – No SSL
    • Base API Path – leave blank
  • In the Authentication section:
    • Authentication Source Type – Custom
    • Add a Field
      • Label – API token (this will become the label of the field where the MP user provides their API token when setting up the Adapter)
      • Value – myAPIToken
      • Sensitive box – checked
        It took me a minute to understand this section: you are creating a variable, in my case called ${authentication.credentials.api_token}, that you then use later in place of the auth token. This value will become part of the MP
  • Nothing in the Global Request Settings section
  • Advanced Request Settings
    • Add a Query Parameter (this will become part of the MP)
      • Key – auth
      • Value – ${authentication.credentials.api_token}
        • This is copied from the Authentication section above
    • Add a Query Parameter (this is only used to validate the Reference environment and will not be carried into the MP)
      • Key – summary
      • Value – True (the UI requires us to enter a value, even though the API request doesn’t require this)
  • Make Request section (this section is only for testing your reference environment)
    • HTTP Method – Get
  • Test Request Path – admin/api.php

The URL Preview should look like:
http://192.168.55.3:80/admin/api.php?auth=%24%7Bauthentication.credentials.api_token%7D&summary=true
This should look very much like the test we performed earlier, except our token is replaced by the variable %24%7Bauthentication.credentials.api_token%7D (the URL-encoded form of ${authentication.credentials.api_token}).
When you click the Test button you should see a green box that says, “Successfully connected” and a Check Response link that, when clicked, displays the results of the summary request. If you don’t get data successfully returned here, compare the URL Preview very carefully to the URL you tested earlier to identify the difference.

With a Source section successfully tested against our reference environment, we move to the Requests section of the design. As the documentation describes, this is where we define the API requests that we need to collect the data that our MP uses. We are going to create two requests, one named “summary” and one named “hostname”. Let’s work through these one at a time.

We determined early on that the summary API request was going to return most of the data we would use in our MP, and we used it in the Source section to test our reference environment. If you look at the results of the summary request carefully, you will note that they do not include the actual hostname of your Pi-hole server. In fact, there is no API request that returns the name of your Pi-hole server. If we only had one Pi-hole server this wouldn’t be a horrible problem in vROps, but if we have multiple servers to monitor, there is no reliable value to identify which server is which. So Brian came up with the following clever solution, which requires a script to be created on each Pi-hole server but then allows the hostname of the server to be collected:

  1. SSH to your Pi-hole server with a privileged account (in my case “pi”) with the password that you set on that account
  2. Create a new file /var/www/html/get-hostname.php with the following content:
    <?php echo '{"hostname":"' . gethostname() . '"}'; ?>

If you’re not comfortable with Linux commands, you can copy or type the following to create the get-hostname.php file:

  1. sudo nano /var/www/html/get-hostname.php
    • this creates the file and opens it in the nano editor
  2. paste the line of code from above
  3. Ctrl+X to Exit
  4. Press Y to save the modified buffer
  5. Press Enter to write to the file
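
Before heading back to the MPB, it is worth sanity-checking the new endpoint, for example with PowerShell (the hostname returned will be your own server’s name):

# Should return an object with a single 'hostname' property
Invoke-RestMethod -Uri 'http://192.168.55.3/get-hostname.php'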

With this file created, we can go back to the MPB UI and define the first request that will return data for our MP by clicking the Add Request button then:

  • Change the name from API Request to hostname
  • Chain from another API request – no change
  • Resource Path
    • get-hostname.php
  • Advanced – no change
  • Get data
    With an HTTP Method of Get, the Preview should look like http://192.168.55.3:80/get-hostname.php, and when you click the Request button it should return your Pi-hole server name. We can now click Add Request to create our second request.
  • Change the name from API Request to summary
  • Chain from another API request – no change
  • Resource Path
    • admin/api.php
  • Advanced
    • Body and Headers – no change
    • Query Parameters
      • First Parameter
        • Key – summary
        • Value – true
      • Second Parameter
        • Key – auth
        • Value – ${authentication.credentials.api_token}
          • This is the variable that we defined in the Source section
  • Get data
    With an HTTP Method of Get, the Preview should match what we saw in the Source section, and when you click the Request button it should return a dataset.

Now that we have defined the Requests, we move to the Objects section of our design where we can select the data from the API requests that we’d like to include in our MP. Click the Add New Object button and populate it as follows:

  • Change the name to PiHole Server
  • Metrics and Properties from API Request
    • Click the << next to summary and select the metrics that you’d like to collect with vROps. Note as you hover over the names, you will see a value that was obtained when you tested the request in the previous section. At a minimum I would suggest:
      • ads_blocked_today
      • ads_percentage_today
      • status
    • Click the << next to hostname and select ‘hostname’
    • Scroll down and make the following changes to the chart:
      • Hostname – leave as is
      • ads_blocked_today
        • disable Property
        • Set Data Type to DECIMAL
        • Set Unit to Count
      • ads_percentage_today
        • disable Property
        • Set Data Type to DECIMAL
        • Set Unit to %
      • Status – leave as is
  • Object name, Identifier, and Icon
    • Select object instance name – hostname
    • Select object identifiers – hostname

The results will look as follows:

Objects section in Management Pack Builder

There are no Relationships to define, and we are not going to create any Events for this example, so we can select the Configuration section, where we can review and modify the MP version as well as labels and default values for various fields. With this information complete:

  1. Click the Save button near the top of the Design, then click the Build button.
  2. Review the Identifiers and Properties and, since we didn’t define Relationships or Events, click Next.
  3. Click the Perform Collection button to collect sample data, review the Collection Summary to ensure the results were as expected, then click Next.
  4. Click the Build button to create the MP. Look for the Build succeeded message, then click the Go to build link.
  5. Click the name/version of the MP, which is a link that starts the download of the .pak file. Save this file and you are now ready to import it into vROps.

Installing the resulting .pak file is done like any other MP, from Data Sources / Integrations / Repositories, with the exception that you will have to check the box to “Ignore the PAK file signature checking”, as this MP is not signed. Once the MP is installed, you add an Account for each of your Pi-hole servers, which includes adding a Credential to vROps containing the API token for that server. Before long, you too can track your Pi-hole performance in vROps.

Collecting Pi-hole metrics in VMware Aria Operations Manager

I’d love for you to leave a comment below telling me which other apps you build management packs for.
