Getting Started with SaltStackConfig PowerShell Module

In the previous post, we looked at Getting Started with SaltStack Config and created and kicked off a few tasks from the web interface. Occasionally we’ll need to report on some data as well. The web interface offers the ability to download many result/output tables as CSV or JSON, but what if we wanted to do something with that data programmatically? Fortunately there is an API available (see the product documentation). Unfortunately, I couldn’t find many examples of consuming this API with PowerShell, and I ran into an issue (related to credentials) as I was getting started. Once I got that sorted out, I was able to create the quick inventory script I wanted (to simply return minion names and a few “grains” like the Operating System and OS version). With the bits of info I picked up along the way, I decided to wrap things up into a PowerShell module for future needs. This module is available on GitHub, and the following post will focus on how to get started using it.

The first step to using this SaltStackConfig module is to get the required files onto the system where you run scripts. The easiest way I know to do this is to download the full project repo (there is a Code > Download ZIP button on the GitHub page). With the zip file downloaded, I like to right-click the file, select Properties, and see if an ‘Unblock’ checkbox appears in the bottom right; if so, I check it and apply (doing this prior to unzipping the file saves some time, as we don’t need to recursively run Unblock-File on everything that was extracted).

I then extract the files I need, in this case the folder Modules\SaltStackConfig, and place them in one of the PowerShell module paths (to find where these are, you can open a PowerShell window and type $env:PSModulePath).
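As a quick sketch of those steps (the extracted folder name and the per-user module path below are assumptions; use the paths your own system reports):

```powershell
# List the folders PowerShell searches for modules
$env:PSModulePath -split ';'

# Copy the extracted module into the per-user module path
# (both paths here are examples; adjust to match your system)
$dest = "$HOME\Documents\WindowsPowerShell\Modules\SaltStackConfig"
Copy-Item -Recurse '.\project-main\Modules\SaltStackConfig' $dest

# If the zip was not unblocked before extraction, unblock everything now
Get-ChildItem -Recurse $dest | Unblock-File
```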

With the module copied into one of the correct paths, it will load automatically the next time we start PowerShell. In that new PowerShell session, with the module now available, we can connect to the SaltStack Config environment. This cmdlet will connect to the RaaS API and create a global variable that we can reference for future API calls (for this PowerShell session only).

C:\> Connect-SscServer '' -User 'root' -Password 'VMware1!'

Here is the sample inventory task I was interested in that started everything:

C:\> (Get-SscMinionCache).grains |Select-Object host, osfullname, osrelease |Sort-Object host

host             osfullname                               osrelease
----             ----------                               ---------
cm-vrssc-01      VMware Photon OS                         3.0
core-control-21  Microsoft Windows Server 2022 Standard   2022Server
dr-control-01    Microsoft Windows Server 2016 Standard   2016Server
raspberrypi      Raspbian                                 10
svcs-sql-01      Microsoft Windows Server 2016 Standard   2016Server
t147-ubuntu-01   Ubuntu                                   20.04
t147-ubuntu-02   Ubuntu                                   20.04
t147-ubuntu18-01 Ubuntu                                   18.04
t147-win22-01    Microsoft Windows Server 2022 Standard   2022Server

I liked how the osfullname property looked for Windows machines, but for the Ubuntu & Photon releases I wanted to combine the osfullname and osrelease columns, so I went with a slightly modified Select-Object statement containing some if/else logic that pulled the output together exactly how I wanted to display it:

C:\> (Get-SscMinionCache).grains | Select-Object host, @{Name='FriendlyOSName';Expression={ if ($_.osfullname -match 'Windows' ) { $_.osfullname } else { "$($_.osfullName) $($_.osrelease)"}}} | Sort-Object host

host             FriendlyOSName
----             --------------
cm-vrssc-01      VMware Photon OS 3.0
core-control-21  Microsoft Windows Server 2022 Standard
dr-control-01    Microsoft Windows Server 2016 Standard
raspberrypi      Raspbian 10
svcs-sql-01      Microsoft Windows Server 2016 Standard
t147-ubuntu-01   Ubuntu 20.04
t147-ubuntu-02   Ubuntu 20.04
t147-ubuntu18-01 Ubuntu 18.04
t147-win22-01    Microsoft Windows Server 2022 Standard

I decided to write a couple other wrapper functions for some other API methods that I thought I might end up using. In the next few sections I’ll show how to find a specific job that was run, the activity around that job, and the specific results from the execution.

In Task 2 of the previous post, we created a job to push BgInfo to our test servers. This function will return all jobs, but we’ll filter the output to just the entries whose name contains bginfo. The syntax will be Get-SscJob | Where-Object {$_.name -match 'bginfo'} and sample output would look like this:

uuid     : b39de5cb-d01c-4cc7-a886-c746ae2b4150
name     : EnterpriseAdmins BGInfo Test
desc     :
cmd      : local
tgt_uuid : e98739a9-a058-42a3-b3e4-73450de38ced
fun      : state.apply
arg      : @{arg=System.Object[]; kwarg=; hiddenArgsObj=}
masters  : {}
metadata : @{auth=}
tgt_name : zCustomWinServerT147

When we ran that job, it generated some activity on our SSC appliance. We’ll find that specific activity by looking for only the entries where the job_uuid matches the output from the above command, and since we may have run the task multiple times, we’ll also filter for only instances started in the last couple of days. The syntax will be Get-SscActivity | Where-Object {$_.job_uuid -eq 'b39de5cb-d01c-4cc7-a886-c746ae2b4150' -AND $_.start_time -gt '2021-12-20'}

jid             : 20211222185741967000
state           : completed_all_successful
cmd             : local
user            : bwuchner
user_uuid       : 6fe029b6-9e2e-4501-8c57-1776084bd3a8
job_uuid        : b39de5cb-d01c-4cc7-a886-c746ae2b4150
job_name        : EnterpriseAdmins BGInfo Test
job_desc        :
tgt_uuid        : e98739a9-a058-42a3-b3e4-73450de38ced
tgt_name        : zCustomWinServerT147
tgt_desc        :
tgt_type        : compound
tgt             : G@os:Windows and G@nodename:t147-win22-01
sched_uuid      :
sched_name      :
fun             : state.apply
is_highstate    : False
job_source      : raas
expected        : 1
returned        : 1
not_returned    : 0
returned_good   : 1
returned_failed : 0
duration        :
masters_to      : {salt}
masters_done    : {salt}
create_time     : 2021-12-22T18:58:02.307191
origination     : Ad-Hoc
start_time      : 2021-12-22T18:57:41.96700Z

And finally, we’ll want to find the status of all the data returned from that job. We’ll get the JID value from above and include it in a filter to the last function we’ll be covering. The final example syntax is: (Get-SscReturn -jid 20211222185741967000).full_ret | Select-Object id, success

id                                     success
--                                     -------
                                          True

These are just a few examples, but each function includes some help, so feel free to use PowerShell help to get any usage examples for the other functions. For reference, here is a short list of the initial wrapper functions available:

C:\> Get-Command -Module SaltStackConfig

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Connect-SscServer                                  0.0.5      SaltStackConfig
Function        Disconnect-SscServer                               0.0.5      SaltStackConfig
Function        Get-SscActivity                                    0.0.5      SaltStackConfig
Function        Get-SscData                                        0.0.5      SaltStackConfig
Function        Get-SscJob                                         0.0.5      SaltStackConfig
Function        Get-SscMaster                                      0.0.5      SaltStackConfig
Function        Get-SscMinionCache                                 0.0.5      SaltStackConfig
Function        Get-SscReturn                                      0.0.5      SaltStackConfig
Function        Get-SscSchedule                                    0.0.5      SaltStackConfig
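Each of these ships with comment-based help, so the standard help cmdlets work against them; for example:

```powershell
# Full help, including parameter descriptions
Get-Help Connect-SscServer -Full

# Just the bundled usage examples
Get-Help Get-SscJob -Examples
```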

If you run into any issues, or think of another function that would be helpful to have, please feel free to submit an issue on the GitHub repo.

Getting Started with SaltStack Config

I’ve recently started looking at vRealize Automation SaltStack Config in a lab. In this post I’ll step through a rough outline of the lab environment / setup and cover a couple simple tasks that I was able to complete using SaltStack Config (SSC).

In the lab environment I have a vRealize Automation SaltStack Config instance that was deployed using vRealize Suite Lifecycle Manager (vRSLCM). The vRSLCM platform made this very simple: I downloaded the right installer, answered a few simple questions (IP address, DNS servers, default password, etc.), and when I came back the appliance was running and ready to use.

With my SSC Appliance running, I needed a few Minions to manage. Having a Windows background, I decided to use the Windows minion on a couple of test servers. I ran a silent installer on the test systems, using syntax like the following (the /master value here is a placeholder for your SSC appliance name):

\\fileserver\mgmt\_agents\Salt-Minion-3004-Py3-AMD64-Setup.exe /S /master=<ssc-appliance>

Once the minion was installed, I browsed to Minion Keys > Pending in the appliance web interface and accepted the pending key requests. This allows encrypted communication between the appliance and minions. With the lab setup background out of the way, let’s get to the tasks we want to solve.
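As an aside, key management can also be done from a shell on the appliance with the salt-key utility; a hedged sketch (the minion name is illustrative):

```shell
salt-key -L                 # list accepted, pending, and rejected keys
salt-key -a t147-win22-01   # accept a single pending key by name
salt-key -A                 # accept all pending keys (fine for a lab)
```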

Task 1: Deploy a custom PowerShell profile

For the first example I wanted to do a single file copy from the SSC appliance to my minions. This could be a configuration file or such, but for demo purposes I decided I would copy a standard PowerShell profile to the machine for all users.

Browse to Config > File Server. In the top left there is a dropdown list that says base or sse. I selected sse, entered enterpriseadmins\powershell\profile.ps1 in the path name text box, changed the file type dropdown from SLS to TXT, pasted in my PowerShell profile, and clicked Save. This is the file content that I’d like to copy to all of my minions.

In the same Config > File Server area we will create a new file. This time I left the file type dropdown as SLS and entered enterpriseadmins\powershell\profile.sls. This is the ‘state’ file that contains the instructions: what to copy, where to place it on the minion filesystem, and that the file should be overwritten if it already exists. For this file I entered the following:

# The state ID below is illustrative; any unique ID works
deploy-powershell-profile:
  file.managed:
    - name: 'C:\windows\system32\WindowsPowerShell\v1.0\profile.ps1'
    - source: salt://{{ slspath }}/profile.ps1
    - replace: True

With this new state saved, we browse over to Config > Jobs. From here I created a new job with the following criteria:

  • Targets = Windows Servers
  • Function = state.apply
  • Environments = sse
  • States = enterpriseadmins.powershell.profile

I saved this new Job and then ran it (by selecting the three dots to the left of the job name & clicking run now). Looking at my minion I see the file was created & contains the expected contents. Launching PowerShell also results in my new profile loading correctly.

This is a pretty simple demo, but does show how we can manage a file on a possibly large group of systems & easily make changes if needed.
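For comparison, the same state can also be applied from a shell on the appliance with the salt CLI; this sketch assumes grain targeting of Windows minions and the sse environment used above:

```shell
salt -G 'os:Windows' state.apply enterpriseadmins.powershell.profile saltenv=sse
```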

Task 2: Installing BgInfo on Windows servers

Many years ago I created a batch file to “install” a BgInfo configuration on servers in a lab. It did a couple of file copies and made a registry entry. This allowed me to have the hostname on the desktop so I knew what I was looking at. I don’t run the batch file much anymore, as I just had this configuration baked into a template so it showed up each time a new VM was deployed. This worked well, but what if I ever wanted to swap out that configuration file? I’m not going to do this manually… and now that I have SSC, let’s see if we can recreate that wheel.

The first step is to get our bginfo.exe file (from Microsoft Sysinternals) copied to all our machines. I could have the minions download this from the internet, but I have a default firewall rule to deny everything, and I may not want clients going to the internet for security reasons. In the Task 1 example above we stored our PowerShell profile on the embedded Salt file server, but it was just text and we could paste it in through the web interface. We can get this working with binary files too; it’s just a slightly different process. The first thing we need to do is ssh to our SSC appliance. From there we make a directory using the command mkdir /srv/salt. In this example /srv already existed and we just created the salt subdirectory. The /srv/salt folder gets served up by the SSC file server. To keep things tidy, I’m going to create another subfolder for my files (using the command mkdir /srv/salt/enterpriseadmins). We can then copy our binary file there (using something like WinSCP).

We now need to decide where to store our BgInfo config file (bgi extension) & state file. We could manage this from the web interface and store it in the sse path (like the above example), but I decided to keep all of my BgInfo bits in the same path; I don’t think there is a right or wrong answer here, this was just a personal preference. My BgInfo config file is named LAB-Server.bgi and I called my state file bginfo-config.sls; the contents of that file are below:

copy-bginfo-exe:
  file.managed:
    - name: 'C:\sysmod\bginfo.exe'
    - source: salt://enterpriseadmins/Bginfo.exe
    - replace: True
    - makedirs: True

copy-bginfo-config:
  file.managed:
    - name: 'C:\sysmod\LAB-Server.bgi'
    - source: salt://enterpriseadmins/LAB-Server.bgi
    - replace: True
    - makedirs: True

# Update registry key; the state IDs and the HKLM Run key path are assumptions
update-bginfo-registry:
  reg.present:
    - name: 'HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run'
    - vname: 'LAB-Server-BGInfo'
    - vdata: 'C:\sysmod\bginfo.exe C:\sysmod\lab-server.bgi /SILENT /NOLICPROMPT /timer:0'
    - vtype: REG_SZ

It’s important to note that all of these file names / paths are case sensitive. This is probably obvious, but I spent some time troubleshooting it, so I figured it was worth mentioning. Once we have all three files in place (Bginfo.exe, LAB-Server.bgi, and bginfo-config.sls) we can configure a job. For the criteria on this I used:

  • Targets = Windows Servers
  • Function = state.apply
  • Environments = (leave blank)
  • States = enterpriseadmins.bginfo-config

I saved this new Job and then ran it. Looking at my minion I saw the files & registry key were created. The next login resulted in my updated BgInfo config being applied. Update complete — everywhere, all at once. Thanks SaltStack Config!

Clone Template to new vCenter

On a recent TAM Lab session (TAM Lab 082) I covered several methods for managing vSphere templates. At the end of the presentation we had a brief bonus method that showed using PowerCLI to clone a template to a different vCenter. This could be used once you’ve updated your primary copy of a template in one vCenter and want to make that template available in a different environment. The script used in the demo is available below.

$creds = Get-Credential

# the -Server values below are placeholders for your vCenter names
$sourceVC = Connect-ViServer -Server '<source-vcenter>' -Credential $creds
$destVC   = Connect-ViServer -Server '<destination-vcenter>' -Credential $creds

$destTemplateName = 'template-tinycore11.1_tamlabtest'
$splatNewVM = @{
  Name     = $destTemplateName
  Template = 'template-tinycore11.1'
  VMHost   = ''
}
$vm = New-VM @splatNewVM -Server $sourceVC

$splatMoveVM = @{
  VM                = $vm
  NetworkAdapter    = (Get-NetworkAdapter -VM $vm -Server $sourceVC)
  PortGroup         = (Get-VirtualPortGroup -Name 'VLAN10' -Server $destVC)
  Destination       = (Get-VMHost '' -Server $destVC)
  Datastore         = (Get-Datastore 'test-esx-33_nvme' -Server $destVC)
  InventoryLocation = (Get-Folder 'vc1_TAMLab082' -Server $destVC)
}
Move-VM @splatMoveVM

Get-VM $destTemplateName -Server $destVC | 
Set-VM -Name 'template-tinycore11.1-bonus' -ToTemplate -Confirm:$false

This script uses splatting to improve readability; you can read more in the about_Splatting help topic. There are a couple of basic components. First we connect to both vCenters, in this case using the same credentials. We then create a new VM from the template on the source side, move that VM to the destination side, and finally rename the destination VM and convert it to a template. We did this as multiple steps because Move-VM has additional parameters to assist with changing the destination network adapter to specify the correct portgroups and such.

Scripts to max out CPU and Memory

Most of the time we want our virtual machines to run as optimally as possible. We don’t want to see CPU contention or high memory conditions. However, on occasion we may want to apply some stress to see what that looks like in monitoring tools like vRealize Operations. I created two small scripts that will run in TinyCore Linux, one consuming CPU and the other memory. For info on creating a TinyCore template, you may want to check out my earlier post on that topic. Here are the scripts for reference:

#!/bin/sh
cpus=$(grep -ci processor /proc/cpuinfo)
echo "System has $cpus CPUs, starting thread for each."

for i in $( seq 1 $cpus )
do
  echo " ..starting background process #$i to consume CPU."
  sha1sum /dev/zero &
done

echo "You can check processes with 'top', sorting by CPU with 'P'."
echo "To end all processes run 'killall sha1sum'."

Note: this file is also available for download on GitHub.

#!/bin/sh
echo "This script will fill memory to 90% with zeros."
# disable swap, only use RAM
sudo swapoff -a

# find current system memory
mem=$(sudo grep -i memtotal /proc/meminfo | awk '{print $2}')

# calculate 90% of current memory, using 100% will cause instability
fillmem=$(expr $mem / 100 \* 90)

# tmpfs is mounted as 50% by default, remount with our 90% number
echo " ..remounting /dev/shm to use 90% of MemTotal"
sudo mount -o remount,size=$fillmem"k" /dev/shm

# show the current size of tmpfs
df -h | grep tmpfs

# fill that space with 1k block zeros
echo " ..starting memory fill process."
dd if=/dev/zero of=/dev/shm/fill bs=1k

Note: this file is also available for download on GitHub.
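To sanity-check the 90% calculation the script performs, here is the same expr math with a sample MemTotal value plugged in (the value itself is just an example):

```shell
mem=4046436                        # sample MemTotal in KiB from /proc/meminfo
fillmem=$(expr $mem / 100 \* 90)   # integer math: divide first, then multiply
echo "$fillmem"                    # prints 3641760, roughly 90% of MemTotal
```

Note that dividing before multiplying keeps the intermediate value small and simply rounds down, which is fine for this purpose.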

I placed these files in the tc user home directory (/home/tc) and made them executable with chmod +x.

If you’d like, you can add entries to have these scripts start automatically at boot, if you want an appliance that maxes out resources all the time. To do this, use sudo vi /opt/ and add an entry at the end of the file for each script you want to start, followed by an ampersand (for example /home/tc/<scriptname>.sh &). Note: the trailing ampersand causes the script to run in the background, so boot does not wait for it to complete.

Typing backup will allow you to make these files & changes persistent.

Note: it’s much easier to get this text copied over if SSH is installed/configured on your TinyCore VM. There is a very good write-up on how to do this elsewhere online; it also covers how to include credentials (/etc/shadow) in the file list backed up by TinyCore, which is also very useful.

vRealize Operations Alerts using Rest Notification Plugin

I have created several vRealize Operations (vROps) alerts in the past, mainly using the Log File Plugin and Standard Email Plugin. However, I recently had someone ask for more information on using the Rest Notification Plugin. I hadn’t used this, so I started looking for more detail on how to get started.

I found a couple of really good blog posts on this topic. Both of them describe using an intermediary to accept what vROps is sending and convert it into a format that another endpoint expects. There are a handful of integrations provided, so I started looking for one that I could test with. The following post will describe the steps to get this working.

The Test Service
For testing, I’m going to use vROps to send an alert to Slack. This was pretty straightforward: I created a new channel where I wanted the alerts to appear, and then created a new incoming webhook from the Slack API portal. I created a new app called vrealize-bot in my workspace. Once the app was created, I toggled on the ‘incoming webhooks’ feature and mapped it to my channel. This resulted in a webhook URL that I saved for the steps below.

To confirm this was working, I used a quick PowerShell script to post to that webhook URL. This isn’t required, but it did prove that my webhook was correctly created.

# placeholder: set this to the webhook URL created above
$webhookURL = '<your Slack webhook URL>'

Invoke-WebRequest `
   -Uri $webhookURL `
   -Method POST `
   -Headers @{"Content-type"="application/json"} `
   -Body (@{"text"="This is a test"}|ConvertTo-Json)

The ‘Shim’
We need a piece of code to convert the vROps Rest output into a format Slack will accept. The first blog post mentioned above calls out a prebuilt tool to do this, called the loginsightwebhookdemo. There are instructions available on getting this running, but the easiest route for me was to use the docker image. I started by downloading the Photon OS 3.0 OVA, deployed it to an ESXi host, and then enabled docker. I ran three commands; only the middle one is required, the other two just show some supplemental info.

systemctl status docker
systemctl start docker
docker info

Once docker is running, you can start the webhook-shims container. As described in the project instructions, you can launch the bash shell, which gives you the ability to edit files in the container file system to add things like our Slack API URL. If you choose this route, once the files are edited in the loginsightwebhookdemo directory, you’ll need to restart the shim server from the webhook-shims directory. However, since we are only using the Slack shim in this example, there is an easier way. All we need to do is pull & run the container using these commands:

docker pull vmware/webhook-shims
docker run -it -p 5001:5001 vmware/webhook-shims

With the container running we can access its info page at http://dockerhostnameorip:5001/. This will show everything is up and running and that you can connect to the website.

The vROps Alerts
From the Alerts > Alert Settings > Notification Settings area, we can add a new rule. We will select the Rest Notification Plugin method and add a new instance, which we’ll name SlackWebhook_vrealize-bot. We could enter anything we want here, but we want to be as descriptive as possible. This name shows the service we are using, how we are connecting, and the application that will be doing the posting, which seems sufficient. The URL is where the magic happens. We’ll enter a URL like this:

http://dockerhostnameorip:5001/endpoint/slack/TTTTTTTTT/BBBBBBBBB/alphaNumericStr0ngOfText
This URL contains the name of the host our container is on, the port that the service is exposed on, the /endpoint/slack path that specifies which shim we want to use, and the TTTTTTTTT/BBBBBBBBB/alphaNumericStr0ngOfText portion that comes from our Slack webhook at the beginning of the article. We will leave the username and password blank (all of that authentication is done in our custom webhook URL). For content type we’ll select application/json. Pressing TEST should result in two new posts to our Slack channel.

Slack Channel posting from vRealize Operations Rest Notification Plugin / webhook shim

Now all that’s left is to define the filtering criteria for which alerts we want sent to Slack. For testing, I just set the criticality to Immediate or Critical, but we will likely want to narrow that down over time, as it is a bit too chatty.