LDAP query performance

The other day I was discussing Active Directory performance with a co-worker, specifically the performance of standard LDAP queries. The complaint (and the reason I was pulled into the discussion) was poor LDAP performance, which was assumed to be caused by queries hitting domain controllers running as virtual machines. That assumption rested on the fact that some queries completed quickly while similar queries were much slower. After some discussion, and after looking at the queries themselves, I believed the problem was more likely poorly designed queries, so I set off to prove it with actual data.

The queries in question searched an attribute that was indexed and part of the partialAttributeSet (i.e., replicated to Global Catalogs). The queries that completed quickly searched for a specific value, while the slow queries included multiple wildcards in the value. To test my hypothesis, I created a simple script to perform the same LDAP searches several times: some with a specific value, some with a single wildcard, and others with two wildcards:


# Time each style of search five times and collect the results
$report = @()
for ($i = 1; $i -le 5; $i++) {
    $item = "" | Select-Object Count, SpecificValue, SingleWildcard, TwoWildcards
    $item.Count = $i
    $item.SpecificValue = (Measure-Command { Get-DN user mail "testuser@test.domain" }).TotalMilliseconds
    $item.SingleWildcard = (Measure-Command { Get-DN user mail "testuser*" }).TotalMilliseconds
    $item.TwoWildcards = (Measure-Command { Get-DN user mail "*tuser*" }).TotalMilliseconds
    $report += $item
}
$report

*Note: Get-DN is a custom PowerShell function created by a co-worker, which I keep loaded in my profile; it performs a Global Catalog search. It builds the LDAP filter from three arguments: 1.) the objectCategory to search, 2.) the attribute to search, and 3.) the value to search for in that attribute.
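Since Get-DN itself isn't published here, the following is a minimal sketch of what such a function might look like, using System.DirectoryServices against the Global Catalog. The parameter names and filter format are my assumptions, not the actual function:

```powershell
# Hypothetical sketch of a Get-DN-style function; the real Get-DN is a
# co-worker's private function, so names and details here are assumptions.
function Get-DN {
    param(
        [string]$ObjectCategory,  # e.g. 'user'
        [string]$Attribute,       # e.g. 'mail'
        [string]$Value            # e.g. 'testuser@test.domain' or 'testuser*'
    )
    # Search the Global Catalog rather than a single domain partition
    $root = [ADSI]"GC://$(([ADSI]'LDAP://RootDSE').rootDomainNamingContext)"
    $searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
    $searcher.Filter = "(&(objectCategory=$ObjectCategory)($Attribute=$Value))"
    [void]$searcher.PropertiesToLoad.Add('distinguishedName')
    $searcher.FindAll() | ForEach-Object { $_.Properties['distinguishedname'][0] }
}
```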

The results of the searches can be seen below:

Count SpecificValue SingleWildcard TwoWildcards
----- ------------- -------------- ------------
    1        9.5489         9.3398   11066.8601
    2        7.7018          7.335   11102.0755
    3        7.8635         8.3801   11067.5442
    4        8.0664         7.6145   11137.9768
    5       10.2233         8.9622   11132.9646

The averages speak volumes about this test:

Property : SpecificValue
Average  : 8.68078

Property : SingleWildcard
Average  : 8.32632

Property : TwoWildcards
Average  : 11101.48424
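Averages in that shape can be produced directly from the `$report` array with Measure-Object, for example:

```powershell
# Average each timing column collected in $report
'SpecificValue', 'SingleWildcard', 'TwoWildcards' | ForEach-Object {
    $report | Measure-Object -Property $_ -Average |
        Select-Object Property, Average
}
```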

As I expected, placing multiple wildcards in an LDAP search greatly degrades search performance, even when the attribute being searched is indexed. What I found somewhat surprising is that a single wildcard has almost no impact: in this test a single wildcard actually outperformed the search for a specific value, though only by a fraction of a millisecond.

I ran these same queries several times with the target server hard-coded to 1.) a physical GC only, 2.) a virtual GC only, and 3.) whatever GC DNS resolution of the domain name returned. Each result set was very similar (within fractions of a millisecond), so I included only the default result set, where DNS resolves the domain name to a GC.

This article is filed under scripting (and not virtualization) because the results show that inefficient queries have a far greater impact on LDAP directory performance than whether your domain controllers are physical or virtual.

Posted in Messaging, Scripting | 1 Comment

Network configuration missing from ESXi host

I came back from an awesome week at VMworld to find a very odd networking issue on several production ESXi 4.1 hosts. I first noticed that the hosts in question had an HA warning, so I attempted to ‘Reconfigure for VMware HA’ which resulted in the following error message:

Cannot complete the configuration of the HA agent on the host. Misconfiguration in the host network setup.

I decided to place one of the hosts into maintenance mode while I investigated. This failed, with several warnings during the vMotion operations:

The vMotion interface is not configured (or is misconfigured) on the Source host.
Currently connected network interface 'Production' uses network 'Production' which is configured for different offload or security policies on the destination host than on the source host.
The vMotion interface of the destination host uses network '<unknown network>', which differs from the network '<unknown network>' used by the vMotion interface of the source host.

Image of vMotion error

I then decided to check out the networking configuration to see what was going on. I clicked the host's Configuration tab, then Networking, and waited. I never got more than the following screen, even after selecting ‘refresh’.

Thinking something wasn’t displaying correctly, I moved to the Network Adapter link. What I found there was even more alarming:

Just for clarification, all six of those network interfaces should have been assigned to vSwitches! The oddest part was that the hosts still had active, running, responding virtual machines, yet showed no visible signs of network configuration.

I then pointed my vSphere Client directly at the host, thinking something was wrong in vCenter. No such luck: I received the same results. I enabled Tech Support Mode, logged in over SSH, and listed all of the vSwitches using the following command:

esxcfg-vswitch -l

Fortunately, all of the vSwitches were still intact, which explains how the VMs were still online. I then checked esx.conf to see if my NICs and port groups were still properly defined:

cat /etc/vmware/esx.conf |grep -i nic
cat /etc/vmware/esx.conf |grep -i portgroup

Since everything was in order I went to the DCUI on the console and restarted the management agents on one host. A few seconds later everything was back in working order and I was able to re-enable HA. This was a very simple fix, but it is one of the weirdest network issues I’ve ever seen on an ESXi host.
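For reference, the same restart can also be done from Tech Support Mode rather than the DCUI; on ESXi 4.x the management agents can be restarted from the shell (I used the DCUI option in this case, so this path is untested here):

```shell
# Restart the ESXi management agents from Tech Support Mode,
# equivalent to the DCUI 'Restart Management Agents' option
/sbin/services.sh restart
```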

Posted in Virtualization | 5 Comments

PowerCLI vCheck and PowerShell 1.0 support

I’ve been reviewing some of the vCheck 5 code for possible improvements. There are a few changes that I believe could simplify the script but require PowerShell v2. One that comes to mind is using Send-MailMessage instead of a custom function that sends mail using the .NET mail classes. To maintain backwards compatibility I have left this alone…until now! While looking over the PowerCLI 5 (build 435426) release notes, I noticed the following heading:

Discontinued Support
Support for PowerShell 1.0 will be removed in the next PowerCLI release.

This is important to note, and it also means that I will start using more PowerShell 2 going forward. The current build of vCheck I’m working on (5.45) will contain several improvements in the way output is handled. I’m planning to move all of the export tasks into a function within the configuration section of the script and to switch the mail function to Send-MailMessage. This will reduce the line count, since the custom function can be removed, and will also be easier for newer PowerShell users to understand.
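As an illustration, the custom .NET mail function could collapse to a single cmdlet call along these lines (the server name, addresses, and report variable below are placeholders, not vCheck's actual configuration values):

```powershell
# PowerShell v2+ replacement for a custom System.Net.Mail function;
# server, addresses, and $MyReport are placeholder values
Send-MailMessage -SmtpServer 'smtp.example.com' `
    -From 'vcheck@example.com' -To 'admins@example.com' `
    -Subject 'vCheck Report' -BodyAsHtml -Body $MyReport
```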

Posted in Scripting, Virtualization | Leave a comment

PowerCLI vCheck 5.44

It has been nearly three months since I last posted an update to vCheck, but I haven’t forgotten about the script. Here are a few things I have been working on:

# Version 5.44- bwuch: cleaned up comments and logging
# Version 5.43- bwuch: resolved bug with "Host Build versions in use" counter
# Version 5.42- bwuch: Added Cluster BIOS Check
# Version 5.41- bwuch: Resolved PowerCLI 4.1 warning on line 1327 re: LunPath

5.41 – In response to a comment on vCheck 5.40, I’ve added a bug fix on what used to be line 1327. This fix prevents a warning when checking LUN paths with the 4.1.1 version of PowerCLI.
5.42 – This is a newly added check that I thought was important. It validates that all nodes in an HA/DRS cluster are running the same BIOS version. I’ve had problems with BIOS updates causing PSODs, and a specific issue with Dell blades where vMotion failed on a cluster node that had a newer BIOS than the other nodes.
5.43 – This is a minor cosmetic fix on the check for ESX/ESXi versions in use; when only one version was in use, the counter in the header showed the number of hosts instead of the number of versions. This has been resolved.
5.44 – The console logging (using the Write-CustomOut function) was being called outside of the checks in certain cases, which caused the console to report checks that were not actually being performed. This has been resolved. Additionally, some unneeded code (which had been commented out for several versions) has been removed from the code base.
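A check along the lines of 5.42 could be sketched in PowerCLI as follows. The BIOS version comes from the host's Hardware.BiosInfo view property; the grouping and output shape are my own sketch, not the exact vCheck code:

```powershell
# Sketch of a cluster BIOS consistency check; not the exact vCheck code
Get-Cluster | ForEach-Object {
    $cluster = $_
    $versions = $cluster | Get-VMHost | ForEach-Object {
        ($_ | Get-View).Hardware.BiosInfo.BiosVersion
    } | Sort-Object -Unique
    if (@($versions).Count -gt 1) {
        # More than one BIOS version in the cluster - flag it
        New-Object PSObject -Property @{
            Cluster      = $cluster.Name
            BiosVersions = $versions -join ', '
        }
    }
}
```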

You can download the updated version here: vCheck5.44.ps1
Please feel free to leave a comment with any suggestions or problems you may encounter.

Posted in Scripting, Virtualization | 4 Comments

vCheck 1.14: Inactive machines count

I was looking through the vCheck feature-request tracking spreadsheet that I’ve been maintaining for several months. I found this comment, originally posted in September 2010:

One of the Heading in the scripts says number of Inactive machines. What exactly it means that the machines are turned of or what. Secondly it doesnt list those VM’s. Can i have code for that please. Regards

Reviewing the code today, it appears this section was added back in version 1.14. The ‘Inactive VM’ section is simply a count of powered-off virtual machines. The following code will produce the actual list:

$FullVM = Get-View -ViewType VirtualMachine | Where {-not $_.Config.Template}
$FullVM | where { $_.Runtime.PowerState -eq "poweredOff" }

In version 5.35, a section titled “Powered off VMs” was added that includes the last power-on event time for each powered-off server. I hope this helps!
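For readers who want that extra detail without upgrading, something along these lines could pull the last power-on event for each powered-off VM via Get-VIEvent. The event sample size is an arbitrary assumption, and this is a sketch rather than the 5.35 code:

```powershell
# Sketch: last power-on event per powered-off VM; -MaxSamples is arbitrary
$FullVM = Get-View -ViewType VirtualMachine | Where {-not $_.Config.Template}
$poweredOff = $FullVM | Where { $_.Runtime.PowerState -eq "poweredOff" }
foreach ($vm in $poweredOff) {
    $event = Get-VIEvent -Entity (Get-VIObjectByVIView $vm) -MaxSamples 1000 |
        Where-Object { $_ -is [VMware.Vim.VmPoweredOnEvent] } |
        Sort-Object CreatedTime -Descending | Select-Object -First 1
    New-Object PSObject -Property @{
        Name        = $vm.Name
        LastPowerOn = if ($event) { $event.CreatedTime } else { 'Unknown' }
    }
}
```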

Posted in Scripting, Virtualization | Leave a comment