ESXi Datastore Performance Battle: iSCSI vs. NFS on Synology NAS with VAAI

I recently got my hands on a new Synology DS723+ and a pair of 4TB HDDs for my home lab. My plan is to use this device as a backup target, either as a storage destination for my existing backup product or perhaps with the Synology Active Backup application that can be enabled on the device. I’ll be doing a bit more testing with this in the future and plan to write up those results in a future post.

However, since I had this new device, I wanted to revisit a decision I made several years ago: whether I should use iSCSI or NFS for my ESXi Datastores. I wanted to do a side-by-side comparison of the protocols to make sure I was using the optimal one. I also wanted to test NFS with & without the NFS VAAI plugin, to confirm it was worth installing as part of host builds. Several years ago, I purchased a Synology DS1817+ to use as the primary storage in my lab and decided to use NFS with the VAAI plugin. This has worked very well for me, but it never hurts to re-think a decision.
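As a side note, before comparing NFS with & without the plugin, it helps to confirm whether the NFS VAAI VIB is actually present on a host. A minimal check from the ESXi shell might look like the following sketch; the exact VIB name and the offline-bundle path are assumptions and will vary by Synology release:

```shell
# List installed VIBs and look for the Synology NFS VAAI plugin
esxcli software vib list | grep -i nfs

# If it is missing, install from the offline bundle downloaded from Synology
# (hypothetical path shown; a host reboot may be required afterwards)
esxcli software vib install -d /tmp/esxi-nfsplugin.zip
```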

For this test I decided to perform a couple of virtual machine copy and move operations. I wanted to use the same virtual machine, but decided to use one smaller template and one that was a bit larger. Here are some storage details on the two VMs that I used for comparisons:

  • photon5 – 16GB VMDK, thin provisioned & using about 860MB on disk
  • win22 – 50GB VMDK, thin provisioned & using about 16GB on disk

Both test VMs were placed on a local NVME disk on a test host which also had a pair of NFS datastores and a pair of iSCSI datastores backed by the Synology DS723+, connected to the network with a single 1GbE port. The DS723+ has two gigabit connections available and can be upgraded to include a 10GbE connection. However, for this test I wanted to focus specifically on storage protocols and having this bottleneck in the network and using HDD instead of SSD seemed acceptable. Changing these factors would improve performance, but my focus of this test was the protocol & VAAI impact instead of pure performance.

To execute the tests & capture results, I decided to run PowerCLI commands inside a Measure-Command block and focus on total seconds. I had originally started this test using ESXi 7.0 only, but after seeing a forum thread mention that the Synology NFS VAAI VIB would work with vSphere 8.0 (although it is not officially supported), I decided to double up all the tests to see if drastic differences existed between the two versions. From the testing below, we can see that the ESXi version had no noticeable impact on the results. I listed the total seconds for the first test run only, but did see some variance across multiple runs. I suspect that averaging the results over 10 runs would bring the two versions even closer together.
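Averaging repeated runs is straightforward to script. The sketch below assumes an active Connect-VIServer session; the VM, host, and datastore names are the ones used elsewhere in this post, and the per-run clone name suffix is my own convention:

```powershell
# Sketch: time 10 clone operations and average the results.
# Assumes a connected PowerCLI session (Connect-VIServer).
$runs = 1..10 | ForEach-Object {
    $t = Measure-Command {
        New-VM -VM 'localdisk_photon5' -Name "nfs_photon5_$_" -VMHost 'testhost' -Datastore 'nfs1'
    }
    # Clean up each clone so runs don't accumulate on the datastore
    Remove-VM -VM "nfs_photon5_$_" -DeletePermanently -Confirm:$false
    $t.TotalSeconds
}
($runs | Measure-Object -Average).Average
```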

Test1: Full VM Copy across the network – completed & measured using syntax similar to: Measure-Command {New-VM -VM 'localdisk_photon5' -Name 'nfs_photon5' -VMHost 'testhost' -Datastore 'nfs1'} | Select-Object TotalSeconds

VM | ESXi | Source DS | Dest DS | Time (seconds)
Results of Test Case 1

Based on the table above, we can see that the performance of all the New-VM operations was roughly the same for a given source VM size.

Test2: Full VM Copy between datastores of the same type – completed & measured using syntax similar to: Measure-Command { Move-VM -VM 'nfs_photon5' -Datastore 'nfs2' } | Select-Object TotalSeconds

VM | ESXi | Source DS | Dest DS | Time (seconds)
photon5 | 7.0u3 | NFS (VAAI) | NFS (VAAI) | 22.31
photon5 | 8.0u1 | NFS (VAAI) | NFS (VAAI) | 21.97
win22 | 7.0u3 | NFS (VAAI) | NFS (VAAI) | 231.77
win22 | 8.0u1 | NFS (VAAI) | NFS (VAAI) | 225.20
Results of Test Case 2

I did not include the specific results, but the Move-VM tests above had similar times to New-VM copies that used the same source/destination datastores. For example, a Windows 2022 New-VM from iSCSI to iSCSI (similar to the last 2 rows of the above table) took only 3.83 seconds. This is most likely because both clone and migrate requests use the XCOPY VAAI primitive.
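As a sanity check that XCOPY offload is actually available on the iSCSI side, per-device VAAI support can be queried from the ESXi shell. This is a sketch; run it on the test host and look for the Synology-backed LUN:

```shell
# Show supported VAAI primitives for each block device.
# "Clone Status: supported" indicates XCOPY offload is available for that LUN.
esxcli storage core device vaai status get
```

NFS datastores do not appear in this block-device output; their VAAI support comes from the vendor NAS plugin instead.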

Based on the table above, we observe some differences for NFS vs. iSCSI. Installing the NFS VAAI plugin offered a significant improvement over the default NFS configuration. However, iSCSI VAAI performance for Move-VM operations was noticeably better than even the best-case NFS configuration.

Summary: These are not the results I had initially expected. I had assumed NFS performance with VAAI enabled would have been much closer to the iSCSI performance numbers. I was very impressed by the iSCSI performance for clones/moves when using VAAI. Based on Test 1, I’m likely to leave the majority of VMs already deployed in their current NFS configuration. I don’t move the VMs around and the VMs aren’t observing any latency, so I can’t think of a compelling reason to go back & move them. However, based on Test 2, I do plan to make some changes. I currently have a couple Aria Automation deployments that use linked clones to improve provisioning times. I’m likely to revisit that configuration, moving the template VMs & target deployments to iSCSI datastores backed by the same Synology array to unlock the super-fast cloning.

This entry was posted in Lab Infrastructure, Virtualization.

