ESXi Datastore Performance Battle: iSCSI vs. NFS on Synology NAS, In-Guest Performance

After a recent post on Synology iSCSI vs NFS (https://enterpriseadmins.org/blog/virtualization/esxi-datastore-performance-battle-iscsi-vs-nfs-on-synology-nas-with-vaai/), I realized that I had overlooked some important performance characteristics and wanted to follow up on that post.

First, the Synology DSM version used during both tests was 7.2-64570 Update 1. I mention this because performance improvements for iSCSI or NFS could occur between DSM releases, which could greatly impact the results.

Second, the previous tests focused only on VAAI offload performance. With the exception of the first test, which copied a file from local disk to the Synology DS723+, I didn't get a feel for whether read/write operations within the guests would show any performance differences.

I'm not a huge fan of IO benchmark tools, as they can be tweaked and biased towards a specific configuration. Instead, I came up with three quick tests to look at real-world tasks that I could measure against a specific guest VM. Since some of these tests also require CPU/memory, I performed them one at a time on a host running no other workloads, and ran each test three times to find the average runtime. These tests were completed from clones of the same Windows 2022 virtual machine, running on the same ESXi 8.0u1 host.
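
For reference, the run-three-times-and-average pattern can be scripted with a simple loop; the $testBlock variable below is just a stand-in for whichever of the measured operations described later is being timed.

# Sketch of the run-three-times-and-average pattern; $testBlock is a stand-in
# for whichever measured operation from the tests below is being timed.
$testBlock = { Start-Sleep -Seconds 1 }   # placeholder operation
$times = foreach ($i in 1..3) {
    (Measure-Command -Expression $testBlock).TotalSeconds
    # reset the VM to its starting state here before the next run
}
'{0:N2} seconds average' -f ($times | Measure-Object -Average).Average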

There are likely more sophisticated tests that could be performed. As outlined in the previous post, this NAS is currently connected with a single 1Gbps NIC and has no SSD. Those are knobs that could be adjusted to impact performance, but for comparison's sake the only variable intentionally adjusted for the tests below is whether the VM resides on NFS- or iSCSI-backed storage.

Test 1: Power on a Windows 2022 virtual machine, which should be primarily a read IO operation. I measured this with Measure-Command { Get-VM TestVM | Start-VM | Wait-Tools } | Select-Object TotalSeconds.
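
Something like the following captures one full cycle of this test; the VM name is a placeholder, and the guest shutdown at the end is just one way to get the VM back to a powered-off state before the next run.

# One power-on timing run against a placeholder VM name; the shutdown afterwards
# simply resets the VM to powered-off so the test can be repeated.
$seconds = (Measure-Command {
    Get-VM -Name 'TestVM' | Start-VM | Wait-Tools
}).TotalSeconds
Get-VM -Name 'TestVM' | Shutdown-VMGuest -Confirm:$false | Out-Null
'Power on took {0:N2} seconds' -f $seconds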

Test 2: Update VMware Tools, which should be primarily a write operation (updating/replacing existing files). I measured this with Measure-Command { Get-VM TestVM | Update-Tools } | Select-Object TotalSeconds.
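
As a quick sanity check that the upgrade actually ran, the guest's reported tools version can be compared before and after the timed operation; a rough sketch of that (the version check itself is not part of the timing):

# Time the tools upgrade; the before/after version check is only a sanity check.
$before = (Get-VM -Name 'TestVM').Guest.ToolsVersion
$seconds = (Measure-Command {
    Get-VM -Name 'TestVM' | Update-Tools
}).TotalSeconds
$after = (Get-VM -Name 'TestVM').Guest.ToolsVersion
'Tools upgrade took {0:N2} seconds (tools version {1} -> {2})' -f $seconds, $before, $after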

Test 3: File copy within the guest OS, which should be equal parts read and write. For this I picked an existing folder on the system with lots of small files and copied it to another disk in the same VM, with both disks backed by the same Synology storage. The source folder was 6.4GB with 66,544 files in 25,572 folders. I measured this with Measure-Command { robocopy C:\Windows\winsxs D:\winsxs /MT:4 /S /DCOPY:DT } | Select-Object TotalSeconds.
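
Because each run needs to copy the same amount of data, the destination folder has to be cleared between runs; a rough in-guest sketch of a single run, including that cleanup, looks like this:

# Run inside the guest OS: clear the destination so every run copies the same data,
# then time the robocopy of the small-file workload.
if (Test-Path 'D:\winsxs') { Remove-Item 'D:\winsxs' -Recurse -Force }
$seconds = (Measure-Command {
    robocopy C:\Windows\winsxs D:\winsxs /MT:4 /S /DCOPY:DT
}).TotalSeconds
'Robocopy took {0:N0} seconds' -f $seconds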

Test Type      Storage   Time 1   Time 2   Time 3   Average
Power On       NFS        57.79    52.67    52.68    54.38
Power On       iSCSI      58.82    53.75    47.69    53.42
Tools Upgrade  NFS        74.97    97.90    94.42    89.10
Tools Upgrade  iSCSI      89.13    92.95    80.42    87.50
Robocopy       NFS          799      763      757      773
Robocopy       iSCSI        609      679      686      658

Guest operation performance tests, all values in seconds.

Summary: Based on the table above, we can see that the first two tests have very similar results, with no noticeable difference between NFS and iSCSI performance. The average times for the two tasks (power on & tools upgrade) are within an acceptable margin of error. For the last test, using robocopy to duplicate a lot of small files, the VM on an iSCSI datastore completed the task about 2 minutes faster (on average) than the VM on an NFS datastore. As with the previous test, this is not what I had expected.
