
Local storage performance tuning

Hi, I’m benchmarking a single Optane drive to evaluate the prospects of high-speed, low-latency local storage configurations.


I’m running Iometer in a single Server 2019 VM on the host, and I’m kind of disappointed with the results. As a baseline, I got close to Intel’s advertised numbers on a bare-metal Server 2019 installation: almost 600K IOPS heavily loaded, and about 70K IOPS at 14 us latency with a single thread at QD=1.
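For anyone who wants to reproduce the baseline without the Iometer GUI, a roughly equivalent diskspd run for the single-thread QD=1 random-read test would look something like this (target path and test-file size are just placeholders):

    diskspd.exe -b4K -d60 -t1 -o1 -r -Sh -L -c8G D:\iobench.dat

Here -t1 -o1 pins one worker at queue depth 1, -r requests random I/O, -b4K sets 4K blocks, -Sh bypasses software and hardware write caching, and -L records per-I/O latency; reads are the default workload.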

Next I moved to an ESXi 6.7 installation with the drive passed through to a Server 2019 VM via DirectPath I/O. This config clocked 500K IOPS loaded and 35K IOPS at QD=1, with 28 us latency. OK, some performance lost, nothing too surprising, although the latency-sensitive load did take a 50% hit, which is substantial.
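For reference, once the device is toggled for passthrough on the host, the DirectPath side of the VM config boils down to a couple of .vmx entries along these lines (the PCI address below is a placeholder; the exact address format depends on your build, and passthrough also forces a full memory reservation for the VM):

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "0000:3b:00.0"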


Finally, I set up a VMFS datastore instead of the DirectPath config. Here’s what I got: 200K IOPS at full load, and about 23K with 44 us latency at QD1. So basically a third of the bare-metal performance. I’ve tried all the virtual storage controllers (interestingly, the LSI SAS gave the best numbers), I’ve tried HPP, I’ve tried adjusting reqCallThreshold, I’ve tried basically all the tricks described in Performance Best Practices.
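For the record, the reqCallThreshold tweak mentioned above is a per-controller .vmx entry, as described in the Performance Best Practices guide (scsi0 here stands for whichever PVSCSI controller the test disk sits on):

    scsi0.reqCallThreshold = "1"

Lower values make the PVSCSI driver kick requests to the backend sooner, which is supposed to help latency-sensitive, low-QD workloads.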

So my question is: is such a performance penalty expected for high-speed storage? Losing 60–70% of performance to virtualization seems like a lot. Do you have any experience to share?


Also, a related question: for some reason, using the virtual NVMe controller for the VM results in a microscopic 400–500 read IOPS at QD1, with high latency. Writes are fine. Sometimes tuning reqCallThreshold on a PVSCSI controller gets me the same result, while with the same PVSCSI and no reqCallThreshold set (or set to 1) I steadily get 20K+ (40x more). Has anyone got any insight on this?
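For completeness, one in-guest variable worth ruling out when chasing PVSCSI queue behavior is the Windows PVSCSI driver’s ring and queue sizing, which VMware KB 2053145 says can be raised via the registry (takes effect after a guest reboot):

    REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"

This raises the adapter queue from its defaults; it probably doesn’t explain the virtual NVMe read anomaly, but it eliminates one variable.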



Originally posted on Reddit by vat11.




One Comment

  1. Create an HPP claim rule by running the following sample command:
    esxcli storage core claimrule add -r 10 -t vendor -V=NVMe -M=* -P HPP --force-reserved
    This sample command instructs the HPP to claim all devices with the vendor NVMe. Modify this rule to claim only the devices you specify. Keep these recommendations in mind:
    For the rule ID parameter, use a number in the 1–49 range to make sure that the HPP claim rule precedes the built-in NMP rules. The default NMP rules with IDs 50–54 are reserved for locally attached storage devices.
    Use the --force-reserved option. With this option, you can add a rule into the 0–100 range that is reserved for internal VMware use.
  2. Reboot your host for your changes to take effect.
  3. Verify that the HPP claimed the appropriate device (see the commands just below).
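A couple of commands that cover the verification step, assuming the hpp esxcli namespace is available on your build:

    esxcli storage core claimrule list
    esxcli storage hpp device list

After the reboot, the first should show the new rule in both the file and runtime classes, and the second lists the devices the HPP has actually claimed.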
