VMware

Poor Performance of a Single VMDK on a VM with Multiple VMDKs

Hey all!

The title kind of describes the issue, but I need to give some more detail to outline the problem. A Nimble support case and a VMware support case have both been raised, but we have been unable to find a solution.

* We have a Nimble storage array
* VM01 (Server 2016) has a C: drive and an E: drive each on their own VMDK. (2 VMDKs on the Nimble)
* Using the “diskspd” tool, the C: drive on VM01 gets 223.17 MiB/s write speed with an average latency of 35.858 ms
* Using the “diskspd” tool, the E: drive on VM01 gets 71.52 MiB/s write speed with an average latency of 111.823 ms
* Using the “diskspd” tool, the C: drive on VM01 gets 1039.84 MiB/s read speed with an average latency of 7.691 ms
* Using the “diskspd” tool, the E: drive on VM01 gets 78.52 MiB/s read speed with an average latency of 101.329 ms

As you can see from those test results, two VMDKs on the same storage have drastically different performance and we can’t figure out why.
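I haven't included the exact diskspd command line above; the sketch below only shows the general shape of such a run from PowerShell, and every parameter in it is illustrative rather than the values we actually used:

```powershell
# Illustrative diskspd run (all parameters are assumptions, not the exact ones used above):
# -b64K : 64 KiB block size              -d60  : run for 60 seconds
# -o32  : 32 outstanding I/Os per thread -t4   : 4 worker threads
# -w100 : 100% writes (use -w0 for a pure read test)
# -Sh   : disable software caching and hardware write caching
# -L    : capture latency statistics     -c10G : create a 10 GiB test file
.\diskspd.exe -b64K -d60 -o32 -t4 -w100 -Sh -L -c10G C:\diskspd-test.dat
.\diskspd.exe -b64K -d60 -o32 -t4 -w100 -Sh -L -c10G E:\diskspd-test.dat
```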

Here is some extra information:

* Both VMDKs have the same storage policy.
* Both VMDKs reside on the Nimble storage array.
* Both VMDKs have the same limits (none) set.
* We have tried performing a vMotion to a different ESXi host.
* We have tried performing a vMotion to a different storage array.
* Any new VMDKs added to VM01 perform just like the C: drive (WAY better than the E: drive).
* Here is a [link](https://github.com/microsoft/diskspd) to the diskspd tool that was used.
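For anyone who wants to double-check the storage policy claim above, here is a PowerCLI sketch (it assumes an existing Connect-VIServer session; “VM01” is the VM name from this post):

```powershell
# Confirm that both VMDKs on VM01 carry the same storage policy (PowerCLI SPBM cmdlets).
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM "VM01") |
    Select-Object Entity, StoragePolicy
```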

Any help/advice would be greatly appreciated!




Comments

  1. I assume both hard disks are on the same datastore? And that datastore has no other VMs? I also assume VM01 is a fresh VM, i.e. it has not been migrated from or created on any other host or storage.

  2. Thick, Thin, or EagerZeroedThick? Expect anything other than EagerZeroedThick to have a pretty large write performance penalty.

    You may not see this on VMDKs which have already had data written to them, as they may have already gone through the allocation overhead during other workloads.

    Please review: https://kb.vmware.com/s/article/57254

    >The preparation steps associated with thin and regular thick-provisioned VMDKs do add some overhead to write operations. During normal production, these are not meaningful and performance is similar across all provisioning types. During large-scale consumption of new blocks, however, performance distinctions may be observable to the end user. As the payload cannot be written until the block is prepared, the payload must wait until preparation is complete before it can be written. When large write operations are initiated by a guest OS, this will involve a large volume of block preparation activities and could result in observably lower performance on thin or regular thick-provisioned disks when performance is compared to eager-zeroed disks.
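    If you want to confirm what the two disks actually are, here is a quick PowerCLI sketch (assuming a connected vCenter session; “VM01” is the VM name from the post):

    ```powershell
    # Show the provisioning format of every VMDK attached to VM01.
    # StorageFormat reads Thin, Thick (lazy zeroed), or EagerZeroedThick.
    Get-HardDisk -VM "VM01" | Select-Object Name, Filename, CapacityGB, StorageFormat
    ```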

  3. Does the disk have an IOPS limit set?

    Under Edit Settings, expand the disk and look at “Limit – IOPS” – it should be set to Unlimited. Also check the “Shares” value and its type.
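    Here is a PowerCLI sketch that dumps the per-disk limit and shares in one go (it assumes a connected vCenter session; a DiskLimitIOPerSecond of -1 means unlimited):

    ```powershell
    # List the IOPS limit and shares for every disk on VM01.
    # Key matches the disk's device key, i.e. (Get-HardDisk -VM "VM01").ExtensionData.Key.
    Get-VM "VM01" | Get-VMResourceConfiguration |
        Select-Object -ExpandProperty DiskResourceConfiguration |
        Select-Object Key, DiskLimitIOPerSecond, DiskSharesLevel, NumDiskShares
    ```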

  4. My first thought would be VMDK size. The larger they are, the more overhead. Also consider migrating the data to a new VMDK.

  5. Perform a thick provision eager zeroed (TPEZ) storage vMotion to a new datastore and test.

    Make sure block size and indexing are off.

    How much utilization is currently occurring on the drives?
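    If you go the storage vMotion route, the disk format can be converted in the same move; here is a PowerCLI sketch (the target datastore name is a placeholder, and a Connect-VIServer session is assumed):

    ```powershell
    # Storage vMotion VM01 to another datastore, converting all disks to eager zeroed thick.
    # "Datastore02" is a placeholder for the actual target datastore name.
    Move-VM -VM "VM01" -Datastore "Datastore02" -DiskStorageFormat EagerZeroedThick
    ```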

