Poor NVMe datastore performance

Well, here’s the thing:

I had a Windows 10 VM with an NVMe SSD passed through to it as a PCIe device. Windows was installed on that drive and performance was near-native, as expected.

Then, realizing that 1 TB for Windows’s C: drive is a complete waste of precious fast storage, and since I had to reinstall the damn thing anyway, I stopped passing the drive through as a PCIe device and created a datastore on it instead.

When specifying a 250 GB virtual disk for Windows, I saved it, thick provisioned eager zeroed, on the NVMe datastore.
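(For reference, this is the kind of disk vmkfstools creates from the ESXi host shell — a sketch with a hypothetical datastore path, not my exact commands:)

```
# Create a 250 GB thick, eager-zeroed virtual disk on the NVMe datastore
vmkfstools -c 250G -d eagerzeroedthick /vmfs/volumes/nvme-ds/win10/win10.vmdk
```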

I don’t know what I was expecting, but 1.6 GB/s is not it. I was getting 3.5 GB/s on bare metal.

Is this normal, or is there something I’m missing?

The P900 has all the performance needed, BUT I DID shrink the vCPU count a bit from the previous installation, from 12 vCPUs to 8… could that be it?


EDIT: fixed syntax and added some new details: the virtual disk controller is LSI Logic SAS, not LSI Logic Parallel and certainly not VMware Paravirtual. Could this make a difference? I have VMware Tools installed. Can I change the controller type and, with Windows having the drivers, expect it to wake up again?
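(For anyone answering: the controller type is just a line in the VM’s .vmx file — a sketch of the relevant entries, assuming a single SCSI controller; key names are standard, values here are illustrative:)

```
scsi0.present = "TRUE"
# Current controller (LSI Logic SAS):
scsi0.virtualDev = "lsisas1068"
# The paravirtual controller would instead be:
# scsi0.virtualDev = "pvscsi"
```

VMware’s documented workaround for switching a Windows boot disk to PVSCSI is to first attach a small temporary disk on a second PVSCSI controller so the driver gets installed, boot once, then change the boot controller and remove the temporary disk.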


Comments


  1. Any other VMs running on this host? Any competition for CPU resources? (I don’t expect in any case that this would affect disk transfers very much.)

  2. Whoa…

    How many cores are in the host? Not hyperthreads, real cores.

    Why would you create a VM’s virtual disk to be thick, eager zeroed?

  3. What’s running in this now-8 vCPU Windows VM that requires 8 vCPUs?

    The NVMe SSD, is that PCIe or SATA?

    If you change the disk controller type, you’re likely to lose connectivity to your virtual hard disk and the VM will not start. You can change the controller if you recreate your disk (or edit the disk header).

    I believe lazy-zeroed disks have the best performance, but the difference between thin provisioning and either thick-provisioned format is negligible for most things. You’re just wasting space.
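If the disk does get re-created rather than edited, vmkfstools can clone it into a different provisioning format with the VM powered off — a sketch with hypothetical paths (clone first, point the VM at the new disk, and delete the original only after a successful boot):

```
# Clone the existing eager-zeroed disk to a thin-provisioned copy
vmkfstools -i /vmfs/volumes/nvme-ds/win10/win10.vmdk \
  -d thin /vmfs/volumes/nvme-ds/win10/win10-thin.vmdk
```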
