Nested vSAN poor performance

This is something that’s been bugging me for a while. I’ll start off with the host PC specs/setup.

Windows 10 Pro
AMD Threadripper 3960X
256GB RAM
Mellanox ConnectX-2 10 Gbit adaptor (set to jumbo frames, as are the VMnet adaptors; MTU check sketch after this list)
OS drive – Corsair MP510 1920GB NVMe
Workstation drive 1 – Samsung 960 Evo 1TB NVMe (nested host 1 on this drive)
Workstation drive 2 – Sabrent 2TB NVMe (nested hosts 2, 3 and 4 on this drive)
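
Since jumbo frames only help if they survive every hop, it may be worth confirming the VMnet adaptors really took the 9000 MTU before chasing anything else. A minimal check from the Windows 10 host; the 192.168.1.21 address is just a placeholder for one nested host's management IP:

:: List the MTU each adapter reports, including the VMware VMnet adapters
netsh interface ipv4 show subinterfaces

:: Don't-fragment ping with an 8972-byte payload (9000 minus IP/ICMP headers);
:: if this fails, jumbo frames aren't being honoured end to end
ping -f -l 8972 192.168.1.21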

VMware Workstation setup:
4x ESXi hosts, each with: 8 vCPUs, 60GB RAM, a 6GB boot drive for ESXi, a 75GB vSAN cache disk and a 500GB vSAN capacity disk
2 NICs per host: NIC1 is bridged and carries VM/management traffic; NIC2 is on an internal-only VMnet and carries vMotion/vSAN.
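
One thing worth ruling out on the Workstation side: nested ESXi guests are often created with the emulated e1000e NIC, which struggles to get anywhere near 10 Gbit, whereas vmxnet3 performs far better. A minimal sketch of the .vmx entries to check with each VM powered off (the ethernet numbering and the "VMnet2" name are assumptions; match them to your own layout):

ethernet0.virtualDev = "vmxnet3"
ethernet0.connectionType = "bridged"
ethernet1.virtualDev = "vmxnet3"
ethernet1.connectionType = "custom"
ethernet1.vnet = "VMnet2"

If virtualDev is missing or reads "e1000e", that alone can explain a nested host-to-host ceiling of a few Gbps.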

ESXi setup:
Nothing special really: two standard switches, one attached to each NIC. Both switches are set to 9000 MTU, as is the vmkernel port for vMotion and vSAN.
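
With 9000 MTU set at several layers, a quick end-to-end check from an ESXi shell on one of the nested hosts can rule out silent fragmentation. A sketch, assuming vmk1 is the vMotion/vSAN vmkernel port and 192.168.100.12 is the second host's address on that network (both placeholders):

# Confirm the 9000 MTU actually applied to the switches and vmkernel ports
esxcli network vswitch standard list
esxcli network ip interface list

# Don't-fragment vmkping with an 8972-byte payload across the vSAN/vMotion network
vmkping -I vmk1 -d -s 8972 192.168.100.12

If the large vmkping fails while a plain vmkping works, something in the path is dropping jumbo frames.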

The PC is connected to a 10 Gbit Mikrotik switch, which connects into a UniFi 16 XG switch. Hanging off this switch I have a QNAP NAS with a 10 Gbit NIC.

iPerf tests:
Win 10 host to nested ESXi – 1.3 Gbps (no idea why!)
QNAP NAS to Win 10 host – 9.8 Gbps
ESXi host to QNAP – 1.6 Gbps
Nested host 1 to nested host 2 – 3.5 Gbps (this is on both NICs; re-test sketch below)
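
To narrow down the 3.5 Gbps result, it may be worth re-testing with the iperf3 binary that ships with ESXi, bound to the vSAN vmk addresses and run with parallel streams, to see whether this is a single-stream limit or a hard ceiling. A rough sketch only: the path and the copy workaround vary by ESXi build, and the addresses are placeholders.

# On nested host 2 (server): the bundled iperf3 won't run under its own name on
# many builds, so the usual workaround is to copy it first
cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
esxcli network firewall set --enabled false      # lab only; re-enabled below
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B 192.168.100.12

# On nested host 1 (client), after making the same copy there:
# 4 parallel streams for 30 seconds against host 2's vSAN vmk IP
/usr/lib/vmware/vsan/bin/iperf3.copy -c 192.168.100.12 -P 4 -t 30

# Re-enable the firewall when finished
esxcli network firewall set --enabled true

If parallel streams scale well past 3.5 Gbps, the limit is per-stream; if they don't, it points more at the virtual NIC or VMnet layer.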

I’m fairly confident the problem is the woeful nested host-to-host networking performance, and my backups are being killed by the poor Windows 10 to nested host transfer speeds.

Any reason for this? Should I reduce the number of vCPUs assigned to each VM, and perhaps run more hosts with less hardware/storage each?

I don’t normally run nested hosts but I made the move as my physical boxes were simply too much in terms of heat and noise.


Originally posted on Reddit by ChrisFD2.
