
10GbE Networking Jumbo Frames Query

Hello All,

My colleagues and I have been discussing the optimal configuration for ESXi NICs using 10GbE.

The general consensus was that, now that we’re moving away from splitting traffic out across a crazy number of NICs like in the 1GbE days, everything gets converged over 2 or 4 10GbE NICs. But some people also said they would always dedicate NICs to iSCSI.

The issue that was raised was around iSCSI and jumbo frames. Jumbo frames are set at the vSwitch level, I believe. So if we create a single vSwitch and add all our uplink ports to it, will setting the MTU on the vSwitch to 9000 affect the other traffic (VM networking, management, etc.)?

Also, is there any guidance or official ‘best practices’ on vSwitch/NIC configurations? I know it’s heavily driven by the requirements, but I’ve not really found much detail beyond blog posts.

Thanks!




Comments

  1. We use 2 or 4 10G NICs per host. We’re all Enterprise Plus, so we only use dvSwitches with NIOC enabled: iSCSI set to “High”, VM traffic to “Normal”, and vMotion to “Low”. Never had an issue.

    MTU is indeed set per vSwitch, but you’ll need to make sure the physical switches backing the environment are set to 9K+ as well.
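
    For a standard vSwitch, that looks something like this from the ESXi shell (vSwitch0 is just a placeholder name; on a dvSwitch the MTU is set on the switch object in vCenter instead):

        # Raise the vSwitch MTU to 9000, then confirm it took
        esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
        esxcli network vswitch standard list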

  2. The vSwitch is set at 9000 MTU, but that doesn’t mean everything uses it; that’s just the MTU that’s allowed to pass through the switch, just like on your physical switch. Just because you set your switch to allow jumbo frames doesn’t mean they’re going to be used.

    Everything you listed (management, iSCSI), along with things like vMotion, has its MTU controlled by the setting on the VMkernel port. VM traffic is controlled by the port assigned to the VM on the vSwitch, and it’s the VM itself that controls the MTU it sends out. A Windows VM, for example, will most likely still use 1500 unless you manually edit the network adapter to use jumbo frames.
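
    Roughly, from the ESXi shell (vSwitch0 and vmk1 are placeholder names for the vSwitch and an iSCSI VMkernel port):

        # The vSwitch allows jumbo frames through...
        esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
        # ...but a VMkernel port only sends them once its own MTU is raised
        esxcli network ip interface set --interface-name=vmk1 --mtu=9000
        # Check the per-interface MTUs
        esxcli network ip interface list

    The guest side (e.g. the jumbo frame setting on the Windows NIC driver) still has to be changed inside the VM.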

  3. Lots of good info from the others. To add, we use dedicated, non-routable switches for storage, which obviously requires dedicated NICs. Invest in good switches with deep buffers if using jumbo frames; jumbo frames will kill cheap switches with shallow buffers. Or consider switches capable of running cut-through.

  4. Honestly, why would you use just one vSwitch? Best practice surely has to be to dedicate bandwidth to iSCSI and use multipathing, both for more performance (less important with 10G for most, but still) and for redundancy. One NIC per path: if one fails, the other keeps going; if both work, they share the load.

    Four NICs is a nice number in any host, I would say: two in a port group for network traffic, VLANs, etc., and two dedicated to iSCSI. It’s a tried-and-true approach, and the fact that you get more speed with 10 gig than with 1 gig really doesn’t change any of the reasoning, for me at least.
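
    As a rough sketch of the iSCSI side with esxcli (vSwitch1, vmnic2/vmnic3, vmk1/vmk2 and vmhba64 are placeholder names, and this assumes the VMkernel ports already live on those port groups; the same can be done in the vSphere Client):

        # One port group per path, each pinned to a single active uplink (the other uplink left unused)
        esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-A --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
        esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-B --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3
        # Bind the matching VMkernel ports to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
        esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
        esxcli iscsi networkportal list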

  5. When it comes to jumbo frames I’m kind of biased; I’ve seen too many implementations go wrong.

    If you and your colleagues know what you’re doing, fine. Otherwise I always try to find my way around jumbo frames, as they’re not going to make a huge performance difference; at least not enough to make the extra complexity and effort worth it.

    Oh and I highly recommend this blog article:
    https://blogs.vmware.com/virtualblocks/2019/01/30/whats-the-big-deal-with-jumbo-frames/

  6. The key here is to make sure jumbo frames are enabled all the way through your stack (switches, storage arrays, etc.). If you push jumbo frames through a device that is set to 1500 MTU, you will get dropped packets and retransmits that will choke your network.
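
    A quick way to prove that end to end is a don’t-fragment ping at jumbo size from the relevant VMkernel port (vmk1 and the target IP are placeholders; 8972 bytes is 9000 minus the IP and ICMP headers):

        # Fails outright if any hop in the path is still at 1500, rather than silently fragmenting
        vmkping -I vmk1 -d -s 8972 192.168.50.10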

  7. If you converge all NICs on one virtual switch, consider potential exhaustion of the defaultTcpIpStack buffer and heap, especially when you have storage traffic flowing on that virtual switch. Forking storage out to a separate vSwitch would circumvent that.

    MTU can be set at both the vSwitch level (mandatory if any consumer uses an MTU greater than 1500) and the VMkernel port level, so you can run a mix of virtual ports at 9000 and 1500 without problems, as long as the physical infrastructure copes.

  8. No, increasing the MTU at the L2 level (the vSwitch) does not impact any L3 interfaces unless you specifically configure an L3 interface with a larger MTU.

    But always segregate disk I/O (iSCSI/NFS) from any other traffic; you will kill the performance of all of your applications if you end up slowing down disk access because of vMotion or some high-volume VM traffic.

  9. Why don’t you try running iSCSI without jumbo frames first? Depending on your network topology, you might not have jumbo frames end to end. In my experience, jumbo frames don’t give iSCSI any major benefits.

    If you have proven, based on tests, that you need jumbo frames, I would definitely have dedicated NICs for iSCSI. This will help your operations teams as well, for obvious reasons.
