I’m curious what all my virtualization comrades have to say about this! 🙂 Back in the Server 2003 days, TCP/IP offloading became a strong talking point at my office, and we chose to disable all offload settings on our TCPv4 NICs. We had SQL performance issues that seemed tied to the offloading settings, and disabling them did help.
My question is: is this even a thing anymore? From what I understand, offloading was implemented as sort of a helper for slower processors (pushing TCP work onto the NIC to spare the CPU), but today we’re running on very robust FlexPods for ESXi, or very capable Dell physical servers for applications that need a dedicated physical host. Not to mention we’re in the Server 2016/2019 era now, where I think performance has changed significantly since the 2003 OS. I hate to keep doing a thing just because it’s what we’ve always done.
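For anyone who wants to audit before blanket-disabling: on 2016/2019 you can check the current offload state with PowerShell instead of digging through NIC property pages. A rough sketch; the adapter name "Ethernet" is just a placeholder for whatever your NICs are called:

```shell
# List per-adapter advanced properties and filter to the offload-related ones:
Get-NetAdapter | Get-NetAdapterAdvancedProperty |
    Where-Object DisplayName -Match 'Offload'

# Show the OS-level offload settings. Note: TCP Chimney Offload (the big
# 2003/2008-era culprit) was removed entirely in Server 2016, so the old
# "netsh int tcp set global chimney=disabled" tweak no longer applies.
Get-NetOffloadGlobalSetting

# If you still want to A/B test with offloads off, disable them per adapter:
Disable-NetAdapterChecksumOffload -Name "Ethernet"
Disable-NetAdapterLso -Name "Ethernet"   # Large Send Offload (LSO)
```

These cmdlets are environment-dependent, so run the Get-* commands first and baseline your SQL workload before and after any change rather than assuming the 2003-era advice still holds.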
Posted by OldMustardBeard on Reddit