Our cluster hosts have two 100Gb NICs, so we had to choose between an NSX-managed DVS and an N-VDS switch. We went with N-VDS to take advantage of the distributed firewall. N-VDS doesn't appear to be supported by vSphere 7 Workload Management.
The initial release of vSphere 7 let us deploy Workload Management through the UI, but the Kubernetes supervisor VMs had to be attached to the port groups manually, and the deployment would get stuck in the "Configuring" status even though all nodes were in the Ready state and pods were deployed.
The vSphere 7 Update 1 UI for Workload Management does not show the N-VDS switch, so it won't proceed. I can enable Workload Management through a Python script, but the deployment hits the same issues: the supervisor VMs need to be manually connected to the port group, the nodes are in the Ready state, 16 namespaces and 92 pods are deployed, yet the deployment is still stuck in the Configuring state.
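For what it's worth, this is roughly how I confirm the "all nodes Ready" part of the claim above: parse `kubectl get nodes -o json` and check each node's Ready condition. A minimal sketch; the node names in the sample document are placeholders, not my real supervisor VM names.

```python
# Hedged sketch, not an official VMware tool: verify every supervisor node
# reports Ready by parsing `kubectl get nodes -o json` output.
# SAMPLE below is an illustrative stand-in for the real kubectl output.
import json

SAMPLE = json.dumps({
    "items": [
        {
            "metadata": {"name": "supervisor-vm-1"},  # placeholder name
            "status": {"conditions": [{"type": "Ready", "status": "True"}]},
        },
        {
            "metadata": {"name": "supervisor-vm-2"},  # placeholder name
            "status": {"conditions": [{"type": "Ready", "status": "True"}]},
        },
    ]
})

def ready_by_node(doc_json: str) -> dict:
    """Map node name -> True when that node's Ready condition is 'True'."""
    doc = json.loads(doc_json)
    result = {}
    for item in doc["items"]:
        conds = {c["type"]: c["status"] for c in item["status"]["conditions"]}
        result[item["metadata"]["name"]] = conds.get("Ready") == "True"
    return result

if __name__ == "__main__":
    # In practice, feed it the real output of: kubectl get nodes -o json
    for node, ok in ready_by_node(SAMPLE).items():
        print(f"{node}: {'Ready' if ok else 'NotReady'}")
```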
1) Does anyone know why Tanzu doesn't officially support N-VDS? Will it in the near future?
2) Any suggestions on troubleshooting the communication between the supervisor VMs and the NSX-T load balancer virtual servers? `kubectl.exe get -A rc,services` shows 19 namespaces, each with a Cluster-IP associated with an NSX-T load balancer virtual server. The EXTERNAL-IP column is blank, but I think that's OK since the LB pools appear to point to the three supervisor VMs. I'm wondering if there is a routing issue between the supervisor VMs and the Edge cluster LB service/overlay.
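One way I've been thinking of narrowing down the routing question is a plain TCP probe from a supervisor VM toward each virtual-server Cluster-IP. A rough sketch; the IPs and ports below are placeholders, so substitute the real Cluster-IPs reported by `kubectl.exe get -A rc,services`:

```python
# Hedged sketch: TCP-probe the NSX-T load balancer virtual servers from a
# supervisor VM. A timeout tends to suggest a routing/overlay problem,
# while "connection refused" means the packet arrived but nothing answered.
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder endpoints -- replace with the real Cluster-IPs/ports.
    targets = [("10.96.0.1", 443), ("10.96.0.10", 53)]
    for host, port in targets:
        state = "reachable" if tcp_reachable(host, port) else "unreachable (check routes/DFW)"
        print(f"{host}:{port} -> {state}")
```

If the Cluster-IPs time out from the supervisor VMs but the LB pools look healthy on the NSX-T side, that would point at the overlay/Edge routing rather than the pool members.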
Any suggestions would be greatly appreciated. I’ve been trying to get this to work for a month 🙁