Just like the title states — I’ve been tasked with taking a dozen hosts in a cluster that are currently PXE booted and converting them to local install. I’m managing them with vSphere 6.7.0u3, and the destination ESXi version is 6.7.0u3 (currently the hosts are using 6.5.0u2).
At a glance, this looks relatively simple. It seems I just need to remove them from vSphere, then treat them like new hosts: install ESXi locally, attach them to vSphere, and remediate with my chosen baseline group to get the patches and other stuff installed. Does this general plan seem sound? Am I forgetting anything critical?
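For anyone curious, here's roughly how I picture the per-host flow in PowerCLI (server, cluster, and baseline names below are made up, and this assumes the VUM cmdlets from VMware.PowerCLI are available; treat it as a sketch, not a tested script):

```powershell
# Connect to vCenter (hypothetical server name)
Connect-VIServer -Server vcsa.example.local

# Evacuate the host and drop it from inventory before the local reinstall
$esx = Get-VMHost -Name esx01.example.local
Set-VMHost -VMHost $esx -State Maintenance -Evacuate:$true -Confirm:$false
Remove-VMHost -VMHost $esx -Confirm:$false

# ... reinstall ESXi 6.7.0u3 to local disk out-of-band (iDRAC / mounted ISO) ...

# Re-add the freshly installed host to the cluster
Add-VMHost -Name esx01.example.local -Location (Get-Cluster -Name 'Prod') `
    -User root -Password 'changeme' -Force

# Attach and remediate against the patch baseline (hypothetical baseline name)
$baseline = Get-Baseline -Name 'ESXi 6.7 Patches'
Attach-Baseline -Baseline $baseline -Entity (Get-VMHost -Name esx01.example.local)
Remediate-Inventory -Baseline $baseline -Entity (Get-VMHost -Name esx01.example.local) -Confirm:$false
```

I'd obviously run this one host at a time so the cluster can absorb the evacuated VMs.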
What I cannot figure out via searching is whether removing a host from vSphere will also remove it from the Auto Deploy rules. Also, network and SAN links are currently managed with a host profile, but all of the VLANs and storage devices appear to be saved at the cluster level, so do I even need that host profile anymore? What might be configured at the host level that wouldn't also be at the cluster level? Management and migration (vMotion) networks, maybe?
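This is how I've been poking at the Auto Deploy rules in PowerCLI while trying to answer that first question myself (the rule name is a placeholder; assumes a connected vCenter session with the Auto Deploy cmdlets loaded):

```powershell
# My understanding is that deploy rules match on patterns (MAC address,
# IP range, vendor, etc.) rather than on the inventory object, so I
# suspect removing the host from vCenter wouldn't touch them.

# List the active rule set and each rule's match patterns
Get-DeployRuleSet | Format-List
Get-DeployRule | Select-Object Name, PatternList

# Once a rule is confirmed to be unneeded, pull it from the rule set
# (-Delete also removes the rule object itself, not just the mapping)
Remove-DeployRule -DeployRule (Get-DeployRule -Name 'pxe-cluster-rule') -Delete
```

If anyone can confirm or correct that, I'd appreciate it.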
Thanks for any help y'all can offer. I've only been administering VMware for a little over a year now, and I've not had any formal training, just a combination of Google and picking coworkers' brains.
Edit: Thanks for all the replies, y'all. I'm feeling much better about this. Also, the boss says I've got two new hosts to add to the cluster while I'm at it, so I have two hosts I can fiddle around with to make sure everything in my process is correct before I attack the existing hosts. It also gives me more cluster resilience in case I totally hose one of the hosts (or it decides it's got a bad memory module; recently, a lot of our R640s have suddenly decided they've got failed modules after a reboot).