
Help a newbie plan my new cluster environment

I am currently planning to replace our existing Hyper-V cluster, which runs on an old EMC SAN and two HP Gen8 servers. I’d like to use Fault Tolerance in vSphere 6.7. I’m new to VMware clusters.

* **SAN**: an all-flash offering from iXsystems (X10 HA), Dell (PowerVault ME4024), or HPE Nimble. The SAN will have ~15 TB of usable space.
* **Servers**: two PowerEdge R640s, each with 192 GB RAM, a single Xeon Silver 4216, and a 4-port Intel X520 NIC. I’ll be running 25 VMs max, and each host will be able to handle the load of all of my VMs in the event of a host failure. I might add a third host in the future if our needs increase. VM requirements are low: mostly basic Windows/Linux VMs. I may add an RDS host soon.
* **Switching**: two Cisco Nexus 3000-series switches, with redundant 10 GbE links to the hosts and SAN.

My questions are:

Is there anything in the above config that is against best practice or that might cause issues? Any pros have recommendations that might improve this setup?

Seems like I’ll need vSphere Enterprise Plus licenses to handle FT on VMs with more than 2 vCPUs (in 6.7, Standard caps FT at 2 vCPUs per VM; Enterprise Plus allows up to 8). I’m guessing I still have to license both hosts? Nearly $8k for those licenses is a lot!



4 Comments

  1. > Any pros have recommendations that might improve this setup?

    Yes. RTFM. I went through Dell’s best-practices document, and there were some recommended storage optimization commands that I ran (sketched below).

    When you say “Fault Tolerance”, do you mean “High Availability”? FT keeps a lockstep shadow copy of a VM running on a second host so a failure causes zero downtime; HA just restarts VMs on a surviving host after a crash. For basic Windows/Linux workloads, HA is almost always what people actually mean.
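
    For what it’s worth, “storage optimization commands” for an iSCSI array usually means multipathing tweaks. The exact settings have to come from your array vendor’s VMware guide, but a minimal sketch of the usual shape looks like this (the naa device ID is a placeholder, and the IOPS value is just the common vendor recommendation):

    ```sh
    # Default new ALUA-capable LUNs to the Round Robin path selection policy
    esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

    # Apply Round Robin to an already-claimed device (placeholder naa ID)
    esxcli storage nmp device set --device naa.600c0ff000000000 --psp VMW_PSP_RR

    # Rotate paths after every I/O instead of every 1000
    esxcli storage nmp psp roundrobin deviceconfig set \
        --device naa.600c0ff000000000 --type iops --iops 1
    ```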

  2. Not going to address the general hardware/design questions, because you haven’t really provided enough context with regard to actual performance and business requirements to justify the effort. But…

    > Seems like I’ll need vSphere Enterprise Plus licenses to handle FT on VMs with more than 2 vCPUs. I’m guessing I still have to license both hosts? Nearly $8k for those licenses is a lot!

    If you’re working in an environment where $8k for two host licenses seems like “a lot”, the chances that Fault Tolerance is actually a technical or business requirement are close to nil. You’re probably making things more expensive and introducing unnecessary technical complexity here.

  3. I have a somewhat similar setup: two Lenovo System x3650 M5 servers (single 10-core CPU each), a Nimble CS215 SAN, and two Juniper EX3300 switches for the iSCSI traffic. I’m running VMware Essentials Plus so that I can do HA/vMotion in case of a host failure; no DRS with that license, though.

    I’m also using the X520 network cards in both hosts, with direct-attach copper cables.

  4. > Any pros have recommendations that might improve this setup?

    Is Fibre Channel/MDS an option? If each host only has 4 ports and you want redundant iSCSI paths, that uses 2 of them, leaving only 2 ports per server. Dedicate 1 to vMotion traffic and now you are down to 1 port for management AND for the VM network traffic (see the sketch below). If Fibre Channel is absolutely not an option, get more NICs.
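
    To make the port math concrete, here’s a minimal sketch of how the four X520 ports could end up allocated on each host if you stay with iSCSI and standard vSwitches (vmnic0–vmnic3 and the vSwitch/port group names are assumptions, not from the OP’s config):

    ```sh
    # vSwitch0 (exists by default): the single leftover uplink carries
    # BOTH management and VM network traffic
    esxcli network vswitch standard uplink add --vswitch-name vSwitch0 --uplink-name vmnic0

    # vSwitch1: one uplink dedicated to vMotion
    esxcli network vswitch standard add --vswitch-name vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1

    # vSwitch2: two uplinks for redundant iSCSI, one port group per path
    esxcli network vswitch standard add --vswitch-name vSwitch2
    esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic2
    esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic3
    esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI-A
    esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI-B
    ```

    Each iSCSI port group still needs its NIC teaming overridden to a single active uplink, and a VMkernel port bound to the software iSCSI adapter (esxcli iscsi networkportal add), before multipathing works.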
