Business Expanding – GPU Passthrough and VMware, What do I need?

Hey all,

So I have been sitting on my hands for some years now and am finally ready to take the plunge. My company needs more architectural engineers, and I want to set up a virtual space where they all connect to one big PC in my office and run off that.

My question to you is: which part of VMware do I need, and do we need special Nvidia graphics cards? Can we use GeForce / RTX 2000 series cards?

Example:

Three engineers all connect to one box: a 32-core AMD CPU, 128 GB of RAM, and four RTX 2080 cards.

How would I go about setting this up with VMware, and is it straightforward with GPU passthrough?

Thanks.



Comments

  1. Why do you want to architect it like this? It sounds like it could get expensive with licensing, and it's cumbersome in that you can't easily rebalance VM settings without kicking people off the host.

    I looked at this a while ago, and it was cheaper and more flexible to get individual PCs.

    You could also look at something like Teradici with cloud machines in AWS if you want flexibility with GPU and scale. Then you also get the work-from-home / out-of-office benefit. (Rough sketch of that route below.)
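
    If you went the AWS route, a minimal boto3 sketch would look something like this. The AMI, key pair, and instance name are placeholders, and g4dn.xlarge (one NVIDIA T4) is just one plausible size for CAD-style work; Teradici or another display protocol would be baked into the image:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one GPU workstation per engineer, on demand.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: your image with the CAD stack installed
        InstanceType="g4dn.xlarge",       # 1x NVIDIA T4, 4 vCPU, 16 GB RAM
        KeyName="engineering-key",        # placeholder key pair
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "arch-ws-01"}],
        }],
    )
    print(resp["Instances"][0]["InstanceId"])
    ```

    Stop the instances outside working hours and you only pay for the GPU while someone is actually using it.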

  2. If you have three engineers needing that kind of horsepower on a constant, daily basis, virtualising this workflow is going to cost you significantly more than your current solution and offer you next to no benefits. Honestly, it's hard to suggest taking this route.

    As has been mentioned, you're into ESXi, ideally with high availability for reliability (because it's no longer a case of one workstation dying and taking out one engineer; it's one server taking out all three). So you're going to need duplicate hardware, especially the expensive GPUs, and they'll need to be Quadros for the correct compatibility and licensing.

    Read up on it and investigate but once all facets are taken into consideration, I just don’t see that it can really give you any major benefits.

  3. Passthrough can be done, but you're supposed to use Quadro cards rather than GeForce, and Quadros are much more expensive. (Sketch of what the attach looks like below.)
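
    A rough sketch of the passthrough attach via pyVmomi (VMware's Python SDK). The vCenter address, credentials, VM name, and PCI IDs are all placeholders; the real device and system IDs come from the host's passthrough config, and the GPU must already be toggled for passthrough on the host:

    ```python
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the target VM by name (placeholder name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "engineer-ws-01")

    # Describe the GPU to hand through (all IDs are placeholders; read the real
    # values from the host's hardware.pciDevice list / passthrough settings).
    backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
        id="0000:3b:00.0",   # PCI address of the card on the host
        deviceId="1e87",     # device id as a hex string
        systemId="xxxx",     # host system id from the passthrough config
        vendorId=0x10de)     # NVIDIA's PCI vendor id

    spec = vim.vm.ConfigSpec(
        deviceChange=[vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=vim.vm.device.VirtualPCIPassthrough(backing=backing))],
        memoryReservationLockedToMax=True)  # passthrough requires full memory reservation

    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    Disconnect(si)
    ```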

    It's more cost-efficient to get a big card and use vGPU virtualization, because 1) you can fit far more VMs per host than you have PCIe slots, and 2) a Quadro card is expensive, but not compared to the 12 low-end cards it replaces. (vGPU variant sketched below.)
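
    The vGPU flavour is the same reconfigure call with a different backing: instead of claiming a whole card, the VM gets a profile slice. The profile name below is an assumption; valid names depend on the card and the GRID software version:

    ```python
    from pyVmomi import vim

    # A 2 GB slice of a shared card. The "grid_rtx6000-2q" profile name is an
    # assumption based on NVIDIA's naming; query the host's graphics config
    # for the real list of supported profiles.
    backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo(
        vgpu="grid_rtx6000-2q")

    spec = vim.vm.ConfigSpec(deviceChange=[
        vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=vim.vm.device.VirtualPCIPassthrough(backing=backing))])
    # Then vm.ReconfigVM_Task(spec=spec), exactly as in the passthrough sketch above.
    ```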

    I have an architecture customer testing this out with Revit. They're running a 1:2 physical-core-to-vCPU ratio on their systems (most places do 1:4, but they're willing to spend more during testing for guaranteed resources) and a decent GPU, sized to give every VM a 2 GB vGPU profile, with the card chosen based on how many VMs the CPUs let them fit on the host. (Back-of-envelope math below.)
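
    A back-of-envelope version of that sizing, if it helps. The 1:2 ratio and the 2 GB profile are from their setup, the 32 physical cores are from your example box, and 8 vCPUs per engineering VM is my assumption:

    ```python
    physical_cores = 32       # the 32-core AMD host from the original post
    overcommit = 2            # 1:2 physical-core-to-vCPU ratio
    vcpus_per_vm = 8          # assumed size of one engineering VM

    total_vcpus = physical_cores * overcommit           # 64 schedulable vCPUs
    vms_per_host = total_vcpus // vcpus_per_vm          # 8 VMs on this host

    vgpu_gb_per_vm = 2
    framebuffer_needed = vms_per_host * vgpu_gb_per_vm  # 16 GB of GPU memory total

    print(vms_per_host, framebuffer_needed)  # -> 8 16
    ```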

    You need Horizon View licenses and NVIDIA GRID licenses, but if you're doing this, go for an actual enterprise setup. Those licenses aren't too expensive, since you pay per VM and engineering workstations are pretty beefy anyway.

  4. Thanks for the replies, everyone. Having taken a further look into it, it seems like the best option is to keep them separate.

    Looking at the final prices of it all, I would be better off sticking with buying parts for each build. My country's exchange rate at the moment is terrible for buying in USD.

    I am better off buying the missing parts locally, and I would still come out ahead.

    I was really looking forward to playing with VMware and setting up one rig for all.

    Thanks again everyone for your responses.
