Homelab AI Server Multi-GPU Benchmarks – Multiple 3090s and a 3060 Ti, Mixed PCIe and VRAM Performance
“Digital Spaceport”
Looking for information on what performance you can expect from your homelab OpenWebUI and Ollama based AI home server? This is the video! We will be using Llama 3.1 70B and assessing tokens per second across many model variations. Have questions about mixed VRAM setups? Need to run your GPU on a…
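For anyone wanting to reproduce the tokens-per-second numbers from the video, here is a minimal sketch that queries a local Ollama server and computes generation speed from the stats Ollama returns. The port is Ollama's default; the model tag and prompt are assumptions, so adjust them for your setup.

```python
# Rough tokens-per-second probe against a local Ollama server.
# Assumes Ollama is listening on its default port 11434 and that
# the "llama3.1:70b" model tag has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:70b",
        "prompt": "Explain PCIe lane bifurcation in two sentences.",
        "stream": False,
    },
    timeout=600,
)
stats = resp.json()
# eval_count = tokens generated; eval_duration = generation time in nanoseconds
tps = stats["eval_count"] / (stats["eval_duration"] / 1e9)
print(f"generation speed: {tps:.1f} tokens/s")
```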
You talked about inference, but what about training? Can you mix GPUs and VRAM?
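On the training question: mixed GPUs can be used via naive model parallelism, where different layers live on different devices and activations hop between them. A minimal PyTorch sketch, assuming two CUDA devices are visible; the layer sizes and hyperparameters are arbitrary placeholders, not a recommendation.

```python
# Naive model parallelism across two mismatched GPUs: early layers on
# cuda:0, the rest on cuda:1, with activations moved between devices.
import torch
import torch.nn as nn

class SplitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.front = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.back = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = self.front(x.to("cuda:0"))
        return self.back(x.to("cuda:1"))  # activations hop to the second GPU

model = SplitNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x, y = torch.randn(8, 1024), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y.to("cuda:1"))
loss.backward()  # autograd routes gradients back across both devices
opt.step()
```

The caveat with mixed generations is that each step runs at the speed of the slowest card, so an older GPU mostly contributes VRAM, not throughput.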
Let's try 8 GPUs.
I've got three 3060s I have been running on an X370 board, and I haven't had any issues running at x8/x8/x4 on the PCIe lanes. I have had a lot of fun using Mixtral 42B; it's very usable.
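If you want to confirm what link width each card actually negotiated in a split like that, here is a small sketch using NVML's Python bindings (assumes the nvidia-ml-py package is installed).

```python
# Report negotiated vs. maximum PCIe link width per GPU via NVML,
# to verify an x8/x8/x4 lane split like the one described above.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    cur = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
    mx = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)
    print(f"GPU {i}: running at x{cur} (card supports up to x{mx})")
pynvml.nvmlShutdown()
```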
I'm really frustrated by the results, but the testing was necessary, and I thank you for your work!
Loving the self hosted AI content, keep it up!
Hey, very interesting video! I am curious what would happen if you pair an RTX 3090 with an older 24 GB card like a P40, or an even older M40. Would the extra VRAM help, or would mixing generations hurt performance? Thanks for the content, I will be following, and hopefully soon make my own AI home lab. 💪
Part of me wants to see if I can't run a local LLM for coding on my 3080 (10 GB).
This is very useful. I am building a local AI cluster and I am trying to balance learning and watts used.
There are a few mini PCs where you can allocate as much as 16 GB of RAM to the integrated GPU for less than the price of a 12 GB GPU. The value is there.
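For the watts-versus-performance balancing mentioned above, here is a quick per-GPU power readout via NVML (assumes nvidia-ml-py is installed); pairing it with the tokens-per-second probe earlier gives a rough tokens-per-watt comparison.

```python
# Print instantaneous power draw for every visible NVIDIA GPU.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
    print(f"GPU {i} ({name}): {watts:.0f} W")
pynvml.nvmlShutdown()
```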
Would you consider taking orders for these on your store?
I thought llama.cpp could not benefit from multiple GPUs for processing, only for adding VRAM. Maybe you should test with vLLM or TensorRT.
Maybe with NVLinked 3090s the VRAM could be shared and bring some benefit, idk.
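For comparison against llama.cpp's layer splitting, vLLM's tensor parallelism shards each layer's weights across the cards so they compute in parallel instead of just pooling VRAM. A minimal sketch, assuming two GPUs and a model small enough to fit when split; the model name here is a placeholder, not what the video tested.

```python
from vllm import LLM, SamplingParams

# tensor_parallel_size=2 shards every layer across both GPUs, so each
# forward pass uses both cards at once rather than one layer at a time.
llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumption: fits across 2 cards
    tensor_parallel_size=2,
)
params = SamplingParams(max_tokens=128, temperature=0.7)
outputs = llm.generate(
    ["Why can tensor parallelism be faster than splitting layers across GPUs?"],
    params,
)
print(outputs[0].outputs[0].text)
```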
Is there a possibility to earn something from these setups?
I love your pivot to AI dude. I'm actually using ollama for some projects at work and this is very useful stuff. Keep up the good work and hope you're doing well!