Homelab AI Server Multi GPU Benchmarks – Multiple 3090s and 3060 Ti Mixed PCIe VRAM Performance


“Digital Spaceport”

Looking for information on what performance you can expect from your homelab Open WebUI and Ollama based AI home server? This is the video! We will be using Llama 3.1 70B and assessing tokens per second across many model variations. Have questions about mixed VRAM setups? Need to run your GPU on a…
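The video measures tokens per second by hand; a rough sketch of how you could automate the same measurement against Ollama's HTTP API is below. This is not from the video: the endpoint and the `eval_count`/`eval_duration` response fields are Ollama's standard generate API, but the model tag and a server listening on the default local port are assumptions.

```python
# Sketch: compute tokens/second from Ollama's generate API.
# Assumes a local Ollama server on the default port 11434 and that the
# model tag below has already been pulled (neither shown in the post).
import json
import urllib.request

def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Ollama reports generated-token count and generation wall time in nanoseconds."""
    return eval_count / (eval_duration_ns / 1e9)

def benchmark(model: str = "llama3.1:70b",
              prompt: str = "Why is the sky blue?") -> float:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # eval_count and eval_duration come back in the non-streaming response.
    return tokens_per_second(body["eval_count"], body["eval_duration"])

if __name__ == "__main__":
    print(f"{benchmark():.1f} tokens/s")
```

Running this a few times per model variant gives a comparable tokens-per-second figure without stopwatching the terminal output.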


13 Comments

  1. Hey, very interesting video! I am curious what would happen if you pair an RTX 3090 with an older 24 GB card like a P40, or an even older M40. Would the extra VRAM be worth it, or would mixing generations hurt performance? Thanks for the content, I will be following, and hopefully soon make my own AI home lab. 💪

  2. This is very useful. I am building a local AI cluster and trying to balance learning against watts used.
    There are a few mini PCs that let you allocate as much as 16 GB of RAM to the integrated GPU, for less than the price of a 12 GB GPU.
    The value is there.
