Split A GPU Between Multiple Computers – Proxmox LXC (Unprivileged)
“Jim’s Garage”
This video shows how to split a GPU between multiple computers using unprivileged LXCs. With this, you can maximise your GPU usage, consolidate your lab, save money, and remain secure. By the end you will be able to have hardware transcoding in Jellyfin (or anything else) using Docker.
LXC…
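The gist of the approach (a sketch, not Jim's exact notes): bind-mount the host's DRI device nodes into the unprivileged container and allow the DRM character devices in its cgroup. The device paths and major/minor numbers below are assumptions; verify yours with `ls -l /dev/dri` on the Proxmox host:

```
# /etc/pve/lxc/<CTID>.conf (illustrative fragment)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

Because the container is unprivileged, the container's users still need permission on those nodes, which is where the uid/gid mapping discussed in the video comes in.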
Can I share my GTX 1650 between a couple of VMs, or not?
Ty for sharing your knowledge
Two questions, if you happen to know the answers:
1. Can Proxmox install the NVIDIA Linux drivers over Nouveau and still share the video card?
2. If one adds a newer headless GPU like the NVIDIA L4, can you use it as a secondary or even primary video card in a VM or CT?
What is the command to get the gid or uid when you mention the LXC namespace or host namespace?
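For reference (not from the video): the host-side owner of a device node can be read with `stat`, and the default unprivileged-container mapping is plain arithmetic. The device path and gid below are assumptions; adjust them to your system:

```shell
# Host namespace: read the uid/gid that own the render node
# (path is an assumption; list /dev/dri to find yours).
stat -c 'uid=%u gid=%g' /dev/dri/renderD128 2>/dev/null \
  || echo "no /dev/dri/renderD128 on this machine"

# With the default /etc/subuid and /etc/subgid range (100000 65536),
# id N inside an unprivileged container maps to N + 100000 on the host.
CT_GID=104                       # assumed gid of "render" inside the container
HOST_GID=$((CT_GID + 100000))
echo "container gid $CT_GID maps to host gid $HOST_GID"
```

The offset 100000 is only the stock default; check the actual range in your `/etc/subuid` and `/etc/subgid`.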
Great video, I hope this helps me solve the HWA issue.
Is this simplified if one were to go with a privileged container?
This is really useful information, thank you! Is a similar process required when using a privileged LXC, or is the GPU available without any extra steps?
You do have an error in your GitHub notes. After carefully following the directions and copy/pasting from your notes, I thought it odd when no /etc/subguid could be found. Still, I proceeded, but the container wouldn't start. After looking around a bit I noticed that /etc/subguid should have been /etc/subgid. After fixing that, the container started just fine. Regardless, great video, and you gained a new sub. Thanks.
Do you make your own thumbnails? Yours are top tier!!!
2:40 Actually, it's possible to split some Intel GPUs between VMs, but I didn't run any benchmarks on it and had no use for it, so I went for a privileged LXC at the time I was setting up mine. Now I'm considering redoing it unprivileged, thanks for the video!
Really awesome! But how does this work at the technical level without any GPU virtualization?
I did not quite catch this: does this approach work only with LXC + Docker inside, or with LXC + anything inside? That is, can I run, say, 4 Debian LXC containers and, in each of them, one Windows 10 VM? If so, it is interesting and great! Otherwise (LXC + Docker), isn't it already possible to share the GPU with every Docker container after installing the NVIDIA CUDA Docker runtime and passing --gpus all?
I can't believe I'm seeing this, you are the best.
You have solved one of my little problems. I moved Jellyfin from one server to another and Frigate VA worked, but Jellyfin was giving me an error:
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_amf))
Stream #0:1 -> #0:1 (aac (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
[h264_amf @ 0x557e719b81c0] DLL libamfrt64.so.1 failed to open
double free or corruption (fasttop)
Could not work it out; it was restored from a backup, so the configs were the same. Then I looked at your notes and there it was. Oops, forgot to run: usermod -aG render,video root
Now it's all working again.
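If you hit the same error, a quick sanity check inside the container after running `usermod -aG render,video root` looks something like this (group names and the device path are assumptions; adjust to your setup):

```shell
# Confirm root picked up the render group (a re-login or container
# restart may be needed before the new group shows up).
id root | grep -q 'render' \
  && echo "root is in the render group" \
  || echo "root is NOT in the render group yet"

# Confirm the render node is actually mapped into the container and
# note its group and permissions.
ls -l /dev/dri/renderD128 2>/dev/null \
  || echo "render node not visible in this container"
```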
Would there be a use case for a higher-end card like a spare RTX 3070?
I also run my Kubernetes test env. in LXC on my laptop, makes a lot of sense.
Your GitHub is a pot of gold. TY sir
I really love your channel Jim. I learn(ed) a lot from you !!
I would love to see how to get the low power encoding working 🙂
Great video! How does this work with nvidia drivers with a GPU? Does the driver need to be installed on the host and then in each LXC?
My weekend project right here. I run Unraid in a VM with some Docker containers running in it. I want to move all containers out of the Unraid VM. Now I can test this and also share the iGPU, instead of passing it straight through to a single VM. NICE!
Came from the Selfhosted Newsletter a few days ago and I am loving it. Great video, and I will definitely try this as soon as I have time.
Nice Jim 😀. You keep making great content👌🤌