proxmox

Supermicro Pizza Box Server with Intel Flex GPU


“Level1Techs”

Wendell has gone out and bought a new Supermicro pizza box server just so he could get his hands on Intel’s Flex GPU!


 



27 Comments

  1. Slightly off topic, but there are a boatload of small businesses using the paid version of VMware Workstation Player, and a far larger number of individuals on the free-for-non-commercial-use version. Broadcom is well known for dumping even medium-sized businesses in order to focus on milking the whales (excuse the mental picture just then…). Will they just stop development of Player? Start charging home users a little and small businesses a lot? Cancel the product entirely? Lots of people are worried.

  2. Not enterprise, but at home I have an EPYC CPU running ESXi 8 with a dedicated "Steam Gaming VM" that passes through a 4060, which I stream to my handheld devices. It works perfectly, but I would love to be able to share it with the other VMs on the hypervisor. Unfortunately, PCI passthrough works with only one VM at a time, but I'd be open to installing a second card if VMware allowed it to share its resources with any VM that requests them.

    I'd love to know the noise level of this Pizza Server. Great video btw
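SR-IOV, the headline feature of the Flex cards, is what lifts the one-VM-per-device limit mentioned above: the GPU advertises multiple PCIe virtual functions (VFs), and each VF can be passed to a different VM as if it were its own card. A minimal sketch of how VFs are enabled through the Linux sysfs interface (the helper name, device address, and VF counts are illustrative; ESXi exposes SR-IOV configuration through its own management tools):

```python
from pathlib import Path

def enable_gpu_vfs(device: Path, requested: int) -> int:
    """Enable SR-IOV virtual functions on a PCIe device via sysfs.

    `device` is the card's sysfs directory, e.g.
    Path("/sys/bus/pci/devices/0000:03:00.0") -- the address is illustrative.
    Returns the number of VFs actually enabled.
    """
    # The hardware advertises its VF limit in sriov_totalvfs.
    total = int((device / "sriov_totalvfs").read_text())
    vfs = min(requested, total)  # can't exceed what the card supports
    # The kernel requires numvfs to be 0 before setting a new nonzero count.
    (device / "sriov_numvfs").write_text("0")
    (device / "sriov_numvfs").write_text(str(vfs))
    return vfs
```

Each resulting VF shows up as its own PCIe device that can be handed to a different VM with ordinary vfio-pci passthrough, so there is no per-VM exclusivity; whether a guest can actually use a Flex VF then depends on host and guest drivers.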

  3. I have VMs with GPUs I can use for gaming and video editing. The problem is all the remote access programs I have tried suck and have too much latency. Also wish I could easily virtualize GPU resources. Granted I am still running esxi…

  4. So I'm pretty new to all of this type of stuff, but I think I have a use case where this would really work well: I'm a grad student in an MS in Electrical Engineering program. On the project I'm on we sometimes use ArcGIS Pro. We have a machine with an Nvidia RTX 3080 GPU in it that we can remote into and run models on. The catch is that if it's a large project, in order to "pan around" or examine, say, a 3D terrain model, we have to actually save the file/model, then sync the files to our local machines, then open ArcGIS on our local machines and check things out. If we try to pan around remotely (sort of like what the guy from Intel was doing in Google Earth), it's very stuttery/glitchy.

    Am I correct in assuming that this would mean that instead of the graphics for what is being shown on the screen being rendered on the CPU it would be rendered on the GPU so it wouldn't be stuttery?

    Another thing that has always annoyed me, though I'm not sure it has any bearing on this: I remote into a Linux machine that is right next to my desk (just easier than swapping keyboard/mouse/monitors around, and yes, I know KVMs exist, but I have two 120 Hz ultrawides, and KVMs that support 2x 5120×1440 @ 120 Hz are VERY expensive). The main reason I do this is that the Python library TensorFlow does not support GPU training on Windows, while some programs I need (like ArcGIS Pro) only support Windows, so I have to run both.

    The two computers are connected via a 2.5 Gig Ethernet link, and the Linux machine has 2x RTX 4090s in it. When I scroll around in a program in Ubuntu (via remote desktop), it can sometimes be glitchy, just like how ArcGIS can be glitchy when panning around on that other machine. I also don't know how to explain it, but the "look" of the Linux desktop is just not as "clean" as when viewed directly. Maybe something to do with aliasing? The machine I use to remote in runs W11 and has an RTX 3080, so I doubt the problem is on the receiving end.

  5. The main annoying thing about VMware now is actually buying it: more steps, more hassle, more distributors, more bleh. We used to buy it through our HW partner.

    A client (an architecture and engineering firm) does use remote workstations, but they just use desktop PCs, no server hardware for it – cost is the reason.

  6. What do you think of using this in a production studio to have a VM for each remote editor?
    Have an office NAS, with each VM on a 10GbE link to it.
    Remote editors run editing software on the VM and edit through their remote connection to it.
    Would this work a lot better than using Parsec for remote editing?

  7. I honestly do not understand Intel's GPU strategy. I would buy an Arc Pro A60 in a heartbeat… if I could actually find one. Same thing for these Flex cards.

  8. Hi, I built one of the largest VDI deployments in the State of California's public sector. We used VDI for many things, but we always found that including graphics cards made the solution's value drop significantly; adding millions in licensing and hardware costs wasn't a great way to move forward. Luckily, in our use case most of the work was productivity software and internet browsing. That said, there is a huge opportunity for people to run distributed VDI devices with this card, and for partners to build a practice of supporting the hypervisor solutions you mentioned. The other thing you didn't speak to immediately is density. When we architected our solution, we went with 185 users per server (supporting a total of 10K in our initial rollout). Can this card provide graphics capability to VDI or RDSH sessions without a hard limit? That's the real question: how far will it scale?
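For scale, the density figures quoted above pin down the host count directly; the open question the commenter raises is then how many of those 185 sessions per host a single Flex card can serve. A quick sketch of the arithmetic (the function name is just illustrative):

```python
import math

def servers_needed(total_users: int, users_per_server: int) -> int:
    """Hosts required for a VDI rollout at a given consolidation ratio."""
    # Round up: a partially filled host is still a host you have to buy.
    return math.ceil(total_users / users_per_server)

# The deployment described above: 10,000 users at 185 per host.
print(servers_needed(10_000, 185))  # -> 55
```

Any per-card session or encode limit would cap `users_per_server` well below 185 for graphics-accelerated desktops, which is exactly the scaling question posed above.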

  9. I remember hearing about this tech nearly 11 years ago and was so amped for it to take over; then all the companies ruined it. Honestly, it reminds me of containerization: everyone was using it and had it, but no one wanted to share until Docker almost went under and open-sourced to stay alive.

  10. Oh yes, the VMware and Broadcom thing is certainly a thing. The positive side is that it has opened a lot of eyes to the dangers of a monoculture. That said, my company has to bite it this time, but that will be the last time: we are kicking anything Broadcom out, where possible and as fast as possible. There is now a corporate strategy of having at least two vendors in any critical area. Bye bye 😒VMware. It's been a blast.

  11. I bought a Tesla M10 to try this out myself a while ago. I had no idea there was licensing involved; hopefully that's nothing I need to pay for to use that card, or I'll resell it. I planned on using an A770 in my machine to try AV1 streaming anyway.

  12. Looked at Nvidia Grid at one point but licensing shut that whole conversation down.

    Between Linus Tech Tips and Craft Computing there are a number of "X Gamers, 1 GPU"-type projects, and all have crashed and burned for one reason or another. Take gaming out of the equation (mainly because of anti-cheat) and, yeah, this is useful as a VDI solution.

  13. Now we want that Flex 170 vBIOS SR-IOV support ported to the Arc A770 16GB for those of us who aren't businesses (the cards don't have a per-user licensing model, so why cater only to businesses?), or who are just poor-enough enthusiast customers/businesses using KVM/Hyper-V nested virtualization to run WSA/WSL2 dev environments on a modest VDI homelab infrastructure.

    Nowadays the only consumer cards compatible with that kind of GPU virtualization are Intel iGPUs (whether through SR-IOV or Intel GVT-g) and RDNA2/A770 through VFIO (but since the latter depends on Resizable BAR for good gaming performance, which is difficult to pass through to a VM, I am not currently using it).

    I wanted to believe that RDNA3 would be a good substitute for RDNA2, but after testing with Navi 31 and Navi 32, those cards refuse to output any video signal at all if nested virtualization is enabled, regardless of whether they are installed in an AMD or Intel platform. I'm not mad, just disappointed in the Radeon team after the successful RDNA2 lineup.
