
PART 2! – Proxmox on Intel’s Hybrid Big Little – IT WORKS!

“Craft Computing”

Proxmox and Intel’s Hybrid “big/LITTLE”…

44 Comments

  1. The results presentation and description are a bit hard to follow. I've never touched 12th-gen+ Intel CPUs in my experiments, so I'm probably missing some details of how they function, but I've watched this video a couple of times and still find the way the tables and graphs are laid out more confusing than helpful.

  2. Interesting stuff. I won't be able to test it myself though, as I only have an Epyc 7543 and a Xeon E-2314. Neither has E-cores, and the Xeon doesn't even have SMT :p

  3. Nice work, great content. It would be even better if you considered power-usage statistics when performing these kinds of discovery tests. For most, that seems to be one of the biggest concerns with home-lab equipment.

  4. I wonder what Joyent Illumos would do. The thread/core thing was "solved" by Sun Microsystems and Solaris about 20 years ago in their Niagara architecture. Illumos is the current free-ish version of Solaris, and I'm sure some of that scheduling mojo was carried over. Unix, and later Linux, has always had the ability to manually pin tasks and processes to CPUs/CPU sets. I'm wondering if you could run a full Cinebench session pinned to the P-cores and then pinned to the E-cores (alone) and compare the results.

    Also, you gotta try a proper Belgian Witbier or even a Geuze.

  5. Glad you took another look into this 😊 I wish you would take another look at the 3070m GPU that got BIOS bricked (if you were unable to unbrick it, you could always send the GPU to another YouTube content creator for the unbrick video – then use the Frankendriver on the fixed GPU)

  6. I've had a lot of issues with the 12000-series CPUs, but after applying the microcode you linked to, they started working as they should 😀 Usually I stick with AMD 5000-series CPUs…

  7. I've been running an Intel NUC with a 1340P since November without any issues. Usually there are 5 VMs on at all times, and I haven't noticed any stability issues. I even got iGPU passthrough working in a VM for Jellyfin HW acceleration.
    I went through a few of tteck's Proxmox helper scripts after install, which include a microcode update; that was most likely what saved me from issues.
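A minimal sketch of that microcode step done by hand on a stock Proxmox VE 8 (Debian 12) install; the repository component and package names here are the standard Debian ones, so adjust them for your release:

```shell
# Make the non-free-firmware component available (Debian 12 naming),
# then install the Intel microcode package.
echo 'deb http://deb.debian.org/debian bookworm main contrib non-free-firmware' \
  > /etc/apt/sources.list.d/firmware.list
apt update
apt install -y intel-microcode
# The updated microcode is loaded early during the next boot; afterwards
# you can verify it with: dmesg | grep microcode
reboot
```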

  8. Thanks a bunch for following up on this. This kind of actual hands-on testing and reporting of functionality – and especially your follow-up, when you got the microcode updated and stable and redid all the benchmarks, and your new findings – is incredibly valuable, harder to come by presented in such a comprehensive manner, and happens to line up with stuff I'm tinkering with right now, so, cheers!

  9. Personally, on my Alder Lake system I only give a VM either performance cores (threads included) or E-cores, NEVER both! I generally keep the E-cores for the base Linux OS and give the Windows games VM the P-cores.
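Proxmox VE (7.3 and later) exposes this split as a per-VM affinity option, so it can be set declaratively rather than by hand. A sketch, assuming hypothetical VMIDs 101/102 and an Alder Lake part where logical CPUs 0-15 are the P-core threads and 16-19 are the E-cores (check your own numbering with `lscpu --all --extended`):

```shell
# Hypothetical VMIDs; CPU numbering assumed as described above.
qm set 101 --affinity 0-15    # Windows games VM: P-cores (with their threads)
qm set 102 --affinity 16-19   # base/utility VM: E-cores only
```

Each command writes an `affinity:` line into that VM's `/etc/pve/qemu-server/<VMID>.conf`, and the pinning is applied every time the VM starts.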

  10. Such a shame about the engineering samples; I've been stalking them on AliExpress for months now, looking to replace one of my servers running an E5-2690. It looks like I'm going to have to hold off for now 😔.

    Let's hope in another gen or two they'll fix the stability issues. They always do, and I can wait!

  11. Wow, great update, and so soon after the first part! Big kudos to @DotBowder for pinpointing this to the microcode. I didn't really think about that, because we generally update the BIOS on our retail boards and have the microcode installed (not only for stability, but also for security reasons). BTW, we already have a section about the microcode in our reference documentation if you want to link to that (Section 3.3, Firmware Updates). It is also on our wiki. Thanks for the good work!

  12. I've been wondering if it would be worth it to use my spare 12900K for my main Proxmox server. These videos definitely help with making that decision. I was also toying with the idea of turning this 13900KS daily into a Proxmox desktop that still runs Windows daily, but can also run other VMs and containers while being able to shut Windows down. I've only seen one other person do this on YouTube, and it probably isn't a choice many would want to make for their daily. But I need room to start building more than just the networking portion of my home lab. Buying old Xeon E5 v4s and trying to get my MSI GS66 to work with Proxmox VE without throwing errors seems, so far, like it's not going to work.

  13. I'm surprised, because I've been running a 13900K on Proxmox for two years now with 16 VMs. E-cores always enabled, and I never had any problems with crashes, without applying any microcode.

  14. I built a 12900K system for low-latency audio processing in Windows and found that disabling Hyper-Threading improved performance considerably in that workload. Not really the same thing, obviously, but I thought it was interesting.

  15. The interesting part to cover in part 3 is how these boards perform at idle. I mean power consumption, falling into C6+ states, etc. Everyone measures performance, but my home server is idling most of the time, and though I sometimes need to squeeze every GHz out of it, 80% of the time it's just idle and costing me $$$.

    And many thanks for such niche videos; it's insanely hard to find out how virtualization works on these Chinese mobos.
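A quick way to check the C-state side of that on the host is the Linux cpuidle sysfs interface (state names and depth depend on the CPU, driver, and BIOS; `powertop` gives a friendlier view of the same data):

```shell
# Print each idle state exposed for CPU 0, with its cumulative residency
# in microseconds. Deep states (C6 and below) only show up here if the
# board/BIOS actually lets the CPU reach them.
for d in /sys/devices/system/cpu/cpu0/cpuidle/state*/; do
  [ -e "${d}name" ] || continue
  printf '%-10s %s usec\n' "$(cat "${d}name")" "$(cat "${d}time")"
done
```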

  16. It's cool to see you revisit this.

    To help with further testing, you can run 'taskset --cpu-list --all-tasks --pid <AFFINITY> "$(< /run/qemu-server/<VMID>.pid)"' to restrict a running VM to a specific set of cores on the fly.

    Example:
    taskset --cpu-list --all-tasks --pid 0-3 "$(< /run/qemu-server/101.pid)"

    --all-tasks: pin both the root PID AND all of its children to the specified cores
    --cpu-list: lets us specify the CPUs the process will be pinned to as a list (above value: 0-3)
    --pid: the root PID is appended to the arguments (above value: "$(< /run/qemu-server/101.pid)")

    You can also use taskset to start a command and pin that command to a specific set of cores.

    # Create 4 threads to stress our CPU
    stress -c 4
    # Create 4 threads to stress cores 1, 7, 8 and 9
    taskset --cpu-list --all-tasks 1,7-9 stress -c 4

    The --pid argument is what lets us target a process that is already running.
    All of the Proxmox VMs' root PIDs are stored in their /run/qemu-server/<VMID>.pid files.

    Consider starting your VM, beginning your benchmark, then starting htop and watching its threads bounce around. Then run the taskset command from the Proxmox shell to restrict your VM to a set of cores, and slide it around to other sets of cores.

    The source for this is Proxmox Forums thread #67805, "CPU pinning?". It's what led to the affinity feature being created. The second page of comments has a fun exploration of using taskset with NUMA memory.

    Also, I think you incorrectly attributed the instructions in the video comment to me. I believe you got them from someone else.

  17. SOC 2 Type 2 is NOT a validation of security, or of how secure an org is. It's simply a certification stating that an auditor attested that you've implemented policies and procedures and adhere to them. "Do you scan for vulnerabilities?" Yes? Then pass. It doesn't test that you remediate all vulnerabilities, or even the highest-severity ones. If an org says "we remediate, or run vulnerabilities through our risk process, accept and insure them, and move on," then SOC 2 Type 2 will ALSO pass them.

  18. This jogged my memory about virtualization and right-sizing VMs compared to bare metal, and why a moderately loaded host (with noisy-neighbor VMs) is faster with a diverse workload: the scheduler should be able to extract more performance during the delay intervals of other threads when they don't overlap too much. Oversubscription can still net a performance gain, to an extent, when many guests are nearly idle.

  19. I wonder if Intel's "big" Xeons, like the Xeon Platinum 8558, are "true big little", since Intel's Ark page says "high priority cores" and "low priority cores"… Is that the same as P and E? My gut feeling says no, as VMware DOES support those CPUs. Just food for thought.
