
Bare Metal vs Virtual Machines 2.0

“DJ Ware”

I was curious whether the performance gap I saw four years ago between bare metal machines and virtual machines was still about the same. So if you are wondering why I chose older benchmarks, it's because those are the ones I used four years ago and I wanted a direct comparison.

The results were not what I…


 


21 Comments

  1. 11:48: the syscall difference should be expected. In a VM the physical machine and its devices are emulated, while the processor is not, provided the guest operating system runs on the same processor architecture as the host.
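    A quick way to eyeball that cost yourself is to time a cheap syscall in a tight loop and run the same script on bare metal and inside the guest. Rough sketch in Python (the loop count and the choice of getppid() are arbitrary, and the interpreter overhead is included in both runs, so compare the numbers relatively, not absolutely):

        # syscall_cost.py - rough per-call cost of a cheap syscall (getppid)
        # run once on bare metal and once inside the VM, then compare
        import os
        import time

        N = 1_000_000
        start = time.perf_counter_ns()
        for _ in range(N):
            os.getppid()  # one real syscall per iteration; the result is not cached
        elapsed = time.perf_counter_ns() - start
        print(f"~{elapsed / N:.0f} ns per getppid() call over {N:,} iterations")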

  2. On your Apple M1 experience vs. the 12-core: at 8:05 you said you run Fedora; how do you do it, in a VM? If it's a VM on macOS: Apple forces any macOS VM software to use its Virtualization Framework, which does not support nested hypervisors (and according to Asahi, the M1's hardware is not capable of nested virtualization). So no other VM or Docker inside any Linux/Windows VM on Apple Silicon Macs (Windows is also crippled, because it internally requires virtualization for some of its security features and for WSL2/Docker). I don't know how, or whether, native Asahi Linux or Asahi derivatives on Apple silicon do virtualization.
    Also, Apple's Virtualization Framework does not distinguish between performance and efficiency cores. You can only assign a number of virtual processors to a VM, not specify P- or E-cores.
    Another downside of virtualization on Apple silicon is that it does not allow GPU virtualization (nor virtualization of the ANE), whereas x64-based systems do, especially with NVIDIA's CUDA.
    On the positive side, Apple is really good at passing macOS resources such as peripherals through to VMs, and the Parallels software at seamless app integration between host and VMs. Apple also enables its fast Rosetta 2 x86-to-ARM translation layer for running x86 Linux binaries inside Linux VMs.

  3. When faced with a widening bare metal vs. VM performance gap over the last several years, my first reaction would be to blame the mitigations for the recent speculative execution vulnerabilities. I would rerun the tests with mitigations turned off for both host and VM to establish a baseline. I'm not saying that Intel's P+E core design doesn't have anything to do with it, but it shouldn't affect syscall performance that much, whereas the mitigations hurt syscalls the most.
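    If anyone wants to verify what they are actually benchmarking, the kernel reports the active mitigation state in sysfs; a minimal sketch (Linux 4.15+; the sysfs path is standard, the formatting is mine), to be rerun after booting host and guest with mitigations=off:

        # show_mitigations.py - print the kernel's view of each speculative-execution
        # vulnerability; rerun after booting with mitigations=off to confirm the change
        from pathlib import Path

        VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

        for entry in sorted(VULN_DIR.iterdir()):
            print(f"{entry.name:28s} {entry.read_text().strip()}")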

  4. I find the 12th gen to be very inconsistent in performance when I use Ubuntu and run VMs with SPICE for desktop interaction.

    Also, LXD seems to get stuck on a very slow core at times and will sit at 100% utilization. Any thoughts on this?

  5. Heat and memory bandwidth should become a problem at high core counts; VMs just add load on top of the CPU's real physical limits and start to degrade as you run more of them. By random chance my motherboard had IOMMU groups that broke up pretty well for handing devices over to a VM. So in a Windows VM under Linux Virtual Machine Manager, I burn DVD-Rs with Nero because the SATA controller is passed through. I also had an issue with Samba through the NAT NIC emulation, so I just gave the VM my motherboard's NIC, and it literally goes out to a hub on another cable and comes back in to access files on the Linux host.
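    If you want to check how your own board's groups split up before trying passthrough, a small sketch (Linux, with the IOMMU enabled in firmware and on the kernel command line; it prints nothing if it isn't):

        # iommu_groups.py - list which PCI devices share each IOMMU group;
        # devices in the same group generally have to be passed through together
        from pathlib import Path

        GROUPS = Path("/sys/kernel/iommu_groups")

        for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
            devices = sorted(d.name for d in (group / "devices").iterdir())
            print(f"group {group.name}: {', '.join(devices)}")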

  6. A thread is not the same as a dedicated core. Most multiprocessor-aware applications still do not distribute operations equally or efficiently, because not all operations are equal. The real payoff for an admin is knowing when to virtualize, when to containerize, and when to use PCI and NIC passthrough most effectively.

  7. Pretty confident that VM guests doing direct I/O calls are still being buffered by the host, so the "outside" buffers will inflate VM read numbers. There is no way for the guest to flush this "hardware" buffer.

  8. You are using Proxmox, and that runs on OpenZFS. OpenZFS is great at reads and somewhat worse at synchronous writes unless you use a dedicated LOG/ZIL vdev. Running a VM from OpenZFS is like running it from a RAM disk, thanks to the huge L1ARC memory cache on the host.
    I run VirtualBox on OpenZFS/Ubuntu 23.10 and the VMs are faster and more responsive than bare metal because of the host's OpenZFS caching (say 8 GB). Besides, by default everything is lz4 compressed, so you need only about half the number of I/O operations, both for storage and network I/O.
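    If you want to see how much of that is the ARC at work, OpenZFS on Linux exposes the counters in procfs; a rough sketch (field names come from arcstats, the math is just hits vs. misses):

        # arc_hitrate.py - report ARC size and hit rate from OpenZFS's kstat file
        # (Linux only; /proc/spl/kstat/zfs/arcstats is provided by the zfs module)
        with open("/proc/spl/kstat/zfs/arcstats") as f:
            lines = f.readlines()[2:]  # skip the two kstat header lines

        stats = {}
        for line in lines:
            name, _kstat_type, value = line.split()
            stats[name] = int(value)

        hits, misses = stats["hits"], stats["misses"]
        print(f"ARC size:     {stats['size'] / 2**30:.1f} GiB")
        print(f"ARC hit rate: {hits / (hits + misses):.1%}")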

  9. Read/write tests don't work inside a VM. The data is cached in RAM by the host. No matter how thoroughly the software in the VM clears its own kernel buffers, the data will instantly be served from the host kernel's cache when it is requested again. Likewise, writes are double buffered, going into paravirtualized shared memory and then into the host's write cache. The only way around this double caching is to pass an actual physical disk device through to the VM completely.
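    That is easy to demonstrate: drop the guest's caches and time a re-read. On bare metal the second read has to hit the disk again; inside a VM it usually comes straight back out of the host's page cache. A rough sketch (needs root; the test file path is a placeholder and should point at a file larger than guest RAM to be meaningful):

        # reread_after_drop.py - show that dropping the guest's page cache does
        # not defeat the host's cache; run as root, TESTFILE is a placeholder
        import os
        import time

        TESTFILE = "/var/tmp/testfile.bin"  # hypothetical large test file

        def read_all(path):
            t0 = time.perf_counter()
            with open(path, "rb") as f:
                while f.read(64 * 1024 * 1024):
                    pass
            return time.perf_counter() - t0

        print(f"first read : {read_all(TESTFILE):.2f} s")

        os.sync()  # flush dirty pages before dropping caches
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")  # drop the guest's page cache, dentries and inodes

        # bare metal: this read is slow again; VM: the host cache still serves it
        print(f"after drop : {read_all(TESTFILE):.2f} s")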

  10. A friend and I have had so many discussions on this; we're both in the same field. Regardless of bare metal or a hypervisor, much of how an OS performs in a VM depends on the hypervisor's resource allocation and on the architecture of the CPU being accessed, bare metal or not. If a CPU doesn't prioritize every syscall or other function because the OS sits behind the VM's hypervisor, then bare metal will set the bar; however, many modern CPUs' register access and schedulers don't distinguish and will treat them the same. "Certainly the VM itself may run in or on top of an OS, so it will be slower" no longer holds much weight, really, since modern processors handle load substantially better than prior generations, and the things running in the background are usually such light tasks compared to the available computational power that modern CPUs can run them at low power, keeping the temps low and the efficiency high. I run many machines' OSes on bare metal and many in VMs or containers. The trending thing these days, of course, is just spinning one up in the cloud, or, if you're developing something, firing up cloud-hosted Docker containers. Eh, there are definitely more ways to lose data, and more room for error, if you layer your OS like an onion; unless you take very good precautions, each layer is a potential point of failure and all your data could be lost, corrupted, or stolen. Enough rambling. Thanks, DJ Ware, for another great topic; the time and effort you put into these videos doesn't go unnoticed, and know it's sincerely appreciated. ❤🎉

  11. VMs are decent when they use paravirtualization, where fake device drivers use shared memory to rapidly transfer data to/from fake devices.

    VMs are extremely slow when accessing any fully emulated device, such as a virtual graphics or sound card, where the "hardware" is emulated, involving constant interrupts into the hypervisor. Even things like the motherboard and the PCI bus have to be emulated in that case. It is insanely slow.

    To me, VMs are great for server stuff. Awful for multimedia desktop stuff, unless you pass through physical hardware directly.
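    A quick way to check whether a Linux guest is actually on the paravirtualized path is to see what is hanging off the virtio bus; a sketch (sysfs layout as on current kernels, prints nothing if there are no virtio devices at all):

        # virtio_check.py - list virtio devices the guest sees and the bound driver;
        # if disk and network are not on virtio, you are on the slow emulated path
        import os
        from pathlib import Path

        VIRTIO = Path("/sys/bus/virtio/devices")

        devices = sorted(VIRTIO.iterdir()) if VIRTIO.exists() else []
        for dev in devices:
            driver_link = dev / "driver"
            driver = os.path.basename(os.readlink(driver_link)) if driver_link.exists() else "(unbound)"
            print(f"{dev.name}: {driver}")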

  12. lol, VM -> 16,351.83 read vs. bare metal 6,393.79??? How do you get more than twice the read throughput for free, and from what source? You also test the VM and bare metal with different kernels, so all the tests are basically unfair.

  13. The impact of VMs on performance is more noticeable when you look at changes in architecture. VMs promote service separation across a larger number of smaller instances, which increases network overhead.
