
More POWER for my HomeLab! // Proxmox Cluster

#POWER #HomeLab #Proxmox #Cluster

By Christian Lempa

Thanks to Incogni for sponsoring this video. Use code christianlempa at the link below to get an exclusive 60% off an annual Incogni plan.

In this video, I will be sharing my experience of building a Proxmox Cluster with 2 Nodes in my Homelab. This allows me…


48 Comments

  1. Really good content – take this to its ultimate conclusion is all I would say. Think about a 2-node 25 or 40G cluster – no switch needed. You could use the 56G dual-port cards (40GbE) for the cluster and then a 10G management network. The network FS is a good one to dig into – Ceph/NFS/ZFS/Gluster? I think you are going to have to experiment – maybe try some NVMe arrays? Thanks for the content!

  2. One tip: do not run it on an SD card, because it writes a lot of data for the quorum DB. I blew up an expensive SD card within 2 weeks. You can also use a cheap OrangePi as long as it can run a Linux derivative, preferably Debian, because Proxmox is Debian-based. A 1 Gb LAN port works fine as well.

  3. Technically 2 nodes don't provide HA at all, because no single leader can be elected (split brain). So you mean failover in this case, not HA. HA is only possible with an odd number of nodes (3, 5, …), because that removes the split-brain problem.

  4. I am using cloning to make sure that my VMs/containers are migrated automatically if one of the nodes is down. That way, I don’t have to deal with network storage, which brings its own problems.

    Also, you can change the votes in the corosync config. That’s how I ran my cluster for a while, where my main node had 2 votes and the second node had 1 vote.
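    A minimal sketch of how a vote change like that looks in /etc/pve/corosync.conf (node names and addresses here are hypothetical, and when editing you must also increment config_version in the totem section so the change propagates):

    ```
    nodelist {
      node {
        name: pve-main
        nodeid: 1
        quorum_votes: 2
        ring0_addr: 192.168.1.10
      }
      node {
        name: pve-second
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 192.168.1.11
      }
    }
    ```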

  5. Hi @ChristianLempa, one note I made: when you use a qdevice while running HA, you need to ensure that the qdevice has root access, which I had to do to get my cluster working. Great video! I use 1 Dell 5090 and a Lenovo in my HA Proxmox cluster setup, and have a Raspberry Pi 4 (which also runs 24/7/365) as my qdevice. Great content!
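    For reference, the usual qdevice setup per the Proxmox docs is roughly: install corosync-qnetd on the external device, corosync-qdevice on every node, then run the setup from one node. A sketch, with the qdevice IP as a placeholder:

    ```
    # on the external device (e.g. the Raspberry Pi):
    apt install corosync-qnetd

    # on every cluster node:
    apt install corosync-qdevice

    # on one cluster node (this is where root SSH access to the qdevice matters):
    pvecm qdevice setup 192.168.1.50
    ```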

  6. I have a Proxmox cluster with 2 nodes and a quorum device successfully running in my small home lab, with a simple storage-sharing solution and no external network shares. Both nodes have a dedicated SSD of the same size just for this purpose. How to set it up? On the first node, go to Disks, Create: ZFS, select the disk, and give it a name (this should be a general name, not dedicated to a node, like 'pve-data' or something similar). Select Add Storage, leave everything else on default, check the box on the disk you want to use in the device list, and click Create. The first node now has a new storage.

    On node 2, do the same: go to Disks, Create: ZFS, and, importantly, use the same name as on node 1 (in my example 'pve-data'). It is also very important to deselect the checkbox 'Add Storage'. Select the disk from the device list and click Create.

    After that, go to Datacenter -> Storage, select the ZFS storage 'pve-data', and click Edit. In the dialog, select all nodes in the dropdown menu named Nodes. After that, the storage will be listed as local storage on each node. With replication tasks for each VM or LXC, you can set how often the storage of a VM or LXC is synchronized between the nodes. In HA you can also set the list of containers or VMs you want migrated live between the nodes. You have to do this for each VM or LXC container, but it works very well for me. Thank you for covering this Proxmox topic. Hope to see more of this.
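    The GUI steps above boil down to one shared-name entry in /etc/pve/storage.cfg; a sketch of the end result, assuming the pool name 'pve-data' from the example and hypothetical node names:

    ```
    zfspool: pve-data
            pool pve-data
            content images,rootdir
            nodes pve1,pve2
            sparse 1
    ```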

  7. You don't “need” a separate device for shared storage. You “could” use CephFS or GlusterFS to create a shared storage pool from the disks/partitions in your servers.
    I'm using quotes because it's possible, but a separate device is probably still better.

  8. Yes, there are some quirks with a 2 node proxmox cluster. I run the same, but no qdevice. I just have a direct 10Gb connection between them. I did it for a similar reason as you, I wanted fast migration between the servers.
    Fun fact, I'm running PBS in HA with the virtual disk running on a 1Gb link to my Synology NAS, and the PBS datastore is on the Synology also. It works quite well considering it's going over 1Gb.
    I've backed the Zima Cube Pro and am waiting for it to arrive, as it will become my 3rd Proxmox node, running a TrueNAS VM configured for my HA storage with a 10Gb link and NVMe storage available.

  9. Very fun! Thanks for the great video! I have a Beelink mini PC (Intel) as primary, and an old 2012 MacBook Air (Intel) that I recently put in a cluster. They are sharing storage for backups. Migration works great so far, even with the CPUs being different models. Looking forward to your SSD NAS storage in the next half of the year.

  10. I am running a similar setup in my homelab: 2 NUCs, 1 Pi Zero with an Ethernet HAT as qdevice. Also configured HA.
    I've set up ZFS on the NUCs with replication, therefore not requiring shared storage. The NUCs are connected over a 1 Gbps connection, but that's fast enough for my lightweight VMs and containers.

  11. I vote for a Ceph cluster! I would love to see you set up a Ceph Object Store and File System on your 3 Proxmox servers instead of relying on a TrueNAS server or other NAS. I understand that YouTube content creators have a need for a lot of storage and high-speed networking, but not the average person running a home lab. Having a “local” file system for your workload that is mirrored to other servers would be very appealing.

  12. For different CPUs with HA there is a solution without buying matching hardware: google "craft computing Lets install proxmox 8.0 tutorial" and see the CPU section of the video. For making a two-node cluster functional without third-node complexity, google "apalrd's proxmox two node quorum". For shared storage, there's no need for an extra third computer with TrueNAS; just add NFS on the "small/watt-efficient" node that will always be on 24/7.

  13. Been following your vids for a while and they just keep getting better. Finally have the confidence to press forward with my homelab thanks to you. Rock on brutha!

  14. You forgot to mention a very annoying problem: if even 1 node of a cluster is down, DO NOT REBOOT ANY OTHER!!!
    Rebooting a node that can't see the full cluster will give you an error, and no VM on that node will be started until the cluster is healthy.

  15. For HA you definitely have to assess what you want vs. what you actually need. With an HA cluster that has only single-node shared storage, you are back to square one on being fully redundant.

  16. Hey Chris, take a look at Ceph, and maybe you can build a network over USB4 Thunderbolt? 😉 At least this is what I run on a 3-node cluster with 13th-gen NUCs.
    But even then: it takes around 2-3 min after a node dies to run the HA migration. But not because of a slow network 😂 I'm getting around 20-30 Gbit over the TB4 ports.

  17. FYI, I tried to run a qdevice inside of a Docker container and it was a mess. I got it added to the cluster, but it was non-voting, and while messing around with it I borked my entire cluster, so I probably won't be doing that again.

  18. I’ve been running a Proxmox 3-node cluster with Ceph HA for over 6 months now and it’s been stable so far. I even upgraded to Proxmox 8.1 from PVE 7 without any issues. Curious why you decided not to go this route. Maybe an opportunity to make this topic a 2-part series? 😅

  19. Thanks for the video. I have a Proxmox cluster running, but I need to reinstall Proxmox on the cluster. There are a few things I didn’t take care of the first time. Hopefully with your video it will go better 🙂

  20. Just add dedicated HDDs to both Proxmox nodes just for VMs and then format those using ZFS. Then you can mount those drives as shared and have ZFS doing replication every XX minutes from one node to the other. No need for external storage.

    Craft computing YT channel has a nice tutorial

  21. Currently I am not running a cluster, but I have plans for this in the future. I was planning on failing services over, but I see that is going to take more than initially expected. At least I can do it like you are doing now.

  22. You don’t need shared storage; you can also set up replication to run every 15 mins… or less if you want. Then you don’t have to worry about HA failing when you pull the plug.
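    Replication jobs like this live in /etc/pve/replication.cfg (they can also be created with pvesr or in the GUI under Replication); a sketch, where the VMID 100, target node name, and rate limit are hypothetical examples:

    ```
    local: 100-0
            target pve2
            schedule */15
            rate 50
    ```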

  23. I don’t understand why you would want to go for a 2U case instead of a 3U case especially if you are planning to use an ATX PSU. That’s still gonna occupy 3U in your rack because you can’t put another server directly above as the PSU needs to suck in air from above. You could put a full height PCIe card in the 3U case without additional riser cables and you get proper front to back airflow…

  24. Ah the memories of a 2 node cluster on mixed hardware 🙂

    As a stop gap for failover look into the data centre replication option, it can copy the storage between nodes at set intervals. (Risk of data loss of x mins).

    If you go the TrueNAS route, look into iSCSI instead of NFS. You don't need SSDs, but this is a home lab 🙂

    Upgrading to 10Gbps is useful between multiple nodes and storage, I've seen my migrations move around at 2Gbps, (SSD cache assisted I'm sure).

  25. I run proxmox at work, with a big cluster, multiple shared storage devices. It works great, and, I'm happy to see you added a qdev to your cluster, because it really is needed to maintain cluster integrity.

    As far as shared storage goes, any cheap NAS will work. I'd recommend something that has NFS shares. For homelab, gigabit is mostly fine unless you have a ton of storage. The price jump networking wise to 10gbps isn't bad, but, the storage price jump that can actually utilize that network speed can be quite expensive.

  26. Christian, you can do a poor man's HA with local disks by scheduling replication of your VMs. That way a copy of your VM is standing by on the failover node. Also, I didn't have luck with SSD RAID for hosting VMs. The cost was exorbitant, the capacity was too low for my purposes, and the I/O wasn't much better. I went with more HDDs and an SSD ZFS read/write cache. P.S. You're doing great work. Thank you for sharing and for your joyful presentation style.

  27. @christianlempa I recently created my Proxmox node and had the same problem when migrating VMs, but I fixed it by setting, in the VM configuration, Hardware > Processors > Type to x86-64-v2-AES. This works flawlessly between my Intel N100 and my i7 6700T.

  28. To get live migration working more reliably, this can help: PVE 8 has CPU types which weren't available before. Something like x86-64-v3, for example, covers Intel Skylake and newer and AMD EPYC (I assume Zen 1). Just click Help in the CPU menu of a VM.
    And as others pointed out, with replication HA is possible, but you lose data up to the replication interval.
    Otherwise, do a bulk migration before shutdown?

  29. You should also be able to set the "CPU Type" under your VM hardware to a version supported by both processors (like the x86-64-v2/3/4 types) to improve live migration compatibility. PVE has a ton of options for each generation of chips going back to the 486, so you can really tune the instruction sets made available to the VM this way. Might be worth testing if you have a mixed cluster environment.
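    The CPU type setting described above ends up as a single line in the VM's config file; a sketch, with VMID 100 as a hypothetical example:

    ```
    # /etc/pve/qemu-server/100.conf (fragment)
    cpu: x86-64-v2-AES
    ```

    This is the same line the GUI writes when you pick the type under Hardware > Processors.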

  30. Errrm… venting for a power supply on THE TOP OF A RACKMOUNTED case, where the unit above will obviously obstruct the fan?? Hmmmm, I'm not sure this is well thought out…. not at all. Looks cool, but this should be a serious argument against a full size ATX power supply in there…
