
I Built a PC that CAN’T Fail… and You Can Too!


“Linus Tech Tips”

To learn more about the power and future of Intel’s Xeon processors, check them out at:

Our servers need to stay up and running throughout the day for us to keep doing what we do, so we built additional servers to eliminate any downtime if something were to crash. And…



21 Comments

  1. @8:27 — I'm sure it's fine in this application, but generally you can't rely on the same pair of network switches to provide both high availability AND double the bandwidth. If, as you operate normally, you begin to depend on that doubled bandwidth for performance or capacity, then a switch outage will cascade into a wider failure, because the resources you require are no longer available. This setup is clearly designed for HA, and if you don't want that to quietly degrade, you should disable the link aggregation.
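
    For anyone who wants to set it up that way, here's a minimal sketch of an active-backup bond on a Proxmox/Debian-style host (ifupdown syntax; the interface names, addresses, and bridge name are placeholder assumptions, not anything from the video):

        # /etc/network/interfaces (fragment)
        # active-backup uses only one link at a time, so losing a switch never
        # removes bandwidth you were depending on; for aggregation you would
        # instead use bond-mode 802.3ad (LACP).
        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode active-backup
            bond-miimon 100

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0

    Apply it with ifreload -a (or a reboot), and each node keeps the same single-link bandwidth budget whether one of the two switches is down or both are up.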

  2. Wow, this setup is so rock-solid it could anchor a ship! 🚢 Just imagine an air traffic control system powered by this – pilots would pause for server migrations instead of weather delays! 😂 On a serious note, Linus dropped some serious knowledge about virtualization. As someone who loves building older Xeon rigs (I recently gifted my socket 1366 dual 6-core Xeon with 96GB DDR3 1600 to my cousin), I really appreciated the incredible value I got: $15 each for the CPUs, around $120 per stick for the memory when it was new, and the motherboard costing about $450. If only my computer problems could be solved with a quick reboot… guess I'll just keep practicing my tech magic! 💻🔌

  3. I already do this, but without distributed storage. I use HA in Proxmox, and it copies over the VM's disk every 15 minutes. That sounds awful, but it only copies the changed bits, so it takes just a few seconds over a 1 Gbit connection. Then, if I migrate, it does the same thing as in the video: it copies over the changed bits from disk and from RAM, and then resumes on the other host.
    If the host goes down unexpectedly it will not resume from the current state, but it will boot from a state that is at most 15 minutes old, which is fine for my use case.
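
    For reference, what's described here sounds like Proxmox's built-in storage replication (pvesr, ZFS-backed) paired with HA. A minimal sketch of that pairing, assuming VM 100 replicating from its current node to a node named pve2 (the VM ID, node name, and schedule are placeholder assumptions):

        # replicate VM 100's disks to pve2 every 15 minutes
        pvesr create-local-job 100-0 pve2 --schedule '*/15'
        pvesr status            # shows when each replication job last ran

        # let the cluster restart the VM on another node if its node dies
        ha-manager add vm:100 --state started
        ha-manager status

    As the comment notes, an unplanned failover then boots from the last replicated state, so up to one interval of writes is lost; a planned migration still transfers the latest disk and RAM contents.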

  4. I can't remember my last blue screen. And I did nothing special. I didn't even realize for quite a while that you now need to buy different RAM for Intel and AMD. And yet I haven't had a blue screen in years.

  5. Three things:

    1) I'm already running this at home with three OASLOA Mini PCs, each with an Intel N95 processor (4-core/4-thread), 16 GB of RAM, and a 512 GB NVMe SSD. Each system has dual GbE NICs, so I was able to use one of them for the clustering backend and present the other interface as the front end. (Each node, at the time, was only about $154.)

    2) My 3-node Proxmox HA cluster was actually set up in December 2023, specifically with a Windows AD DC, DNS, and Pi-hole in mind, but I ended up switching to AdGuard Home after getting lots of DNS over-limit warnings/errors.

    (Sidebar: I just migrated my Ubuntu VM from one node to another over GbE. It had to copy 10.8 GiB of RAM across, which took most of the time. Downtime was in the sub-300 ms range; total time was about 160 seconds. A rough command-line sketch of this kind of migration, and of the split cluster/front-end networking from point 1, follows this comment.)

    3) 100 Gbps isn't that expensive anymore. The most expensive part will likely be the switch, if you're using one. (There are lower-cost switches in absolute terms, but if you can and are willing to spend quite a bit more, you can get a much bigger switch and put a LOT more systems on the 100 Gbps network than a cheaper switch with fewer ports would allow. I run a 36-port 100 Gbps InfiniBand switch in my basement. I have, I think, either 6 or 7 systems hooked up to it right now, but I can hook up 29-30 more if I need to.) On a $/Gbps basis, 100 Gbps ends up being cheaper overall.
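
    A rough sketch of the pieces described above, assuming three nodes whose second NIC sits on a dedicated 10.0.0.0/24 cluster network and a VM with ID 101 (every name, address, and ID here is a placeholder, not the commenter's actual setup):

        # on the first node: create the cluster, pinning corosync to the back-end NIC
        pvecm create homelab --link0 10.0.0.1

        # on each additional node: join via an existing member, again over the back end
        pvecm add 10.0.0.1 --link0 10.0.0.2
        pvecm status

        # live-migrate a running VM; RAM is copied while the guest keeps running,
        # so the pause at cutover is typically well under a second
        qm migrate 101 pve2 --online --with-local-disks

    (The --with-local-disks flag only matters if the VM's disks live on local storage; with shared or replicated storage it can be dropped.)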

  6. I haven't seen a blue screen in over two decades. The miracles of ditching Windows.

    Meanwhile, I've seen maybe two or three kernel panics over that period, but I can't even remember what they were caused by because it's been so long. Probably proprietary nVidia drivers and testing a crap distro in my distro hopping early days.

    The number of BSODs I've seen from the mid 1990s to around 2004, on the other hand, is easily in the hundreds: often multiple per day on Win9x, and still probably a couple dozen per year on XP. Over that decade, the number of BSODs Windows put in front of me was practically countless.
