
LXCs vs VMs – What Was My Rationale?

#LXCs #VMs #Rationale

“DB Tech”

So in my last video, we took a look at what I’ve got running in my home lab back here, and we briefly discussed the hardware.

Last video:

Then we talked about the two Proxmox servers I’ve got for production and the one I’ve got for testing and development. I don’t…


28 Comments

  1. Hi, the approach you suggested for easy backups also exists for a monolithic Docker host. Most people who do it (like me) use BTRFS snapshotting together with Docker Compose bind mounts: you create a directory like /appdata/utility_name and then have Timeshift, Snapper, btrfs-progs, or a similar utility snapshot that /appdata/ at regular intervals (either onto a separate disk or as part of a mirror). Should something bad happen to any specific Docker container, it's as simple as going into that specific subvolume (snapshot) and yoinking the files out.
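
    A minimal sketch of that flow, assuming /appdata is already a BTRFS subvolume and /appdata/.snapshots is where the read-only copies go (both paths, and the restore example, are placeholders):

    ```
    # Take a timestamped, read-only snapshot of the bind-mount data.
    # Run this from cron or a systemd timer at whatever interval suits you.
    mkdir -p /appdata/.snapshots
    btrfs subvolume snapshot -r /appdata "/appdata/.snapshots/appdata-$(date +%Y%m%d-%H%M%S)"

    # If one container's data goes bad, pull just its files back out:
    # cp -a /appdata/.snapshots/appdata-<timestamp>/utility_name/. /appdata/utility_name/
    ```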

  2. Love the plan. But I moved away from Proxmox because of a kernel error that crashed my system. The backup was on the same disk, so that was quite a nightmare for me. Maybe another video for us if you have a backup plan for this scenario?

  3. Makes complete sense. I may be wrong, but an LXC is essentially a container just running natively in Proxmox instead of something like Portainer on a VM. At least that's how I understand them.

  4. I have two points. 1. I agree with you on LXC vs Docker. Of course, the majority of us take the easy docker-compose templates, mixing fairly static configuration files with databases (MySQL, Postgres…). The recovery dilemma could be solved by keeping databases in separate dedicated LXCs/VMs, replicated to at least one extra instance. There is no reason to create a separate MySQL installation for every application; they could be combined, which simplifies backups and recoveries long term (IMHO).
    2. The issue of having multiple instances of DNS (or homepage) apps: implementing VIP failover with Keepalived solves it beautifully (see the sketch below). Installation is quick and easy, and only ever losing DNS resolution for about a second is just priceless. My main Pi-hole runs as an LXC container on Proxmox, and its failover backup (updated by gravity-sync every 15 minutes) runs on a Raspberry Pi 3.
    Also one question: why don't your production Proxmox servers run in a cluster? Just wondering.
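
    A minimal sketch of the Keepalived side of that failover, assuming eth0 and 192.168.1.250/24 as placeholder interface and virtual IP; the backup Pi-hole gets the same block with state BACKUP and a lower priority:

    ```
    # /etc/keepalived/keepalived.conf on the primary Pi-hole
    vrrp_instance PIHOLE_VIP {
        state MASTER              # BACKUP on the Raspberry Pi
        interface eth0            # placeholder: your LAN interface
        virtual_router_id 51
        priority 150              # use a lower value (e.g. 100) on the backup
        advert_int 1
        virtual_ipaddress {
            192.168.1.250/24      # placeholder: the VIP clients use for DNS
        }
    }
    ```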

  5. I prefer to run each app in its own LXC if possible; it's easier to maintain, restore, etc.

    But I have some LXCs that run Docker with containers:
    - one Docker host with my own applications, 28 apps; some use a database that lives on another LXC, and since none of those apps hold data directly, I can restore the whole Docker host without problems (no data loss)

    - one Docker host with: Stirling-PDF, Penpot, Opera, phpMyAdmin, WatchYourLAN, LibreOffice => only Penpot has data that needs to be saved (so it's backed up each day to another disk…)

    - one Docker host with: Audiobookshelf, Snippet-Box, Wallos, Planka, Flame, Dashy => I'd like to put each of them in its own LXC but haven't looked into it yet, and it's been like this for a long time now

  6. I had the same dilemma with my home server.

    The reason I use VMs for some mission-critical services comes down to HA and backups.

    Since I use my NAS's SSD pool as storage via NFS, if one node shuts down, the HA manager can migrate without any issues. If I use local-lvm and my node powers off for any reason, HA cannot migrate, since the storage is stuck on the offline PVE node (see the sketch after this comment). If I use an LXC with NFS as storage, backups will fail.

    So it's mix and match for me until I upgrade my hardware and move to Ceph; this is my setup.
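
    For reference, a minimal sketch of how that shared NFS storage can be defined on the Proxmox side (the storage name, server address, and export path are placeholders):

    ```
    # Register the NAS's SSD pool as shared storage; the definition is
    # cluster-wide, and shared storage is what lets the HA manager restart
    # guests elsewhere when the node running them goes offline.
    pvesm add nfs nas-ssd --server 192.168.1.10 --export /mnt/ssd-pool/pve --content images,rootdir
    ```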

  7. My Arr stack is in Docker in its own VM because, you know… Plex is in a VM because of hardware passthrough, and it connects to NFS shares, which I didn't want to mess with in an LXC. Ubuntu minimal cloud VMs are good for single-service VMs since they're pretty light on resources. Other services are in LXCs for the same reasons as you. In addition, my LXCs sit on different networks depending on what they are, which is easier to do than inside a Docker VM.

  8. I go for an LXC out of the gate because my lab doesn't have a lot of horsepower and I like how lightweight they are. I tried getting AWX running in an LXC and couldn't make it work. It's a hobby for me, and getting frustrated makes me stop before I pull my hair out, so I don't completely give up on it from burnout. I'd be curious to see if anyone has won that battle and what their process was.

  9. @db tech, just curious: what are the machine specs for the LXC containers running single services? I just started setting up my home lab, so I'm looking for some recommendations. Thanks.

  10. If you can containerise it, then it should be containerised. The real question for me is Docker vs LXC; I keep going back and forth between them. Half my services are on LXC and the other half are on Docker.

  11. Loved the rationale, especially the snapshot restore. I guess my two lazy points are what would keep me from doing it:

    1. Manually updating by logging in and pulling images, etc. (though that could be scripted; see the sketch below).
    2. Having to think about resources for each LXC.

    My favorite thing about Docker in a VM is that I don't have to care whether one container uses more resources than another; I only have to monitor the overall VM's resource usage.

    Either way though, I'm going to move Pi-hole to an LXC for sure!
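
    On point 1, a minimal sketch of how those updates could be scripted from the Proxmox host, assuming each Docker-running LXC keeps its Compose project under /opt/stack (the container IDs and path are placeholders):

    ```
    # Pull newer images and recreate the containers inside each LXC.
    for id in 101 102 103; do
        pct exec "$id" -- sh -c 'cd /opt/stack && docker compose pull && docker compose up -d'
    done
    ```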

  12. Well, that is very interesting, but can I go one step further? I run my services in Docker on Ubuntu Server. What would be the difference between that and running Docker inside an LXC? Sounds to me like an inception of containers… I mean, there might be a reason why people recommend doing it that way, but I just can't see what it is.

  13. Good points for using LXCs over VMs! I've seen multiple ways to pass hardware through to LXCs. I'd love to see a definitive video on hardware passthrough to LXCs, especially for unprivileged ones!
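
    As one example of what that passthrough can look like, a minimal sketch for handing an Intel iGPU to a container, assuming container ID 200 and the usual /dev/dri device numbers (226:0 and 226:128); an unprivileged container additionally needs the render/video GIDs mapped or the device nodes made accessible to its mapped IDs:

    ```
    # /etc/pve/lxc/200.conf on the Proxmox host (200 is a placeholder ID)
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    ```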

  14. I think I was with you until 5:35. From my understanding, since the data shouldn't be in the VM itself, just mounted, it's only the configuration that needs to be restored. And even if you don't have something like Ansible or Terraform for auto-deployments, simply keeping a backup of the Docker Compose file used to set up the stack in the VM should be good enough (see the sketch below). This only falls apart if you don't update your Compose file whenever you make changes, and I would encourage you to do that if you don't already. Of course, the other reason this approach wouldn't work is if you're storing your data directly in the VM instead of outside it, mounted in. Depending on your setup, that may be the best way, and then your rationale sort of makes sense, but even then, not really, because data compromise is data compromise regardless of what kind of container you're using. And you're already using Proxmox, so setting up your storage in Proxmox (NFS, iSCSI) shouldn't be much more difficult than your current setup.

    Your rationale at 7:30 is probably the only one that is objectively true. Memory usage should also decrease slightly using LXCs vs VMs, which lets you run many more LXCs at once than the equivalent number of VMs. At this point, I have to ask: what's the point of using Docker if you're only running 1-2 apps per LXC? Shave the resources down even further and just run them 'bare metal' in the LXC.
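
    A minimal sketch of that recovery idea, assuming the Compose file lives at /opt/stack/docker-compose.yml, the app data sits on an externally mounted share, and backup-host is a placeholder for wherever you keep copies:

    ```
    # The only state worth keeping on the VM is the Compose file itself,
    # so keep a copy off-box:
    rsync -a /opt/stack/docker-compose.yml backup-host:/backups/stack/

    # Rebuilding the VM then reduces to restoring that one file and redeploying;
    # the externally mounted data is untouched by the rebuild.
    rsync -a backup-host:/backups/stack/docker-compose.yml /opt/stack/
    cd /opt/stack && docker compose up -d
    ```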
