
SMB Server In Docker with ZFS! Simple, Cheap, and Efficient!


“Jim’s Garage”

If you want a lean NAS, look no further than Proxmox with ZFS. In this video I create a ZFS pool, pass it through to a Docker VM, and then deploy a Samba server container to share the storage on your network!
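
For reference, here is a rough shell sketch of the host side of that workflow: a mirrored pool, registered as Proxmox storage, with a disk carved out of it for the Docker VM. The pool name, disk IDs, VM ID and disk size are placeholders, and the video may pass the storage through differently (e.g. whole-disk passthrough).

```bash
# Create a mirrored ZFS pool on the Proxmox host
# (pool name "tank" and disk IDs are placeholders)
zpool create tank mirror \
  /dev/disk/by-id/ata-DISK_ONE /dev/disk/by-id/ata-DISK_TWO

# Register the pool as Proxmox storage, then allocate a 512 GiB disk
# from it and attach it to the Docker VM (ID 100) as scsi1
pvesm add zfspool tank -pool tank
qm set 100 --scsi1 tank:512
```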

Docker Compose…
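
Since the Compose file itself is not reproduced here, the following is a minimal stand-in using the widely used dperson/samba image; the image choice, user name, password and paths are assumptions, not taken from the video.

```bash
# Minimal Samba container inside the Docker VM.
# The -s flag follows dperson/samba's "name;path;browseable;readonly;guest;users" convention.
docker run -d --name samba \
  -p 139:139 -p 445:445 \
  -v /mnt/storage:/share \
  dperson/samba \
    -u "jim;changeme" \
    -s "share;/share;yes;no;no;jim"
```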


21 Comments

  1. This is a great topic, Jim.
    Have you thought about Cockpit and the ZFS manager module to manage snapshots, replication, etc.? Potentially Samba shares as well.
    I think it might be easier and safer in the long run.

    Great work on your homelab series. Clear and complete instructions make it a pleasure to follow your videos every time.

  2. I’ve got a dedicated NAS, so this won’t necessarily be my use-case, but it does clear up some confusion I had on the mapping of drives. Excellent tutorial. I’m going to use this as the basis for a FOG storage instance. You may want to check that project out, as it’s a pretty cool way to swap out environments for non-virtualized hardware. I use it to swap out hypervisors for testing on different hardware. It’s a no-frills but extremely useful project.

  3. I'm using an LXC container for my Samba server, passing it the full ZFS pool, and I added Webmin to set up the share points.
    Being an LXC, I just give it its own directory and use a mount point for it, rather than passing it the full drive (see the bind-mount sketch after the comments).
    I got a pair of SanDisk 960GB SSDs from Maplin and haven't had a real problem with them; I guess you have a story about yours.
    Most of my SSDs are Samsung 870 QVOs, 6 x 4TB in total, plus 2 Crucial MX500 4TB drives.

  4. As always, well explained and demonstrated.
    I'm not sure the VM + Docker is really necessary, unless you just really want to run SMB in the virtualized environment instead of directly on the Proxmox host. I've been running CIFS shares directly on my Proxmox host for a couple of years now, specifically to share OS ISO image files for easy cataloguing and sharing of all my various flavors of Linux and Windows, and the connectivity works the same as in this example (a rough host-side sketch follows after the comments). Either way, the data is shared out transparently to the target; it's just a different method of getting there. Thanks for the video!

  5. Perhaps do a follow-up to this by adding additional users with their own "folder" and permissions. Is there a way to hide specific folders from other users? (A brief per-user share sketch follows after the comments.) Also, connecting to it using an FQDN for those users outside the network.

  6. Recommending RAIDZ over mirrors as general advice is wrong. In fact, for most use-cases other than archiving, mirrors are the better choice because they deliver more IOPS. RAIDZ delivers more net capacity, but of what value is that if your VMs perform slowly? (See the pool-layout sketch after the comments.)

  7. "don't buy these – they're terrible"

    I disagree – I remove the labels using a hairdryer then keep them in a box to use as arse roll in the next pandemic. Haven't found a use for the SSD parts yet though so they just go to recycling.

  8. Thanks for the video. Just one question: why bother with Docker at all and not use TrueNAS CORE or OMV? As long as the data is protected with ZFS on the host, you can attach a disk to a TrueNAS/OMV VM and assign a single disk to the pool.

  9. Great video, I actually do it via an LXC container. What would be cool would be an iSCSI target on the same VM and having it mounted in Windows; some apps don't play well with files on a Samba share (a rough iSCSI sketch follows after the comments). Have a great weekend, can't wait for the next video 🙂 By the way, I'm getting a Minisforum MS-01 delivered in the next week or so for a dedicated firewall; I can't decide if I want to play with Sophos or OPNsense. Can I get a demo of Sophos to trial for a month or so?

  10. I don't think you can expand RAIDZ yet; it should be implemented in OpenZFS 2.3.
    When expanding mirrors you're tied to the smallest-capacity drive within the same vdev, but usually you expand a mirror pool by adding mirror vdevs to it, so you can have mixed-capacity vdevs within the same pool (see the expansion sketch after the comments).

  11. BTW, 24% wearout? Is that at all concerning to you? Have you discussed wearout info in Proxmox? I have an SSD that's reporting 14% wearout and I've been getting nervous. Seeing your 24% makes me wonder if that concern is warranted. (A quick way to check it is sketched after the comments.)
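
On the LXC approach in comment 3: a bind mount point is one line on the host. A rough sketch, where the container ID, dataset path and mount target are placeholders:

```bash
# Bind-mount a directory from the host's ZFS pool into LXC 101
# instead of passing it the whole drive
pct set 101 -mp0 /tank/share,mp=/srv/share
```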
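
On comment 4's host-level approach, a rough sketch of sharing an ISO directory straight from the Proxmox host; the share name and path are placeholders, and the commenter's actual config isn't shown:

```bash
# Install Samba on the (Debian-based) Proxmox host and publish a read-only ISO share
apt install -y samba

cat >> /etc/samba/smb.conf <<'EOF'
[isos]
   path = /tank/isos
   read only = yes
   guest ok = yes
EOF

systemctl restart smbd
```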
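
On comment 5's question about hiding folders: with plain Samba config this is usually done with `valid users` plus `browseable = no`. A sketch, with the user name and path as placeholders:

```bash
# Give one user a private share: "browseable = no" keeps it out of the share
# listing, and "valid users" blocks everyone else from connecting to it
cat >> /etc/samba/smb.conf <<'EOF'
[alice]
   path = /tank/users/alice
   valid users = alice
   browseable = no
   read only = no
EOF

systemctl reload smbd
```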
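
Comment 6's point in command form (device names are placeholders; in practice use /dev/disk/by-id paths):

```bash
# Striped mirrors: more vdevs, more IOPS (good for VM storage)
zpool create vmpool mirror sda sdb mirror sdc sdd

# RAIDZ2: more usable capacity per disk, fewer IOPS (good for archives)
zpool create archive raidz2 sde sdf sdg sdh sdi sdj
```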
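
On the iSCSI idea in comment 9, one possible route is exposing a zvol through the Linux LIO target with targetcli. The zvol name, size and IQN below are placeholders, and a Windows initiator would still need an ACL entry on the target:

```bash
# Carve a block volume out of the pool and publish it over iSCSI
# (requires the targetcli-fb package)
zfs create -V 200G tank/blockshare
targetcli /backstores/block create name=blockshare dev=/dev/zvol/tank/blockshare
targetcli /iscsi create iqn.2024-01.lan.example:blockshare
targetcli /iscsi/iqn.2024-01.lan.example:blockshare/tpg1/luns \
  create /backstores/block/blockshare
```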
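
The expansion paths comment 10 describes look like this; device names are placeholders, and the RAIDZ form needs OpenZFS 2.3 or later:

```bash
# Grow a mirror pool by adding another mirror vdev
zpool add tank mirror /dev/disk/by-id/ata-NEW_1 /dev/disk/by-id/ata-NEW_2

# RAIDZ expansion (OpenZFS 2.3+): attach one more disk to an existing raidz vdev
zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEW_3
```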
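
For the wearout question in comment 11: Proxmox reads that figure from SMART, so you can check it directly (the device path is a placeholder):

```bash
# SATA SSDs report wear-levelling attributes; NVMe drives report "Percentage Used"
smartctl -a /dev/sda | grep -iE 'wear|percentage used'
```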
