
Synology vs. TrueNAS, SHR far more Flexible than ZFS – 1362


#Synology #TrueNAS #SHR #Flexible #ZFS

“My PlayHouse”

I am expanding my Synology NAS (the RS1219+), upgrading a 10TB drive to an 18TB, as we have done so many times before. No issues.
Then I wanted to add a drive to my newly built TrueNAS setup. That did not go as smoothly, or at all.

🅿🅰🆃🆁🅴🅾🅽 :…

source

 



22 Comments

  1. Just to note: since you are running DSM 7 (which I believe you are) and have an empty bay available, you didn't need to remove the drive and give up all redundancy (DSM supports live replace, same as TrueNAS).

    Plug the new drive into the empty bay, go to drive replacement, select the old drive, then the new drive, then go. It first mirrors the old drive to the new drive; if that succeeds, it deactivates the old drive and continues with the expansion process to use the larger drive (no redundancy is lost while all this is going on). If the mirror fails, it falls back to a parity rebuild.

    As I said, TrueNAS supports the same live replace feature, so it is best to always have an empty drive bay (or hot spare) available on both TrueNAS and Synology. The ZFS side of it is sketched below.
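    For reference, the ZFS live replace is a single command. A minimal sketch, assuming a pool named "tank" and illustrative device names:

    ```sh
    # With the new disk in a spare bay, replace in place: the pool keeps
    # its redundancy while it resilvers onto the new disk.
    zpool replace tank /dev/sdd /dev/sde   # old device, then new device
    zpool status tank                      # watch resilver progress
    ```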

  2. Remember all that nonsense years ago about HDD capacity being falsely marketed, and the "terabyte" vs. "tebibyte" debate? Here we are years later: an 18 TB drive has 16.4 TiB of capacity…
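    The arithmetic checks out; a quick sketch of the decimal-to-binary conversion:

    ```sh
    # 18 TB as marketed (decimal bytes) expressed in binary TiB
    echo 'scale=2; 18 * 10^12 / 2^40' | bc
    # 16.37
    ```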

  3. TrueNAS in itself is more blue sky / open sky, so there will be limitations. But what if your Synology was bought out and went to a subscription-based system, with a monthly fee starting at a $30 base licence plus $3.20 per TB per unit?

    Would you be so eaten up about limitations then?

    We are now in the era of non-ownership: rent everything.

  4. I don't think you can add those 10TB drives back into your SHR array anymore – the drive you want to add must be equal to or larger than the largest drive in the storage pool, or equal to one of the drives already in it. You could add them as a second, separate storage pool. If you wanted them all in one big storage pool, you should have added the new drives rather than replacing the old ones. The only way to get them back in is to copy the data off, start again, and copy the data back.

  5. It's weird that you couldn't just Google and learn the limitations of ZFS instead of making a video of yourself appearing to have no idea about ZFS.

    In conclusion: no, you cannot just expand ZFS in place. You have to either create a new vdev (mirror, raidz, etc.) and add it to the existing pool, or replace every drive in the pool one at a time, after which it will expand to the larger size. Both options are sketched below.
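    A minimal sketch of both options, assuming a pool named "tank" and illustrative device names:

    ```sh
    # Option 1: grow the pool by adding a whole new vdev
    zpool add tank raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

    # Option 2: replace every disk in the vdev one at a time; the extra
    # space only appears once the last disk has been replaced
    zpool set autoexpand=on tank
    zpool replace tank /dev/sdb /dev/sdj
    # ...wait for the resilver, then repeat for each remaining disk
    ```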

  6. 17:03 The pool dies with the LOG drive! The cache is not a problem.
    I added a mirrored special vdev on NVMe to my pool in December, running on Proxmox. You need to know the limitations – there's a post on the Level1Techs forum. It should be possible on TrueNAS SCALE too, as it uses ZFS on Linux just like Proxmox.
    SHR is nothing special, just Linux mdadm with ext4 or btrfs on top. They describe it in their knowledge base. It's basically several Linux software RAID arrays, depending on the HDD layout, with a RAID 0 on top of them all.

    One big advantage of ZFS is rebuild time: ZFS only needs to rebuild the used data, while mdadm (and therefore Synology) must rebuild the whole HDD.
    tl;dr: if you use 1GB on a 20TB HDD with ZFS and swap the HDD, only 1GB of data needs to be rebuilt. If you do the same with mdadm, all 20TB need to be rebuilt. (The vdev types are sketched below.)
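    For reference, the vdev types mentioned above, plus a peek at SHR's mdadm layers. A sketch assuming a pool named "tank" and illustrative NVMe device names:

    ```sh
    # SLOG: mirror it, or risk losing in-flight sync writes (and on old
    # pool versions, the whole pool) if the device dies
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

    # L2ARC: read cache only, safe to lose, no mirror needed
    zpool add tank cache /dev/nvme2n1

    # Special vdev: holds metadata; the pool is lost with it, so mirror it
    zpool add tank special mirror /dev/nvme3n1 /dev/nvme4n1

    # On a Synology box, SHR's stacked mdadm arrays are visible with:
    cat /proc/mdstat
    ```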

  7. I would say just use Debian and RAID 10 arrays, plus better networking – networking is the weak link for most SMBs and prosumers/homelabbers. For a good NAS you need tons of RAM, good caching layers, and big arrays, plus fast networking, and optimally a dual NAS. With 40G (actually 56G for 40GbE gear) you can do it without a switch. A minimal sketch:
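    A sketch of that Debian setup, with device names and mount point assumed:

    ```sh
    # Four-disk RAID 10 with mdadm, formatted and mounted
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/storage
    ```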

  8. For expanding storage like you describe, for home usage you have two options: UNRAID (paid), where all of the tools are built in, or OpenMediaVault + SnapRAID + MergerFS (free), where you have to set it all up yourself. It's not so complicated once you watch a couple of YT videos. So maybe look at that solution and make some YT videos about it as well. A rough sketch of the config follows.
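    A rough sketch of the SnapRAID + MergerFS side, with all paths assumed:

    ```sh
    # /etc/snapraid.conf
    #   parity  /mnt/parity1/snapraid.parity
    #   content /var/snapraid/snapraid.content
    #   data d1 /mnt/disk1
    #   data d2 /mnt/disk2

    # /etc/fstab entry pooling the data disks into one MergerFS mount
    #   /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

    # Recompute parity after files change
    snapraid sync
    ```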

  9. I'd love to get the stability of ZFS and the features of BTRFS: adding disks one at a time while BTRFS figures out how to maintain the specified redundancy. Re-balancing of pools and the ability to remove a drive from the pool would be great for us home users.
    But after a nasty experience with bit-rot on HW RAID many years ago, my data will never again reside on a volume without checksum protection, so for me it'll be ZFS for a while longer. One might be forgiven for believing that backups will save us from that problem, but the thing is, after bit-rot has occurred you are now backing up a corrupt file. Scrubbing (below) is the guard against that.
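    For reference, a sketch of the scrub routine, assuming a pool named "tank":

    ```sh
    # A scrub reads every block, verifies checksums, and repairs from
    # redundancy before corrupt data can make it into a backup
    zpool scrub tank
    zpool status -v tank   # lists any files with unrecoverable errors
    ```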

  10. What you describe (adding larger disks to increase your pool size one at a time, where it won't recognize the larger size until ALL drives are the same) is all I have ever done with TrueNAS – you really have to have a plan when rolling it out. I've had to rebuild pools because of it. I am sure someone out there can explain why this is how it is, but that ain't me! Great video, thanks!

  11. The only downside that I can see with SHR is that you need a Synology box to use it. You can't pull it out of a dead box and expect any RAID card to get the data back (but see the note below). Throw it into a new Synology box, though, and it will just find the volume.

    I am running SHR-2. Coincidentally, it was researching Synology that brought me to your channel.

    I started off with four 4TB WD Red drives and expanded to eight 8TB drives – a mixture of shingled and non-shingled WD Red and Seagate IronWolf. Now I have six 12TB WD Gold and two 8TB IronWolf, which are the next to be replaced.

    Expanding has always been just a few clicks, and set it running. Never missed a beat.

    I have found that it is better to deactivate a drive before pulling it, though. One of the new drives was DOA, so I had to put the old one back while I waited for a replacement. Deactivating a drive rebuilds back to the old configuration quicker than just ripping one out – less stress on the other drives.
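    On the "dead box" point: since SHR is mdadm + LVM underneath (as comment 6 notes), a plain Linux machine can often assemble it. A sketch, with the logical volume path assumed:

    ```sh
    # Assemble all mdadm arrays found on the disks, then activate LVM
    mdadm -Asf && vgchange -ay
    # The SHR volume should then mount normally, e.g.:
    mount /dev/vg1000/lv /mnt/recovered
    ```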

  12. Indeed, this is a shortcoming of ZFS, but I think you are in a comfortable situation, as you have a lot of drives you can use. So creating a new, bigger vdev and replicating the pool over should be possible and easy to do – sketched below.
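    A sketch of that replication, assuming pools named "tank" and "newtank":

    ```sh
    # Snapshot everything recursively, then send the whole tree over
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F newtank
    ```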

  13. Have you considered when to use RAID and when not to? To me, RAID in a home environment is a waste of resources, as downtime isn't the same kind of deal-breaker as it is in a corporate setting. You have several backups, so why not run the main storage server as RAID for that "corporate feel" but run the backups as JBODs? Have a look at something like MergerFS if you want your JBODs presented as a single volume – it's easy and expandable.

  14. I've been surprised ZFS is so popular with this limitation. I've also made use of SHR's gradual growth and loved the experience. I don't see the attraction until ZFS supports this functionality.

  15. Maybe you could save some power by removing those built-in laptops from the servers? Don't they consume some power even in standby/sleep mode?

    Second, invest in a good SSD and use it as a cache instead of manually writing to HDD. An SSD takes less power.

  16. This is the reason I do not like ZFS. I prefer btrfs, which can handle differently sized drives.

    In my eyes, ZFS is for when money is not an issue and you can change all drives at the same time. For home use I want to expand one drive at a time – the btrfs flow is sketched below.
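    For comparison, the btrfs one-drive-at-a-time flow. A sketch, with device and mount point assumed:

    ```sh
    # Add a disk of any size to an existing btrfs filesystem...
    btrfs device add /dev/sdf /mnt/pool
    # ...then rebalance so data is spread across all devices
    btrfs balance start /mnt/pool
    ```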
