ZFS Essentials: Auto-Converting Folders to Datasets on Unraid
“Spaceinvader One”
This video demonstrates an automated script that takes the pain out of converting folders to datasets. By scheduling the script to run, you can ensure that all new containers on Unraid are effortlessly converted, making it perfect for tasks like snapshots and data organization. Watch this…
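For readers wondering what such a conversion involves under the hood, here is a minimal sketch of the general folder-to-dataset approach. This is not the actual script from the video; the function names, pool name (`cache`), and share name (`appdata`) are illustrative assumptions.

```shell
#!/bin/bash
# Sketch of converting each folder under /mnt/cache/appdata into its own
# ZFS dataset: move the folder aside, create a dataset at the old path,
# copy the data back, then remove the temporary copy.

# Map a mount path to its ZFS dataset name, e.g.
# /mnt/cache/appdata/binhex-plex -> cache/appdata/binhex-plex
dataset_for() {
  local folder_path="$1"
  echo "${folder_path#/mnt/}"
}

convert_folder() {
  local dir="$1"
  local ds
  ds="$(dataset_for "$dir")"
  # Skip anything that is already a dataset
  zfs list "$ds" >/dev/null 2>&1 && return 0
  mv "$dir" "${dir}_tmp"          # move the folder's contents aside
  zfs create "$ds"                # create a dataset at the original path
  rsync -a "${dir}_tmp/" "${dir}/"  # copy data back, preserving perms/owners
  rm -rf "${dir}_tmp"             # clean up the temporary copy
}

# Only touch a real pool when the zfs tool is actually present.
if command -v zfs >/dev/null 2>&1; then
  for d in /mnt/cache/appdata/*/; do
    convert_folder "${d%/}"
  done
fi
```

The `rsync -a` flag matters here: it preserves ownership and permissions, which is exactly what goes wrong when data is copied into a manually created dataset with plain `cp`.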
Awesome work, thanks as always. I ran the script on the appdata folder and everything worked. I did the same on the domains folder (three VMs: Home Assistant, RetroNAS, Windows 98), but only RetroNAS shows as converted; the others are not shown in ZFS Master. The script runs through without errors, and zfs list shows that all the VMs are datasets too, so I believe it's a ZFS Master bug.
Script v2 always gives me this error: "Skipping folder /mnt/cache/appdata/XXXXX due to insufficient space". Any idea why?
Can this be utilized for backing up an HDD that's in a ZFS array?
I have an additional HDD that I set up outside of my array, and I'd like to make a snapshot of my entire array drive to it if possible.
Is there any way to revert back? This script messed up my appdata permissions.
Heads up: the current version 2 of the script (as of 14th Sept 2023) has a bug for a few of us where it spits out an insufficient-space error. Version 1 is still working fine though.
I'm in absolute awe of your generosity @Spaceinvader One!!!
I have a fair idea just how much time you give to help others in your detailed videos… We are all blessed… Thank you for all that you do!!!
The script works, and every Docker container is its own dataset.
But on the array, I have no datasets.
I know why I can't use the 'folder to dataset' script there, but not what I should do now.
Can't wait for the next video for auto snap shot 🙂
Quick question: how do you set your view once you click on the contents of the pool/share? Instead of the home icon and cache, I have "Index of /mnt/cache" and don't get nearly as many details. I only have Size, Last modified and Location.
The background music – lets dance 🎉🎉🕺💃🏼
@spaceinvaderone thanks for the AMAZING content. Question: is it safe to convert the array folders to datasets?
I converted all of my array disks to ZFS as you instructed in another video and everything is running smoothly.
Also, I converted the cache as instructed. Is it safe to do the array, or is that another topic for another video?
Hi SI1, I see you have your "appdata" folder separate from your system folder on your cache drive. On my cache, my appdata default location points to my "system" folder. Is that a good thing? Should they be separated? Your advice would be helpful. Love the videos.
Thanks for your content as always, Ed! It might just be my experience, but manually creating child datasets via ZFS Master resulted in an owner of 'root' and incorrect permissions. I guess this is an FYI for anyone else not using the script shown and going manual — I had to troubleshoot this for several Docker containers that could not write to their appdata, and it was fixed by updating ownership back to nobody and permissions back to drwxrwxrwx.
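The fix the commenter describes can be sketched as a small helper. This is an assumption-laden example, not part of the video's script: `fix_perms` is a hypothetical name, `nobody:users` is Unraid's usual default share owner, and `777` matches the `drwxrwxrwx` mode the commenter restored (note that `-R 777` also marks regular files executable, which is permissive but is Unraid's common default for shares).

```shell
#!/bin/bash
# Restore Unraid-style ownership and permissions on a dataset's mount point
# after a manual dataset creation left it owned by root.
fix_perms() {
  local dir="$1"
  # nobody:users is Unraid's default share owner; this may fail on other
  # systems, so don't let it abort the rest of the fix.
  chown -R nobody:users "$dir" 2>/dev/null || true
  chmod -R 777 "$dir"   # drwxrwxrwx, as restored in the comment above
}

# Example (path is illustrative):
# fix_perms /mnt/cache/appdata/binhex-delugevpn
```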
Am I right in saying that after converting appdata folders to datasets, if we delete a Docker container in the UI, now we'll also have to remember to destroy the appdata folder manually/separately?
I have 4 folders under appdata, and the last one failed to clean up after running your script. It did create the dataset, and somehow the source total size shown in the log file is actually smaller than the actual size shown in file manager. Here is the error in the log:
Validating copy…
Validation failed. Source and destination file count or total size do not match.
Source files: 587, Destination files: 587
Source total size: 27948074, Destination total size: 32198443
Any ideas? Thanks!
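One plausible reading of a mismatch like this (file counts equal, byte totals different) is that the two sides were measured as allocated size rather than logical size: allocated size on ZFS varies with recordsize, compression, and metadata overhead, so it will rarely match another filesystem byte-for-byte. A validation based on apparent (logical) file size is stable across filesystems. The helper below is a hedged sketch of that idea, not the script's actual validation code; it relies on GNU `find`'s `-printf`.

```shell
#!/bin/bash
# Report "<file count> <total apparent bytes>" for a directory tree.
# Apparent size sums each file's logical length, so it is comparable
# across filesystems, unlike block-allocated size.
count_and_size() {
  local dir="$1"
  local files bytes
  files=$(find "$dir" -type f | wc -l)
  bytes=$(find "$dir" -type f -printf '%s\n' | awk '{s+=$1} END {print s+0}')
  echo "$files $bytes"
}

# Example usage (paths are illustrative):
# [ "$(count_and_size /mnt/cache/appdata/foo_tmp)" = "$(count_and_size /mnt/cache/appdata/foo)" ]
```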
How did you enable all the buttons in > user > appdata (Jobs, Search, Delete, Copy, Rename…)? I only see "Index of /mnt/user/appdata" and no buttons. I am a newbie here — thanks much.
Excellent video as usual. Though I can't understand why these functions aren't present natively.
This was a great video, and everything went very well, except for one thing: my binhex-delugevpn container no longer works. The Docker settings are good, nothing changed there. The WebUI opens, but the preferences are blank and the plugins are gone. I even removed it and reinstalled it using the template — same issue. Can't figure out what happened. Everything else works fine: Sonarr, Jackett, Handbrake, Calibre. No issues at all.
I have a large amount of data for VM and for Appdata, I keep it on a drive other than my Cache drive mounted using Unassigned Devices. Is it possible to convert drives mounted this way to explore the benefits of ZFS and use Snapshots? Currently I don't get things like my Plex metadata backed up as it's too difficult with so many small files but a snapshot would be perfect! How much disk space do snapshots done like this take up? Stored on the same device correct? If you lose the device this isn't a true backup right – can the ZFS snapshot be done to a second device mounted with Unassigned Devices? Maybe even a remote share? Hmm!
Awesome! was wondering how to do this. cheers for the effort in the scripts!
What if I don't have any datasets yet but want to convert all folders in the ZFS pool to datasets?
EDIT: You just have to change source_path="${source_pool}/${source_dataset}" to source_path="${source_pool}" in the script.
from my point of view this script is also very useful to convert any subfolder structure on another zpool – or am I wrong?
Great Video, Ed – as always!
Awesome tutorial, it works fine at first try!
When moving an appdata folder back from the array to the new cache pool, can we manually create the datasets making their names match? This would save time in having the mover script do the work after the fact. Will this confuse the FUSE system in any way?
Where will the snapshot be saved? Is it in the same cache pool?
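To the question above: ZFS snapshots are stored inside the same pool as the dataset they capture, consuming space only as blocks diverge from the snapshot, so a snapshot alone is not an off-device backup. A sketch of the basic commands, with an example naming scheme (`cache/appdata` and `backup/appdata` are illustrative names, not from the video):

```shell
#!/bin/bash
# Take a snapshot and, optionally, replicate it to a second pool.
snap="daily-$(date +%F)"   # e.g. daily-2023-09-14; naming scheme is an example

if command -v zfs >/dev/null 2>&1; then
  # The snapshot lives in the same pool as the dataset itself
  zfs snapshot "cache/appdata@${snap}"
  zfs list -t snapshot -r cache/appdata

  # For a true backup, replicate the snapshot to a different pool/device:
  # zfs send "cache/appdata@${snap}" | zfs recv "backup/appdata"
fi
```

Replication via `zfs send | zfs recv` is what turns a same-pool snapshot into a real backup on a second device, which also answers the earlier question about snapshotting to a separately mounted disk.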