Curious to get some feedback on any caveats (both architectural and operational) of using large disks that people may have run into. Environment info: vSphere 6.5, Zerto for DR (vSphere to vSphere), Commvault to back up our VMs, and either Windows Server 2016 or 2019 for the VM OSes.
We have a file server migration project where some of the existing shares are 20TB (they are currently on a NAS) and growing at a rate of about 5-6TB/year. In the past, we used 20TB datastores and created 4-7TB virtual disks for each share, breaking up the share content to “fit” into the smaller disks. Unfortunately, managing all of those smaller disks became fairly labor-intensive. To avoid those issues this time around, I am thinking of creating 64TB datastores with a single 60TB VMDK per datastore.
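For reference, here's the rough headroom math behind the 60TB figure (a quick Python sketch using the numbers above; the disk and share sizes are just the ones from this post):

```python
# Rough headroom estimate for a single 60TB VMDK holding a 20TB share,
# based on the growth figures above (5-6TB/year). Purely illustrative.

VMDK_CAPACITY_TB = 60.0          # proposed virtual disk size
CURRENT_SHARE_TB = 20.0          # current share size on the NAS
GROWTH_TB_PER_YEAR = (5.0, 6.0)  # stated annual growth range

for growth in GROWTH_TB_PER_YEAR:
    years = (VMDK_CAPACITY_TB - CURRENT_SHARE_TB) / growth
    print(f"At {growth:.0f}TB/year, the 60TB disk fills in ~{years:.1f} years")
```

So a single 60TB disk should hold roughly 6.5-8 years of growth before the disk (and the datastore behind it) needs attention again.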
I’m looking for any positive or negative feedback on this idea, along with lessons learned from anyone who has done or is currently doing this. I can think of some caveats myself, but I’m interested to hear what this sub thinks.