Solution for providing a ~120TB archive share

Hello,

We have a NetApp FAS with around 200TB raw capacity. The goal is to provide a ~120TB archive share from a Windows server.
So the plan was to create two 60TB VMware VMDKs and attach them to a Windows VM.

But at this point I run into the 16TB limit that the NetApp filesystem (WAFL) imposes on single files. This prevents me from creating bigger VMDKs, as each one is stored as a single file and therefore cannot grow beyond 16TB.

So I am wondering if there is a way to split VMDKs over multiple files so they can grow up to 64TB?

Which solutions would you suggest for providing these file shares?

Thank you in advance!


Comments

  1. Why can’t you share directly from the NetApp, rather than create multiple layers of unnecessary complexity that will cause your successors to curse you after you move on?

    NetApp does NAS very well and has supported SMB 3.x for some time now.

  2. Been a while since looking at NetApp, so I might be missing something… Can ya not just create an SMB share directly from the NetApp?

    Failing that… If ya need it to be via a Windows VM: create 8x 15TB VMDKs and attach them to the Windows VM, ideally spread across 4 pvscsi adapters for more IO queues.

    From Windows, create a storage pool and add all the disks to it. Then create a virtual disk and volume using the simple/striped layout (i.e. not parity or mirror) for better performance, and set the column count to 4 (you’ll need to add 4 disks at a time if you ever want to expand this). You could do 2 columns, but that means you’re only striping the data over 2 VMDKs rather than 4, so less potential throughput (see the sketch at the end of this comment).

    If you don’t want to use Storage Spaces, you can do something similar with dynamic disks, but I’m not sure about support for that many disks or the ability to extend them in future.
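
    A minimal PowerShell sketch of the Storage Spaces part, assuming the eight blank VMDK-backed disks are already visible and poolable inside the guest (pool, disk and volume names here are just placeholders):

    ```powershell
    # Collect the blank VMDK-backed disks that are eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true

    # Create one storage pool from all eight disks
    New-StoragePool -FriendlyName "ArchivePool" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
        -PhysicalDisks $disks

    # Simple (striped) virtual disk, 4 columns, using all available capacity
    New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" `
        -FriendlyName "ArchiveVDisk" `
        -ResiliencySettingName Simple `
        -NumberOfColumns 4 `
        -ProvisioningType Fixed `
        -UseMaximumSize

    # Initialize, partition and format the resulting large disk as NTFS;
    # 64K clusters comfortably cover a volume well beyond 16TB
    Get-VirtualDisk -FriendlyName "ArchiveVDisk" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "Archive"
    ```

    Because the layout is simple (no resiliency in the guest), the NetApp aggregate underneath is the only protection, and growing the volume later means adding another full set of 4 equally sized VMDKs.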

  3. As others have said, at this point you really want to serve the files out of the NetApp directly. If you must, you can run it through a Windows VM, but most people buy NetApps because they’re really good at file services.

    They’re not bad at block either, just some odd limitations around them due to how they’re built to be mostly file servers.

    Doing it with a million VMDKs and bonding it all together in storage spaces is a nightmare waiting to happen.

  4. Since it is used for archiving, why don’t you provide those shares through the FAS and use Windows DFS for easier management?

    With that you can also use SnapLock to ensure no changes are made to the archived files, and you don’t have to worry about the 16TB file limit.
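
    If you go that route, a DFS namespace is just a thin layer of pointers at the shares the FAS serves out. A rough PowerShell sketch, assuming a domain-based namespace and made-up server/SVM/share names:

    ```powershell
    # Requires the DFS Namespaces feature / DFSN RSAT module on the Windows server.
    # Create a domain-based namespace (the root target share must already exist).
    New-DfsnRoot -Path "\\corp.example.com\archive" `
        -TargetPath "\\winserver01\archive" `
        -Type DomainV2

    # Point a namespace folder at the SMB share served by the NetApp SVM
    New-DfsnFolder -Path "\\corp.example.com\archive\projects" `
        -TargetPath "\\netapp-svm01\archive_projects"
    ```

    Clients only ever see the namespace path, so the underlying FAS volumes and shares can be moved or restructured later without reconfiguring anything on the client side.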

  5. Also,
    >But at this point I run into the 16TB limit that the NetApp filesystem (WAFL) imposes on single files. This prevents me from creating bigger VMDKs, as each one is stored as a single file and therefore cannot grow beyond 16TB.

    The 16TB limitation is the reason why my company said goodbye to NetApp. *IN 2013.*

    I’m pretty sure that the hard limit is much higher now with their newer architecture.

    But if your leadership went out and

    1) purchased a really, really expensive NAS platform that was then bastardized to only serve SAN protocols (hopefully this is not true, and you can use it as a NAS for your Windows shares like God intended),

    2) picked a solution that can’t serve out a LUN larger than 16TB – seriously, my crappy 6 year old VNX can do better than that, and

    3) then expects you to add a bunch of layers of complexity to cram it into VMware and a Windows guest and make it kind of act like a NAS, which seems to be the primary objective of even having this NetApp to begin with….

    The whole lot of them need to be fired for gross incompetence.

    I really feel for you. This is making my head hurt.

  6. Do you have the requisite licensing in ONTAP for iSCSI? Present the volume as an iSCSI target to your Windows VM. ONTAP also supports CIFS and SMB with an appropriate license. I’m not sure whether having a layer of abstraction with a VMFS volume is a hard requirement?
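
    A quick guest-side sketch in PowerShell, assuming the LUN is already mapped to the VM’s initiator (the portal address is a placeholder, and the per-LUN size ceiling mentioned above still applies, so a single huge LUN may not be possible anyway):

    ```powershell
    # Make sure the software iSCSI initiator service runs and starts with the VM
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Register the ONTAP iSCSI data LIF as a target portal (placeholder address)
    New-IscsiTargetPortal -TargetPortalAddress "192.0.2.10"

    # Discover and connect to the target, persisting the connection across reboots
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # Any mapped LUNs now show up as ordinary disks
    Get-Disk | Where-Object BusType -eq "iSCSI"
    ```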
