
10Gb NAS or 4x1Gb “SAN” for backup storage?

We are setting up a new VMware two-host SDDC with shared 12Gb SAS storage at a branch office. To keep costs down we are recycling an older HPE DL320 (for which we have an abundance of spare parts) as backup storage, running FreeNAS.

My goal is to maximize throughput to/from the backup server without killing the production network. I am not too concerned about how long the backup process takes, as the amount of data is pretty small (the datastore holds about 6 TB, most of it fairly static), but I *am* concerned about how long disaster recovery will take.

The hosts connect to the network with redundant 10Gb links and each have four unused 1Gb ports. The DL320 also has four 1Gb ports, and upgrading it with 10Gb NICs is probably not a good option, as it would require a new switch or the creation of a huge link aggregation group between two switches.

I am wondering whether it is possible/smart to create an iSCSI SAN to which the hosts' four 1Gb NICs are connected, along with the DL320's four 1Gb NICs. Can this be done in such a way that we can actually expect 4Gb throughput? The simple alternative is to connect the DL320 to the 10Gb backbone (using NFS or iSCSI), but as I understand it, in that scenario multiple 1Gb NICs would gain us nothing but redundancy.
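If it helps, this is roughly what I have in mind on the ESXi side: one vmkernel port per 1Gb NIC, each pinned to a single uplink and bound to the software iSCSI initiator. The vmnic/vmk names and addressing below are just placeholders for illustration:

    # Dedicated standard vSwitch for iSCSI using the four spare 1Gb uplinks
    esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
    for nic in vmnic4 vmnic5 vmnic6 vmnic7; do
      esxcli network vswitch standard uplink add \
        --vswitch-name=vSwitch-iSCSI --uplink-name=$nic
    done

    # One port group + vmkernel port per uplink; iSCSI port binding requires
    # each vmkernel port to have exactly one active uplink
    esxcli network vswitch standard portgroup add \
      --vswitch-name=vSwitch-iSCSI --portgroup-name=iSCSI-1
    esxcli network vswitch standard portgroup policy failover set \
      --portgroup-name=iSCSI-1 --active-uplinks=vmnic4
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 \
      --ipv4=10.99.1.11 --netmask=255.255.255.0 --type=static
    # ...repeat for iSCSI-2/vmk2/vmnic5 through iSCSI-4/vmk4/vmnic7...

    # Bind each vmkernel port to the software iSCSI adapter
    # (the vmhba number varies; check with: esxcli iscsi adapter list)
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1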

Down which road should I travel to maximize speed?


One Comment

  1. For storage you’d want to do MPIO (Round Robin) rather than an LACP bond to get up to the combined line rate of all the network cards; see the sketch below.

    Should be doable. There are lots of guides on the internet, as multi-1Gb storage setups were pretty common in the early 2010s.
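    A minimal sketch of the MPIO side on ESXi, assuming the software iSCSI initiator is already connected to the FreeNAS target (the naa. device ID is a placeholder). Round Robin plus a lowered IOPS-per-path limit is what lets a single backup stream spread across all four links:

        # Find the FreeNAS LUN's device identifier
        esxcli storage nmp device list

        # Switch the LUN's path selection policy to Round Robin
        esxcli storage nmp device set \
          --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

        # By default Round Robin changes paths every 1000 IOPS; lowering the
        # limit to 1 interleaves commands across all four active paths
        esxcli storage nmp psp roundrobin deviceconfig set \
          --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1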
