
New setup. Should I consider vVols? Any simpler method than iSCSI?

Hello /r/vmware,

I recently got promoted and I am now in charge of our VMware environment. I just got out of the VMware class **VMware vSphere: Install, Configure, Manage** and I now have to set up our new cluster to replace our old one.

Here is the hardware I have:

* Servers: 3x R640
* SAN: SCv3020
* License: Standard
* Switches: 2x S4148F-ON with VLT (not stacked)

I also have a vCenter appliance.

Everything will be connected using 10G SFP+ copper connections through the SFP+ switches.

Now, I will start by stating that I have good knowledge of networking, but not so much about storage. The preferred method appears to be iSCSI. While it is something I could set up, I would like to avoid the hassle of the complex fault domain setup. I was hoping to use vVols, but I just figured out that it still relies on you setting up the connection to the SAN.

I am sure you guys are used to setting that up, but really, is there any method that doesn’t require building 2 fault domain networks, 500 IP addresses, labeling a ton of cables, writing complex documentation and eating half of my switch ports for storage?

Also, is vVols still a thing? Should I consider it?

Thanks for your time!




Comments

  1. **iSCSI** would be the way to go for your current design. It may be a bit tricky if you are sharing the 10Gb NICs between storage and VM traffic under high IOPS and high network utilization… but if that is the case, your SAN might be underpowered. BUT, if that becomes an issue, you can add additional 10Gb NICs and physically separate the traffic.
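
    If you do go iSCSI, the per-host setup is also scriptable. Below is a minimal pyVmomi sketch, not a definitive procedure: it assumes two VMkernel ports (vmk1/vmk2, one per fault domain) already exist, and every host name, credential and IP in it is a placeholder.

    ```python
    """Sketch: enable software iSCSI on one ESXi host, bind one vmk per
    fault domain, and add the array's discovery portals. Placeholders only."""
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certs in production
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="CHANGE-ME", sslContext=ctx)
    content = si.RetrieveContent()

    # Look up the target host by name (placeholder).
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.local")

    storage = host.configManager.storageSystem
    storage.UpdateSoftwareInternetScsiEnabled(True)  # turn on the software iSCSI HBA

    # Find the software iSCSI adapter (vmhbaXX).
    hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba))

    # Bind one VMkernel port per fault domain so each path stays on its own subnet.
    for vmk in ("vmk1", "vmk2"):
        host.configManager.iscsiManager.BindVnic(iScsiHbaName=hba.device,
                                                 vnicDevice=vmk)

    # Dynamic discovery: one portal per fault domain (placeholder addresses).
    targets = [vim.host.InternetScsiHba.SendTarget(address=ip, port=3260)
               for ip in ("10.10.1.10", "10.10.2.10")]
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=targets)
    storage.RescanAllHba()
    Disconnect(si)
    ```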

    In general, there are other options for shared storage. Not necessarily recommendations, but options.

    **NFS**. It does not appear that the SCv3020 supports it, so that’s a no-go.

    **Fibre Channel**. It appears that the SCv3020 supports up to 8 x 16Gb ports; I assume that is 4 per controller. If you drop 2 FC HBAs into each server, you’ll have three going to one controller and three going to the other for redundancy (see the quick port math at the end of this comment).

    Advantages:

    Your SAN and Network traffic is separate.

    You are not burning 10Gb ports on your switch (not a huge advantage these days).

    Direct connect to a SAN is not at all complicated.

    Disadvantages:

    Your SCv3020 might not have all 8 FC ports; you’d need to check.

    If you want to add a 5th host, you’ll need to look at a couple of FC switches.

    You’ll need to add FC HBAs to your hosts.
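
    As a sanity check on that port math, a quick sketch (assuming all 8 ports are actually installed):

    ```python
    # Direct-attach FC port budget for the SCv3020 (assumes all 8 ports present).
    ports_per_controller = 4  # 8 x 16Gb ports split across 2 controllers
    controllers = 2
    hbas_per_host = 2         # one path to each controller for redundancy

    total_ports = ports_per_controller * controllers  # 8
    max_direct_hosts = total_ports // hbas_per_host   # 4
    hosts = 3
    print(f"Using {hosts * hbas_per_host} of {total_ports} ports; "
          f"room for {max_direct_hosts - hosts} more host before FC switches are needed.")
    ```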

  2. iSCSI is the way to go. Please do read the deployment guides for the SC series. Everything you need to know is in there: setting up fault domains, etc.
    When you contact support, and at some point in its lifetime you will be contacting support, they will ask you: have you read the deployment guides?
    Ask your sales rep for more info on the guidelines; maybe whoever made the design still has the drawings for it.

  3. While it is a perfectly fine storage array, the SCv3020 lacks some features that would make a VVol deployment interesting. Unlike some more “advanced” arrays like Pure, which have their VASA provider built into the array, you would need a separate virtual appliance to run the VVol integration.

    In my experience with the SC series, you’re better off using it as either iSCSI or FC. The Fault Domain configuration isn’t really complicated; we normally bill around 4 hours for a basic deployment like this.

  4. iSCSI would need, at a minimum, a single connection between the SAN and the switch, a connection between each host and the switch and a private subnet for IP connectivity (sufficiently large to accommodate the SAN and the three hosts, perhaps somewhat larger to allow future growth). If you want some redundancy, the SAN and the hosts would have to be connected to both switches (2 links each) and need 2 different subnets for IP connectivity. So, that’s like 4-8 cables, 4-8 switch ports and 4-8 IP addresses. 500 IPs, a ton of cables and complex documentation is not required.
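
    To make that concrete, here is a small stdlib-only Python sketch of what the addressing plan could look like (the subnet sizes and ranges are placeholders, not a recommendation):

    ```python
    import ipaddress

    # One small private subnet per fault domain (placeholder ranges).
    fault_domains = {
        "FD1": ipaddress.ip_network("10.10.1.0/28"),
        "FD2": ipaddress.ip_network("10.10.2.0/28"),
    }

    hosts = 3      # ESXi hosts, one VMkernel port per fault domain each
    san_ports = 2  # SAN front-end ports per fault domain (one per controller)

    for name, net in fault_domains.items():
        needed = hosts + san_ports
        usable = net.num_addresses - 2  # minus network and broadcast addresses
        print(f"{name}: {net} -> need {needed} IPs, {usable} usable, "
              f"{usable - needed} spare for growth")
    ```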
