
New LUN per VM???

My company recently purchased another company and earlier this year I “inherited” their infrastructure. Amongst the new devices is a 3Par SAN. Upon looking at their vCenter, I noticed they had created a new LUN/datastore for every single individual VM. Like, every one of them. Hundreds…

…does anyone else do this? Is this common behavior and I’m just missing something? Because it seems absolutely mental to me. I’m super curious about how others carve out their LUNs for VMs. I just have a bunch of 4TB LUNs and let DRS do its thing.


Originally posted on Reddit by Djaesthetic.


16 Comments

  1. People used to do this. They didn’t understand what they were doing and made these messes; I used to clean them up all the time. Occasionally it was because they wanted to replicate VMs one by one using array-based replication. A mess.

  2. 3PAR Adaptive Optimization doesn’t work well with Storage DRS; the Best Practices Guide explicitly says not to use SDRS with it.

    This is an extreme case, but I can kind of see the appeal. It makes tracking performance issues and capacity utilization easier, and it simplifies troubleshooting (assuming they use consistent LUN IDs across hosts instead of the 3PAR “Auto” setting). That being said, depending on the version, they might be getting close to the LUN maximums of ESXi.

    3PAR InForm OS prior to 3.3.1 has a 16TB LUN limit, so if you have monster VMs, you may have to resort to this sort of layout anyway. There are also recommendations on the maximum number of VMs per VMFS volume for performance and locking reasons, which this design neatly sidesteps.

    I’m not saying it’s the design I’d choose, but there are definitely reasons to do it this way, and I’d make sure I’ve looked into it fully before blindly barreling forward on a new design.
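
    A rough sketch of that headroom math, as a sanity check (the device limit below is an assumption that varies by ESXi version, so check the configuration maximums for your release):

        # Headroom check for a one-LUN-per-VM design. The limit below is an
        # assumption: older ESXi releases supported roughly 256 SAN devices
        # per host, and newer releases raised it considerably.
        MAX_DEVICES_PER_HOST = 256   # assumed per-host LUN/device limit
        vm_count = 300               # hypothetical inventory size
        luns_needed = vm_count       # one datastore per VM, zoned to every host

        print(f"LUNs needed: {luns_needed}, per-host limit: {MAX_DEVICES_PER_HOST}")
        if luns_needed > MAX_DEVICES_PER_HOST:
            print("Over the per-host device limit; the design cannot grow further.")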

  3. One reason to do things like this is that the queue depth per LUN is 128, IIRC. So, in theory, one VM could monopolize the queue for a given LUN if it were shared among a lot of VMs.
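
    To put numbers on that, a minimal sketch (128 is the figure from this comment; the real per-device queue depth depends on the HBA driver and Disk.SchedNumReqOutstanding):

        # Fair-share arithmetic for a shared LUN queue: the more VMs on a
        # LUN, the fewer outstanding I/Os each one gets before queueing.
        LUN_QUEUE_DEPTH = 128  # assumed per-LUN queue depth

        for vms_on_lun in (1, 8, 32, 64):
            fair_share = LUN_QUEUE_DEPTH / vms_on_lun
            print(f"{vms_on_lun:>2} VMs on the LUN -> ~{fair_share:.0f} outstanding I/Os each")
        # With one VM per LUN the whole queue belongs to that VM, which is
        # the noisy-neighbor isolation this layout buys.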

  4. Stop that nonsense right there.

    This was done by some guys in the past because the 3PAR had issues with SDRS.

    I don’t recommend keeping that approach on newer releases (3.3.1 and later).
    Moreover, you could use tag-based placement for your VMs to limit IOPS, for example, if you don’t want to do it at the storage level.

    If it’s only a single box with no requirement for synchronous replication, though, I can highly recommend VVols.
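
    As a hedged pyVmomi sketch of the IOPS-limit idea (this sets the SIOC limit directly on each virtual disk rather than through a tag-based policy; the hostname, credentials, VM name, and 1000-IOPS cap are all placeholders):

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                          pwd="...", sslContext=ssl._create_unverified_context())
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            vm = next(v for v in view.view if v.name == "noisy-vm")  # hypothetical VM
            view.DestroyView()

            spec = vim.vm.ConfigSpec(deviceChange=[])
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualDisk):
                    if dev.storageIOAllocation is None:
                        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo()
                    dev.storageIOAllocation.limit = 1000  # illustrative IOPS cap
                    spec.deviceChange.append(vim.vm.device.VirtualDeviceSpec(
                        device=dev,
                        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit))
            vm.ReconfigVM_Task(spec=spec)  # returns a Task; poll it in real use
        finally:
            Disconnect(si)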

  5. Are you sure they aren’t using VVols? But yeah, some people’s children think this is a good idea. 3PARs are pretty snazzy and do some cool storage tiering; maybe they were looking to “optimize” that?

  6. So they also have hundreds of datastores? It sounds like whoever set it up didn’t have a clue what they were doing. Or maybe, like, a negative clue. I would work on consolidating those down.
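
    A starting point for that consolidation could be an audit of which datastores back exactly one VM. A minimal pyVmomi sketch (pip install pyvmomi; the vCenter hostname and credentials are placeholders):

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                          pwd="...", sslContext=ssl._create_unverified_context())
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.Datastore], True)
            # Datastores with exactly one registered VM are the likely
            # one-LUN-per-VM leftovers and Storage vMotion candidates.
            for ds in view.view:
                if len(ds.vm) == 1:
                    print(f"{ds.name}: only VM is {ds.vm[0].name}")
            view.DestroyView()
        finally:
            Disconnect(si)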

  7. I have heard many people in the past say (or claim to have heard) that VMware’s recommendation was to put one VM per datastore, and I always laughed at them; I’ve never read anywhere in my career that this was the case.

    There is absolutely no reason to do that, but I remember reading through the best-practices docs and seeing a recommendation of no more than a certain number of VMs per datastore (it may have been 25, though I don’t recall for sure) for best performance. Again, that is a best-practice recommendation and doesn’t mean it applies to everyone.

  8. Our environment is this way for all of our older VMs. They used to run on Xen, and each volume was a raw disk on the SAN. So it’s not even a LUN per VM; it’s a LUN per disk. We aren’t that large, so we don’t have any issues, but all new storage I add is a large pool. We also don’t have DRS =(. I can only guess why the Xen VMs were converted that way, but we haven’t ever moved them, so they stay that way. The SAN is an HP VSA, if you’re curious.

  9. Unfortunately, we do that. We have two FlexPods (VMware, Cisco UCS) and hard requirements around per-application DR testing; we need to fail over individual applications and therefore create a datastore per application. I have some clusters with about 200 datastores. They are all NFS, though, no LUNs.

    Soon we will be moving to vSAN and, thank goodness, won’t have to deal with this anymore.

  10. I have a Nimble All-Flash AF40 and have 1 VM per LUN (over 100 of them).
    It helps with storage snapshots initiated by Veeam.

    No wasted SAN space since we have compression and dedupe.

    Yes, it was a pain to migrate from 15 4TB VMFS datastores, but now I can recover a VM from a snapshot in 2 minutes.
