Host LUN IDs equal across all hosts?


Until now we always mapped the LUNs to our hosts with matching Host LUN IDs. It didn’t take much effort for the storage guys to do this, but for our new storage array, matching the Host LUN IDs across all hosts is quite an effort for the storage team.

Now, I was wondering if it is necessary for the Host LUN ID to be the same across all hosts in the cluster. We never talk with our storage team about LUN IDs; we just use the NAA ID to make sure we're talking about the same LUN.

I found two articles that contradict each other:



The official VMware KB states the LUN ID should be kept the same, because mismatches could cause issues. But with an official KB like that, I'm never sure if they're just saying it to avoid any discussion.

With the Yellow Bricks article, I'm concerned it might be outdated.

So... any insight on this?


EDIT: Question solved. Reading the comments, I'm convinced to stay with matching Host LUN IDs.

View Reddit by GabesVirtualWorld · View Source



Comments


  1. You don’t have to, but it’s easier than looking at a long NAA ID. The VMware KB states that a non-uniform presentation can cause issues. I just don’t know why you *wouldn’t* do this. Are you sure it’s difficult for the storage admins, or are they just unsure of what they’re doing?

  2. I can’t think of any reason storage would present different IDs to a target group unless you were doing something really strange. Personally I’d ensure they were all the same, for both troubleshooting and reliability.

  3. We had an EMC Performance Engineer tell us that it is a problem. We had a Citrix XenServer environment, 4 of the hosts had one set of LUN IDs, and the other 8 had a different set. They found it in the logs on our VNX and identified it as contributing to the performance issue we were having.

  4. This will work fine, but is not advised. Specifically, RDM LUNs presented to VMs are mapped by LUN ID. You won’t be able to vMotion those VMs between hosts.

  5. I found the reason why you need the LUN IDs to match: SIOC. You may have issues creating Datastore Clusters if the LUN IDs don’t match. Seeing this behavior in 6.5 U1 at the very least.

  6. Paging Jason Massae…

    Beyond SIOC issues, this might cause issues for VADP in direct SAN mode. Either way I would murder my storage admins if these were not kept the same for my own sanity of troubleshooting.

  7. A long time ago, if you presented storage to different hosts with different LUN IDs, each host would want to resignature the datastore. It’s been since version 3.x or 4.x that I’ve seen anyone try it, so no idea if it still happens, but at some point it definitely caused problems.

  8. Related to KB [https://kb.vmware.com/s/article/2148265](https://kb.vmware.com/s/article/2148265): if you upgraded to vSphere 6.5 and there were multiple LUN IDs for a single device, vSphere would not claim the device after a reboot. The issue was resolved in the 6.5 U1 release [https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-esxi-651-release-notes.html#storage-issues-resolved](https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-esxi-651-release-notes.html#storage-issues-resolved): “**After installation or upgrade certain multipathed LUNs will not be visible** If the paths to a LUN have different LUN IDs in case of multipathing, the LUN will not be registered by PSA and end users will not see them.”

    There are a few things that can happen. If the VML ID doesn’t encode the entire serial of the LUN it is possible to generate the same VML for two different LUNs. If that happens the host will not claim a duplicate device. In the case of multiple paths resulting in different LUN IDs for the same LUN, then multiple VML IDs are generated.

    Cody did a great write up on the subject: [https://www.codyhosterman.com/2018/01/issue-with-consistent-lun-id-in-esxi-6-5/](https://www.codyhosterman.com/2018/01/issue-with-consistent-lun-id-in-esxi-6-5/)

  9. If you want to enable Storage I/O Control (SIOC), keep them equal. We ran into this issue after a vCenter update, when enabling SIOC on new datastores failed. In the previous version we could enable it without problems, but support told us later it didn’t really work on these datastores even though it was enabled.
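Since several comments come down to “check whether every host sees each NAA ID under the same LUN ID,” here is a minimal sketch of that check. It assumes you have already collected a per-host mapping of NAA ID to Host LUN ID (for example from `esxcli storage core device list` or from PowerCLI `Get-ScsiLun` runtime names); the host names and NAA IDs below are made up for illustration.

```python
def find_mismatched_luns(host_maps):
    """Flag devices presented with inconsistent Host LUN IDs.

    host_maps: {hostname: {naa_id: lun_id}}
    Returns {naa_id: {hostname: lun_id}} for every device that is
    seen with more than one distinct LUN ID across the hosts.
    """
    seen = {}
    for host, luns in host_maps.items():
        for naa, lun_id in luns.items():
            seen.setdefault(naa, {})[host] = lun_id
    return {naa: ids for naa, ids in seen.items()
            if len(set(ids.values())) > 1}


if __name__ == "__main__":
    # Hypothetical inventory: esx02 presents the second LUN under a
    # different Host LUN ID than esx01.
    hosts = {
        "esx01": {"naa.6000aaaa": 5, "naa.6000bbbb": 6},
        "esx02": {"naa.6000aaaa": 5, "naa.6000bbbb": 7},
    }
    for naa, ids in find_mismatched_luns(hosts).items():
        print(f"{naa}: {ids}")  # → naa.6000bbbb: {'esx01': 6, 'esx02': 7}
```

Keying the comparison on the NAA ID (the way the OP's team already identifies LUNs) means mismatches are reported per device, so the storage team knows exactly which presentation to fix.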
