Since 6.7.0, syslog filled with storageRM and sdrsInjector

Hi all,

Since updating two ESXi hosts from 6.0.0 U3 to 6.7.0 U2c, my syslog server (a NAS) fills up with thousands of entries within minutes…

All of them are “storageRM” and “sdrsInjector” messages.

On vCenter Server the syslog level is set to “warning”, but the storageRM messages are level “Notice”.

Any ideas how to disable the “storageRM” and “sdrsInjector” notifications?
I don’t want to disable syslog to the NAS completely 🙁



Posted by groehaz on Reddit

  1. Not directed at anyone in particular here, but: disabling or avoiding entire features just to avoid *logging* is an absurd workaround.

    If you’re going to work around the issue, I’d suggest filtering your logs instead. If you have any doubts, double-check the pattern with someone who knows regex well. Getting this wrong will, in the worst case, leave your logs a whole lot emptier than they should be, which makes problems hard to investigate or support.

    https://kb.vmware.com/s/article/2118562 has info on using the logfilters functionality on ESXi – read this first.

    Now, I’m not dealing with this issue myself, so I don’t have exhaustive examples to reference. But the KB on this issue (https://kb.vmware.com/s/article/67543) has two lines to start with. If there are more log sources (hostd, syslog, vmkwarning, etc.), you’ll want to add entries for them as well. Below is a line I drummed up based on the example lines in the KB; it targets only the VMkernel log:

    0 | vmkernel | MemSchedAdmit.*sioc

    After adding that line to the `/etc/vmware/logfilters` file, run `esxcli system syslog reload` and see if the log spew subsides (you can watch with `tail -f /var/run/log/vmkernel.log`; press Ctrl+C to return to the prompt).

    Note: Remove these lines and reload syslog once the fix/patch is out, or neither you nor VMware will be able to diagnose new, legitimate SIOC memory-admission faults.

    Edit: If SIOC itself is suspected of causing host unresponsiveness due to these admission faults, that’s another matter. But it’s worth trying this first, in case you actually depend on the feature.

  2. We had the same problem. We found that if you go to the Configure tab of your datastore, head to General, and edit the Datastore Capabilities, you can disable Storage I/O Control and statistics collection there. That’s it.

  3. We have the same issue. There is a bug in SIOC. We opened a case with VMware; they advised us not to use SIOC if it’s not necessary, and said it would be fixed in 6.7 U3.

  4. Yep, as soon as you turn it back on it starts logging like mad!
    We’re managing it with a custom dashboard in Log Insight; if any host goes above expected levels, we intervene.

  5. Any progress on this? We’re experimenting with Graylog, and the 6.7 hosts are going nuts, with over 50k messages every five minutes!
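Pulling the logfilter workaround from reply 1 together, here is a minimal shell sketch. The sample log line and the local `./logfilters.demo` path are stand-ins so the snippet can be dry-run off-host; on an actual ESXi host the file is `/etc/vmware/logfilters`, and the regex is only an assumption based on the example lines in KB 67543.

```shell
#!/bin/sh
# Sketch of the logfilter workaround (KB 2118562 / KB 67543).

REGEX='MemSchedAdmit.*sioc'

# 1. Sanity-check the regex against a captured vmkernel.log line first.
#    This sample line is made up for illustration; paste a real one instead.
SAMPLE='cpu2:1000)MemSchedAdmit: admission failure for sioc worker'
printf '%s\n' "$SAMPLE" | grep -Eq "$REGEX" \
    && echo "regex matches sample line" \
    || echo "regex does NOT match sample line"

# 2. Append the filter entry if it is not already present.
#    Format per KB 2118562: <numEntriesToKeep> | <logSource> | <regex>
#    A count of 0 drops every matching entry.
FILTER_FILE="./logfilters.demo"   # on the host: /etc/vmware/logfilters
grep -qs "$REGEX" "$FILTER_FILE" || echo "0 | vmkernel | $REGEX" >> "$FILTER_FILE"

# 3. Enable filtering and reload syslog (only possible on ESXi itself,
#    so this step is skipped wherever esxcli is not installed).
if command -v esxcli >/dev/null 2>&1; then
    esxcli system syslog config logfilter set --log-filtering-enabled=true
    esxcli system syslog reload
fi

# Remember to remove the entry and reload syslog again once the fix ships.
```

If the filter doesn’t catch everything, repeat step 1 with real lines from your own logs before widening the regex.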
