esx.problem.hyperthreading.unmitigated.formatOnHost not found

Warning “esx.problem.hyperthreading.unmitigated” after installing ESXi patches. This warning may appear after installing the patches contained in release ESXi601 (14 Aug 2018) (Build 9313334) when you have not updated your vCenter Server: an un-updated vCenter cannot resolve the new message catalog entry, so it displays the raw key “esx.problem.hyperthreading.unmitigated.formatOnHost not found” instead of the formatted warning text.

Upgraded one of our ESXi hosts with the latest patches released today that are aimed at fixing the L1 Terminal Fault issues. After that the host started giving this warning: esx.problem.hyperthreading.unmitigated. No idea what it’s supposed to mean!

Went to Configure > Settings > Advanced System Settings and searched for anything with “hyperthread” in it. Found VMkernel.Boot.hyperthreadingMitigation, which was set to “false” but sounded suspiciously similar to the warning I had. Changed it to “true”, rebooted the host, and Googled this setting, which led me to this KB article. It’s a good read, but here are some excerpts if you are interested in only the highlights:


Like Meltdown, Rogue System Register Read, and “Lazy FP state restore”, the “L1 Terminal Fault” vulnerability can occur when affected Intel microprocessors speculate beyond an unpermitted data access. By continuing the speculation in these cases, the affected Intel microprocessors expose a new side-channel for attack. (Note, however, that architectural correctness is still provided as the speculative operations will be later nullified at instruction retirement.)

CVE-2018-3646 is one of these Intel microprocessor vulnerabilities and impacts hypervisors. It may allow a malicious VM running on a given CPU core to effectively infer contents of the hypervisor’s or another VM’s privileged information residing at the same time in the same core’s L1 Data cache. Because current Intel processors share the physically-addressed L1 Data Cache across both logical processors of a Hyperthreading (HT) enabled core, indiscriminate simultaneous scheduling of software threads on both logical processors creates the potential for further information leakage. CVE-2018-3646 has two currently known attack vectors which will be referred to here as “Sequential-Context” and “Concurrent-Context.” Both attack vectors must be addressed to mitigate CVE-2018-3646.

Attack Vector Summary

  • Sequential-context attack vector: a malicious VM can potentially infer recently accessed L1 data of a previous context (hypervisor thread or other VM thread) on either logical processor of a processor core.
  • Concurrent-context attack vector: a malicious VM can potentially infer recently accessed L1 data of a concurrently executing context (hypervisor thread or other VM thread) on the other logical processor of the hyperthreading-enabled processor core.


Mitigation Summary

  • Mitigation of the Sequential-Context attack vector is achieved by vSphere updates and patches. This mitigation is enabled by default and does not impose a significant performance impact. Please see resolution section for details.
  • Mitigation of the Concurrent-context attack vector requires enablement of a new feature known as the ESXi Side-Channel-Aware Scheduler. The initial version of this feature will only schedule the hypervisor and VMs on one logical processor of an Intel Hyperthreading-enabled core. This feature may impose a non-trivial performance impact and is not enabled by default.

So that’s what the warning was about. To enable the ESXi Side-Channel-Aware Scheduler we need to set the key above to “true”. More excerpts:

The Concurrent-context attack vector is mitigated through enablement of the ESXi Side-Channel-Aware Scheduler which is included in the updates and patches listed in VMSA-2018-0020. This scheduler is not enabled by default. Enablement of this scheduler may impose a non-trivial performance impact on applications running in a vSphere environment. The goal of the Planning Phase is to understand if your current environment has sufficient CPU capacity to enable the scheduler without operational impact.


The following list summarizes potential problem areas after enabling the ESXi Side-Channel-Aware Scheduler:

  • VMs configured with vCPUs greater than the physical cores available on the ESXi host
  • VMs configured with custom affinity or NUMA settings
  • VMs with latency-sensitive configuration
  • ESXi hosts with Average CPU Usage greater than 70%
  • Hosts with custom CPU resource management options enabled
  • HA Clusters where a rolling upgrade will increase Average CPU Usage above 100%

Note: It may be necessary to acquire additional hardware, or rebalance existing workloads, before enablement of the ESXi Side-Channel-Aware Scheduler. Organizations can choose not to enable the ESXi Side-Channel-Aware Scheduler after performing a risk assessment and accepting the risk posed by the Concurrent-context attack vector. This is NOT RECOMMENDED and VMware cannot make this decision on behalf of an organization.

So to fix the second issue we need to enable the new scheduler. Since that can carry a performance hit, it’s best enabled manually so you are aware of it and can keep an eye on the load and any performance impact. And if you are not in a shared environment and have accepted the risk, you don’t need to enable it at all. Makes sense.
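Besides flipping VMkernel.Boot.hyperthreadingMitigation in Advanced System Settings as I did above, the same setting can be checked and enabled from the command line with esxcli on a patched host. A minimal sketch (this assumes an ESXi host carrying the August 2018 patches; a reboot is still required for the change to take effect):

```shell
# Check the current value of the hyperthreading mitigation setting
esxcli system settings kernel list -o hyperthreadingMitigation

# Enable the ESXi Side-Channel-Aware Scheduler (takes effect after reboot)
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE
```

Handy if you want to script this across a bunch of hosts rather than clicking through the UI for each one.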

That warning message could have been a bit more verbose though! :)

Lately I have been writing on a variety of topics regarding the use of VOBs (VMkernel Observations) for creating useful vCenter Alarms.

I figure it would also be useful to collect a list of all the vSphere VOBs, at least from what I can gather by looking at /usr/lib/vmware/hostd/extensions/hostdiag/locale/en/event.vmsg on the latest version of ESXi. The list below is quite extensive: there are a total of 308 vSphere VOBs, not including the VSAN VOBs covered in my previous articles. For those of you who use vSphere Replication, you may also find a couple of handy ones in the list.
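If you want to regenerate a list like this yourself, a one-liner over the catalog file does the trick. A sketch, assuming each entry in event.vmsg sits on a single line in `key = "message"` form (the path below is the one mentioned above):

```shell
#!/bin/sh
# Extract "vob.id | description" pairs from an event.vmsg-style message catalog.
# Assumes one entry per line, formatted as: some.vob.id = "Message text"
vmsg="${1:-/usr/lib/vmware/hostd/extensions/hostdiag/locale/en/event.vmsg}"
sed -n 's/^\([A-Za-z][A-Za-z0-9._]*\)[[:space:]]*=[[:space:]]*"\(.*\)".*$/\1 | \2/p' "$vmsg"
```

Run it on an ESXi host (or on a copy of the file) and you get output in the same "ID | Description" shape as the table below.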

VOB ID | VOB Description
ad.event.ImportCertEvent | Import certificate success
ad.event.ImportCertFailedEvent | Import certificate failure
ad.event.JoinDomainEvent | Join domain success
ad.event.JoinDomainFailedEvent | Join domain failure
ad.event.LeaveDomainEvent | Leave domain success
ad.event.LeaveDomainFailedEvent | Leave domain failure
com.vmware.vc.HA.CreateConfigVvolFailedEvent | vSphere HA failed to create a configuration vVol for this datastore and so will not be able to protect virtual machines on the datastore until the problem is resolved. Error: {fault}
com.vmware.vc.HA.CreateConfigVvolSucceededEvent | vSphere HA successfully created a configuration vVol after the previous failure
com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent | Host complete datastore failure
com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent | Host complete network failure
com.vmware.vc.VmCloneFailedInvalidDestinationEvent | Cannot complete virtual machine clone.
com.vmware.vc.VmCloneToResourcePoolFailedEvent | Cannot complete virtual machine clone.
com.vmware.vc.VmDiskConsolidatedEvent | Virtual machine disks consolidation succeeded.
com.vmware.vc.VmDiskConsolidationNeeded | Virtual machine disks consolidation needed.
com.vmware.vc.VmDiskConsolidationNoLongerNeeded | Virtual machine disks consolidation no longer needed.
com.vmware.vc.VmDiskFailedToConsolidateEvent | Virtual machine disks consolidation failed.
com.vmware.vc.datastore.UpdateVmFilesFailedEvent | Failed to update VM files
com.vmware.vc.datastore.UpdatedVmFilesEvent | Updated VM files
com.vmware.vc.datastore.UpdatingVmFilesEvent | Updating VM Files
com.vmware.vc.ft.VmAffectedByDasDisabledEvent | Fault Tolerance VM restart disabled
com.vmware.vc.guestOperations.GuestOperation | Guest operation
com.vmware.vc.guestOperations.GuestOperationAuthFailure | Guest operation authentication failure
com.vmware.vc.host.clear.vFlashResource.inaccessible | Host's virtual flash resource is accessible.
com.vmware.vc.host.clear.vFlashResource.reachthreshold | Host's virtual flash resource usage dropped below the threshold.
com.vmware.vc.host.problem.vFlashResource.inaccessible | Host's virtual flash resource is inaccessible.
com.vmware.vc.host.problem.vFlashResource.reachthreshold | Host's virtual flash resource usage exceeds the threshold.
com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent | Virtual flash resource capacity is extended
com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent | Virtual flash resource is configured on the host
com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent | Virtual flash resource is removed from the host
com.vmware.vc.host.vFlash.defaultModuleChangedEvent | Default virtual flash module is changed to {vFlashModule} on the host
com.vmware.vc.host.vFlash.modulesLoadedEvent | Virtual flash modules are loaded or reloaded on the host
com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent | Virtual NIC entered passthrough mode
com.vmware.vc.npt.VmAdapterExitedPassthroughEvent | Virtual NIC exited passthrough mode
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent | FT Disabled VM protected as non-FT VM
com.vmware.vc.vcp.FtFailoverEvent | Failover FT VM due to component failure
com.vmware.vc.vcp.FtFailoverFailedEvent | FT VM failover failed
com.vmware.vc.vcp.FtSecondaryRestartEvent | Restarting FT secondary due to component failure
com.vmware.vc.vcp.FtSecondaryRestartFailedEvent | FT secondary VM restart failed
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent | Need secondary VM protected as non-FT VM
com.vmware.vc.vcp.TestEndEvent | VM Component Protection test ends
com.vmware.vc.vcp.TestStartEvent | VM Component Protection test starts
com.vmware.vc.vcp.VcpNoActionEvent | No action on VM
com.vmware.vc.vcp.VmDatastoreFailedEvent | Virtual machine lost datastore access
com.vmware.vc.vcp.VmNetworkFailedEvent | Virtual machine lost VM network accessibility
com.vmware.vc.vcp.VmPowerOffHangEvent | VM power off hang
com.vmware.vc.vcp.VmRestartEvent | Restarting VM due to component failure
com.vmware.vc.vcp.VmRestartFailedEvent | Virtual machine affected by component failure failed to restart
com.vmware.vc.vcp.VmWaitForCandidateHostEvent | No candidate host to restart
com.vmware.vc.vm.VmStateFailedToRevertToSnapshot | Failed to revert the virtual machine state to a snapshot
com.vmware.vc.vm.VmStateRevertedToSnapshot | The virtual machine state has been reverted to a snapshot
com.vmware.vc.vmam.AppMonitoringNotSupported | Application Monitoring Is Not Supported
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent | vSphere HA detected application heartbeat status change
com.vmware.vc.vmam.VmAppHealthStateChangedEvent | vSphere HA detected application state change
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent | vSphere HA detected application heartbeat failure
esx.audit.agent.hostd.started | VMware Host Agent started
esx.audit.agent.hostd.stopped | VMware Host Agent stopped
esx.audit.dcui.defaults.factoryrestore | Restoring factory defaults through DCUI.
esx.audit.dcui.disabled | The DCUI has been disabled.
esx.audit.dcui.enabled | The DCUI has been enabled.
esx.audit.dcui.host.reboot | Rebooting host through DCUI.
esx.audit.dcui.host.shutdown | Shutting down host through DCUI.
esx.audit.dcui.hostagents.restart | Restarting host agents through DCUI.
esx.audit.dcui.login.failed | Login authentication on DCUI failed
esx.audit.dcui.login.passwd.changed | DCUI login password changed.
esx.audit.dcui.network.factoryrestore | Factory network settings restored through DCUI.
esx.audit.dcui.network.restart | Restarting network through DCUI.
esx.audit.esxcli.host.poweroff | Powering off host through esxcli
esx.audit.esxcli.host.reboot | Rebooting host through esxcli
esx.audit.esximage.hostacceptance.changed | Host acceptance level changed
esx.audit.esximage.install.novalidation | Attempting to install an image profile with validation disabled.
esx.audit.esximage.install.securityalert | SECURITY ALERT: Installing image profile.
esx.audit.esximage.profile.install.successful | Successfully installed image profile.
esx.audit.esximage.profile.update.successful | Successfully updated host to new image profile.
esx.audit.esximage.vib.install.successful | Successfully installed VIBs.
esx.audit.esximage.vib.remove.successful | Successfully removed VIBs
esx.audit.host.boot | Host has booted.
esx.audit.host.maxRegisteredVMsExceeded | The number of virtual machines registered on the host exceeded limit.
esx.audit.host.stop.reboot | Host is rebooting.
esx.audit.host.stop.shutdown | Host is shutting down.
esx.audit.lockdownmode.disabled | Administrator access to the host has been enabled.
esx.audit.lockdownmode.enabled | Administrator access to the host has been disabled.
esx.audit.maintenancemode.canceled | The host has canceled entering maintenance mode.
esx.audit.maintenancemode.entered | The host has entered maintenance mode.
esx.audit.maintenancemode.entering | The host has begun entering maintenance mode.
esx.audit.maintenancemode.exited | The host has exited maintenance mode.
esx.audit.net.firewall.config.changed | Firewall configuration has changed.
esx.audit.net.firewall.disabled | Firewall has been disabled.
esx.audit.net.firewall.enabled | Firewall has been enabled for port.
esx.audit.net.firewall.port.hooked | Port is now protected by Firewall.
esx.audit.net.firewall.port.removed | Port is no longer protected with Firewall.
esx.audit.net.lacp.disable | LACP disabled
esx.audit.net.lacp.enable | LACP enabled
esx.audit.net.lacp.uplink.connected | uplink is connected
esx.audit.shell.disabled | The ESXi command line shell has been disabled.
esx.audit.shell.enabled | The ESXi command line shell has been enabled.
esx.audit.ssh.disabled | SSH access has been disabled.
esx.audit.ssh.enabled | SSH access has been enabled.
esx.audit.usb.config.changed | USB configuration has changed.
esx.audit.uw.secpolicy.alldomains.level.changed | Enforcement level changed for all security domains.
esx.audit.uw.secpolicy.domain.level.changed | Enforcement level changed for security domain.
esx.audit.vmfs.lvm.device.discovered | LVM device discovered.
esx.audit.vmfs.volume.mounted | File system mounted.
esx.audit.vmfs.volume.umounted | LVM volume un-mounted.
esx.clear.coredump.configured | A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.
esx.clear.coredump.configured2 | At least one coredump target has been configured. Host core dumps will be saved.
esx.clear.net.connectivity.restored | Restored network connectivity to portgroups
esx.clear.net.dvport.connectivity.restored | Restored Network Connectivity to DVPorts
esx.clear.net.dvport.redundancy.restored | Restored Network Redundancy to DVPorts
esx.clear.net.lacp.lag.transition.up | lag transition up
esx.clear.net.lacp.uplink.transition.up | uplink transition up
esx.clear.net.lacp.uplink.unblocked | uplink is unblocked
esx.clear.net.redundancy.restored | Restored uplink redundancy to portgroups
esx.clear.net.vmnic.linkstate.up | Link state up
esx.clear.scsi.device.io.latency.improved | Scsi Device I/O Latency has improved
esx.clear.scsi.device.state.on | Device has been turned on administratively.
esx.clear.scsi.device.state.permanentloss.deviceonline | Device that was permanently inaccessible is now online.
esx.clear.storage.apd.exit | Exited the All Paths Down state
esx.clear.storage.connectivity.restored | Restored connectivity to storage device
esx.clear.storage.redundancy.restored | Restored path redundancy to storage device
esx.problem.3rdParty.error | A 3rd party component on ESXi has reported an error.
esx.problem.3rdParty.information | A 3rd party component on ESXi has reported an informational event.
esx.problem.3rdParty.warning | A 3rd party component on ESXi has reported a warning.
esx.problem.apei.bert.memory.error.corrected | A corrected memory error occurred
esx.problem.apei.bert.memory.error.fatal | A fatal memory error occurred
esx.problem.apei.bert.memory.error.recoverable | A recoverable memory error occurred
esx.problem.apei.bert.pcie.error.corrected | A corrected PCIe error occurred
esx.problem.apei.bert.pcie.error.fatal | A fatal PCIe error occurred
esx.problem.apei.bert.pcie.error.recoverable | A recoverable PCIe error occurred
esx.problem.application.core.dumped | An application running on ESXi host has crashed and a core file was created.
esx.problem.boot.filesystem.down | Lost connectivity to the device backing the boot filesystem
esx.problem.coredump.capacity.insufficient | The storage capacity of the coredump targets is insufficient to capture a complete coredump.
esx.problem.coredump.unconfigured | No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.
esx.problem.coredump.unconfigured2 | No coredump target has been configured. Host core dumps cannot be saved.
esx.problem.cpu.amd.mce.dram.disabled | DRAM ECC not enabled. Please enable it in BIOS.
esx.problem.cpu.intel.ioapic.listing.error | Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform.
esx.problem.cpu.mce.invalid | MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware.
esx.problem.cpu.smp.ht.invalid | Disabling HyperThreading due to invalid configuration: Number of threads: {1} Number of PCPUs: {2}.
esx.problem.cpu.smp.ht.numpcpus.max | Found {1} PCPUs but only using {2} of them due to specified limit.
esx.problem.cpu.smp.ht.partner.missing | Disabling HyperThreading due to invalid configuration: HT partner {1} is missing from PCPU {2}.
esx.problem.dhclient.lease.none | Unable to obtain a DHCP lease.
esx.problem.dhclient.lease.offered.noexpiry | No expiry time on offered DHCP lease.
esx.problem.esximage.install.error | Could not install image profile.
esx.problem.esximage.install.invalidhardware | Host doesn't meet image profile hardware requirements.
esx.problem.esximage.install.stage.error | Could not stage image profile.
esx.problem.hardware.acpi.interrupt.routing.device.invalid | Skipping interrupt routing entry with bad device number: {1}. This is a BIOS bug.
esx.problem.hardware.acpi.interrupt.routing.pin.invalid | Skipping interrupt routing entry with bad device pin: {1}. This is a BIOS bug.
esx.problem.hardware.ioapic.missing | IOAPIC Num {1} is missing. Please check BIOS settings to enable this IOAPIC.
esx.problem.host.coredump | An unread host kernel core dump has been found.
esx.problem.hostd.core.dumped | Hostd crashed and a core file was created.
esx.problem.iorm.badversion | Storage I/O Control version mismatch
esx.problem.iorm.nonviworkload | Unmanaged workload detected on SIOC-enabled datastore
esx.problem.migrate.vmotion.default.heap.create.failed | Failed to create default migration heap
esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown | Error with migration listen socket
esx.problem.net.connectivity.lost | Lost Network Connectivity
esx.problem.net.dvport.connectivity.lost | Lost Network Connectivity to DVPorts
esx.problem.net.dvport.redundancy.degraded | Network Redundancy Degraded on DVPorts
esx.problem.net.dvport.redundancy.lost | Lost Network Redundancy on DVPorts
esx.problem.net.e1000.tso6.notsupported | No IPv6 TSO support
esx.problem.net.fence.port.badfenceid | Invalid fenceId configuration on dvPort
esx.problem.net.fence.resource.limited | Maximum number of fence networks or ports
esx.problem.net.fence.switch.unavailable | Switch fence property is not set
esx.problem.net.firewall.config.failed | Firewall configuration operation failed. The changes were not applied.
esx.problem.net.firewall.port.hookfailed | Adding port to Firewall failed.
esx.problem.net.gateway.set.failed | Failed to set gateway
esx.problem.net.heap.belowthreshold | Network memory pool threshold
esx.problem.net.lacp.lag.transition.down | lag transition down
esx.problem.net.lacp.peer.noresponse | No peer response
esx.problem.net.lacp.policy.incompatible | Current teaming policy is incompatible
esx.problem.net.lacp.policy.linkstatus | Current teaming policy is incompatible
esx.problem.net.lacp.uplink.blocked | uplink is blocked
esx.problem.net.lacp.uplink.disconnected | uplink is disconnected
esx.problem.net.lacp.uplink.fail.duplex | uplink duplex mode is different
esx.problem.net.lacp.uplink.fail.speed | uplink speed is different
esx.problem.net.lacp.uplink.inactive | All uplinks must be active
esx.problem.net.lacp.uplink.transition.down | uplink transition down
esx.problem.net.migrate.bindtovmk | Invalid vmknic specified in /Migrate/Vmknic
esx.problem.net.migrate.unsupported.latency | Unsupported vMotion network latency detected
esx.problem.net.portset.port.full | Failed to apply for free ports
esx.problem.net.portset.port.vlan.invalidid | Vlan ID of the port is invalid
esx.problem.net.proxyswitch.port.unavailable | Virtual NIC connection to switch failed
esx.problem.net.redundancy.degraded | Network Redundancy Degraded
esx.problem.net.redundancy.lost | Lost Network Redundancy
esx.problem.net.uplink.mtu.failed | Failed to set MTU on an uplink
esx.problem.net.vmknic.ip.duplicate | A duplicate IP address was detected on a vmknic interface
esx.problem.net.vmnic.linkstate.down | Link state down
esx.problem.net.vmnic.linkstate.flapping | Link state unstable
esx.problem.net.vmnic.watchdog.reset | Nic Watchdog Reset
esx.problem.ntpd.clock.correction.error | NTP daemon stopped. Time correction out of bounds.
esx.problem.pageretire.platform.retire.request | Memory page retirement requested by platform firmware.
esx.problem.pageretire.selectedmpnthreshold.host.exceeded | Number of host physical memory pages selected for retirement exceeds threshold.
esx.problem.scratch.partition.size.small | Size of scratch partition is too small.
esx.problem.scratch.partition.unconfigured | No scratch partition has been configured.
esx.problem.scsi.apd.event.descriptor.alloc.failed | No memory to allocate APD Event
esx.problem.scsi.device.close.failed | Scsi Device close failed.
esx.problem.scsi.device.detach.failed | Device detach failed
esx.problem.scsi.device.filter.attach.failed | Failed to attach filter to device.
esx.problem.scsi.device.io.bad.plugin.type | Plugin trying to issue command to device does not have a valid storage plugin type.
esx.problem.scsi.device.io.inquiry.failed | Failed to obtain INQUIRY data from the device
esx.problem.scsi.device.io.invalid.disk.qfull.value | Scsi device queue parameters incorrectly set.
esx.problem.scsi.device.io.latency.high | Scsi Device I/O Latency going high
esx.problem.scsi.device.io.qerr.change.config | QErr cannot be changed on device. Please change it manually on the device if possible.
esx.problem.scsi.device.io.qerr.changed | Scsi Device QErr setting changed
esx.problem.scsi.device.is.local.failed | Plugin's isLocal entry point failed
esx.problem.scsi.device.is.pseudo.failed | Plugin's isPseudo entry point failed
esx.problem.scsi.device.is.ssd.failed | Plugin's isSSD entry point failed
esx.problem.scsi.device.limitreached | Maximum number of storage devices
esx.problem.scsi.device.state.off | Device has been turned off administratively.
esx.problem.scsi.device.state.permanentloss | Device has been removed or is permanently inaccessible.
esx.problem.scsi.device.state.permanentloss.noopens | Permanently inaccessible device has no more opens.
esx.problem.scsi.device.state.permanentloss.pluggedback | Device has been plugged back in after being marked permanently inaccessible.
esx.problem.scsi.device.state.permanentloss.withreservationheld | Device has been removed or is permanently inaccessible.
esx.problem.scsi.device.thinprov.atquota | Thin Provisioned Device Nearing Capacity
esx.problem.scsi.scsipath.badpath.unreachpe | vVol PE path going out of vVol-incapable adapter
esx.problem.scsi.scsipath.badpath.unsafepe | Cannot safely determine vVol PE
esx.problem.scsi.scsipath.limitreached | Maximum number of storage paths
esx.problem.scsi.unsupported.plugin.type | Storage plugin of unsupported type tried to register.
esx.problem.storage.apd.start | All paths are down
esx.problem.storage.apd.timeout | All Paths Down timed out, I/Os will be fast failed
esx.problem.storage.connectivity.devicepor | Frequent PowerOn Reset Unit Attention of Storage Path
esx.problem.storage.connectivity.lost | Lost Storage Connectivity
esx.problem.storage.connectivity.pathpor | Frequent PowerOn Reset Unit Attention of Storage Path
esx.problem.storage.connectivity.pathstatechanges | Frequent State Changes of Storage Path
esx.problem.storage.iscsi.discovery.connect.error | iSCSI discovery target login connection problem
esx.problem.storage.iscsi.discovery.login.error | iSCSI Discovery target login error
esx.problem.storage.iscsi.isns.discovery.error | iSCSI iSns Discovery error
esx.problem.storage.iscsi.target.connect.error | iSCSI Target login connection problem
esx.problem.storage.iscsi.target.login.error | iSCSI Target login error
esx.problem.storage.iscsi.target.permanently.lost | iSCSI target permanently removed
esx.problem.storage.redundancy.degraded | Degraded Storage Path Redundancy
esx.problem.storage.redundancy.lost | Lost Storage Path Redundancy
esx.problem.syslog.config | System logging is not configured.
esx.problem.syslog.nonpersistent | System logs are stored on non-persistent storage.
esx.problem.vfat.filesystem.full.other | A VFAT filesystem is full.
esx.problem.vfat.filesystem.full.scratch | A VFAT filesystem, being used as the host's scratch partition, is full.
esx.problem.visorfs.failure | An operation on the root filesystem has failed.
esx.problem.visorfs.inodetable.full | The root filesystem's file table is full.
esx.problem.visorfs.ramdisk.full | A ramdisk is full.
esx.problem.visorfs.ramdisk.inodetable.full | A ramdisk's file table is full.
esx.problem.vm.kill.unexpected.fault.failure | A VM could not fault in a page. The VM is terminated as further progress is impossible.
esx.problem.vm.kill.unexpected.forcefulPageRetire | A VM did not respond to swap actions and is forcefully powered off to prevent system instability.
esx.problem.vm.kill.unexpected.noSwapResponse | A VM did not respond to swap actions and is forcefully powered off to prevent system instability.
esx.problem.vm.kill.unexpected.vmtrack | A VM is allocating too many pages while system is critically low in free memory. It is forcefully terminated to prevent system instability.
esx.problem.vmfs.ats.support.lost | Device Backing VMFS has lost ATS Support
esx.problem.vmfs.error.volume.is.locked | VMFS Locked By Remote Host
esx.problem.vmfs.extent.offline | Device backing an extent of a file system is offline.
esx.problem.vmfs.extent.online | Device backing an extent of a file system came online
esx.problem.vmfs.heartbeat.recovered | VMFS Volume Connectivity Restored
esx.problem.vmfs.heartbeat.timedout | VMFS Volume Connectivity Degraded
esx.problem.vmfs.heartbeat.unrecoverable | VMFS Volume Connectivity Lost
esx.problem.vmfs.journal.createfailed | No Space To Create VMFS Journal
esx.problem.vmfs.lock.corruptondisk | VMFS Lock Corruption Detected
esx.problem.vmfs.lock.corruptondisk.v2 | VMFS Lock Corruption Detected
esx.problem.vmfs.nfs.mount.connect.failed | Unable to connect to NFS server
esx.problem.vmfs.nfs.mount.limit.exceeded | NFS has reached the maximum number of supported volumes
esx.problem.vmfs.nfs.server.disconnect | Lost connection to NFS server
esx.problem.vmfs.nfs.server.restored | Restored connection to NFS server
esx.problem.vmfs.resource.corruptondisk | VMFS Resource Corruption Detected
esx.problem.vmsyslogd.remote.failure | Remote logging host has become unreachable.
esx.problem.vmsyslogd.storage.failure | Logging to storage has failed.
esx.problem.vmsyslogd.storage.logdir.invalid | The configured log directory cannot be used. The default directory will be used instead.
esx.problem.vmsyslogd.unexpected | Log daemon has failed for an unexpected reason.
esx.problem.vpxa.core.dumped | Vpxa crashed and a core file was created.
hbr.primary.AppQuiescedDeltaCompletedEvent | Application consistent delta completed.
hbr.primary.ConnectionRestoredToHbrServerEvent | Connection to VR Server restored.
hbr.primary.DeltaAbortedEvent | Delta aborted.
hbr.primary.DeltaCompletedEvent | Delta completed.
hbr.primary.DeltaStartedEvent | Delta started.
hbr.primary.FSQuiescedDeltaCompletedEvent | File system consistent delta completed.
hbr.primary.FSQuiescedSnapshot | Application quiescing failed during replication.
hbr.primary.FailedToStartDeltaEvent | Failed to start delta.
hbr.primary.FailedToStartSyncEvent | Failed to start full sync.
hbr.primary.HostLicenseFailedEvent | vSphere Replication is not licensed, replication is disabled.
hbr.primary.InvalidDiskReplicationConfigurationEvent | Disk replication configuration is invalid.
hbr.primary.InvalidVmReplicationConfigurationEvent | Virtual machine replication configuration is invalid.
hbr.primary.NoConnectionToHbrServerEvent | No connection to VR Server.
hbr.primary.NoProgressWithHbrServerEvent | VR Server error: {[email protected]}
hbr.primary.QuiesceNotSupported | Quiescing is not supported for this virtual machine.
hbr.primary.SyncCompletedEvent | Full sync completed.
hbr.primary.SyncStartedEvent | Full sync started.
hbr.primary.SystemPausedReplication | System has paused replication.
hbr.primary.UnquiescedDeltaCompletedEvent | Delta completed.
hbr.primary.UnquiescedSnapshot | Unable to quiesce the guest.
hbr.primary.VmLicenseFailedEvent | vSphere Replication is not licensed, replication is disabled.
hbr.primary.VmReplicationConfigurationChangedEvent | Replication configuration changed.
vim.event.LicenseDowngradedEvent | License downgrade
vim.event.SystemSwapInaccessible | System swap inaccessible
vim.event.UnsupportedHardwareVersionEvent | This virtual machine uses hardware version {version} which is no longer supported. Upgrade is recommended.
vprob.net.connectivity.lost | Lost Network Connectivity
vprob.net.e1000.tso6.notsupported | No IPv6 TSO support
vprob.net.migrate.bindtovmk | Invalid vmknic specified in /Migrate/Vmknic
vprob.net.proxyswitch.port.unavailable | Virtual NIC connection to switch failed
vprob.net.redundancy.degraded | Network Redundancy Degraded
vprob.net.redundancy.lost | Lost Network Redundancy
vprob.scsi.device.thinprov.atquota | Thin Provisioned Device Nearing Capacity
vprob.storage.connectivity.lost | Lost Storage Connectivity
vprob.storage.redundancy.degraded | Degraded Storage Path Redundancy
vprob.storage.redundancy.lost | Lost Storage Path Redundancy
vprob.vmfs.error.volume.is.locked | VMFS Locked By Remote Host
vprob.vmfs.extent.offline | Device backing an extent of a file system is offline.
vprob.vmfs.extent.online | Device backing an extent of a file system is online.
vprob.vmfs.heartbeat.recovered | VMFS Volume Connectivity Restored
vprob.vmfs.heartbeat.timedout | VMFS Volume Connectivity Degraded
vprob.vmfs.heartbeat.unrecoverable | VMFS Volume Connectivity Lost
vprob.vmfs.journal.createfailed | No Space To Create VMFS Journal
vprob.vmfs.lock.corruptondisk | VMFS Lock Corruption Detected
vprob.vmfs.nfs.server.disconnect | Lost connection to NFS server
vprob.vmfs.nfs.server.restored | Restored connection to NFS server
vprob.vmfs.resource.corruptondisk | VMFS Resource Corruption Detected
