vSphere vCenter HTML5 Web Client buttons and icons all wrong after patching

Recently we patched our vSphere vCenter 6.5 to the latest version, and after the upgrade we noticed that most of the icons looked wrong and some of the buttons were off.

At first my heart skipped a beat, since all the green icons were suddenly red. On closer look we saw that the status was actually OK but the icons were wrong. I’m not going to list all the things we tried to fix the issue, but eventually we figured out the solution – clear the Chrome cache. After clearing the cache the HTML5 GUI was “green” again.
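For anyone hitting the same thing: a quick way to test whether it is a caching issue is a hard reload with Ctrl+Shift+R (or Ctrl+F5) on the vSphere Client tab, or clearing “Cached images and files” via Ctrl+Shift+Del. If the icons turn green again, the culprit was stale cached UI assets.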

VMware ESXi HA State Uninitialized after exiting maintenance mode

Some time ago I had an issue with an ESXi 6.5 host. After taking the host out of maintenance mode, its HA state was “Uninitialized”, even though it had been OK before entering maintenance mode. That host was running build 10884925, while all the other hosts, including the HA master, were running build 11925212. I tried “Reconfigure for vSphere HA” – it didn’t help. After upgrading the host to build 11925212, HA worked again. My colleague also saw a similar issue on another cluster with mixed ESXi builds.
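If you want to check for mixed builds and re-trigger the HA agent configuration without clicking through the UI, a rough PowerCLI sketch could look like this (the cluster and host names are placeholders for your own environment):

# list ESXi version and build number for every host in the cluster
Get-Cluster -Name "Cluster01" | Get-VMHost | Select-Object Name, Version, Build

# equivalent of the "Reconfigure for vSphere HA" action in the UI
(Get-VMHost -Name "esx01.lab.local").ExtensionData.ReconfigureHostForDAS()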

Dell software repository for VMware Update Manager

I have written in the past about HPE software and driver repositories – Automatically download HP drivers to VMware Update Manager and New URLs for HP(E) Online Depot for VMware.

There is also a similar repository for Dell – https://vmwaredepot.dell.com/index.xml. After adding it to Update Manager as a download source, Update Manager will download additional Dell software – Dell EMC OpenManage Server Administrator and Dell EMC iDRAC Service Module.
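Adding the repository is a one-time configuration in Update Manager. Roughly (menu names differ slightly between vSphere versions, so treat this as a sketch): go to Update Manager > Settings > Download Settings (Patch Setup in newer versions), add a new download source with the URL https://vmwaredepot.dell.com/index.xml, and trigger a patch download afterwards.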

We have found the Dell EMC iDRAC Service Module very useful. It provides OS information to iDRAC and also installs binaries which you can use to reset iDRAC from ESXi if it becomes unresponsive (command: /opt/dell/srvadmin/iSM/bin/Invoke-iDRACHardReset -f).
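To check whether the iDRAC Service Module VIB is actually installed on a host before you need it, something like this PowerCLI sketch should work (the host name is a placeholder):

# list installed VIBs and filter for the Dell iDRAC Service Module
$esxcli = Get-EsxCli -VMHost "esx01.lab.local" -V2
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -match "ism" } | Select-Object Name, Version, Vendor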

VMware Workstation VMs do not work with Windows Server 2019 deduplicated volumes

I have been running some of my home lab VMs on top of Windows Server 2012 R2 using VMware Workstation for years now. Over the holidays I decided to set up a test system with Windows Server 2019 and VMware Workstation 15. During the testing I discovered that some of the VMs were randomly dying with a crash message in the event log.

Faulting application name: vmware-vmx.exe, version: 15.0.2.40550, time stamp: 0x5bf5251a
Faulting module name: vmware-vmx.exe, version: 15.0.2.40550, time stamp: 0x5bf5251a
Exception code: 0xc0000005
Fault offset: 0x000000000049db36
Faulting process id: 0x2a9c
Faulting application start time: 0x01d48a67966e2ea4
Faulting application path: C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe
Faulting module path: C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe
Report Id: 7dd5aa8c-2f51-471d-b9e4-e04d0caf099b
Faulting package full name:
Faulting package-relative application ID: 

All these VMs were located on disks with data deduplication enabled. One of the disks used ReFS and the other NTFS. The deduplication settings on Windows Server 2019 and Windows Server 2012 R2 were identical, and the issue does not occur on Windows Server 2012 R2.

I also tested the other deduplication usage types (General purpose file server and Virtualized Backup Server) and the result was the same: VMs on those deduplicated volumes crashed.
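For reference, this is roughly how to inspect the deduplication configuration in PowerShell, plus one possible workaround I have not tested – excluding the Workstation VM folders from deduplication (the drive letter and path are placeholders):

# show dedup usage type and savings per volume (requires the Data Deduplication feature)
Get-DedupVolume | Select-Object Volume, Enabled, UsageType, SavedSpace

# untested workaround: exclude the folder holding the Workstation VMs
Set-DedupVolume -Volume "D:" -ExcludeFolder "D:\VMs"

Note that files which are already deduplicated stay optimized until an unoptimization job is run on the volume.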

Nested ESXi 6.7 fails to boot

Recently I moved some of my nested ESXi 6.7 VMs from a standalone host to a 3-node vSAN cluster. The standalone host runs ESXi 6.7 on an HPE DL380 G7 with a Xeon 5600 series CPU; the vSAN cluster runs similar hardware. I moved the ESXi 6.7 VMs while they were powered off. When I powered them on, they failed to boot with the following error message:

VMB: 553:
Unsupported CPU: Intel family 0x06, model 0x25, stepping 0x1
Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
See http://www.vmware.com/resources/compatibility

CPU unsupported

The only difference between the standalone host and the cluster hosts was that I had EVC mode enabled on the cluster – Intel® “Westmere” Generation. After I disabled EVC mode, the nested ESXi 6.7 VMs booted without any issues.
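If you prefer PowerCLI over the UI for checking and changing EVC, a minimal sketch looks like this (the cluster name is a placeholder; I did it through the UI myself):

# show the current EVC mode of the cluster
Get-Cluster -Name "vSAN-Cluster" | Select-Object Name, EVCMode

# disable EVC on the cluster; per the PowerCLI docs, $null turns EVC off
Set-Cluster -Cluster "vSAN-Cluster" -EVCMode $null -Confirm:$false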

Home lab update – 2018

For the past couple of months I have been working on updating and upgrading my home lab.

My lab now includes:

A 3-node vSAN cluster with HPE DL360 G7 SFF, HPE DL380 G7 SFF and HPE DL380 G7 LFF servers.
A standalone ESXi host on an HPE DL380 G7 running vCenter 6.7 U1 and other supporting services.
A standalone Windows Server on an HPE DL380 Gen8 running VMs in VMware Workstation and acting as a file server.

The network is currently 1G; I'm planning to upgrade to 10G in the future.

Some things I discovered while building the lab:
An HPE DL380 G7 LFF with the HP P410i controller also accepts 8TB disks, even though the HPE QuickSpecs only list disks up to 4TB.
An HPE DL380 Gen8 also works with DDR3 16GB 1066MHz quad-rank RDIMM memory modules. I was able to install 128GB per CPU, although the operating frequency was reduced to 800MHz.

Excessive amount of “addVob” logs in syslog.log on ESXi 6.5 (Updated 08.01.2019)

Last week we noticed that the amount of log data our ESXi 6.5 servers were sending to the syslog server had increased significantly. After some analysis we saw that almost 60% of the entries came from syslog.log, with lines like this – “2018-11-15T15:37:18Z addVob[297777]:<message>”

We have opened a case with VMware Support, but no help so far. I will update the post as the support case progresses.

Someone else has had a similar issue: https://communities.vmware.com/message/2817143

Update 18.11.2018

The following lines repeat in syslog.log every 5 seconds.

2018-11-15T21:09:42Z addVob[316208]: Could not expand environment variable HOME.
2018-11-15T21:09:42Z addVob[316208]: Could not expand environment variable HOME.
2018-11-15T21:09:42Z addVob[316208]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.
2018-11-15T21:09:42Z addVob[316208]: DictionaryLoad: Cannot open file "~/.vmware/config": No such file or directory.
2018-11-15T21:09:42Z addVob[316208]: DictionaryLoad: Cannot open file "~/.vmware/preferences": No such file or directory.
2018-11-15T21:09:42Z addVob[316208]: Wrong number of arguments for vob vob.user.vmsyslogd.remote.failure (got 2; expected 1)
2018-11-15T21:09:42Z addVob[316208]: Usage: /usr/lib/vmware/vob/bin/addvob vob-id [vob-args]
2018-11-15T21:09:42Z addVob[316208]: Recognized vobs:
2018-11-15T21:09:42Z addVob[316208]: vob.user.shell.enabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.shell.disabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.ssh.enabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.ssh.disabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.dcui.enabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.dcui.disabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.lockdownmode.enabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.lockdownmode.disabled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.account.loginfailures (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.account.locked (3 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.maintenancemode.entering (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.maintenancemode.canceled (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.maintenancemode.entered (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.maintenancemode.exited (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.maintenancemode.failed (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.vmsyslogd.remote.failure (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.vmsyslogd.storage.failure (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.vmsyslogd.unexpected (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.vmsyslogd.storage.logdir.invalid (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.hostacceptance.changed (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.install.invalidhardware (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.install.securityalert (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.install.stage.error (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.install.error (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.vib.install.successful (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.vib.remove.successful (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.install.novalidation (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.profile.install.successful (3 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esximage.profile.update.successful (3 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.dhclient.lease.none (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.dhclient.lease.offered.noexpiry (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.ntpd.clock.correction.error (2 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.host.boot (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.host.stop.reboot (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.host.stop.shutdown (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esxcli.host.reboot (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.esxcli.host.poweroff (1 arg)
2018-11-15T21:09:42Z addVob[316208]: vob.user.host.coredump (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.coredump.unconfigured (0 args)
2018-11-15T21:09:42Z addVob[316208]: vob.user.coredump.copyspace (1 arg)

VMware Support claims this is normal.

Update 19.11.2018

Among the repeating log entries there is the message “2018-11-15T21:09:42Z addVob[316208]: Wrong number of arguments for vob vob.user.vmsyslogd.remote.failure (got 2; expected 1)”. This line got me thinking: we have two syslog targets configured. I removed one of them, and as soon as I did, the spamming in syslog.log stopped. As soon as I added the second syslog target back, it started again.
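To see and change the configured syslog targets with PowerCLI, something along these lines should work (the host and syslog server names are placeholders):

# show the configured remote syslog targets (comma-separated list)
Get-VMHost -Name "esx01.lab.local" | Get-AdvancedSetting -Name "Syslog.global.logHost" | Select-Object Entity, Value

# drop back to a single syslog target
Get-VMHost -Name "esx01.lab.local" | Get-AdvancedSetting -Name "Syslog.global.logHost" | Set-AdvancedSetting -Value "udp://syslog01.lab.local:514" -Confirm:$false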

Update 08.01.2019

After upgrading some ESXi 6.5 hosts to build 10884925, the addVob spamming in syslog.log has stopped. An ESXi host still on build 10390116 continues to spam the log file.