Warning: Number of cores per socket cannot be greater than number of virtual CPUs

Recently I saw a couple of VMs that were giving me a warning – “Number of cores per socket cannot be greater than number of virtual CPUs”.

This happens when the number of vCPUs is set to a smaller value than the number of cores per socket. In my case a developer had used the API to set the number of vCPUs to 2 and the number of cores per socket to 4 – he mistakenly assumed that the number of vCPUs was actually the number of sockets. After correcting the number of vCPUs to 8 the warning disappeared.
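
For anyone hitting the same warning, a quick PowerCLI check can list VMs where the cores-per-socket value exceeds the vCPU count – a minimal sketch, assuming an existing vCenter connection; the VM name in the fix is a placeholder:

# List VMs whose cores-per-socket setting is larger than their vCPU count
Get-VM |
    Select-Object Name, NumCpu, @{N='CoresPerSocket';E={$_.ExtensionData.Config.Hardware.NumCoresPerSocket}} |
    Where-Object { $_.CoresPerSocket -gt $_.NumCpu }

# Fix by raising the vCPU count to match (the VM generally needs to be powered off)
Set-VM -VM "VM01" -NumCpu 8 -Confirm:$false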

Active Directory groups not available in new vSphere HTML5 UI

I discovered an issue with my vSphere 6.5 (build 5973321) when trying to delegate permissions via the new HTML5 UI – when I search for an Active Directory group, nothing is found. The same operation in the old Flash-based UI successfully found the group. I also tried with the latest vSphere build 7119157 – the issue exists in that version as well. The Active Directory authentication source is configured as “Active Directory (Windows Integrated Authentication)”.

As the old UI works, I’ll be opening a support case sometime in the new year to confirm the issue with VMware.

05.01.2018 Update: According to VMware support the HTML5 GUI is not yet fully supported and this type of issue may occur. It will be fixed once the HTML5 GUI is fully supported.
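
As a possible workaround until then, the same permission can also be assigned from PowerCLI instead of the UI – a rough sketch, where the entity, group and role names are only placeholders:

# Grant an AD group a role on an inventory object, bypassing the HTML5 UI search
$entity = Get-Datacenter -Name "DC01"
$role   = Get-VIRole -Name "CustomVMAdmin"
New-VIPermission -Entity $entity -Principal "DOMAIN\VM-Admins" -Role $role -Propagate:$true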

Incompatible device backing specified for device ’13’

I was doing some Shared Nothing Live Migrations between two VMware clusters (version 5.5) and I was getting the following error at 25% of the migration – “Incompatible device backing specified for device ’13′”. Searching the internet pointed towards issues with the network adapter, but in this case the network adapter was not the cause.

The issue in this case was a raw device mapping (RDM) that had a different LUN ID in the destination cluster.

vMotion between the clusters worked for the VM once the datastore was made visible to all the hosts. Storage vMotion in the destination cluster did not work – it failed with the same error.

The solution for me was to present the destination datastore to the original hosts, perform a Storage vMotion in the original location and then perform a vMotion to the destination cluster.
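
A rough PowerCLI equivalent of that two-step approach (the VM, datastore and cluster names are placeholders), assuming the destination datastore has already been presented to the original hosts:

$vm = Get-VM -Name "VM01"
# Step 1: Storage vMotion onto the destination datastore while staying in the original cluster
Move-VM -VM $vm -Datastore (Get-Datastore -Name "Destination-DS") -Confirm:$false
# Step 2: compute vMotion to a host in the destination cluster
Move-VM -VM $vm -Destination (Get-Cluster -Name "Destination-Cluster" | Get-VMHost | Select-Object -First 1) -Confirm:$false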

Another solution I tested (a rough PowerCLI sketch follows the list):

  • Shut down the VM
  • Remove the RDM
  • Perform migration to destination cluster
  • Reattach the RDM
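
The offline variant above could look roughly like this in PowerCLI – a sketch only, where the VM name, cluster name and RDM device path are placeholders, and the raw LUN must of course be visible to the destination hosts:

$vm = Get-VM -Name "VM01"
Shutdown-VMGuest -VM $vm -Confirm:$false                        # wait for the guest to power off
$rdm = Get-HardDisk -VM $vm -DiskType RawPhysical,RawVirtual    # find the RDM disk(s)
Remove-HardDisk -HardDisk $rdm -Confirm:$false                  # removes the mapping from the VM, not the LUN data
Move-VM -VM $vm -Destination (Get-Cluster -Name "Destination-Cluster" | Get-VMHost | Select-Object -First 1)
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.xxxxxxxxxxxx"   # reattach the RDM
Start-VM -VM $vm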

Snapshot fails for VM with running Docker container

Recently I noticed some Linux VM backups were failing and sometimes even crashing with the following errors:
An error occurred while taking a snapshot: msg.snapshot.error-QUIESCINGERROR.
An error occurred while saving the snapshot: msg.snapshot.error-QUIESCINGERROR.

On closer inspection another error was visible in the hostd.log file – “Error when enabling the sync provider”.

All of these VMs had one thing in common – they were running Docker containers.
I was not able to figure out why it happened, but I did find a workaround – disable the VMware Sync driver.

The following is copied from the Veritas KB article – https://www.veritas.com/support/en_US/article.000021419

Steps to Disable VMware vmsync driver
To prevent the vmsync driver from being called during the quiesce phase of a VMware snapshot, edit the VMware Tools configuration file as follows:

1) Open a console session to the Redhat Linux virtual machine.
2) Navigate to the /etc/vmware-tools directory
3) Using a text editor, modify the tools.conf file with the following entry

[vmbackup]
enableSyncDriver = false

Note: If the tools.conf file does not exist, create a new empty file and add the above parameters.
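
After making the change (and restarting the VMware Tools service inside the guest), a quiesced test snapshot from PowerCLI is a quick way to confirm that the workaround took effect – a minimal sketch with a placeholder VM name:

# Take and remove a quiesced test snapshot to verify quiescing now succeeds
$vm = Get-VM -Name "docker-vm01"
New-Snapshot -VM $vm -Name "quiesce-test" -Quiesce
Get-Snapshot -VM $vm -Name "quiesce-test" | Remove-Snapshot -Confirm:$false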

ESXi will not resume sending syslog messages when the destination has been down for some time

Recently I was playing with ESXi syslog and Logstash + Graylog. For some reason the Logstash instance died. After restarting Logstash, only some ESXi hosts resumed sending logs. A quick Google search revealed that this is a known issue and the solution is to reload syslog on the affected hosts. After running the following script in PowerCLI against my vCenter, log sending resumed.

# Reload syslog on every host so that it reconnects to the remote log destination
$hosts = Get-VMHost
foreach ($vihost in $hosts) {
    $esxcli = Get-VMHost $vihost | Get-EsxCli
    $esxcli.system.syslog.reload()
}
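
To confirm that the hosts picked the remote destination back up after the reload, the current syslog configuration can be queried through the same esxcli object (a sketch with a placeholder host name):

# Show the current syslog configuration, including the remote host setting
$esxcli = Get-VMHost -Name "esx01.example.local" | Get-EsxCli
$esxcli.system.syslog.config.get()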

Good information about PowerCLI and ESXCLI:
http://www.virten.net/2016/11/how-to-use-esxcli-v2-commands-in-powercli/
http://www.virten.net/2014/02/howto-use-esxcli-in-powercli/

Firmware update fails on HPE server when Serial Number and Product ID are missing

Recently I was having issues updating an HPE ProLiant BL460c G7 with the latest SPP (2016.10). The firmware update just stopped at Step 1, and the HPE custom ESXi ISO failed to work as well.

After some digging around I discovered that the server’s Serial Number and Product ID were missing. I went into the BIOS, filled in the correct Serial Number and Product ID, and after that the firmware update worked and I was also able to install the HPE custom ESXi.
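
If the blade is already running ESXi, the missing values can also be spotted from the command line before digging into the BIOS – a small sketch via PowerCLI and esxcli, where the host name is a placeholder:

# Shows Product Name, Vendor Name and Serial Number as reported by the hardware platform
$esxcli = Get-VMHost -Name "blade01.example.local" | Get-EsxCli
$esxcli.hardware.platform.get()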

I suspect that the Serial Number and Product ID were lost when this blade server was removed from one Virtual Connect infrastructure and placed into another.