Recently I was having issues updating an HPE ProLiant BL460c G7 with the latest SPP (2016.10). The firmware update just stopped at Step 1, and the HPE custom ESXi ISO also failed to work.
After some digging around I discovered that the server Serial Number and Product ID were missing. I went into the BIOS, filled in the correct Serial Number and Product ID, and after that the firmware update worked and I was also able to install the HPE custom ESXi image.
I suspect that the Serial Number and Product ID were lost when this blade server was removed from one Virtual Connect infrastructure and placed into another.
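If you would rather check this from a running host than reboot into the BIOS, `esxcli hardware platform get` prints the serial number ESXi sees. A minimal sketch — the captured output below is hypothetical and shortened; on an affected blade the field comes back blank:

```shell
# On the host you would run: esxcli hardware platform get
# Hypothetical captured output from an affected blade (Serial Number blank):
sample='Platform Information
   UUID: 0x30 0x33 0x32 0x33
   Product Name: ProLiant BL460c G7
   Vendor Name: HP
   Serial Number: '
# Flag the blade if the Serial Number field is empty
printf '%s\n' "$sample" | grep -q '^[[:space:]]*Serial Number:[[:space:]]*$' \
  && echo "Serial Number missing - fix it in the BIOS"
```

The same grep works against real `esxcli` output if you pipe it in instead of the sample.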
Recently we were applying SPP 2016.04 to some of our blade servers. After the upgrade one of the servers had no network. From the ESXi console everything looked OK. Tried a cold boot – nothing. Tried a downgrade of the Emulex CNA firmware – nothing. Tried the latest Emulex firmware again – nothing. Finally I turned off the server, went to VCEM (Virtual Connect Enterprise Manager) and edited the faulty profile by just clicking edit and then saving the profile again. Powered up the server and now everything was OK. I guess the firmware update somehow damaged the profile, and re-applying the profile through VCEM fixed it.
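In a case like this, the first thing worth capturing is the link state as ESXi sees it: `esxcli network nic list` shows admin and link status per vmnic. A sketch against a hypothetical two-NIC excerpt (the driver name and exact column layout are assumptions — check your own host's output):

```shell
# On the host you would run: esxcli network nic list
# Hypothetical excerpt: Name, PCI address, Driver, Admin Status, Link Status, ...
sample='vmnic0  0000:02:00.0  elxnet  Up  Up    10000  Full
vmnic1  0000:02:00.1  elxnet  Up  Down  0      Half'
# Report any NIC whose link is down despite being administratively up
printf '%s\n' "$sample" | awk '$4 == "Up" && $5 == "Down" {print $1, "link down"}'
```

A vmnic that is administratively up but shows link down, while the CNA firmware checks out, is a hint that the problem sits upstream — here, in the Virtual Connect profile.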
HP (now HPE) has made new URLs available for the HPE Online Depot for VMware Update Manager. The old URLs still work, but in the future they might stop working.
New URLs:
HP Management Software – http://vibsdepot.hpe.com/hpq/hpq-index.xml
HP Drivers – http://vibsdepot.hpe.com/hpq/hpq-index-drv.xml
Old URLs:
HP Management Software – http://vibsdepot.hpe.com/index.xml
HP Drivers – http://vibsdepot.hpe.com/index-drv.xml
They have changed from hp.com to hpe.com.
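The rename follows a simple pattern: the domain changes and an /hpq/ path prefix is added. A sketch of the mapping, assuming the old hp.com-style URLs used before the rebranding:

```shell
# Old-style depot URLs (pre-HPE rebranding)
old_urls='http://vibsdepot.hp.com/index.xml
http://vibsdepot.hp.com/index-drv.xml'
# Rewrite domain and path to the new hpe.com form
printf '%s\n' "$old_urls" | sed 's#vibsdepot\.hp\.com/index#vibsdepot.hpe.com/hpq/hpq-index#'
```

This prints the two new URLs listed above, so existing VUM download sources can be updated mechanically.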
Automatically download HP drivers to VMware Update Manager
HP has made 64GB memory modules available for its servers – HP 64GB (1x64GB) Quad Rank x4 DDR4-2133 CAS-15-15-15 Load Reduced Memory Kit – 726724-B21. The list price in the HP Simple Configurator is USD 4,999; for comparison, the 32GB module lists at USD 899. With the 64GB module available it is now possible to configure a ProLiant BL460c Gen9 with 1TB of RAM and a ProLiant DL380 Gen9 with 1.5TB of RAM.
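Those maximums follow directly from the DIMM slot counts — 16 slots in the BL460c Gen9 and 24 in the DL380 Gen9, per the quick specs (double-check your exact model):

```shell
# Max RAM = DIMM slots x largest module size (64GB)
echo "BL460c Gen9: $((16 * 64)) GB"   # 1024 GB = 1TB
echo "DL380 Gen9:  $((24 * 64)) GB"   # 1536 GB = 1.5TB
```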
8 March update: the 536FLB with bnx2fc driver version 1.710.70.v55.3 and VPLEX seems to be working.
In short: the HP FlexFabric 10Gb 2-port 536FLB Adapter for the HP BL460c Gen9 and EMC VPLEX in a VMware environment do not seem to work together. All paths appear as “dead”.
According to the quick specs, the HP FlexFabric 10Gb 2-port 536FLB Adapter uses the QLogic 57840S chip. ESXi identifies this card as a Broadcom NetXtreme II BCM57840. Most likely the chip is provided by Broadcom, as both companies have entered into an ASIC partnership.
An EMC KB article (000192744, version 2) covers Broadcom 57810 series chips, but the symptoms seem to be the same with the 57840 chip. The article says this issue is caused by a non-standard SCSI command opcode.
VMware support matrix for connectivity to VPLEX
To avoid unpleasant surprises always check multiple support matrices!
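Two quick checks from the ESXi shell help confirm you are hitting this: `esxcli software vib list | grep bnx2fc` shows the installed driver version, and `esxcli storage core path list` shows the state of each path. A sketch that greps a hypothetical captured excerpt of the path listing:

```shell
# On the host you would run: esxcli storage core path list | grep -i state
# Hypothetical captured excerpt for two VPLEX paths on an affected host:
sample='   State: dead
   State: dead'
# Count dead paths; on an affected host every VPLEX path shows as dead
printf '%s\n' "$sample" | grep -c 'State: dead'
```

If the count equals the total number of VPLEX paths, compare your driver version against the 8 March update above before digging further.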
Recently, during VMware Flash Read Cache testing, I noticed that the drivers on my ESXi hosts (HP BL460c blades) were not up to date. After looking into it I discovered that not all drivers were downloaded to VUM automatically from the HP vibsdepot.
Until then I had only added the source http://vibsdepot.hp.com/index.xml to VUM, which provided me with updates to the HP management tools for ESXi.
To get the latest drivers automatically I added a new URL to VUM – http://vibsdepot.hp.com/index-drv.xml
More information can be found at – http://vibsdepot.hp.com/
I recently needed to modify the VMware Update Manager (VUM) host reboot timeout to allow host firmware patching during reboot without the remediation job timing out. Most VUM settings are in the vci-integrity.xml file located in “C:\Program Files (x86)\VMware\Infrastructure\Update Manager“.
The lines I needed to change were the host reboot timeout settings; I increased their values to leave enough time for the firmware update to finish during the reboot.
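As an illustration only — the element names below are placeholders, not verified against the product, so take the real names from your own vci-integrity.xml — the edit amounts to raising the reboot/wait timeout values (in seconds) and then restarting the VMware Update Manager service:

```xml
<!-- vci-integrity.xml (excerpt) - element names are hypothetical placeholders;
     use the actual timeout elements found in your own file -->
<hostRebootTimeout>1800</hostRebootTimeout>
<hostWaitTimeout>1800</hostWaitTimeout>
```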
This change allowed me to patch an ESXi host (HP blade server) and install new firmware within the same reboot, with as few operations as possible.
My workflow was the following:
1) Open iLO consoles for all servers in the cluster I needed to patch
2) Mount the HP firmware ISO to all servers
3) Scan host with VUM
4) Remediate hosts with VUM
5) Do something useful while the hosts are being patched and the firmware is being updated.
With this workflow VUM puts the ESXi host into maintenance mode, installs some patches, and reboots; the host boots from the firmware DVD, the new firmware is installed automatically, the host is rebooted again, and it boots back into ESXi. After that VUM repeats the same tasks on all the other servers.