VCSA 6.5: The mysterious dependency on the IPv6 protocol – Part 1

Starting with vSphere 4.1, VMware introduced IPv6 support to its virtual platform. It is enabled by default in the vCenter Server Appliance and can be controlled in VCSA 6.0 and 6.5 from the Direct Console User Interface (Customize System > Configure Management Network > IPv6 Configuration).
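As a side note, one quick way to confirm the kernel's IPv6 state from the appliance shell is the sysctl below. This is a sketch assuming a standard Linux kernel; the DCUI setting may be persisted elsewhere as well.

```shell
# Reads 1 when IPv6 is disabled, 0 when enabled (standard Linux sysctl);
# the fallback message covers kernels built without IPv6 support
sysctl net.ipv6.conf.all.disable_ipv6 2>/dev/null \
  || echo "IPv6 support not present in this kernel"
```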

IPv6-Issue-01

To my surprise, disabling IPv6 can cause some problems with the VCSA updates. I will explain this statement and provide a workaround in the paragraphs below.

Imagine your security team requires IPv6 to be turned off on vCenter Server. Following this request, you proceed with the configuration change in the DCUI.

IPv6-Issue-02

After rebooting the virtual machine, everything should still work fine. Now it is time to update the virtual appliance to a newer version. You download a patch file, attach it to the VM, and start the update process from the VMware vSphere Appliance Management Interface.

When the server reboots, you will notice that the Appliance Management User Interface is no longer accessible. To troubleshoot this issue further, we need to open an SSH session to the appliance and enable Shell mode.

First, we use the netstat command to check whether any service is listening on TCP port 5480. The command output shows nothing.
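The check looks roughly like this (a sketch; exact flag support varies between netstat builds):

```shell
# List listening TCP sockets and filter for the Appliance MUI port;
# on a broken appliance the grep matches nothing
netstat -tlnp 2>/dev/null | grep 5480 \
  || echo "nothing listening on port 5480"
```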

IPv6-Issue-03

The next step is to identify the service that provides the Appliance MUI and check its current status. Fortunately, I noticed an error message related to the problem while the operating system was booting up.

IPv6-Issue-04

Querying the vami-lighttp.service status shows the following results.

IPv6-Issue-05

So a duplicate server.use-ipv6 parameter in the configuration file was causing this behaviour. To find the file, I used a combination of the rpm and egrep commands to filter the output.

IPv6-Issue-06

A quick search in /opt/vmware/etc/lighttpd/lighttpd.conf shows that there are two identical lines with IPv6 settings as follows:

IPv6-Issue-07

To fix the issue, I removed one of the duplicate lines, started vami-lighttp.service, and verified that the service works as expected.
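The clean-up can be sketched as follows. The commands run against a scratch copy so they are safe to try anywhere; on the appliance the real file is /opt/vmware/etc/lighttpd/lighttpd.conf, and you should back it up before editing. The parameter names other than server.use-ipv6 are made up for illustration.

```shell
# Scratch file standing in for /opt/vmware/etc/lighttpd/lighttpd.conf
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
server.port = 5480
server.use-ipv6 = "enable"
server.username = "lighttpd"
server.use-ipv6 = "enable"
EOF

# lighttpd aborts on start when a parameter is declared twice;
# confirm the duplicate first
grep -c '^server.use-ipv6' "$CONF"        # prints 2

# keep only the first occurrence of every line, dropping the duplicate
awk '!seen[$0]++' "$CONF" > "$CONF.fixed"
grep -c '^server.use-ipv6' "$CONF.fixed"  # prints 1
```

On the appliance, replace the original file with the fixed copy and start vami-lighttp.service again (e.g. via systemctl).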

IPv6-Issue-08

To be continued…

vSphere 6.0: Available storage for /storage/log reached warning threshold – less than 30 % available space

If you run a vCenter Server Appliance with an External Platform Services Controller, you might notice a warning message in the Services Health area under Administration -> System Configuration -> Summary.

The VMware Syslog Service reports a warning message as soon as /storage/log has less than 30 percent of free space, similar to the picture below.

syslog-service-issue-01

syslog-service-issue-02

The problem appears to be the size of the VMDK backing the /storage/log mount point. On a PSC, it has a default size of 5 GB and quickly fills up with SSO log files.

syslog-service-issue-03

VMware offers two possible solutions to resolve this issue:

1. Decrease the amount of retained log data by modifying the log4j.properties files.

2. Extend the virtual disk that backs /storage/log, as described in VMware KB 2126276.

The second option is preferable, as it eliminates the need to monitor changes in the log4j.properties file after a system update. However, the commands in VMware KB 2126276 do not apply to the Platform Services Controller appliance: it doesn't have the vpxd_servicecfg script that automates the volume extension.
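For completeness, the first option boils down to tightening the rotation settings in the SSO log4j.properties files. A typical log4j 1.x rolling appender is tuned with the two properties below; the appender name LOGFILE is a placeholder, so use the names found in the actual files:

```properties
# Cap each log file at 5 MB and keep at most 5 rotated copies
log4j.appender.LOGFILE.MaxFileSize=5MB
log4j.appender.LOGFILE.MaxBackupIndex=5
```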

Fortunately, Florian Grehl has documented a workaround for the PSC, which requires us to extend VMDK 5 using the vSphere Web Client and execute the following commands in an SSH session on the affected server:

1. Rescan the SCSI Bus to make Linux aware of the resized virtual disk

# rescan-scsi-bus.sh -w --forcerescan

2. Resize the Physical Volume (which in turn grows the Volume Group) using the disk device from the table above

# pvresize /dev/sde

3. Resize the Logical Volume by using the name from the table above

# lvresize --resizefs -l +100%FREE /dev/log_vg/log

After completing the commands and verifying the new volume size, we should restart the VMware Syslog Service to refresh its state. This can be done from the same SSH session or from the vSphere Web Client.
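The 30% threshold itself is easy to check from the shell. The helper below prints the percentage of free space for any mount point; it is shown against / so it runs anywhere, and on the appliance you would substitute /storage/log:

```shell
# Print the free-space percentage of a mount point, derived from the
# "Use%" column of POSIX df output
free_pct() {
  df -P "$1" | awk 'NR == 2 { sub(/%/, "", $5); print 100 - $5 }'
}
free_pct /
```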

syslog-service-issue-04

And this is how things are back to normal 🙂

vSphere 6.5 GA: VMware-VMRC.exe – Failed to install hcmon driver.

After upgrading the vCenter Server Appliance to version 6.5, I needed to install a new version of VMware Remote Console 9.0 on my Windows 10 machine.

vmware-vmrc-install

VMware-VMRC.msi was downloaded from the vCenter Server, and I initiated its installation.

vmware-vmrc-download

To my surprise, the task ended with the error message below.

vmware-vmrc-error

I immediately searched VMware's site for an explanation and found KB 2130850. However, the workaround provided did not apply: I didn't have the vSphere Client installed on the computer.

Quickly checking the list of installed VMware products, I was able to identify the package causing the problem: the VMware Remote Console Plug-in 5.1 from the previous version of vSphere, which prevented the installer from doing its job. Completely removing this old piece of software resolved the issue in my environment. Easy-peasy!