vCenter 6.7 Update 2: Error in creating a backup schedule

One of the improvements in vCenter 6.7 Update 2 is SMB protocol support for the built-in File-Based Backup and Restore. Excited about the news, I decided to test this functionality and back up data to a Windows share.

I filled in the backup schedule parameters in the vCenter Server Appliance Management Interface (VAMI) and pressed the Create button, only to see the following error message: Error in method invocation module 'util.Messages' has no attribute 'ScheduleLocationDoesNotExist'.

Puzzled by this message and not knowing which log file to inspect, I ran the following command in a local console session on the vCenter Server Appliance (VCSA):

grep -i 'ScheduleLocationDoesNotExist' $(find /var/log/vmware/ -type f -name '*.log')

The search results led me to /var/log/vmware/applmgmt/applmgmt.log where I found another clue:

2019-04-30T01:25:24.111 [2476]ERROR:vmware.appliance.backup_restore.schedule_impl:Failed to mount the cifs share // at /storage/remote/backup/cifs/; Err: rc=32, stdOut:, stdErr: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
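
Before digging further, it can help to test the mount manually from the VCSA Bash shell to separate protocol problems from credential problems. A minimal sketch, using a hypothetical share //fileserver/vcsa-backup and user backupuser (adjust the names to your environment):

mkdir -p /tmp/cifs-test
mount -t cifs //fileserver/vcsa-backup /tmp/cifs-test -o username=backupuser,vers=2.1
umount /tmp/cifs-test

A successful manual mount would rule out basic connectivity and share permission problems.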

At first, after some reading, I thought it was related to the SMB protocol version or the wrong security type for the server. So I decided to look for any security events on the file server.

In Windows Event Log, I saw the following:

After double-checking the NTFS and share permissions for the network share, I was confident that the user had permissions to access it and write data into it.

Out of ideas, I went through the official documentation and a few blog posts to see if I had missed something. What struck me was that none of them referenced the domain name in the backup server credentials of the Create Backup Schedule wizard, neither in UPN format nor as a sAMAccountName.

It was easy to test whether omitting the domain name would make any difference, and it did! The backup job worked like a charm and completed successfully.

“The device cannot start. (Code 10)” for Microsoft ISATAP and Microsoft Teredo Tunneling adapters

I was checking the system settings of one of the Windows Server 2008 R2 virtual machines that had recently been provisioned from a template when I ran into this issue.

Both Microsoft ISATAP Adapter and Microsoft Teredo Tunneling Adapter had warning icons in the Device Manager.



Even though this is a minor obstacle, I prefer to resolve any problems with the operating system before installing and configuring applications.

After searching the Microsoft website, I came across a forum thread where a user named Dork Man pointed to the DisabledComponents registry value under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\TCPIP6\Parameters registry key.

In my case, it had been set to a value of 0xfffffff for some reason.
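
The current value can be checked quickly from an elevated PowerShell prompt, for example:

Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' -Name DisabledComponents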



I found Microsoft KB 929852, which explains this parameter. In that article, Microsoft states the following:

…system startup will be delayed for 5 seconds if IPv6 is disabled by incorrectly setting the DisabledComponents registry setting to a value of 0xfffffff.

Microsoft supports only the following values to configure the IPv6 protocol:

  • 0 – re-enables all IPv6 components (Windows default setting)
  • 0xff – disables all IPv6 components except the IPv6 loopback interface
  • 0x20 – makes IPv4 preferable over IPv6 by changing entries in the prefix policy table
  • 0x10 – disables IPv6 on all non-tunnel interfaces (both LAN and PPP)
  • 0x01 – disables IPv6 on all tunnel interfaces
  • 0x11 – disables all IPv6 interfaces except for the IPv6 loopback interface.

I didn't have any specific requirements for this setting, so changing the DisabledComponents registry value to 0 and rebooting the server resolved the problem completely.
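
For reference, the same change can be made from an elevated PowerShell prompt instead of the Registry Editor; a minimal sketch (the reboot is still required):

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' -Name DisabledComponents -Value 0
Restart-Computer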


Recently, one of my friends was testing a greenfield vSphere environment and came across an issue with Storage vMotion being slow. It took almost an hour to copy a VM with a 510 GB VMDK (thick provisioned, eager zeroed) between two LUNs on the same physical array.


In this case, it was EMC VNX5200 with the following firmware versions:

  • OE for Block –
  • OE for File – 8.1.9-155.

The multipathing policy was left at the storage defaults, with the SATP set to VMW_SATP_ALUA_CX and the PSP set to VMW_PSP_RR.
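
These settings can be confirmed from the ESXi shell, for example:

esxcli storage nmp device list

The output lists the Storage Array Type (SATP) and Path Selection Policy (PSP) for each device.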

According to the EMC and VMware HCLs, this array should offload XCOPY operations via the VAAI feature in ESXi 6.0.
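
Whether the hosts actually see the LUNs as VAAI-capable can be checked from the ESXi shell as well, e.g.:

esxcli storage core device vaai status get

XCOPY corresponds to the Clone Status reported for each device.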

The ESXi hosts were connected through an 8Gb SAN, all with firmware and driver versions supported by VMware.

More interestingly, he noticed that the hosts had been logging warning messages such as the following:

Device naa. performance has deteriorated. I/O latency increased from average value of XXXXX microseconds to XXXXXX microsecond

VMware KB article 2007236 states that the possible root causes for this behaviour are changes made on the target, disk or media failures, overload conditions on the device, and failover. The storage system had not reported any hardware failures in the past, so most probably it was the result of a misconfiguration or a software fault.
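
To get a feel for how often the hosts were logging these warnings, the vmkernel log can be searched directly; a simple sketch from the ESXi shell:

grep -i 'performance has deteriorated' /var/log/vmkernel.log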

A quick search on the Internet led me to a blog post by Neal Dolson, published in 2014, that described a similar problem. Using the same methodology as the author, we got the same results.


Esxtop showed high storage device command latency and constant path switches between vmhba3 and vmhba4.
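
For anyone reproducing this, the disk device view in esxtop (press u) exposes the DAVG/cmd and KAVG/cmd counters, and a batch capture can be taken for later analysis, e.g.:

esxtop -b -d 5 -n 120 > esxtop-capture.csv

The output file name here is just an example; the resulting CSV can be reviewed in Windows Performance Monitor or a spreadsheet.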


On the storage side, the response time in Unisphere Analyzer went up from a few milliseconds to 850-900 milliseconds. In the graph above, VMFS_05 is the LUN from which the data was being migrated.

Neal's article suggested contacting the vendor and upgrading the storage firmware. EMC released a fix for this particular problem in a release of VNX OE for Block (page 17 of the document); however, it applies only to the first generation of VNX:

VNX5100, VNX5150, VNX5300, VNX-VSS100, VNX5500, VNX5700, VNX7500


Frequency of occurrence: Always under a specific set of circumstances

Tracking number:

Slow performance was seen on a storage system when running VMware ESX operations that use the VAAI (vStorage APIs for Array Integration) data move primitive (xcopy), such as cloning virtual machines or templates, migrating virtual machines with storage vmotion, and deploying virtual machines from template.

This software has multiple enhancements to improve latency, as well as new code efficiencies to greatly improve cloning and vmotion.

KnowledgeBase ID:

Fixed in version:

I looked at the latest release notes for VNX Operating Environment for Block for VNX5200, and couldn’t find similar information there.

As a workaround, we disabled the DataMover.HardwareAcceleratedMove option in the Advanced System Settings on all hosts using this simple PowerCLI command:

Get-VMHost | Get-AdvancedSetting -Name DataMover.HardwareAcceleratedMove | Set-AdvancedSetting -Value 0

This change is not destructive and can be done online (even if you have Storage vMotion running).
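
To confirm the change across all hosts, or to revert it later once a firmware fix is in place, the same cmdlets can be reused, e.g.:

Get-VMHost | Get-AdvancedSetting -Name DataMover.HardwareAcceleratedMove | Select-Object Entity, Value
Get-VMHost | Get-AdvancedSetting -Name DataMover.HardwareAcceleratedMove | Set-AdvancedSetting -Value 1 -Confirm:$false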

The next step is to log a case with VMware and wait for the resolution.

If you had a similar problem, feel free to share your experience in the comments.

I will keep updating this post when more information is available.

23/09/2016 – Update 1: VMware GSS confirmed that the system had been configured correctly and suggested contacting the storage vendor about the matter.

16/03/2017 – Update 2: A workaround for this issue is to follow the recommendation from EMC and increase the value of the DataMover.MaxHwTransferSize parameter to 16384 on each host connected to the LUN.
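
Applied with PowerCLI in the same way as the earlier setting, this would look roughly like the following (the value is specified in KB):

Get-VMHost | Get-AdvancedSetting -Name DataMover.MaxHwTransferSize | Set-AdvancedSetting -Value 16384 -Confirm:$false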