One of the improvements in vCenter 6.7 Update 2 is Samba (SMB) protocol support for the built-in File-Based Backup and Restore. Excited about the news, I decided to test this functionality and back up data to a Windows share.
I filled in the backup schedule parameters in the vCenter Server Appliance Management Interface (VAMI) and pressed the Create button, at which point the following error message appeared: ‘Error in method invocation module ‘util.Messages’ has no attribute ‘ScheduleLocationDoesNotExist’’.
Puzzled by this message and not knowing which log file to inspect, I ran the following command in a local console session on the vCenter Server Appliance (VCSA):
grep -i 'ScheduleLocationDoesNotExist' $(find /var/log/vmware/ -type f -name '*.log')
The search results led me to /var/log/vmware/applmgmt/applmgmt.log where I found another clue:
2019-04-30T01:25:24.111 ERROR:vmware.appliance.backup_restore.schedule_impl:Failed to mount the cifs share //fileserver.company.local/Archive/VMware at /storage/remote/backup/cifs/fileserver.company.local/D4Ji3vNM/fmuCEc6m; Err: rc=32, stdOut:, stdErr: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
At first, after some reading, I thought it was related to the SMB protocol version or the wrong security type for the server. So I decided to look for any security events on the file server.
In Windows Event Log, I saw the following:
After double-checking the NTFS and share permissions for the network share, I was confident that the user had permissions to access it and write data into it.
Out of ideas, I went through the official documentation and some blog posts to see if I was missing something. What struck me was that there were no references to the domain name, neither in UPN format nor as a sAMAccountName, in the backup server credentials in the Create Backup Schedule wizard.
It was easy to test whether skipping the domain name would make any difference, and it did! The backup job worked like a charm and completed successfully.
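When troubleshooting similar mount failures, it can help to reproduce the mount manually from the VCSA shell and look at the raw error. The commands below are an illustration only – the share path matches the log excerpt above, but the username, mount point, and SMB version are placeholders, and they must be run on the appliance itself:

```
# Attempt the CIFS mount by hand to see the raw mount.cifs error
mkdir -p /tmp/cifs-test
mount -t cifs //fileserver.company.local/Archive/VMware /tmp/cifs-test \
    -o username=backupuser,vers=2.1
# Clean up after the test
umount /tmp/cifs-test
```

If the manual mount succeeds with a plain username but fails with DOMAIN\username or a UPN, that points at the same credential-format issue described above.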
Over time almost every virtualisation specialist asks himself a simple question: ‘Do I need a home lab?’
In recent years, this topic has become more and more popular. Many top bloggers have written at least once about their experience building a home lab; some contribute even more to the community, providing scripted installs, OVA templates for nested virtualisation, and even drivers for unsupported devices.
There is plenty of choice in terms of hardware platforms and networking devices to build your lab (including Raspberry Pi), and the sky’s the limit.
My preference is an all-in-one solution, with the rest done in a nested environment. It should be relatively compact and quiet, with a minimum of wired connections to the router – an ideal option for someone living in an apartment.
As a result of my research, I bought a Lenovo P-series ThinkStation with two Intel Xeon CPUs and 80 GB of RAM a few years ago. Instead of using magnetic drives, I put in NVMe M.2 SSDs (used for the VMFS and vSAN datastores) and one USB flash drive for the ESXi boot partitions. The workstation has two onboard 1 Gbps network cards, and I added an additional quad-port 1 Gbps PCIe card to test different configurations (bonding, pass-through, etc.). All NICs are connected to the router, which provides access to the home network and to the Internet.
This platform is sufficient for setting up vSphere, vSAN, and vRealize Automation labs.
In a series of articles, I plan to show how to automate different parts of those labs. It all starts here with a scripted ESXi installation using a bootable USB flash drive.
To create the bootable media, we need to complete the following steps:
1. Format a USB flash drive to boot the ESXi installer.
2. Copy files from the ESXi ISO image to the USB flash drive.
3. Modify the configuration file SYSLINUX.CFG.
4. Modify the configuration file BOOT.CFG.
5. Create an answer file KS.CFG.
In the paragraphs below, I am going to discuss those steps in detail.
Step #1 – Format a USB flash drive
Depending on the operating system, the process can vary. The official VMware documentation details this task for Linux. To do it on a computer running macOS, I used the steps described in the blog posts here and here.
Firstly, we need to identify the USB disk using the diskutil list command. In my case, it is /dev/disk2.
Then, we erase that disk using the diskutil eraseDisk command:
diskutil eraseDisk FAT32 ESXIBOOT MBRFormat /dev/disk2
Started erase on disk2
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk2s1 as MS-DOS (FAT32) with name ESXIBOOT
512 bytes per physical sector
/dev/rdisk2s1: 7846912 sectors in 980864 FAT32 clusters (4096 bytes/cluster)
bps=512 spc=8 res=32 nft=2 mid=0xf8 spt=32 hds=255 hid=2048 drv=0x80 bsec=7862272 bspf=7664 rdcl=2 infs=1 bkbs=6
Mounting disk
Finished erase on disk2
It is important to choose the MBR format for the disk (the MBRFormat option). Otherwise, when you boot from this USB drive, the ESXi installer won't be able to copy data from that partition and will generate the following error message: ‘exception.HandledError: Error (see log for more info): cannot find kickstart file on usb with path — /KS.CFG.’
As a result, you will have one MS-DOS FAT32 partition, /dev/disk2s1. The next step is to mark it as active and bootable. First, unmount the volume:

diskutil unmount /dev/disk2s1
Volume ESXIBOOT on disk2s1 unmounted

With the volume unmounted, the partition's active flag can then be set with the fdisk utility (for example, sudo fdisk -e /dev/disk2, followed by the f 1 and write commands).
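Step #2 – Copy files from the ESXi ISO image

The installation files now need to be copied onto the USB drive. A minimal sketch on macOS, assuming an ISO filename and mounted volume name that are placeholders here (adjust both to match your image), might look like this:

```
# Mount the ESXi installer ISO (filename is an example)
hdiutil attach VMware-VMvisor-Installer-6.7.0.iso
# Copy the installer contents to the USB drive
cp -R /Volumes/ESXI/* /Volumes/ESXIBOOT/
# Detach the ISO image when done
hdiutil detach /Volumes/ESXI
```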
Step #3 – Modify SYSLINUX.CFG

Then we define the location of BOOT.CFG in the boot loader configuration (here ‘-p 1’ refers to /dev/disk2s1):

sed -e '/-c boot.cfg/s/$/ -p 1/' -i _BACK /Volumes/ESXIBOOT/SYSLINUX.CFG
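To sanity-check the expression before editing the real file, the same substitution can be tried against a sample line (the sample content below is illustrative, not taken from an actual image):

```shell
# Create a sample SYSLINUX.CFG fragment and apply the same substitution
printf 'APPEND -c boot.cfg\n' > /tmp/syslinux.cfg
sed '/-c boot.cfg/s/$/ -p 1/' /tmp/syslinux.cfg > /tmp/syslinux.cfg.new
cat /tmp/syslinux.cfg.new
# prints: APPEND -c boot.cfg -p 1
```

Note that the `-i _BACK` form used above is the BSD/macOS sed syntax for in-place editing with a backup suffix; GNU sed on Linux expects the suffix attached, as in `-i_BACK`.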
Step #4 – Modify BOOT.CFG
Now we need to add a path to the answer file (ks=usb:/KS.CFG) into the boot loader (BOOT.CFG).
However, there are two boot loaders available with the image – one for the BIOS boot, and another one for EFI.
find /Volumes/ESXIBOOT -type f -name 'BOOT.CFG'
/Volumes/ESXIBOOT/BOOT.CFG
/Volumes/ESXIBOOT/EFI/BOOT/BOOT.CFG
So it makes sense to edit both of them to eliminate any possible issues.
sed -e 's+cdromBoot+ks=usb:/KS.CFG+g' -i _BACK $(find /Volumes/ESXIBOOT -type f -name 'BOOT.CFG')
In the example above, I created a backup of the original BOOT.CFG files and replaced ‘cdromBoot’ with ‘ks=usb:/KS.CFG’ inside them.
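This substitution, too, can be verified first on a sample kernelopt line (the content below is illustrative rather than copied from a real image):

```shell
# Sample BOOT.CFG line and the same cdromBoot substitution
printf 'kernelopt=cdromBoot runweasel\n' > /tmp/boot.cfg
sed 's+cdromBoot+ks=usb:/KS.CFG+g' /tmp/boot.cfg > /tmp/boot.cfg.new
cat /tmp/boot.cfg.new
# prints: kernelopt=ks=usb:/KS.CFG runweasel
```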
Step #5 – Create KS.CFG
Finally, we can work on the answer file that will be used to automate the ESXi host installation.
In a basic scenario, the KS.CFG file should do the following:
Accept the VMware license agreement,
Set the root password,
Choose the installation path,
Configure the network settings,
Reboot the host after the installation is completed.
A best practice is to store the root password as a hash rather than in plain text. This can be done using OpenSSL:
openssl passwd -1
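Run interactively, openssl passwd -1 prompts for the password and prints an MD5-crypt hash. A non-interactive sketch is shown below; the salt and password are examples only, fixed here just to make the output repeatable (omit -salt to get a random one):

```shell
# Generate an MD5-crypt hash of an example password
HASH=$(openssl passwd -1 -salt mysalt 'VMware1!')
echo "$HASH"
# The result goes into KS.CFG as: rootpw --iscrypted <hash>
```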
To identify the installation path, I normally boot ESXi with a dummy installation script and then use the local console to search for the device names in /vmfs/devices/disks. A device name in the MPX format (mpx.vmhbaX:C0:T0:L0) is the preferable option.
A sample installation script is shown below.
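The sketch below covers the basic scenario described above. All values – the password, disk name, IP settings, and hostname – are placeholders, not the ones from my lab:

```
# Accept the VMware EULA
vmaccepteula
# Root password (use 'rootpw --iscrypted <hash>' with the OpenSSL hash in production)
rootpw VMware1!
# Install to a specific disk, overwriting any existing VMFS datastore on it
install --disk=mpx.vmhba32:C0:T0:L0 --overwritevmfs
# Static network configuration
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.1 --hostname=esxi01.lab.local --device=vmnic0
# Reboot once the installation is complete
reboot
```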
In the next post, I will show how to complete the initial server configuration using PowerCLI.
With the upcoming release of vSphere 6.7 Update 2, there will be an option to complete the whole migration using the vSphere Client – super easy!
Meanwhile, the process of moving from an external PSC deployment to an embedded one using the CLI consists of two manual steps – converge and decommission. Detailed instructions on how to prepare for and execute each of those steps are documented in David Stamen's post ‘Understanding the vCenter Server Converge Tool’.
What I found tricky was running the converge step when the external PSC had previously been joined to a child domain in Active Directory. In this case, the vCenter Server Converge Tool precheck, run with the default parameters, generates the following error message in vcsa-converge-cli.log:
2019-04-06 03:08:15,979 – vCSACliConvergeLogger – ERROR – AD Identity store present on the PSC:root.domain.com
2019-04-06 03:08:15,979 – vCSACliConvergeLogger – INFO – ================ [FAILED] Task: PrecheckSameDomainTask: Running PrecheckSameDomainTask execution failed at 03:08:15 ================
2019-04-06 03:08:15,980 – vCSACliConvergeLogger – DEBUG – Task ‘PrecheckSameDomainTask: Running PrecheckSameDomainTask’ execution failed because [ERROR: Template AD info not providded.], possible resolution is [Refer to the log for details]
2019-04-06 03:08:15,980 – vCSACliConvergeLogger – INFO – =============================================================
2019-04-06 03:08:16,104 – vCSACliConvergeLogger – ERROR – Error occurred. See logs for details.
2019-04-06 03:08:16,105 – vCSACliConvergeLogger – DEBUG – Error message: com.vmware.vcsa.installer.converge.prechecksamedomain: ERROR: Template AD info not providded.
In this example, root.domain.com refers to the root domain, whereas the computer object for the PSC is in the child domain.
To work around this issue, I had to use the --skip-domain-handling flag to skip the AD domain-related handling in both the precheck and the actual converge.
When doing this, the vCenter Server Appliance should be joined to the correct AD domain manually after the converge succeeds and before the external PSC is decommissioned.
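For reference, the converge tool is invoked from the vcsa-converge-cli directory of the installer ISO. The invocation below is an illustration only: the JSON template name is a placeholder, and the exact set of accompanying flags may differ between versions (check vcsa-util converge --help), but --skip-domain-handling is the flag discussed above:

```
# Run the converge with AD domain handling skipped (applies to precheck and converge)
vcsa-util converge --no-ssl-certificate-verification --skip-domain-handling converge.json
```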
In Part 1 of this series, I wrote about the most common issues that might prevent a successful migration to VMFS-6. There is another one to cover.
For ESXi hosts that boot from flash storage or from memory, a diagnostic core dump file can also be placed on a shared datastore. You won't be able to unmount this datastore without deleting the core dump first.
VMware recommends using the esxcli utility to view and edit the core dump settings. This can also be automated via PowerCLI.
To check if the core dump file exists and is active, please use the following code:
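A sketch using esxcli, run from an SSH or local console session on the host (the same namespace can be driven from PowerCLI via Get-EsxCli):

```
# List configured core dump files; the Active column shows which one is in use
esxcli system coredump file list
# Show the path of the currently configured core dump file
esxcli system coredump file get
```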
To delete an old configuration that points to the VMFS-5 datastore, the following script can help:
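With esxcli, this comes down to unconfiguring the current dump file and then removing it from the old datastore; the file path below is a placeholder, so substitute the path reported by the list command:

```
# Deactivate and unconfigure the current core dump file
esxcli system coredump file set --unconfigure
# Remove the dump file from the VMFS-5 datastore; --force deletes even an active file
esxcli system coredump file remove --file /vmfs/volumes/datastore1/vmkdump/dump.dumpfile --force
```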
With this change made, you will be able to continue migrating to VMFS-6 without any issues.
If you have any suggestions or concerns, feel free to share them in the comments below.