VMFleet: The specified network password is not correct

When you set up VMFleet to run S2D stress and performance tests, you have to provide values for the connectuser and connectpass parameters, which describe the user credentials for the loopback host connection, as shown in the example:

.\create-vmfleet.ps1 -basevhd 'C:\ClusterStorage\Collect\VMFleet-Gold_disk_1.vhdx' -vms 10 -adminpass 'Pass123' -connectuser 'rllab\rlevchenko' -connectpass 'RTY$nTyK@2y9'

These credentials are then injected into each VMFleet VM (lines 162 and 163 of create-vmfleet.ps1):

del -Force z:\users\administrator\launch.ps1 -ErrorAction SilentlyContinue
gc C:\ClusterStorage\collect\control\launch-template.ps1 |% { $_ -replace '__CONNECTUSER__',$using:connectuser -replace '__CONNECTPASS__',$using:connectpass } > z:\users\administrator\launch.ps1

Once VMFleet has deployed the required VMs, you usually start them with start-vmfleet.ps1, and VMFleet then automatically tries to run the injected launch.ps1 mentioned above:

$script = 'c:\run\master.ps1'

while ($true) {
    Write-Host -fore Green Launching $script `@ $(Get-Date)
    & $script -connectuser __CONNECTUSER__ -connectpass __CONNECTPASS__
    sleep -Seconds 1
}

As a result, master.ps1 is executed at autologon by each VM using the provided connectuser and connectpass parameters. master.ps1 establishes an SMB mapping to the location containing the Run Script (a standalone load script), and this is the stage where you may run into the problem: wrong credentials.

If you, like me, use special characters in the password (for example, $ or @), you need to pass the password in this format: '''password'''. Otherwise, the VMs won't create the SMB mapping and the master.ps1 script fails. The reason is that the connectpass and connectuser values are pasted into launch.ps1 without any quoting or escaping, so PowerShell can cut off your password. So the correct command to create the VMFleet VMs, in my example, looks as follows:

.\create-vmfleet.ps1 -basevhd 'C:\ClusterStorage\Collect\VMFleet-Gold_disk_1.vhdx' -vms 10 -adminpass 'Pass123' -connectuser 'rllab\rlevchenko' -connectpass '''RTY$nTyK@2y9'''
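
To see why the extra quotes are needed, here is a minimal sketch (my own illustration, not VMFleet code) that simulates the template replacement from create-vmfleet.ps1 and prints the line that ends up in launch.ps1:

#A minimal sketch of how the password lands in launch.ps1. Without the extra
#quotes the value is injected bare, so $nTyK is later expanded as an (empty)
#variable when launch.ps1 runs and the password gets cut off:
$connectpass = 'RTY$nTyK@2y9'
'& $script -connectpass __CONNECTPASS__' -replace '__CONNECTPASS__',$connectpass
#Result: & $script -connectpass RTY$nTyK@2y9

#With the ''' ... ''' format the literal single quotes survive the -replace,
#so launch.ps1 receives a quoted literal and nothing is expanded:
$connectpass = '''RTY$nTyK@2y9'''
'& $script -connectpass __CONNECTPASS__' -replace '__CONNECTPASS__',$connectpass
#Result: & $script -connectpass 'RTY$nTyK@2y9'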

I'd also recommend changing ErrorAction to Stop in master.ps1 (lines 45-49) before creating the VMs, to simplify troubleshooting in case of any errors:

$null = net use l: /d

if ($(Get-SmbMapping) -eq $null) {
    New-SmbMapping -LocalPath l: -RemotePath \\169.254.1.1\c$\clusterstorage\collect\control -UserName $connectuser -Password $connectpass -ErrorAction Stop
}
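
To verify that the injected credentials actually work, you can connect to one of the fleet VMs and check the mapping directly (a quick sanity check of my own, not part of VMFleet):

#Run inside a fleet VM: if the loopback credentials were injected correctly,
#the control share should be mapped and the status should be OK
Get-SmbMapping | ft LocalPath,RemotePath,Status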

S2D: disks don’t show up in the disk manager

When you configure an S2D cluster with autoconfig=yes or through VMM, S2D creates a storage pool from all disks whose CanPool parameter is True. For example, if you have 2 SSDs and 4 HDDs targeted at S2D and another 2 HDDs for system partitions (you plan to use mirrored volumes, for example), they will all be added to the S2D pool (except the system disk, of course, as it cannot be pooled). Physical disks that are part of any pool, including the Primordial pool, are not shown in Disk Management (diskmgmt.msc) once they have been claimed by the S2D subsystem. In that case, to make a disk available in diskmgmt again, you need to remove it from the pool, repair the virtual disks (if any exist), and unclaim it from S2D. Use the steps below to get your disk back from the pool:

#Get all available disks 

Get-PhysicalDisk|ft FriendlyName,SerialNumber,CanPool

#Variables for the disk you want to remove and for the S2D pool

$disk=Get-PhysicalDisk -SerialNumber disk_serial_number
$pool=Get-StoragePool S2D*

#Set disk usage to Retired

$disk|Set-PhysicalDisk -Usage Retired

#Repair Virtual Volumes in the pool

Get-VirtualDisk | ft -AutoSize
Repair-VirtualDisk -FriendlyName "vdisk_name"
Get-StorageJob

#Remove the disk from the pool.
#The disk will be added to the Primordial pool
#...but not be shown in the diskmgmt

Remove-PhysicalDisk -PhysicalDisks $disk -StoragePool $pool

#Tell S2D cluster to not use the removed disk for pool purposes
#....make disk available in the diskmgmt

Set-ClusterS2DDisk -CanBeClaimed 0 -PhysicalDiskIds disk_id

#Optimize pool
$pool | Optimize-StoragePool
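
Once these steps are done, you can check the disk again; after it has been unclaimed it should report CanPool as True and reappear in diskmgmt:

#Check the removed disk again (disk_serial_number is the same placeholder as above);
#after unclaiming it should be poolable again and visible in diskmgmt
Get-PhysicalDisk -SerialNumber disk_serial_number | ft FriendlyName,CanPool,OperationalStatus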


If the disk was removed from the pool improperly, it may end up with the following status:

{Removing from Pool, Unrecognized Metadata}

Try resetting the physical disk; it should then show up in diskmgmt as well.

Get-PhysicalDisk | ? OperationalStatus -eq "Unrecognized Metadata" | Reset-PhysicalDisk


P.S. If you add a new server to the S2D cluster, its drives will be automatically added to the pool by default. You can change this behavior by setting the AutoPool setting to False:

Get-StorageSubSystem Clu* | Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled" -Value False
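
You can read the setting back afterwards to confirm the change (a sketch; the setting name is the same one used above):

#Confirm the current value of the AutoPool setting
$ss = Get-StorageSubSystem Clu*
Get-StorageHealthSetting -InputObject $ss -Name "System.Storage.PhysicalDisk.AutoPool.Enabled"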
