S2D: disks don’t show up in the disk manager

When you configure an S2D cluster with autoconfig=yes or through VMM, S2D creates a storage pool from all disks whose CanPool parameter is True. For example, if you have 2 SSDs and 4 HDDs targeted at S2D and another 2 HDDs intended for system partitions (say, you plan to use mirrored volumes), they will all be added to the S2D pool (except the system disk, of course, as it cannot be pooled). Physical disks that are part of any pool, including the Primordial pool, are not shown in Disk Management (diskmgmt.msc) once they have been claimed by the S2D subsystem. To make such a disk available in diskmgmt again, you need to remove it from the pool, repair the virtual disks (if any exist) and unclaim it from S2D. Use the steps below to get your disk back from the pool:
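Before running them, you can quickly check which pool currently owns each disk; a read-only sketch (nothing below modifies the pool):

#List every physical disk per pool, including the Primordial pool
Get-StoragePool | ForEach-Object {
    $poolName = $_.FriendlyName
    $_ | Get-PhysicalDisk | Select-Object @{n='Pool';e={$poolName}}, FriendlyName, SerialNumber, CanPool
}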

#Get all available disks 

Get-PhysicalDisk|ft FriendlyName,SerialNumber,CanPool

#Variables for the disk that you want to remove and for the pool

$disk=Get-PhysicalDisk -SerialNumber disk_serial_number
$pool=Get-StoragePool S2D*

#Set disk usage to Retired

$disk|Set-PhysicalDisk -Usage Retired

#Repair virtual disks in the pool

Get-VirtualDisk | ft -AutoSize
Repair-VirtualDisk -FriendlyName "vdisk_name"
Get-StorageJob

#Remove the disk from the pool.
#The disk will be returned to the Primordial pool
#...but will still not be shown in diskmgmt

Remove-PhysicalDisk -PhysicalDisks $disk -StoragePool $pool

#Tell the S2D cluster not to claim the removed disk for pool purposes
#...and make the disk available in diskmgmt

Set-ClusterS2DDisk -CanBeClaimed 0 -PhysicalDiskIds disk_id

#Optimize the pool
$pool | Optimize-StoragePool


If a disk was removed from the pool improperly, it may show the following status:

{Removing from Pool, Unrecognized Metadata}

Try resetting the physical disk; after that the disk will also show up in diskmgmt.

Get-PhysicalDisk | ? OperationalStatus -eq "Unrecognized Metadata"|Reset-PhysicalDisk


P.S. If you add a new server to the S2D cluster, its drives will be added to the pool automatically by default. You can change this behavior by setting the AutoPool setting to False:

Get-StorageSubSystem Clu* | Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoPool.Enabled" -Value False


How to enable nested virtualization in Azure

We have already mentioned the new Azure VM series Dv3 and Ev3, which enable running VMs inside Azure VMs, also known as nested virtualization. Today we are going to configure it and run our first nested VM in Azure.

But before we start, let’s review some Dv3 and Ev3 facts:

  • they introduce Hyper-Threading Technology running on the Intel® Broadwell E5-2673 v4 2.3 GHz and Intel® Haswell E5-2673 v3 2.4 GHz processors
  • they shift from physical cores to virtual CPUs (thanks to HT technology) to support larger VM sizes
  • they are the first Azure VMs running on Windows Server 2016 hosts
  • Dv3 VMs scale up to 64 vCPUs and 256 GB RAM
  • Ev3 VMs scale up to 64 vCPUs and 432 GB RAM
  • they are currently available only in certain regions (West Europe, US East, US West 2, Asia Pacific Southeast)
  • they already come with ExposeVirtualizationExtensions enabled, so we don't need to enable CPU extensions as we do for on-premises WS2016 hosts (see the sketch after this list)
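For comparison, on an on-premises Windows Server 2016 host you have to expose the CPU virtualization extensions to a VM yourself before nesting works; a minimal sketch (the VM name is illustrative, and the VM must be turned off):

#Expose virtualization extensions to the guest VM (on-premises Hyper-V host only)
Set-VMProcessor -VMName "NestedHost01" -ExposeVirtualizationExtensions $true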

To get started with "nesting" you need to create one or more Dv3/Ev3 VMs in Azure in a compatible region. For quick demo purposes, I created a D2s_v3 VM with Windows Server 2016 Datacenter and a standard managed disk, with no data disks attached.
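If you prefer scripting to the portal, here is a rough sketch using the Az PowerShell module (the resource group, VM name, region and credentials are illustrative; pick a region where Dv3/Ev3 is available):

#Create a resource group and a D2s_v3 VM with Windows Server 2016 Datacenter
New-AzResourceGroup -Name "rg-nested-demo" -Location "westeurope"
New-AzVM -ResourceGroupName "rg-nested-demo" -Name "nestedhost01" -Location "westeurope" -Image "Win2016Datacenter" -Size "Standard_D2s_v3" -Credential (Get-Credential)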

TIP: you can, for instance, create 2 or more VMs, add data disks and configure Storage Spaces across them to achieve higher IO performance.

Then you need to install the Hyper-V role and restart the VM to apply the changes:

Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart


Verify that the Hyper-V role is installed and add an internal switch. A new adapter, "vEthernet (switchname)", will appear in the network connections list (ncpa.cpl).

Define a new IP address for this adapter (I'm using the 192.168.0.0/24 subnet). This network will be used as a NAT gateway to allow internet access from the nested VMs.

#Check Hyper-V role state
Get-WindowsFeature Hyper-V|ft InstallState, PostConfigurationNeeded

#Add new internal switch
New-VMSwitch -SwitchName "NSW01" -SwitchType Internal

# IP Configuration for vNIC
New-NetIPAddress -InterfaceAlias "vEthernet (NSW01)" -IPAddress 192.168.0.23 -PrefixLength 24


Configure a NAT rule to provide outbound internet access for our nested VMs:

New-NetNat -Name Nat_VM -InternalIPInterfaceAddressPrefix 192.168.0.0/24


Now our nested VMs can be assigned IP addresses from the 192.168.0.0/24 subnet (manual assignment). If you want dynamic IP assignment, create an additional VM and configure DHCP on it.
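To wrap up, here is a rough sketch of creating a nested VM attached to the NSW01 switch and giving it a static IP from the NAT subnet (the VM name, VHDX path and addresses are illustrative):

#Create and start a nested VM connected to the internal switch
New-VM -Name "NestedVM01" -MemoryStartupBytes 2GB -Generation 2 -VHDPath "C:\VMs\NestedVM01.vhdx" -SwitchName "NSW01"
Start-VM -Name "NestedVM01"

#Inside the nested VM: static IP from the NAT subnet, with the host vNIC (192.168.0.23) as the gateway
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.0.50 -PrefixLength 24 -DefaultGateway 192.168.0.23
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 8.8.8.8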