vSphere Pluggable Storage Architecture


Introduction

Storage design is one of the most critical parts of the planning phase of any IT system.
It becomes even more crucial when you need to achieve high availability, performance, and scalability for demanding business-critical applications.

As you know, the latest vSphere releases support several types of shared storage protocols (block-level: FC, FCoE, iSCSI; file-level: NFS 3.0/4.1), and features such as Storage Distributed Resource Scheduler help balance I/O within datastore clusters and mitigate many performance and availability issues (HA+DRS, for example).

Needless to say, vSphere cannot solve every problem in your virtual infrastructure on its own. If you have no storage network redundancy, vSphere can do nothing about it. That is why it is so important to understand multipathing.

Multipathing is a technique that lets you use more than one physical path to transfer data between the host and an external storage device. The term is most commonly used for block-level storage, but it can be applied to file-level storage as well (where it works slightly differently than for block devices). In case of a failure of any element in the SAN, such as an adapter, switch, or cable, ESXi can switch to another physical path that does not use the failed component.

Prior to vSphere 4, there was a fixed list of failover and multipathing policies (updated only with each new vSphere release) that described how the vSphere storage stack should deal with multiple paths. That remained the case until a new, flexible architecture called the Pluggable Storage Architecture (PSA) was introduced in vSphere 4. PSA is still a key element of the storage stack in vSphere 6.

The PSA has a modular architecture: it coordinates the work of the native multipathing plugin and also allows third-party multipathing plugins (MPPs) to be added, so vendors can implement their own failover and load-balancing mechanisms for particular storage arrays. It is also a required component for VAAI (vSphere Storage APIs for Array Integration) or, simply put, for offloading storage-related operations from ESXi hosts to the arrays (see the value in the Hardware Acceleration column in the properties of each storage device).
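If you prefer the command line over the GUI for this check, the VAAI (hardware acceleration) status of every device can be queried with the following esxcli command, shown here in its local ESXi Shell form (the same namespace is also reachable remotely with the connection options used in the examples below):

esxcli storage core device vaai status get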

Pluggable Storage Architecture vSphere

As shown in the picture above, the PSA is built from four plugin modules and performs the following operations:

  • Manage physical path claiming and unclaiming (see the claim-rule example after this list).
  • Manage creation, registration, and deregistration of logical devices.
  • Associate physical paths with logical devices.
  • Support path failure detection and remediation.
  • Process I/O requests to logical devices:
      • Select an optimal physical path for the request.
      • Depending on the storage device, perform specific actions necessary to handle path failures and I/O command retries.
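To see how path claiming is controlled in practice, you can list the current claim rules, which decide whether a given path is claimed by the NMP or by a third-party MPP. From the ESXi Shell (or remotely with the vCLI connection options shown later):

esxcli storage core claimrule list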

VMware NMP, the Native Multipathing Plugin, works together with its Storage Array Type Plugin (SATP) and Path Selection Plugin (PSP) sub-plugins.

  • SATP handles path failover for a given storage array and determines the failover type for that storage.
  • PSP determines which physical path is used to issue an I/O request to a storage device.
  • SATPs and PSPs can be built in and provided by VMware, or provided by third parties.

A third-party multipathing plugin (MPP) adds enhanced multipathing to vSphere and replaces the NMP and its sub-modules (PSP/SATP).

  • The most common example of an MPP is EMC PowerPath (also available for Windows and WSFC).
  • It is a full package that you need to install manually (via vCLI/esxcli) on each ESXi host or deploy with VMware Update Manager (a generic installation sketch follows this list).
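For reference, a vendor MPP such as PowerPath is normally delivered as a VIB/offline bundle. A generic example of a manual installation from the ESXi Shell is shown below; the bundle path is only a placeholder, so follow the vendor's documentation for the exact package name and any maintenance-mode or reboot requirements:

esxcli software vib install -d /path/to/vendor-mpp-offline-bundle.zip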

PSP Modules

To get a list of the PSP modules, use one of the following:

vSphere CLI or ESXi Shell:

esxcli.exe -d <certificate thumbprint> -s <vCenter FQDN> --vihost <ESXi FQDN> --username <username> storage nmp psp list


PowerCLI:

$esx=Get-EsxCli -VMHost esxi1.democorp.ru
$esx.storage.nmp.psp.list()


GUI:

vSphere Web Client – Hosts and Clusters – ESXi host – Manage – Storage – Storage Devices


By default, there are three PSPs:

  • Most Recently Used Path Selection (VMW_PSP_MRU).
    It's straightforward: this PSP keeps using the most recently used path while it is active. If that path becomes unavailable, the host switches to another available path.
    It is the default for active-passive storage arrays and for ALUA-capable arrays.
  • Round Robin Path Selection (VMW_PSP_RR). Enables basic load balancing by distributing I/O across all active paths in rotating order.
  • Fixed (VMW_PSP_FIXED). Uses the configured preferred path as the active one or, if none is set, the first working path discovered at boot time.
    It is the default for most active-active storage arrays.

Depending on the type of storage you have, vSphere selects the appropriate PSP for each device. In my case, Fixed was selected by default. If required, you can also override the default policy.
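As an illustration, the policy can be overridden per device, or the default PSP assigned by a particular SATP can be changed. The device identifier and the SATP name below are placeholders, and Round Robin is used only as an example; check your array vendor's recommendation before changing policies:

esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR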

SATP Modules

To get a list of the SATP modules, use one of the following:

vSphere CLI or ESXi Shell:

esxcli.exe -d <certificate thumbprint> -s <vCenter FQDN> --vihost <ESXi FQDN> --username <username> storage nmp satp list


PowerCLI:

$esx=Get-EsxCli -VMHost esxi1.democorp.ru
$esx.storage.nmp.satp.list()

In my environment, only two SATP modules were loaded. Which SATPs are loaded depends on the type of storage arrays connected to your ESXi host (the PSA identifies each array via a SCSI inquiry and loads the appropriate SATP for it).
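To see the claim rules that map specific vendors and array models to each SATP (and therefore which SATP a newly presented array would get), you can also run, from the ESXi Shell or with the vCLI connection options used above:

esxcli storage nmp satp rule list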


GUI (to get the active SATP policy for a specific device):


NMP Devices

To get a list of NMP devices presented to the host, along with their array types and policies, use one of the following:

vSphere CLI or ESXi Shell:

esxcli.exe -d <certificate thumbprint> -s <vCenter FQDN> --vihost <ESXi FQDN> --username <username> storage nmp device list

PowerCLI:

$esx=Get-EsxCli -VMHost esxi1.democorp.ru
$esx.storage.nmp.device.list()


GUI:

vSphere Web Client – Hosts and Clusters – ESXi host – Manage – Storage – Storage Devices
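If you need to drill down from a device to its individual paths, the nmp namespace also provides a path listing; the device identifier below is only a placeholder:

esxcli storage nmp path list --device naa.xxxxxxxxxxxxxxxx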


