HUAWEI SAN Storage Host Connectivity Guide for VMware ESXi Servers

Multipath Connectivity

UltraPath

UltraPath is multipathing software developed by Huawei. It manages disk creation and deletion and processes I/O delivery for the operating system.

UltraPath provides the following functions:

  • Masking of redundant LUNs

    In a redundant storage network, an application server without multipathing software detects the same LUN once on each path, so a LUN mapped over multiple paths is mistaken for two or more different LUNs. UltraPath, installed on the application server, masks the redundant LUNs at the operating system driver layer and presents only one available LUN, the virtual LUN, to the application server. The application server then only needs to deliver data read and write operations to UltraPath, which masks the redundant LUNs and writes the data into the LUN correctly without damaging other data.

  • Optimum path selection

    In a multipath environment, the owning controller of a LUN mapped from the storage system to an application server is called the prior controller. With UltraPath, the application server accesses the LUN through the prior controller, thereby obtaining the highest I/O speed. The path to the prior controller is the optimum path.

  • Failover and failback
    • Failover

      When a path fails, UltraPath fails over its services to another functional path.

    • Failback

      UltraPath automatically delivers I/Os over the original (optimum) path again after that path recovers from the fault.

  • I/O load balancing

    UltraPath provides load balancing within a controller and across controllers (see the sketch after this list).

    • For load balancing within a controller, I/Os are distributed in turn across all the paths of that controller.
    • For load balancing across controllers, I/Os are distributed in turn across the paths of all these controllers.
  • Path test

    UltraPath tests the following paths:

    • Faulty paths

      UltraPath tests faulty paths at a high frequency to detect path recovery as soon as possible.

    • Idle paths

      UltraPath tests idle paths to identify faulty paths in advance, preventing unnecessary I/O retries. The test frequency is kept low to minimize impact on service I/Os.
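
The functions above can be pictured with a small, purely illustrative Python sketch. The names used here (Path, VirtualLun, next_path) are hypothetical and are not UltraPath interfaces; the sketch only models optimum-path selection, failover/failback, and round-robin load balancing within a controller as described in this list.

```python
# Illustrative model of the behavior described above.
# Path, VirtualLun, next_path, etc. are hypothetical names, not UltraPath APIs.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    controller: str      # controller this physical path leads to
    healthy: bool = True

@dataclass
class VirtualLun:
    owning_controller: str   # the LUN's prior (owning) controller
    paths: list

    def candidate_paths(self):
        # Optimum path selection: prefer healthy paths to the owning controller.
        optimum = [p for p in self.paths
                   if p.healthy and p.controller == self.owning_controller]
        if optimum:
            return optimum
        # Failover: fall back to any other healthy path.
        return [p for p in self.paths if p.healthy]

    def next_path(self):
        # Load balancing within a controller: round-robin over the candidates.
        candidates = self.candidate_paths()
        if not candidates:
            raise IOError("all paths are faulty")
        index = getattr(self, "_rr", 0) % len(candidates)
        self._rr = index + 1
        return candidates[index]

lun = VirtualLun("A", [Path("path0", "A"), Path("path1", "A"), Path("path2", "B")])
lun.paths[0].healthy = False      # fault: only one optimum path remains
print(lun.next_path().name)       # -> path1 (failover within controller A)
lun.paths[0].healthy = True       # failback: the recovered path is used again
print(lun.next_path().name)       # recovered path0 rejoins the round-robin rotation
```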

VMware NMP

Overview

VMware ESXi provides its own built-in multipathing software, the Native Multipathing Plugin (NMP), which is available without any extra configuration.

This section details the NMP multipathing software.

VMware PSA

Overview

VMware ESXi 4.0 introduced a new module, the Pluggable Storage Architecture (PSA), which can be integrated with a third-party Multipathing Plugin (MPP) or with the NMP and its storage-specific plug-ins, the Storage Array Type Plug-in (SATP) and the Path Selection Plug-in (PSP), thereby enabling optimal path selection and I/O performance.

Figure 2-6 VMware pluggable storage architecture

VMware NMP

NMP is the default multipathing module of VMware. This module provides two submodules to implement failover and load balancing; a conceptual sketch of their division of labor follows the lists below.

  • SATP: monitors path availability, reports path status to NMP, and implements failover.
  • PSP: selects optimal I/O paths.

PSA is compatible with the following third-party multipathing plugins:

  • Third-party SATP: Storage vendors can use the VMware API to customize SATPs for their storage features and optimize VMware path selection.
  • Third-party PSP: Storage vendors or third-party software vendors can use the VMware API to develop more sophisticated I/O load balancing algorithms and achieve larger throughput from multiple paths.
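
One way to picture the SATP/PSP division of labor is as a pair of plug-in interfaces. The classes below are hypothetical Python stand-ins (the real plug-ins are VMkernel modules written against VMware's C APIs); they only illustrate that the SATP tracks path state and performs failover while the PSP chooses the path for each I/O.

```python
# Hypothetical Python stand-ins for the SATP/PSP roles; the real VMware
# plug-in interfaces are VMkernel (C) APIs, not Python classes.
from abc import ABC, abstractmethod

class SATP(ABC):
    """Storage Array Type Plug-in: array-specific path state and failover."""
    @abstractmethod
    def path_state(self, path) -> str:        # e.g. "active", "standby", "dead"
        ...
    @abstractmethod
    def activate_paths(self, paths) -> None:  # fail over to another path group
        ...

class PSP(ABC):
    """Path Selection Plug-in: picks the physical path used for each I/O."""
    @abstractmethod
    def select_path(self, active_paths):
        ...

class RoundRobinPSP(PSP):
    def __init__(self):
        self._i = 0
    def select_path(self, active_paths):
        path = active_paths[self._i % len(active_paths)]
        self._i += 1
        return path

def issue_io(satp: SATP, psp: PSP, paths):
    # NMP-style flow: ask the SATP which paths are usable, let the PSP pick one.
    active = [p for p in paths if satp.path_state(p) == "active"]
    if not active:
        satp.activate_paths(paths)            # SATP performs the failover
        active = [p for p in paths if satp.path_state(p) == "active"]
    return psp.select_path(active)
```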

VMware Path Selection Policy

  • Built-in PSP

By default, the PSP of VMware ESXi 5.0 or later supports three I/O policies: Most Recently Used (MRU), Round Robin, and Fixed. VMware ESXi 4.1 supports an additional policy: Fixed AP.

  • Third-Party software

A third-party MPP provides comprehensive fault tolerance and performance handling, and runs at the same layer as the NMP. For some storage systems, a third-party MPP can replace the NMP to implement path failover and load balancing.

Functions and Features

To manage storage multipathing, ESX/ESXi uses a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operations of MPPs.

The VMkernel multipathing plugin that ESX/ESXi provides, by default, is VMware NMP. NMP is an extensible module that manages subplugins. There are two types of NMP plugins: SATPs and PSPs. Figure 2-7 shows the architecture of VMkernel.

Figure 2-7 VMkernel architecture

If more multipathing functionality is required, a third party can also provide an MPP to run in addition to, or as a replacement for, the default NMP. When coordinating with the VMware NMP and any installed third-party MPPs, PSA performs the following tasks (the sketch after this list illustrates how I/O is routed to the owning plug-in):

  • Loads and unloads multipathing plug-ins.
  • Hides virtual machine specifics from a particular plug-in.
  • Routes I/O requests for a specific logical device to the MPP managing that device.
  • Handles I/O queuing to the logical devices.
  • Implements logical device bandwidth sharing between virtual machines.
  • Handles I/O queuing to the physical storage HBAs.
  • Handles physical path discovery and removal.
  • Provides logical device and physical path I/O statistics.
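
The task list above can be read as a simple dispatch step: PSA keeps track of which plug-in has claimed each logical device and forwards every I/O to that plug-in. The sketch below is purely conceptual; claim_table, route_io, and submit are invented names used only for illustration.

```python
# Conceptual sketch only: claim_table and route_io are invented names,
# not VMware interfaces.
claim_table = {
    "naa.6001": "NMP",            # device claimed by the default NMP
    "naa.6002": "ThirdPartyMPP",  # device claimed by a third-party MPP
}

def route_io(device_id, io, plugins):
    """Forward an I/O request to the multipathing plug-in that owns the device."""
    mpp = plugins[claim_table[device_id]]
    return mpp.submit(device_id, io)   # the owning MPP handles queuing and paths
```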

VMware NMP Path Selection Policy

Policies and Differences

VMware supports the following path selection policies, as described in Table 2-1.

Table 2-1 Path selection policies

Policy/Controller | Active/Active | Active/Passive
----------------- | ------------- | --------------
Most Recently Used | Administrator action is required to fail back after path failure. | Administrator action is required to fail back after path failure.
Fixed | VMkernel resumes using the preferred path when connectivity is restored. | VMkernel attempts to resume using the preferred path. This can cause path thrashing or failure when another SP now owns the LUN.
Round Robin | The host uses an automatic path selection algorithm to ensure that I/Os are delivered to all active paths in turn. It does not switch back even after the faulty path recovers. | The host uses an automatic path selection algorithm to always select the next path in the RR scheduling queue, thereby ensuring that I/Os are delivered to all active paths in turn.
Fixed AP | For ALUA arrays, VMkernel picks the path set to be the preferred path. | For A/A, A/P, and ALUA arrays, VMkernel resumes using the preferred path, but only if the path-thrashing avoidance algorithm allows the failback.

Fixed AP is available only in VMware ESX/ESXi 4.1.

The following details each policy; a simplified sketch of the selection logic appears after the list.

  • Most Recently Used (VMW_PSP_MRU)

The host selects the path that it used most recently. When that path becomes unavailable, the host selects an alternative path. The host does not revert to the original path when it becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.

Working principle: uses the most recently used path for I/O transfer. When that path fails, I/O is automatically switched to another available path (if any). When the failed path recovers, I/O is not switched back to that path.

  • Round Robin (VMW_PSP_RR)

The host uses an automatic path selection algorithm that rotates through all available active paths to enable load balancing across the paths. Load balancing is the process of distributing host I/Os across all available paths; its purpose is to achieve the optimal throughput performance (IOPS, MB/s, and response time).

Working principle: uses all available paths for I/O transfer.

  • Fixed (VMW_PSP_FIXED)

The host always uses the preferred path to the disk when that path is available. If the host cannot access the disk through the preferred path, it tries the alternative paths. Fixed is the default policy for active-active storage devices. After the preferred path recovers from a fault, VMkernel resumes using it. This attempt may result in path thrashing or failure because another SP now owns the LUN.

Working principle: uses the fixed path for I/O transfer. When the current path fails, I/O is automatically switched to a random path among the multiple available paths (if any). When the original path recovers, I/O will be switched back to the original path.

  • Fixed AP (VMW_PSP_FIXED_AP)

This policy is only supported by VMware ESX/ESXi 4.1.x and is incorporated into VMW_PSP_FIXED in later ESXi versions.

Fixed AP extends the Fixed functionality to active-passive and ALUA mode arrays.
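
The failback differences between these policies can be condensed into a few lines of selection logic. The following sketch is a simplified model over a flat dictionary of healthy paths; it is not how the VMkernel implements VMW_PSP_MRU, VMW_PSP_RR, or VMW_PSP_FIXED, but it mirrors the behavior described above.

```python
# Simplified, illustrative models of the three built-in policies.
# `paths` maps a path name to its health; these are not VMkernel algorithms.

def mru_select(paths, last_used):
    # MRU: keep using the last path; if it failed, move to another healthy
    # path and stay there even after the old path recovers (no failback).
    if paths.get(last_used):
        return last_used
    return next(name for name, healthy in paths.items() if healthy)

def fixed_select(paths, preferred):
    # Fixed: return to the preferred path as soon as it is healthy again.
    if paths.get(preferred):
        return preferred
    return next(name for name, healthy in paths.items() if healthy)

class RoundRobin:
    # Round Robin: rotate across every healthy path for successive I/Os.
    def __init__(self):
        self._i = 0
    def select(self, paths):
        healthy = [name for name, ok in paths.items() if ok]
        path = healthy[self._i % len(healthy)]
        self._i += 1
        return path

paths = {"vmhba1:C0:T0:L1": True, "vmhba2:C0:T0:L1": True}
rr = RoundRobin()
print([rr.select(paths) for _ in range(4)])  # alternates between the two paths
```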

ALUA

  • ALUA definition:

    Asymmetric Logical Unit Access (ALUA) is a multi-target port access model. In a multipathing state, the ALUA model provides a way of presenting active/passive LUNs to a host and offers a port status switching interface to switch over the working controller. For example, when a host multipathing program that supports ALUA detects a port status change (the port becomes unavailable) on a faulty controller, the program will automatically switch subsequent I/Os to the other controller.

  • Support by Huawei storage:

    Old-version Huawei storage supports ALUA only in two-controller configuration, but not in multi-controller or HyperMetro configuration.

    New-version Huawei storage supports ALUA in two-controller, multi-controller, and HyperMetro configurations.

    Table 2-2 defines old- and new-version Huawei storage.

Table 2-2 Old- and new-version Huawei storage

Storage Type | Version
------------ | -------
Old-version Huawei storage | OceanStor T V1/T V2/18000 V1/V3 V300R001/V3 V300R002/V3 V300R003C00/V3 V300R003C10/V3 V300R005/Dorado V3 V300R001C00
New-version Huawei storage | OceanStor V5 V500R007C00 and later/V3 V300R003C20SPC200 and later/V3 V300R006C00SPC100 and later/Dorado V3 V300R001C01SPC100 and later

  • ALUA impacts

    ALUA mainly applies to storage systems in which each LUN has a single owning (working) controller. All host I/Os can be routed through different controllers to the working controller for execution. ALUA instructs the hosts to deliver I/Os preferentially through the LUN's working controller, thereby reducing the resources consumed on the non-working controllers by I/O forwarding.

    If all I/O paths of the LUN working controller are disconnected, the host I/Os will be delivered only from a non-working controller and then routed to the working controller for execution.

  • Suggestions for using ALUA on Huawei storage

    To prevent I/Os from being delivered to a non-working controller, you are advised to ensure that:

    • LUN home/working controllers are evenly distributed on storage systems so that host service I/Os are delivered to multiple controllers for load balancing.
    • Hosts always try their best to select the optimal path for delivering I/Os, even after an I/O path switchover (see the sketch below).
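
Under the model described in this section (each LUN has one working controller, and ALUA reports paths as active/optimized or active/non-optimized in SPC-3 terms), the host-side preference can be sketched as follows. The function and class names are hypothetical and only illustrate the "optimized first, non-optimized as fallback" behavior.

```python
# Illustrative ALUA-aware path choice: prefer active/optimized paths (ports on
# the LUN's working controller); use active/non-optimized paths only as a
# last resort. AluaPath and choose_path are hypothetical names.
from dataclasses import dataclass

@dataclass
class AluaPath:
    name: str
    tpg_state: str        # "active/optimized" or "active/non-optimized"
    healthy: bool = True

def choose_path(paths):
    optimized = [p for p in paths
                 if p.healthy and p.tpg_state == "active/optimized"]
    if optimized:
        return optimized[0]    # I/O goes straight to the working controller
    fallback = [p for p in paths if p.healthy]
    if not fallback:
        raise IOError("no usable path to the LUN")
    return fallback[0]         # I/O is forwarded via a non-working controller

paths = [AluaPath("vmhba1:C0:T0:L1", "active/optimized"),
         AluaPath("vmhba2:C0:T1:L1", "active/non-optimized")]
paths[0].healthy = False       # all optimized paths are down
print(choose_path(paths).name) # -> vmhba2:C0:T1:L1 (non-optimized fallback)
```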