
Huawei SAN Storage Host Connectivity Guide for VMware ESXi

UltraPath

This section describes the operations on storage systems and hosts when UltraPath is used.

Storage System Configuration

If you use UltraPath, retain the default initiator settings. Do not select Use third-party multipath software.

Figure 6-9 Initiator setting when UltraPath is used

Host Configuration

Install UltraPath by following instructions in the OceanStor UltraPath for vSphere User Guide.

Context

  • On UltraPath, set the HyperMetro working mode to local preferred mode. In this mode, the local storage array is preferred in processing host services. The remote storage array is used only when the local array is faulty. This improves the service response speed and reduces the access latency.
  • This configuration must be performed separately on all hosts.

Precautions

  • If two HyperMetro LUNs are mapped to a host as VMFS datastores, their host LUN IDs must be the same if the host runs ESXi 6.5.0 GA build 4564106 or a later version earlier than ESXi 6.5 U1 build 5969303. For other ESXi versions, it is recommended that the host LUN IDs be the same.
  • If two HyperMetro LUNs are mapped to a host as raw devices (RDM), their host LUN IDs must be the same regardless of host versions.
  • If a HyperMetro LUN is mapped to multiple ESXi hosts in a cluster as VMFS datastores or raw devices (RDM), the host LUN IDs of the LUN for all of these ESXi hosts must be the same. You are advised to add all ESXi hosts in a cluster that are served by the same storage device to a host group and to the same mapping view.

You can query the host LUN ID mapped to the ESXi host in the Mapping View of OceanStor DeviceManager, as shown in Figure 6-10.
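You can also check, from the ESXi side, which host LUN ID each path reports. The following is a hedged sketch; the grep pattern is illustrative and the output format may vary by ESXi version:

  esxcli storage core path list | grep -E 'Device:|LUN:'   # list each path's device and host LUN ID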

Figure 6-10 Changing the host LUN ID
Changing the host LUN ID with an incorrect procedure may cause service interruption. Read the following warnings carefully before modifying the Host LUN ID.

To modify the host LUN ID for a LUN, right-click the LUN and choose Change host LUN ID from the shortcut menu. In the displayed dialog box, set the same Host LUN ID value for the two storage devices in the HyperMetro pair and then click OK.

If no datastore has been created on either LUN in the HyperMetro pair, you can directly change the host LUN ID for the LUNs. Wait for about 5 to 15 minutes after the modification is complete, and then run the Rescan command in the ESXi host CLI to verify that the LUNs in the HyperMetro pair have been restored and are online.
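A minimal rescan sketch from the ESXi shell is shown below (the vendor filter on the second line is an illustrative assumption; adjust it to your environment):

  esxcli storage core adapter rescan --all          # rescan all adapters after the LUN ID change
  esxcli storage core device list | grep -i huawei  # confirm the HyperMetro LUNs are back online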

If a datastore has been created on either LUN in the HyperMetro pair and a service has been deployed in the datastore, change the host LUN ID using only the following two methods (otherwise, changing the host LUN ID for either LUN will cause the LUN to enter the PDL state and consequently interrupt services):
  • Method 1: You do not need to restart the ESXi host. Migrate all VMs in the datastore deployed on the LUNs in the HyperMetro pair to another datastore, and then change the host LUN ID on the OceanStor DeviceManager. Wait for about 5 to 15 minutes after the modification is complete, and then run the Rescan command in the ESXi host CLI to verify that the LUNs in the HyperMetro pair have been restored and are online. Then, migrate the VMs back to the datastore deployed on the LUNs in the HyperMetro pair.
  • Method 2: You need to restart the ESXi host. Power off all VMs in the datastore deployed on the LUNs in the HyperMetro pair to ensure that no service is running on the LUNs, and then modify the host LUN ID on OceanStor DeviceManager. Restart the ESXi host for the modification to take effect, and then check whether the LUNs in the HyperMetro pair have been restored and are online.

Configuring UltraPath

  1. Set the trespass policy for LUNs.

    • For UltraPath earlier than 21.2.0, it is recommended that you run the set luntrespass command to disable the trespass policy.
      # esxcli upadm set luntrespass -m off 
      Succeeded in executing the command.
      The command format is set luntrespass [ -a array-id | -l vlun-id ] -m mode. Table 6-9 describes the key parameters in the command, and a usage sketch follows this list.
      Table 6-9 Parameter description

      -a array-id
        Description: ID of the storage array. You can run the show diskarray command to query the ID of the storage array.
        Default value: none
      -l vlun-id
        Description: ID of the virtual LUN. You can run the show vlun command to query the IDs of all virtual LUNs.
        Default value: none
      -m mode
        Description: Trespass policy of LUNs.
          • on: Switchover of the LUNs' working controller is enabled.
          • off: Switchover of the LUNs' working controller is disabled.
        Default value: on

    • For UltraPath 21.2.0 and later, it is recommended that you retain the default settings (the trespass policy is disabled by default).
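    A usage sketch for UltraPath earlier than 21.2.0, assuming the array ID returned by show diskarray is 0 (an example value only):

      esxcli upadm show diskarray               # query the storage array ID
      esxcli upadm set luntrespass -a 0 -m off  # disable the trespass policy for array 0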

  2. Run the set hypermetro workingmode -m mode -p primary_array_id command (for example, esxcli upadm set hypermetro workingmode -m priority -p 0) to set the HyperMetro working mode.

    In VMware vSphere, prefixing a command with esxcli upadm directs it to the UltraPath CLI. A combined query-and-set sketch follows the note after Table 6-10.

    Table 6-10 describes the parameters in the set hypermetro workingmode command.

    Table 6-10 Parameter description

    -m mode
      Description: HyperMetro working mode.
        • priority: local preferred mode (recommended)
        • balance: load balancing mode, applicable only when the two active-active data centers are in the same building
      NOTE: If you set the HyperMetro working mode for a specific virtual LUN first and then the global HyperMetro working mode for the storage system, the working mode for the virtual LUN remains unchanged.
      Default value: priority
    -p primary_array_id
      Description: ID of the preferred storage array. The ID is allocated by UltraPath; the storage array in the same data center as the application hosts is preferred. Run the esxcli upadm show diskarray command to obtain the storage array ID.
      NOTE:
        • In priority mode, this parameter indicates the storage array to which I/Os are preferentially delivered.
        • In balance mode, this parameter indicates the storage array where the first slice section resides.
      Default value: none

    NOTE:

    Mapping relationship between application hosts and storage arrays:

    • Storage array A is the preferred array for all application hosts in data center A.
    • Storage array B is the preferred array for all application hosts in data center B.
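    Putting this step together, a short sketch assuming the local array reported by show diskarray has ID 0 (query the real ID first):

      esxcli upadm show diskarray                               # identify the preferred (local) array ID
      esxcli upadm set hypermetro workingmode -m priority -p 0  # set local preferred mode with array 0 as primary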

  3. Set timeout parameters.

    FC networking does not require setting of timeout parameters.

    For iSCSI networking, run the following commands:

    esxcli iscsi adapter param set -A vmhba35 -k NoopOutInterval -v 1
    esxcli iscsi adapter param set -A vmhba35 -k NoopOutTimeout -v 10
    esxcli iscsi adapter param set -A vmhba35 -k RecoveryTimeout -v 1
    • The preceding commands can be used only in VMware 5.0 and later versions. For details on the VMware versions supported in HyperMetro, see http://support-open.huawei.com/ready/pages/user/compatibility/support-matrix.jsf.
    • vmhba35 in this example is the name of the iSCSI adapter. Change this value based on your environment.
    • The settings require a host restart to take effect.
    • The settings shorten the path switchover time to about 11s. In comparison, the default ESXi settings may result in an up-to-25s path switchover time.
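    After the restart, the settings can be checked with a short sketch (vmhba35 remains an assumed adapter name; use the name reported by the list command):

    esxcli iscsi adapter list                  # identify the iSCSI adapter name
    esxcli iscsi adapter param get -A vmhba35  # verify NoopOutInterval, NoopOutTimeout, and RecoveryTimeout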

  4. (Optional) Enable APD to PDL for the VMware ESXi hosts.

    You must enable APD to PDL for the VMware ESXi hosts when all of the following conditions are met:

    • The ESXi hosts are deployed in a cluster.
    • The hosts and storage systems are connected by a parallel Fibre Channel network.
    • The storage system versions are earlier than Dorado V3 V300R001C01SPC100 or OceanStor V3 V300R003C20SPC200.

    You need to disable APD to PDL if the hosts and storage systems are connected by an iSCSI network or a cross-connected Fibre Channel network, or if the storage system version is none of the above (a disable sketch follows the output example below).

    To enable APD to PDL, perform the following operations:

    1. Run the esxcli upadm set apdtopdl -m on command.
    2. Run the esxcli upadm show upconfig command to query the configuration result.

      If APD to PDL Mode is on, the APD to PDL function is successfully configured.

    ~ # esxcli upadm show upconfig
    ===============================================================
    UltraPath Configuration
    ===============================================================
    Basic Configuration
     Working Mode : load balancing within controller
     LoadBanlance Mode : min-queue-depth
     Loadbanlance io threshold : 1
     LUN Trespass : off
    Advanced Configuration
     Io Retry Times : 10
     Io Retry Delay : 0
     Faulty path check interval : 10
     Idle path check interval : 60
     Failback Delay Time : 600
     Max io retry timeout : 1800
    Path reliability configuration
     Timeout degraded statistical time : 600
     Timeout degraded threshold : 1
     Timeout degraded path recovery time : 1800
     Intermittent IO error degraded statistical time : 300
     Min. I/Os for intermittent IO error degraded statistical : 5000
     Intermittent IO error degraded threshold : 20
     Intermittent IO error degraded path recovery time : 1800
     Intermittent fault degraded statistical time : 1800
     Intermittent fault degraded threshold : 3
     Intermittent fault degraded path recovery time : 3600
     High latency degraded statistical time : 300
     High latency degraded threshold : 1000
     High latency degraded path recovery time : 3600
    APDtoPDL configuration
     APD to PDL Mode : on
     APD to PDL Timeout : 10
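    If your environment matches the disable conditions above, turning the function off is assumed to use the same flag pattern as enabling it (this symmetry with -m on is an assumption; verify it in the OceanStor UltraPath for vSphere User Guide):

      esxcli upadm set apdtopdl -m off   # disable APD to PDL
      esxcli upadm show upconfig         # confirm that APD to PDL Mode is off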

Configuring a VMware Cluster

If you want to configure VMware clusters, see section "Virtualization Platform Configuration" in the BC&DR Solution Product Documentation (Active-Active Data Center). Its main contents are as follows.

Table 6-11 Cluster configuration when UltraPath is used

ESXi 5.5.*, 6.0 GA, and 6.0 U1
  Host parameters:
    • VMkernel.Boot.terminateVMOnPDL = True
    • Disk.AutoremoveOnPDL = 0
  Cluster parameter (VM policies for APD and PDL): Select Turn on vSphere HA.
  Precautions:
    • After configuring an ESXi 5.5.* host, you must restart the host for the configuration to take effect.
    • After configuring an ESXi 6.0 GA or ESXi 6.0 U1 host, you do not need to restart the host because the parameters take effect immediately.

ESXi 6.0 U2, 6.0 U3, 6.5.*, and 6.7.*
  Host parameters:
    • VMkernel.Boot.terminateVMOnPDL = False (retain the default value)
    • Disk.AutoremoveOnPDL = 1 (retain the default value)
  Cluster parameters (VM policies for APD and PDL):
    1. Select Turn on vSphere HA.
    2. Set Datastore with PDL to Power off and restart VMs.
    3. Set Datastore with APD to Power off and restart VMs - Aggressive restart policy.
  Precautions: For hosts of versions from ESXi 6.0 U2 to ESXi 6.7.*, retain the default host parameter settings. You only need to enable HA again in vCenter for the settings to take effect.

Mandatory configuration items:

  • Deploy ESXi hosts across data centers in an HA cluster and configure the cluster with the HA advanced parameter das.maskCleanShutdownEnabled = True for VMware vSphere 5.0 u1, 5.1, and 5.5 versions.
  • The VM service network requires Layer 2 interworking between data centers so that VM migration between data centers does not affect VM services.
  • For VMware vSphere 5.0 u1, later 5.0 versions, and 5.1 versions, log in to the CLI of each ESXi host using SSH and add Disk.terminateVMOnPDLDefault = True in the /etc/vmware/settings file.
  • For VMware vSphere 5.5.*, 6.0 u1, and versions between them, use vSphere Web Client to connect to vCenter, go to the cluster HA configuration page, and select Turn on vSphere HA. Then, log in to each ESXi host using vSphere Web Client or vCenter and complete the following settings:
    Set VMkernel.Boot.terminateVMOnPDL = True. The parameter forcibly powers off VMs on a datastore when the datastore enters the PDL state.
    Figure 6-11 Boot parameter settings
    Set Disk.AutoremoveOnPDL = 0. This setting ensures that datastores in the PDL state are not automatically removed. (A CLI sketch for setting both host parameters follows this list.)
    Figure 6-12 Disk parameter settings
  • For VMware vSphere 6.0 u2 and later updates:

    After connecting to vCenter through the Web Client, enter the cluster HA configuration and set the parameters as follows.

    Figure 6-13 vSphere 6.0 cluster configuration
  • For VMware vSphere 6.5:
    After connecting to vCenter through the Web Client, enter the cluster HA configuration and set the parameters as follows.
    Figure 6-14 vSphere 6.5 cluster configuration-1

    Figure 6-15 vSphere 6.5 cluster configuration-2

  • For VMware vSphere 6.7 and later updates:

    After connecting to vCenter through the Web Client, enter the cluster HA configuration and set the parameters as follows.

    Figure 6-16 vSphere 6.7 cluster configuration-1
    Figure 6-17 vSphere 6.7 cluster configuration-2
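As a reference for the host-side parameters above, a minimal CLI sketch assuming an ESXi 5.5 host and an SSH session (verify the option paths against your ESXi version before use):

  esxcli system settings kernel set -s terminateVMOnPDL -v TRUE      # VMkernel.Boot.terminateVMOnPDL = True
  esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0  # Disk.AutoremoveOnPDL = 0
  reboot                                                             # ESXi 5.5.* requires a restart to apply the settings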

For VMware vSphere 5.1 to 5.5 versions, restart hosts for the configuration to take effect.

For VMware vSphere 6.0 to 6.7 versions, re-enable the HA cluster for the configuration to take effect if you do not want to restart the hosts.

Recommended configuration items:

  • Configure the vMotion, service, and management networks with different VLAN IDs to prevent network interference.
  • Configure the management network to include the vCenter Server management node and ESXi hosts. Deny access from external applications.
  • Divide the service network into VLANs to ensure logical isolation and control broadcast domains.
  • Configure a DRS group to ensure that VMs are preferentially recovered in the local data center if a single host fails.

Verification

In VMware vSphere, run the esxcli upadm show upconfig command.

In VMware vSphere, prefixing a command with esxcli upadm directs it to the UltraPath CLI.

If the command output contains the following information, the configuration is successful.

HyperMetro WorkingMode : read write within primary array
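To check only this line, a filter sketch from the ESXi shell:

  esxcli upadm show upconfig | grep 'HyperMetro WorkingMode'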

Figure 6-18 provides an example.

Figure 6-18 Verifying the HyperMetro working mode

Updated: 2020-01-17

Document ID: EDOC1000144883