Huawei SAN Storage Host Connectivity Guide for VMware ESXi

OS Native Multipathing Software

Storage System Configuration

Configuring the Initiators

If you want to configure the switchover mode for initiators, perform the following operations.

  1. Log in to OceanStor DeviceManager. In the navigation tree on the right side, click Provisioning and then click Host.

    Figure 6-19 Accessing the host configuration page

  2. On the Host page, select the host you want to modify. Then select an initiator on the host and click Properties.

    Figure 6-20 Selecting an initiator to modify

  3. In the Initiator Properties dialog box, configure the initiator as required.

    Figure 6-21 Modifying an initiator

  4. Repeat the preceding operations to modify other initiators on the host.

Recommended Storage Configuration

This section provides recommended configurations on HyperMetro storage arrays for interconnection with VMware ESXi hosts using VMware NMP.

Table 6-12 Configurations on HyperMetro OceanStor V3, OceanStor V5, or Dorado V3 storage arrays

HyperMetro Working Mode | Storage Array | OS Setting  | Third-Party Multipathing Software | Switchover Mode | Special Mode | Path Type
Load balancing          | Local         | VMware ESXi | Enabled                           | Special mode    | Mode 1       | Optimal
Load balancing          | Remote        | VMware ESXi | Enabled                           | Special mode    | Mode 1       | Optimal
Local preferred         | Local         | VMware ESXi | Enabled                           | Special mode    | Mode 2       | Optimal
Local preferred         | Remote        | VMware ESXi | Enabled                           | Special mode    | Mode 2       | Non-optimal

You must configure initiators according to the requirements of the specific OS that is installed on the host. All of the initiators added to a single host must be configured with the same switchover mode. Otherwise, host services may be interrupted.

For details about the VMware ESXi versions, see the Huawei Storage Interoperability Navigator.

If a LUN has already been mapped to the host, you must restart the host after modifying the initiator parameters for the configuration to take effect. If you are configuring the initiators for the first time, no restart is required.
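After the restart, you can check from the ESXi host whether the paths of a HyperMetro LUN report the ALUA states that match the configured path types (active for Optimal, active unoptimized for Non-optimal). The following is a minimal sketch, with naa.6xxxxxxx as a placeholder for the device NAA ID:

# Show the ALUA group state of every path to the LUN.
esxcli storage nmp path list -d naa.6xxxxxxx | grep -i "Group State"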

In OceanStor V3 V300R003C20, special mode 1 and special mode 2 are disabled by default. For details about how to enable them, see the OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3 Storage System V300R003C20 Restricted Command Reference or OceanStor 18500 V3&18800 V3 Storage System V300R003C20 Restricted Command Reference. Contact Huawei technical support engineers to obtain the documents.

In OceanStor V5 V500R007C00, OceanStor V3 V300R006C00SPC100, Dorado V3 V300R001C01SPC100, and later versions, you can configure special mode 1 and special mode 2 on DeviceManager directly.

Figure 6-22 Querying the special mode type

Host Configuration

VMware NMP is ESXi's native multipathing software and therefore does not need to be installed separately.

Recommended VMware NMP Configuration

This section provides the recommended configuration for VMware NMP when it is used for HyperMetro deployment of OceanStor V3, OceanStor V5, and Dorado V3 storage systems.

Table 6-13 Recommended VMware NMP configuration for HyperMetro

Storage Array                                                                        | ALUA Enabled or Not | VM Cluster | SATP Type     | PSP Type
Dorado V3 series, OceanStor V3, OceanStor V5, 18000 V3 series V300R003C20 and later  | Yes                 | N/A        | VMW_SATP_ALUA | VMW_PSP_RR

  1. For MSCS and WSFC clusters deployed on VMs running VMware ESXi 5.1 or earlier, the path selection policy of the RDM LUNs used by these clusters cannot be set to Round Robin; set it to Fixed instead. For details, see How Can I Query and Modify the Path Selection Policy? or VMware KB 1036189.
  2. When using all-flash arrays, you are advised to set the IO Operation Limit to 1 on ESXi. For ESXi 5.x and 6.x, the command is as follows:
    esxcli storage nmp psp roundrobin deviceconfig set --device=device_NAA --iops=1 --type=iops

    Replace device_NAA with the actual device identifier (NAA ID) of the LUN. A sketch applying both of the preceding recommendations follows this list.

  3. Dorado V3 systems must be V300R001C01SPC100 or later versions, with multi-controller ALUA and HyperMetro ALUA supported.
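A minimal sketch of applying the preceding recommendations from the ESXi shell; naa.6xxxxxxx is a placeholder for the actual device NAA ID:

# Recommendation 2: set the Round Robin IO Operation Limit to 1 for the device.
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.6xxxxxxx --iops=1 --type=iops

# Check the resulting Round Robin configuration of the device.
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.6xxxxxxx

# Recommendation 1: for an RDM LUN used by MSCS/WSFC on ESXi 5.1 or earlier,
# use the Fixed policy instead of Round Robin.
esxcli storage nmp device set --device=naa.6xxxxxxx --psp=VMW_PSP_FIXED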

For details about the VMware ESXi versions, see the Huawei Storage Interoperability Navigator.

Precautions

  • If two HyperMetro LUNs are mapped to a host as VMFS datastores, their host LUN IDs must be the same when the host runs ESXi 6.5.0 GA build 4564106 or a later build earlier than ESXi 6.5 U1 build 5969303. For other ESXi versions, it is recommended that the host LUN IDs be the same.
  • If two HyperMetro LUNs are mapped to a host as raw device mappings (RDM), their host LUN IDs must be the same regardless of the host version.
  • If a HyperMetro LUN is mapped to multiple ESXi hosts in a cluster as VMFS datastores or raw devices (RDM), the host LUN IDs of the LUN for all of these ESXi hosts must be the same. You are advised to add all ESXi hosts in a cluster that are served by the same storage device to a host group and to the same mapping view.
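You can also check, on each ESXi host, the host LUN ID under which a device is actually presented. The following is a minimal sketch, with naa.6xxxxxxx as a placeholder for the device NAA ID; run it on every host in the cluster and confirm that the reported LUN number is identical:

# Each path entry shows the LUN number assigned to this host for the device.
esxcli storage core path list -d naa.6xxxxxxx | grep -E "Runtime Name|LUN:"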
You can query the host LUN ID mapped to the ESXi host in the Mapping View of OceanStor DeviceManager, as shown in Figure 6-23.
Figure 6-23 Changing the host LUN ID
Before modifying the Host LUN ID, read the following warnings carefully since misoperations may cause service interruption. To modify the host LUN ID for a LUN, right-click the LUN and choose Change host LUN ID from the shortcut menu. In the displayed dialog box, set the same Host LUN ID value for the two storage devices in the HyperMetro pair and then click OK.

Changing the host LUN ID with an incorrect procedure may cause service interruption.

If no datastore has been created on either LUN in the HyperMetro pair, you can directly change the host LUN ID for the LUNs. Wait for about 5 to 15 minutes after the modification is complete, and then run the Rescan command in the ESXi host CLI to verify that the LUNs in the HyperMetro pair have been restored and are online.
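A minimal sketch of this rescan and check from the ESXi host CLI, with naa.6xxxxxxx as a placeholder for the device NAA ID of a LUN in the HyperMetro pair:

# Rescan all storage adapters so that the host detects the new host LUN ID.
esxcli storage core adapter rescan --all

# Confirm that the device is back online rather than in the PDL state.
esxcli storage core device list -d naa.6xxxxxxx | grep -i "Status"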

If a datastore has been created on either LUN in the HyperMetro pair and a service has been deployed in the datastore, change the host LUN ID using only the following two methods (otherwise, changing the host LUN ID for either LUN will cause the LUN to enter the PDL state and consequently interrupt services):
  • Method 1: You do not need to restart the ESXi host. Migrate all VMs in the datastore deployed on the LUNs in the HyperMetro pair to another datastore, and then change the host LUN ID on the OceanStor DeviceManager. Wait for about 5 to 15 minutes after the modification is complete, and then run the Rescan command in the ESXi host CLI to verify that the LUNs in the HyperMetro pair have been restored and are online. Then, migrate the VMs back to the datastore deployed on the LUNs in the HyperMetro pair.
  • Method 2: You need to restart the ESXi host. Power off all VMs in the datastore deployed on the LUNs in the HyperMetro pair to ensure that no service is running on the LUNs. Then, modify the host LUN ID on the OceanStor DeviceManager. Then, restart the ESXi host for the modification to take effect. After restarting the ESXi host, check whether the LUNs in the HyperMetro pair have been restored and are online.

  • For OceanStor V3 V300R003C20SPC200, a single array with ALUA enabled can have a maximum of 8 controllers; two active-active arrays with ALUA enabled can also have no more than 8 controllers.
  • For VMware ESXi 6.x, only VMware ESXi 6.0 U2 and later versions support HyperMetro configuration. Earlier ESXi 6.x versions have defects.
  • Dorado V3 systems must be V300R001C01SPC100 or later versions, with multi-controller ALUA and HyperMetro ALUA supported.

Before deploying HyperMetro with VMware ESXi NMP, consider the compatibility between components (such as storage system, operating system, HBAs, and switches) and the application software. Consult the Huawei Storage Interoperability Navigator to ensure that compatibility requirements have been met.

Host Configuration

Setting SATP Rules for VMware NMP

Run the following command on the host to configure SATP rules:

esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on

In the command, HUAWEI is an example of the storage vendor and XSG1 is an example of the storage model. Change the two values based on your actual storage configurations. Table 6-14 provides the vendor and model information of Huawei mainstream storage devices, and a sketch after the table shows how to query the strings reported by a device and add a matching rule.

Table 6-14 Huawei storage vendor and model information

Storage Device                                                       | Vendor                | Model
S2200T/S2600T/S5500T/S5600T/S5800T/S6800T                            | HUAWEI/SYMANTEC/HUASY | S2200T/S2600T/S5500T/S5600T/S5800T/S6800T
Dorado2100 G2                                                        | HUAWEI/SYMANTEC/HUASY | Dorado2100 G2
Dorado5100                                                           | HUAWEI/SYMANTEC/HUASY | Dorado5100
18500                                                                | HUAWEI                | HVS85T
18800/18800F                                                         | HUAWEI                | HVS88T
V5 series/V3 series/18000 V3 series/18000 V5 series/Dorado V3 series | HUAWEI                | XSG1
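If you are unsure which vendor and model strings your array reports, you can query a mapped device on the ESXi host and then add a rule that matches. The following is a sketch only, using naa.6xxxxxxx as a placeholder for the device NAA ID and an S5500T (vendor string HUAWEI) from the table above as the example model:

# The Vendor and Model fields must match the -V and -M values of the SATP rule.
esxcli storage core device list -d naa.6xxxxxxx | grep -E "Vendor:|Model:"

# Add the corresponding rule for an S5500T that reports the vendor string HUAWEI.
esxcli storage nmp satp rule add -V HUAWEI -M S5500T -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on

# Confirm that the rule has been added.
esxcli storage nmp satp rule list | grep -i huawei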

Setting Timeout Parameters

FC networking does not require setting of timeout parameters.

For iSCSI networking, execute the following commands on ESXi hosts:
esxcli iscsi adapter param set -A vmhba35 -k NoopOutInterval -v 1
esxcli iscsi adapter param set -A vmhba35 -k NoopOutTimeout -v 10
esxcli iscsi adapter param set -A vmhba35 -k RecoveryTimeout -v 1
  • All the preceding commands are available only in VMware 5.0 and later versions. For details on the VMware versions supported in HyperMetro, see http://support-open.huawei.com/ready/pages/user/compatibility/support-matrix.jsf.
  • vmhba35 in this example indicates the iSCSI initiator (adapter) name. Change it according to your own hosts. A sketch after this list shows how to list the adapters and check the parameter values.
  • The settings require a host restart to take effect.
  • The settings shorten the path switchover time to about 11s. In comparison, the default ESXi settings may result in an up-to-35s path switchover time for ESXi 6.0.* and ESXi 6.5.* and an up-to-25s path switchover time for ESXi 6.7.*.
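A minimal sketch for finding the adapter name and checking the parameter values after the restart (vmhba35 is again only an example):

# List the iSCSI adapters to find the vmhba name of the iSCSI initiator.
esxcli iscsi adapter list

# Check the current values of the three parameters on the adapter.
esxcli iscsi adapter param get -A vmhba35 | grep -E "NoopOutInterval|NoopOutTimeout|RecoveryTimeout"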

Configuring a VMware Cluster

Table 6-15 Cluster configuration when the OS native multipathing software (VMware NMP) is used

ESXi 5.5.*
  Host parameter VMkernel.Boot.terminateVMOnPDL: True
  Host parameter Disk.AutoremoveOnPDL: 0
  Cluster parameter (VM policies for APD and PDL): Select Turn on vSphere HA.
  Remarks: After configuring an ESXi 5.5.* host, you must restart the host for the configuration to take effect.

ESXi 6.0 GA and 6.0 U1
  HyperMetro is not supported if the OS native multipathing software (VMware NMP) is used. You are advised to upgrade the operating system to ESXi 6.0 U2 or later.
  NOTE: Storage PDL responses may not trigger path failover in VMware vSphere 6.0 GA and 6.0 U1. For details, see VMware KB.

ESXi 6.0 U2, 6.0 U3, 6.5.*, and 6.7.*
  Host parameter VMkernel.Boot.terminateVMOnPDL: False (Retain the default value.)
  Host parameter Disk.AutoremoveOnPDL: 1 (Retain the default value.)
  Cluster parameters (VM policies for APD and PDL):
    1. Select Turn on vSphere HA.
    2. Set Datastore with PDL to Power off and restart VMs.
    3. Set Datastore with APD to Power off and restart VMs - Aggressive restart policy.
  Remarks: For a host of a version from ESXi 6.0 U2 to ESXi 6.7.*, retain the default host parameter settings. You only need to enable HA again in vCenter for the settings to take effect.

Mandatory Configuration Items:

  • Deploy ESXi hosts across data centers in an HA cluster and configure the cluster with the HA advanced parameter das.maskCleanShutdownEnabled = True for VMware vSphere 5.0 u1, 5.1, and 5.5 versions.
  • A VM service network requires L2 interworking between data centers so that VM migration between data centers will not affect VM services.
  • For VMware vSphere 5.0 u1, later 5.0 versions, and 5.1 versions, log in to the CLI of each ESXi host using SSH and add Disk.terminateVMOnPDLDefault = True in the /etc/vmware/settings file.
  • For VMware vSphere 5.5.*, 6.0 u1, and versions between them, use vSphere Web Client to connect to vCenter, go to the cluster HA configuration page, and select Turn on vSphere HA. Then, log in to each ESXi host using vSphere Web Client or vCenter and complete the following settings:
    Set VMkernel.Boot.terminateVMOnPDL = True. The parameter forcibly powers off VMs on a datastore when the datastore enters the PDL state.
    Figure 6-24 Boot parameter settings
    Set Disk.AutoremoveOnPDL = 0. This setting ensures that datastores in the PDL state will not be automatically removed. (A CLI sketch for both of these host parameters follows this list.)
    Figure 6-25 Disk parameter settings
  • For VMware vSphere 6.0 u2 and later update versions:

    After connecting to vCenter through the Web Client, enter the cluster HA configuration and set the parameters as follows.

    Figure 6-26 vSphere 6.0 cluster configuration

  • For VMware vSphere 6.5 and later update versions:

    After connecting to vCenter through the Web Client, enter the cluster HA configuration and set the parameters as follows.

    Figure 6-27 vSphere 6.5 cluster configuration-1
    Figure 6-28 vSphere 6.5 cluster configuration-2

  • For VMware vSphere 6.7 and later update versions:

    After connecting to vCenter through the Web Client, enter the cluster HA configuration and set the parameters as follows.

    Figure 6-29 vSphere 6.7 cluster configuration-1
    Figure 6-30 vSphere 6.7 cluster configuration-2
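For the ESXi 5.5.* host parameters described above, the same settings can also be made from the ESXi shell instead of the Web Client. The following is a sketch only; verify the option names against your ESXi version before use:

# Set VMkernel.Boot.terminateVMOnPDL to TRUE (requires a host restart to take effect).
esxcli system settings kernel set -s terminateVMOnPDL -v TRUE

# Set Disk.AutoremoveOnPDL to 0 so that datastores in the PDL state are not removed automatically.
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0

# Verify both settings.
esxcli system settings kernel list | grep -i terminateVMOnPDL
esxcli system settings advanced list -o /Disk/AutoremoveOnPDL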

Recommended Configuration Items:

  • Configure the vMotion, service, and management networks with different VLAN IDs to prevent network interference.
  • Configure the management network to include the vCenter Server management node and ESXi hosts. Deny access from external applications.
  • Divide the service network into VLANs to ensure logical isolation and control broadcast domains.
  • Configure a DRS group to ensure that VMs are preferentially recovered in the local data center if a single host breaks down.

Verification

Perform the following operations to verify that VMware NMP configurations have taken effect:

  1. Run the esxcli storage nmp satp rule list | grep -i huawei command to verify that SATP rules are successfully added.

    If the added Huawei rules are displayed in the command output, the SATP rules have been added successfully.

  2. Run the esxcli storage nmp device list -d naa.6xxxxxxx command to verify that working paths of LUNs are properly configured.

    naa.6xxxxxxx indicates the device identifier (NAA ID) of a LUN after it is mapped to the host.

    Working paths are successfully configured if their Storage Array Type and Path Selection Policy are the same as those configured, and the number of Working Paths is equal to the total number of paths in the port group.

    Example:

    The following SATP rule is configured:

    esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on

    The port group has three paths.

    In the command output, Storage Array Type is VMW_SATP_ALUA, Path Selection Policy is VMW_PSP_RR, and the number of Working Paths is 3, which is consistent with the configured values and the number of AO (active optimized) paths in the port group. Therefore, the working paths are successfully configured.

    When Path Selection Policy is VMW_PSP_FIXED, only one working path is available, which is any path in the port group where AO paths reside.
