VMware ESXi
Storage System Configuration
Recommended Storage Configuration
This section provides the recommended configurations for HyperMetro storage systems that interconnect with VMware ESXi hosts using VMware NMP.
Table 7-17 Recommended storage configuration

| HyperMetro Working Mode | Storage System | OS Setting | Host Access Mode | Preferred Path for HyperMetro | Description |
|---|---|---|---|---|---|
| Load balancing mode | Local storage | VMware ESXi | Load balancing | N/A | The host uses all paths of a disk with equal priority. |
| Load balancing mode | Remote storage | VMware ESXi | Load balancing | N/A | The host uses all paths of a disk with equal priority. |
| Local preferred mode | Local storage | VMware ESXi | Asymmetric | Yes | The host considers the paths from the local storage system as preferred paths, and those from the remote storage system as non-preferred paths. |
| Local preferred mode | Remote storage | VMware ESXi | Asymmetric | No | The host considers the paths from the local storage system as preferred paths, and those from the remote storage system as non-preferred paths. |
- Use the recommended configurations in Table 7-17. Other configurations may cause problems.
- If a LUN has already been mapped to a host, you must restart the host for a modified Host Access Mode to take effect. If the LUN is mapped for the first time, no restart is needed.
- When data is migrated from other Huawei storage systems (including OceanStor Dorado V3, OceanStor V3, and OceanStor V5) to OceanStor Dorado V6, configure the storage system by following instructions in "Recommended Configurations for OceanStor Dorado V6 for Taking Over Data from Other Huawei Storage Systems When the Host Uses the OS Native Multipathing Software" in the OceanStor Dorado Host Connectivity Guide for VMware ESXi.
- When a LUN of a HyperMetro pair is mapped to all ESXi hosts in a cluster, the LUN must have the same host LUN ID on all of the hosts. You are advised to add all ESXi hosts in a cluster that are served by the same storage device to a host group and to the same mapping.
Configuring the Load Balancing Mode
Perform the following operations to configure the load balancing mode:
- On DeviceManager, choose Services > Hosts. Select the desired host, click the operation icon on the right, and choose Modify.
  Figure 7-61 Modifying the host properties
  - The information displayed on the GUI may vary slightly with the product version.
  - On DeviceManager of OceanStor Dorado V6 6.0.1 and later versions, the operation icon is changed to More.
- On the Modify Host page, set Host Access Mode to Load balancing.
  Figure 7-62 Settings on the local storage system
- Repeat the preceding steps to set Host Access Mode of the remote storage system to Load balancing.
  Figure 7-63 Settings on the remote storage system
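If the LUN was mapped for the first time (no restart needed), you can rescan and confirm from the ESXi shell that it is claimed with the expected active-active policy. The following is a minimal check, using the naa.6xxxxxxx placeholder from this guide:

```
# Rescan all storage adapters so newly mapped LUNs are discovered
esxcli storage core adapter rescan --all

# Confirm the HyperMetro LUN is claimed as active-active (load balancing mode)
esxcli storage nmp device list -d naa.6xxxxxxx | grep "Storage Array Type:"
# Expected value: VMW_SATP_DEFAULT_AA
```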
Configuring the Local Preferred Mode
Perform the following operations to configure the local preferred mode:
- On DeviceManager, choose Services > Hosts. Select the desired host, click the operation icon on the right, and choose Modify, as shown in Figure 7-64.
  Figure 7-64 Modifying the host properties
- For the local storage system, set Host Access Mode to Asymmetric and Preferred Path for HyperMetro to Yes.
  Figure 7-65 Settings on the local storage system
- For the remote storage system, set Host Access Mode to Asymmetric and Preferred Path for HyperMetro to No.
  Figure 7-66 Settings on the remote storage system
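After both storage systems are configured, you can verify from the host that the ALUA path states reflect the preferred and non-preferred settings. A minimal check, again using the naa.6xxxxxxx placeholder:

```
# List each path of the HyperMetro LUN together with its ALUA group state
esxcli storage core path list -d naa.6xxxxxxx | grep -E "Runtime Name|Group State"
# Paths from the local (preferred) array should show "Group State: active";
# paths from the remote array should show "Group State: active unoptimized".
```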
Host Configuration
Requirements on Host LUN IDs
- On each host, all paths to a LUN must report the same host LUN ID.
- A LUN shared by multiple hosts must have the same host LUN ID on all of these hosts.
If LUN IDs do not meet the preceding requirements, correct them by referring to "ID Description" in the OceanStor Dorado Host Connectivity Guide for VMware ESXi.
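One way to check these IDs on an ESXi host is to list the LUN field of every path to the device; run the check on each host that shares the LUN. A minimal sketch with the naa.6xxxxxxx placeholder:

```
# Print the host LUN ID reported by every path to the device;
# all lines must show the same value on this host, and across all hosts sharing the LUN
esxcli storage core path list -d naa.6xxxxxxx | grep "LUN:"
```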
Recommended VMware NMP Configuration
- VMware NMP is ESXi's native multipathing software and therefore does not need to be installed separately.
- Before mapping LUNs, ensure that Huawei-recommended SATP and PSP rules are configured on the VMware ESXi host. Otherwise, the mapped LUNs can match SATP and PSP rules only after the ESXi host is restarted.
- This section provides the recommended configuration for VMware NMP when it is used for HyperMetro deployment of OceanStor Dorado V6 storage systems.
Table 7-18 Recommended VMware NMP configuration for HyperMetro
| HyperMetro Working Mode | Storage System | Recommended SATP Rule | Recommended PSP Rule |
|---|---|---|---|
| Load balancing mode | Local storage | VMW_SATP_DEFAULT_AA | VMW_PSP_RR |
| Load balancing mode | Remote storage | VMW_SATP_DEFAULT_AA | VMW_PSP_RR |
| Local preferred mode | Local storage | VMW_SATP_ALUA | VMW_PSP_RR |
| Local preferred mode | Remote storage | VMW_SATP_ALUA | VMW_PSP_RR |
- You are advised to set the IOPS limit of VMW_PSP_RR to 1. For details, see "How Can I Set IOPS Limit for PSP Round Robin to 1?" in the OceanStor Dorado Host Connectivity Guide for VMware ESXi. (A CLI sketch of this setting follows the manual rule-adding commands below.)
- When deploying the HyperMetro solution with VMware ESXi's native NMP multipathing, consider the compatibility of components (such as storage systems, operating systems, HBAs, and switches) with upper-layer software. For details, see the Huawei Storage Interoperability Navigator.
- If your VMware ESXi host version is later than or equal to the earliest version in Table 7-19, the host has integrated Huawei storage VMW_SATP_ALUA/VMW_PSP_RR and VMW_SATP_DEFAULT_AA/VMW_PSP_RR policies by default and you do not need to manually add them.
Table 7-19 Earliest versions of VMware ESXi hosts that integrate Huawei storage SATP and PSP rules by default
| Version | Earliest Version That Integrates VMW_SATP_ALUA and VMW_PSP_RR Policies | Earliest Version That Integrates VMW_SATP_DEFAULT_AA and VMW_PSP_RR Policies |
|---|---|---|
| ESXi 6.0 | ESXi 6.0 P07 (build number 9239799) | None. You need to manually add the VMW_SATP_DEFAULT_AA and VMW_PSP_RR policies. |
| ESXi 6.5 | ESXi 6.5 Patch 02 (build number 7388607) | ESXi 6.5 Patch 05 (build number 16576891) |
| ESXi 6.7 | ESXi 6.7 GA | ESXi 6.7 Patch 03 (build number 16713306) |
| ESXi 7.0 | ESXi 7.0 GA | ESXi 7.0 U1 (build number 16850804) |
- For details about the VMware ESXi version release schedule, visit https://kb.vmware.com/s/article/2143832.
- VMware ESXi 6.0 U2 and later versions support HyperMetro configuration. Versions earlier than VMware ESXi 6.0 U2 have known defects that affect HyperMetro.
- You can run the following commands to manually add the VMW_SATP_ALUA/VMW_PSP_RR and VMW_SATP_DEFAULT_AA/VMW_PSP_RR policies:
  Local preferred mode:
  ```
  esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
  ```
  Load balancing mode:
  ```
  esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR -c tpgs_off
  ```
  - In the commands, HUAWEI indicates the storage vendor and XSG1 indicates the storage model.
  - New SATP rules take effect immediately for newly mapped LUNs, but take effect for previously mapped LUNs only after the host is restarted.
  - For details about the parameters in the host commands, see https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-D10F7E66-9DF1-4CB7-AAE8-6F3F1F450B42.html.
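The recommended IOPS limit of 1 can also be applied when adding the rules, or per device for LUNs that are already claimed. The following is a hedged sketch: the -O "iops=1" option string and the roundrobin deviceconfig command are standard esxcli NMP syntax rather than commands taken from this guide, so verify them on your ESXi version before use.

```
# Load balancing mode: claim rule that also sets the round-robin IOPS limit to 1
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR -O "iops=1" -c tpgs_off

# For a LUN that is already claimed, set the limit per device instead
# (naa.6xxxxxxx is the placeholder used throughout this guide)
esxcli storage nmp psp roundrobin deviceconfig set -d naa.6xxxxxxx --type=iops --iops=1

# Verify the per-device setting
esxcli storage nmp psp roundrobin deviceconfig get -d naa.6xxxxxxx
```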
- Set timeout parameters.
FC networking does not require timeout parameter settings.
For iSCSI networking, perform the following operations on the ESXi hosts. You must restart the host for the configuration to take effect, so exercise caution when performing this operation.
- Obtain the iSCSI adapter name.
  ```
  [root@esxi16113:~] esxcfg-scsidevs -a | grep -i iscsi
  vmhba35  iscsi_vmk  online  iqn.XXXX-XX.com.vmware:esxi16113-xxxxxxxxx  iSCSI Software Adapter
  ```
  In the command output, the iSCSI adapter name is vmhba35.
- Query the current timeout parameter settings.
  ```
  [root@esxi16113:~] esxcli iscsi adapter param get -A vmhba35 | egrep 'NoopOutInterval|NoopOutTimeout|RecoveryTimeout'
  NoopOutInterval    15   15   1    60   true   false
  NoopOutTimeout     10   10   10   30   true   false
  RecoveryTimeout    10   10   1    120  true   false
  ```
- Set the timeout parameters.
  ```
  esxcli iscsi adapter param set -A vmhba35 -k NoopOutInterval -v 1
  esxcli iscsi adapter param set -A vmhba35 -k NoopOutTimeout -v 10
  esxcli iscsi adapter param set -A vmhba35 -k RecoveryTimeout -v 1
  ```
- All the preceding commands are available only in VMware ESXi 5.5, 6.0, 6.5, 6.7, and 7.0. For details on the VMware versions supported in HyperMetro, see the Huawei Storage Interoperability Navigator.
- vmhba35 in the preceding commands is the iSCSI adapter name. Change it based on the site requirements. (Run the esxcfg-scsidevs -a command to query the iSCSI adapter.)
- You must restart the host for the configuration to take effect.
- The settings shorten the path switchover time to about 11s. In comparison, the default ESXi settings may result in a path switchover time of up to 35s on ESXi 6.0.* and 6.5.*, and up to 25s on ESXi 6.7.*.
- Verify the modification.
  ```
  [root@esxi16113:~] esxcli iscsi adapter param get -A vmhba35 | egrep 'NoopOutInterval|NoopOutTimeout|RecoveryTimeout'
  NoopOutInterval    1    15   1    60   true   false
  NoopOutTimeout     10   10   10   30   true   false
  RecoveryTimeout    1    10   1    120  true   false
  ```
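If a host has several software iSCSI adapters, the same three parameters must be set on each of them. A minimal loop for the ESXi shell, assuming software iSCSI adapters are identified by the iscsi_vmk driver as in the query above:

```
# Apply the recommended timeout values to every software iSCSI adapter on the host
for hba in $(esxcfg-scsidevs -a | awk '/iscsi_vmk/ {print $1}'); do
    esxcli iscsi adapter param set -A "$hba" -k NoopOutInterval -v 1
    esxcli iscsi adapter param set -A "$hba" -k NoopOutTimeout -v 10
    esxcli iscsi adapter param set -A "$hba" -k RecoveryTimeout -v 1
done
# A host restart is still required for the changes to take effect.
```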
Host Configuration
- Configuring a VMware cluster (mandatory)
Table 7-20 Cluster configuration
| VMware vSphere ESXi Version | Host Parameter | Cluster Parameter (VM Policy for APD and PDL) | Remarks |
|---|---|---|---|
| 5.0 U1, 5.1 | Log in to each ESXi host using SSH, open the /etc/vmware/settings file, and add Disk.terminateVMOnPDLDefault = True to the file. | Use the vSphere Client to log in to each ESXi host. In the advanced settings, set das.maskCleanShutdownEnabled = True. | After configuring host parameters, restart the host for the configuration to take effect. |
| 5.5.* | VMkernel.Boot.terminateVMOnPDL=True; Disk.AutoremoveOnPDL=0 | Select Turn on vSphere HA. | After configuring host parameters, restart the host for the configuration to take effect. After configuring cluster parameters, re-enable HA for the configuration to take effect. |
| 6.0 GA, 6.0 U1 | HyperMetro is not supported if the OS native multipathing software (VMware NMP) is used. You are advised to upgrade the operating system to ESXi 6.0 U2 or later. NOTE: Storage PDL responses may not trigger path failover in VMware vSphere 6.0 GA and 6.0 U1. For details, see VMware KB. | | |
| 6.0 U2, 6.0 U3, 6.5.*, 6.7.*, 7.0.* | VMkernel.Boot.terminateVMOnPDL=False (retain the default value); Disk.AutoremoveOnPDL=1 (retain the default value) | Select Turn on vSphere HA. Set Datastore with PDL to Power off and restart VMs. Set Datastore with APD to Power off and restart VMs - Aggressive restart policy. | Retain the default host parameter settings. You only need to re-enable HA in vCenter for the settings to take effect. |
The following configurations are mandatory when you configure a VMware cluster:
- A VM service network requires L2 interworking between data centers so that VM migration between data centers will not affect VM services.
- For VMware vSphere 5.0 U1, later 5.0 versions, and vSphere 5.1:
- Cluster parameter configuration: Use the vSphere Client to log in to each ESXi host. In the advanced settings, set das.maskCleanShutdownEnabled = True.
- Host parameter configuration: Log in to each ESXi host using SSH and add Disk.terminateVMOnPDLDefault = True to the /etc/vmware/settings file. After configuring host parameters, restart the host for the configuration to take effect.
- For VMware vSphere 5.5 and its update versions:
- Cluster parameter configuration: Use vSphere Web Client to connect to vCenter, go to the cluster HA configuration page, and select Turn on vSphere HA. After configuring cluster parameters, re-enable HA for the configuration to take effect.
- Host parameter configuration: Log in to each ESXi host using vSphere Client or vCenter and complete the following advanced settings. After configuring host parameters, restart the host for the configuration to take effect. (For a shell-based alternative, see the sketch after this list.)
  - Set VMkernel.Boot.terminateVMOnPDL = True. This parameter forcibly powers off VMs on a datastore when the datastore enters the PDL state.
    Figure 7-67 Boot parameter settings
  - Set Disk.AutoremoveOnPDL = 0. This setting ensures that datastores in the PDL state are not automatically removed.
    Figure 7-68 Disk parameter settings
- For VMware vSphere 6.0 U2 and later 6.0 updates:
After connecting to vCenter through the vSphere Web Client, enter the cluster HA configuration and set the parameters as follows.
Figure 7-69 vSphere 6.0 cluster configuration
- For VMware vSphere 6.5:
After connecting to vCenter through the vSphere Web Client, enter the cluster HA configuration and set the parameters as follows.
Figure 7-70 vSphere 6.5 cluster configuration-1
Figure 7-71 vSphere 6.5 cluster configuration-2
- For VMware vSphere 6.7 and 7.0 and later updates:
After connecting to vCenter through the vSphere Web Client, enter the cluster HA configuration and set the parameters as follows.
Figure 7-72 vSphere 6.7 and 7.0 cluster configuration-1
Figure 7-73 vSphere 6.7 and 7.0 cluster configuration-2
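For the vSphere 5.5 host parameters described above, the same advanced options can also be set from the ESXi shell instead of the client GUI. This is a sketch under the assumption that Disk.AutoremoveOnPDL maps to the advanced option /Disk/AutoremoveOnPDL and VMkernel.Boot.terminateVMOnPDL to the kernel setting terminateVMOnPDL; verify both on your build before use.

```
# Keep datastores in the PDL state from being removed automatically (Disk.AutoremoveOnPDL = 0)
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0
esxcli system settings advanced list -o /Disk/AutoremoveOnPDL   # verify the new value

# Power off VMs on a datastore that enters PDL (VMkernel.Boot.terminateVMOnPDL = True)
esxcli system settings kernel set -s terminateVMOnPDL -v TRUE

# A host restart is still required, as noted in Table 7-20.
```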
- Configuring a VMware cluster (optional)
- Configure the vMotion, service, and management networks with different VLAN IDs to prevent network interference.
- Configure the management network to include the vCenter Server management node and ESXi hosts. Deny access from external applications.
- Divide the service network into VLANs to ensure logical isolation and control broadcast domains.
- Configure a DRS group to ensure that VMs are preferentially recovered in the local data center if a single host fails.
Verification
Verifying the Load Balancing Mode
Perform the following operations to verify that VMware NMP configurations have taken effect:
- Run the esxcli storage nmp satp rule list | grep -i huawei command to verify that SATP rules are successfully added.
```
[root@localhost:~] esxcli storage nmp satp rule list | grep -i huawei
VMW_SATP_ALUA        HUAWEI  XSG1  system  tpgs_on   VMW_PSP_RR
VMW_SATP_DEFAULT_AA  HUAWEI  XSG1  user    tpgs_off  VMW_PSP_RR
```
The command output includes VMW_SATP_DEFAULT_AA, which means that SATP rules have been successfully added. Manually added SATP rules (user-level) have a higher priority than the default ones (system-level).
- Run the esxcli storage nmp device list -d=naa.6xxxxxxx command to verify that working paths of LUNs are properly configured.
naa.6xxxxxxx indicates the device identifier of the LUN after it is mapped to the host.
Working paths are successfully configured if their Storage Array Type and Path Selection Policy are the same as those configured, and the number of Working Paths is equal to the total number of paths in the port group.
Example:
The following SATP rule is configured:
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR -c tpgs_off
The port group has four paths.
Figure 7-74 Checking the working paths of LUNs
In the preceding command output, Storage Array Type is VMW_SATP_DEFAULT_AA, Path Selection Policy is VMW_PSP_RR, and there are four Working Paths, which is consistent with the configuration. Therefore, the working paths are successfully configured.
When Path Selection Policy is VMW_PSP_FIXED, only one working path is available, which may be any path in the port group.
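For reference, the output behind Figure 7-74 has roughly the following shape (illustrative values only; the NAA ID, device config string, and path names will differ in your environment):

```
[root@localhost:~] esxcli storage nmp device list -d naa.6xxxxxxx
naa.6xxxxxxx
   Device Display Name: HUAWEI Fibre Channel Disk (naa.6xxxxxxx)
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Storage Array Type Device Config: {action_OnRetryErrors=off}
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1,bytes=10485760,useANO=0; lastPathIndex=3: NumIOsPending=0,numBytesPending=0}
   Working Paths: vmhba1:C0:T0:L1, vmhba1:C0:T1:L1, vmhba2:C0:T0:L1, vmhba2:C0:T1:L1
```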
Verifying the Local Preferred Mode
Perform the following operations to verify that VMware NMP configurations have taken effect:
- Run the esxcli storage nmp satp rule list | grep -i huawei command to verify that SATP rules are successfully added.
```
[root@localhost:~] esxcli storage nmp satp rule list | grep -i huawei
VMW_SATP_ALUA  HUAWEI  XSG1  system  tpgs_on  VMW_PSP_RR
VMW_SATP_ALUA  HUAWEI  XSG1  user    tpgs_on  VMW_PSP_RR
```
The command output includes VMW_SATP_ALUA, which means that SATP rules have been successfully added. Manually added SATP rules (user-level) have a higher priority than the default ones (system-level).
- Run the esxcli storage nmp device list -d=naa.6xxxxxxx command to verify that working paths of LUNs are properly configured.
naa.6xxxxxxx indicates the device identifier of the LUN after it is mapped to the host.
Working paths are successfully configured if their Storage Array Type and Path Selection Policy are the same as those configured, and the number of Working Paths is equal to the total number of paths in the port group.
Example:
The following SATP rule is configured:
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
The port group has three paths.
Figure 7-75 Checking the working paths of LUNs
In the preceding command output, Storage Array Type is VMW_SATP_ALUA, Path Selection Policy is VMW_PSP_RR, and there are three Working Paths, which is consistent with the configuration and the preferred path settings. Therefore, the working paths are successfully configured.
When Path Selection Policy is VMW_PSP_FIXED, only one working path is available, which may be any path in the port group where preferred paths reside.
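For reference, the output behind Figure 7-75 has roughly the following shape (illustrative values only; the NAA ID, device config string, and path names will differ in your environment):

```
[root@localhost:~] esxcli storage nmp device list -d naa.6xxxxxxx
naa.6xxxxxxx
   Device Display Name: HUAWEI Fibre Channel Disk (naa.6xxxxxxx)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; ...}
   Path Selection Policy: VMW_PSP_RR
   Working Paths: vmhba1:C0:T0:L1, vmhba1:C0:T1:L1, vmhba2:C0:T0:L1
```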