VMware ESXi
Storage System Configuration
This section provides the recommended configurations for HyperMetro storage systems that interconnect with VMware ESXi hosts.
Table 7-7 Recommended storage system configurations

HyperMetro Working Mode | Storage System | OS Setting | Host Access Mode | Preferred Path for HyperMetro | Description
---|---|---|---|---|---
Load balancing mode | Local storage | VMware ESXi | Load balancing | N/A | The host uses all paths of a disk with equal priority.
Load balancing mode | Remote storage | VMware ESXi | Load balancing | N/A | The host uses all paths of a disk with equal priority.
Local preferred mode | Local storage | VMware ESXi | Asymmetric | Yes | The host considers the paths from the local storage system as preferred paths, and those from the remote storage system as non-preferred paths.
Local preferred mode | Remote storage | VMware ESXi | Asymmetric | No | The host considers the paths from the local storage system as preferred paths, and those from the remote storage system as non-preferred paths.
- Use the recommended configurations in Table 7-7. Other configurations may cause problems.
- If a LUN has already been mapped to a host, you must restart the host after modifying Host Access Mode for the change to take effect. If you are mapping the LUN to the host for the first time, no restart is needed.
- When a LUN of a HyperMetro pair is mapped to all ESXi hosts in a cluster, the LUN must have the same host LUN ID on all of these hosts. You are advised to add all ESXi hosts in a cluster that are served by the same storage device to one host group and include them in the same mapping.
Configuring the Load Balancing Mode
Perform the following operations to configure the load balancing mode:
- On DeviceManager, choose Services > Hosts. Select the desired host, click the icon on the right, and choose Modify.
Figure 7-16 Modifying the host properties
NOTE:
- The information displayed on the GUI may vary slightly with the product version.
- On DeviceManager of OceanStor Dorado V6 6.0.1 and later versions, the icon is changed to More.
- On the Modify Host page, set Host Access Mode to Load balancing.
Figure 7-17 Settings on the local storage system
- Repeat the preceding steps to set Host Access Mode on the remote storage system to Load balancing.
Figure 7-18 Settings on the remote storage system
Configuring the Local Preferred Mode
Perform the following operations to configure the local preferred mode:
- On DeviceManager, choose Services > Hosts. Select the desired host, click the icon on the right, and choose Modify, as shown in Figure 7-19.
Figure 7-19 Modifying the host properties
- For the local storage system, set Host Access Mode to Asymmetric and Preferred Path for HyperMetro to Yes.
Figure 7-20 Settings on the local storage system
- For the remote storage system, set Host Access Mode to Asymmetric and Preferred Path for HyperMetro to No.
Figure 7-21 Settings on the remote storage system
Host Configuration
Install UltraPath by following instructions in the OceanStor UltraPath for vSphere User Guide.
Context
This configuration must be performed separately on all hosts.
UltraPath 21.3.0 and earlier versions support Secure Boot on servers.
Prerequisites
VMware has the following requirements on storage LUN IDs:
- For each host, the LUN ID of a LUN must be the same on all of its paths.
- For a LUN shared by multiple hosts, the host LUN ID must be the same on all of these hosts. For details, see Setting LUN Allocations on the VMware official website.
If LUN IDs do not meet the preceding requirements, correct them by referring to "ID Description" in the OceanStor Dorado Host Connectivity Guide for VMware ESXi.
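As a quick consistency check, you can compare the host LUN ID that each ESXi host reports for a shared device. The sketch below is illustrative only: naa.xxxx is a placeholder for your device identifier. Run it on each host and compare the LUN fields.

```
# List the paths of the device; the "LUN:" field is the host LUN ID on this host.
esxcli storage core path list -d naa.xxxx | grep -E "Runtime Name|LUN:"
```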
Configuring UltraPath
- Set the HyperMetro working mode.
Use either of the following methods to set the HyperMetro working mode for UltraPath:
Method 1: Run the esxcli upadm set hypermetro workingmode -m auto command to configure UltraPath to automatically adapt the HyperMetro working mode. This setting enables UltraPath to periodically query the host access mode configured on HyperMetro storage systems and adapt its HyperMetro working mode according to the host access mode.
Method 2: Run the following command to set UltraPath to work in a fixed HyperMetro working mode:
Table 7-8 Command for setting the HyperMetro working mode for UltraPath

Command | Example
---|---
set hypermetro workingmode -m [priority \| balance] -p primary_array_id | esxcli upadm set hypermetro workingmode -m priority -p 0
NOTE: In VMware vSphere, prefixing a command with esxcli upadm directs it to the UltraPath CLI.
Table 7-9 describes the command parameters.

Table 7-9 Parameter description

Parameter | Description | Default Value
---|---|---
-m mode | HyperMetro working mode. priority: local preferred mode. balance: load balancing mode. priority is recommended; balance is applicable when the two active-active data centers are in the same building. | priority
-p primary_array_id | ID of the preferred storage array. The ID is allocated by UltraPath. The storage array in the same data center as the application hosts is preferred. Run the esxcli upadm show diskarray command to obtain the storage array ID. NOTE: In priority mode, this parameter indicates the storage array to which I/Os are preferentially delivered. In balance mode, it indicates the storage array where the first slice section resides. | None

NOTE: Mapping relationship between application hosts and storage arrays:
- Storage array A is the preferred array for all application hosts in data center A.
- Storage array B is the preferred array for all application hosts in data center B.
If you set UltraPath to automatically adapt the HyperMetro working mode, ensure that the host access mode configured on the storage system is consistent with the actual physical network layout.
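For example, the following sequence pins the local array as preferred in priority mode. This is a sketch: the array ID 0 is a placeholder for whatever ID esxcli upadm show diskarray reports for your local array.

```
# Query the storage array IDs allocated by UltraPath.
esxcli upadm show diskarray

# Set the local preferred mode, preferring the local array (ID 0 in this example).
esxcli upadm set hypermetro workingmode -m priority -p 0
```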
- Configure the load balancing policy.
If HyperMetro works in load balancing mode, you can run the esxcli upadm set hypermetro loadbalancemode -m [split-size | round-robin] command to configure the load balancing policy. The following table describes the parameters in the command.
Parameter | Description | Default Value
---|---|---
-m mode | Load balancing policy for HyperMetro systems. split-size: slicing mode across storage systems. In this mode, UltraPath delivers I/Os to a specific storage system based on the start addresses of the I/Os, the slice size, and the preferred storage system. For example, if the slice size is 128 MB, I/Os whose start addresses range from 0 to 128 MB (excluding 128 MB) are preferentially delivered to the preferred storage system, and I/Os whose start addresses range from 128 MB to 256 MB (excluding 256 MB) are delivered to the non-preferred storage system. The default slice size is 128 MB; run the esxcli upadm set hypermetro split_size command to change it. round-robin: round-robin mode across storage systems. In this mode, UltraPath selects the two storage systems in turn to deliver I/Os. | split-size
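For example, to switch the policy to round-robin mode across storage systems (a sketch using the command described above):

```
# Change the HyperMetro load balancing policy to round-robin.
esxcli upadm set hypermetro loadbalancemode -m round-robin
```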
- Set timeout parameters.
FC networking does not require timeout parameter settings.
For iSCSI networking, perform the following operations on each ESXi host. You must restart the host for the configuration to take effect, so exercise caution when performing this operation.
- Obtain the iSCSI adapter name.
```
[root@esxi16113:~] esxcfg-scsidevs -a | grep -i iscsi
vmhba35  iscsi_vmk  online  iqn.XXXX-XX.com.vmware:esxi16113-xxxxxxxxx  iSCSI Software Adapter
```
In the command output, the iSCSI adapter name is vmhba35.
- Query the current timeout parameter settings.
```
[root@esxi16113:~] esxcli iscsi adapter param get -A vmhba35 | egrep 'NoopOutInterval|NoopOutTimeout|RecoveryTimeout'
NoopOutInterval   15  15  1   60   true  false
NoopOutTimeout    10  10  10  30   true  false
RecoveryTimeout   10  10  1   120  true  false
```
- Set the timeout parameters.
```
esxcli iscsi adapter param set -A vmhba35 -k NoopOutInterval -v 1
esxcli iscsi adapter param set -A vmhba35 -k NoopOutTimeout -v 10
esxcli iscsi adapter param set -A vmhba35 -k RecoveryTimeout -v 1
```
- All the preceding commands are available only in VMware ESXi 5.5, 6.0, 6.5, and 6.7. For details on the VMware versions supported by HyperMetro, see the Huawei Storage Interoperability Navigator.
- vmhba35 is the iSCSI adapter in this example. Change it based on site requirements. (Run the esxcfg-scsidevs -a command to query the iSCSI adapter.)
- You must restart the host for the configuration to take effect.
- These settings shorten the path switchover time to about 16s. With the default ESXi settings, the switchover can take up to 25s.
- If the host has multiple iSCSI adapters, see the loop sketch after the verification step.
- Verify the modification.
```
[root@esxi16113:~] esxcli iscsi adapter param get -A vmhba35 | egrep 'NoopOutInterval|NoopOutTimeout|RecoveryTimeout'
NoopOutInterval   1   15  1   60   true  false
NoopOutTimeout    10  10  10  30   true  false
RecoveryTimeout   1   10  1   120  true  false
```
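If a host has several software iSCSI adapters, a small loop built from the same commands can apply the settings to each of them. This is a sketch only; verify the adapter list before running it, and restart the host afterwards.

```
# Apply the recommended iSCSI timeouts to every iSCSI adapter on the host.
for hba in $(esxcfg-scsidevs -a | awk '/iscsi/ {print $1}'); do
  esxcli iscsi adapter param set -A "$hba" -k NoopOutInterval -v 1
  esxcli iscsi adapter param set -A "$hba" -k NoopOutTimeout -v 10
  esxcli iscsi adapter param set -A "$hba" -k RecoveryTimeout -v 1
done
```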
Configuring a VMware Cluster
Mandatory configuration items:

VMware vSphere ESXi Version | Host Parameter | Cluster Parameter (VM Policy for APD and PDL) | Remarks
---|---|---|---
5.0 U1, 5.1 | Log in to each ESXi host using SSH, open the /etc/vmware/settings file, and add Disk.terminateVMOnPDLDefault = True. | Use the vSphere Client to log in to each ESXi host. In the advanced settings, set das.maskCleanShutdownEnabled = True. | After configuring host parameters, restart the host for the configuration to take effect.
5.5.* | VMkernel.Boot.terminateVMOnPDL=True and Disk.AutoremoveOnPDL=0 | Select Turn on vSphere HA. | After configuring an ESXi 5.5.* host, you must restart the host for the configuration to take effect. After configuring cluster parameters, re-enable HA for the configuration to take effect.
6.0 GA, 6.0 U1 | VMkernel.Boot.terminateVMOnPDL=True and Disk.AutoremoveOnPDL=0 | Select Turn on vSphere HA. | After configuring an ESXi 6.0 GA or 6.0 U1 host, you do not need to restart the host because the parameters take effect immediately. After configuring cluster parameters, re-enable HA for the configuration to take effect.
6.0 U2, 6.0 U3 | VMkernel.Boot.terminateVMOnPDL=False (retain the default value) and Disk.AutoremoveOnPDL=1 (retain the default value) | Set the VM policy for APD and PDL in the cluster HA configuration. | Retain the default host parameter settings. You only need to re-enable HA in vCenter for the settings to take effect.
6.5.* | Same as 6.0 U2 and 6.0 U3. | Same as 6.0 U2 and 6.0 U3. | Same as 6.0 U2 and 6.0 U3.
6.7.* | Same as 6.0 U2 and 6.0 U3. | Same as 6.0 U2 and 6.0 U3. | Same as 6.0 U2 and 6.0 U3.
- A VM service network requires Layer 2 interworking between data centers so that VM migration between data centers does not affect VM services.
- For VMware vSphere 5.0 u1, later 5.0 versions, and vSphere 5.1:
- Cluster parameter configuration: Use the vSphere Client to log in to each ESXi host. In the advanced settings, set das.maskCleanShutdownEnabled = True.
- Host parameter configuration: Log in to each ESXi host using SSH and add Disk.terminateVMOnPDLDefault = True to the /etc/vmware/settings file (see the command sketch after this list). After configuring host parameters, restart the host for the configuration to take effect.
- For VMware vSphere 5.5.* through 6.0 u1:
- Cluster parameter configuration: Use the vSphere Web Client to connect to vCenter, go to the cluster HA configuration page, and select Turn on vSphere HA. After configuring cluster parameters, re-enable HA for the configuration to take effect.
- Host parameter configuration: Log in to each ESXi host using vSphere Client or vCenter and complete the following advanced settings (see the command sketch after this list):
Set VMkernel.Boot.terminateVMOnPDL = True. This parameter forcibly powers off VMs on a datastore when the datastore enters the PDL state.
Figure 7-22 Boot parameter settings
Set Disk.AutoremoveOnPDL = 0. This setting ensures that datastores in the PDL state are not automatically removed.
Figure 7-23 Disk parameter settings
- For VMware vSphere 6.0 u2 and later 6.0 updates:
After connecting to vCenter through the vSphere Web Client, enter the cluster HA configuration and set the parameters as follows.
Figure 7-24 vSphere 6.0 cluster configuration
- For VMware vSphere 6.5:
After connecting to vCenter through the vSphere Web Client, enter the cluster HA configuration and set the parameters as follows.
Figure 7-25 vSphere 6.5 cluster configuration-1
Figure 7-26 vSphere 6.5 cluster configuration-2
- For VMware vSphere 6.7 and later updates:
After connecting to vCenter through the vSphere Web Client, enter the cluster HA configuration and set the parameters as follows.
Figure 7-27 vSphere 6.7 cluster configuration-1
Figure 7-28 vSphere 6.7 cluster configuration-2
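The host-side parameters referenced above can also be set from the command line. The following is a sketch only; verify the setting names on your ESXi build before use. The first two commands target ESXi 5.0 u1/5.1, and the esxcli commands target ESXi 5.5.* through 6.0 u1.

```
# ESXi 5.0 u1/5.1: append the PDL termination setting, then restart the host.
cp /etc/vmware/settings /etc/vmware/settings.bak
echo "Disk.terminateVMOnPDLDefault = True" >> /etc/vmware/settings

# ESXi 5.5.* through 6.0 u1: set the equivalent advanced options via esxcli.
esxcli system settings kernel set -s terminateVMOnPDL -v TRUE      # VMkernel.Boot.terminateVMOnPDL
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0  # Disk.AutoremoveOnPDL

# Confirm the values.
esxcli system settings kernel list -o terminateVMOnPDL
esxcli system settings advanced list -o /Disk/AutoremoveOnPDL
```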
Recommended configuration items:
- Configure the vMotion, service, and management networks with different VLAN IDs to prevent network interference.
- Configure the management network to include the vCenter Server management node and ESXi hosts. Deny access from external applications.
- Divide the service network into VLANs to ensure logical isolation and control broadcast domains.
- Configure a DRS group to ensure that VMs are preferentially recovered in the local data center if a single host fails.
Verification
In VMware vSphere, run the esxcli upadm show upconfig command.
NOTE: In VMware vSphere, prefixing a command with esxcli upadm directs it to the UltraPath CLI.
If the command output contains the following information, the configuration is successful:
HyperMetro WorkingMode : read write within primary array
Figure 7-29 provides an example.
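For a quick check, you can filter the output for the HyperMetro fields, for example:

```
# Filter the UltraPath configuration for HyperMetro-related settings.
esxcli upadm show upconfig | grep -i hypermetro
```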