OceanStor Dorado and OceanStor 6.x and V700R001 Host Connectivity Guide for AIX
OS Native Multipathing Software
Storage System Configuration
If the OS native multipathing software is used, retain the default host and initiator settings. By default, Host Access Mode is Load balancing. You can click the host name and check the settings on the Summary tab page.
The information displayed on the GUI may vary slightly with the product version.
If Host Access Mode is not Load balancing, perform the following steps to change it:
- Click the host name and choose Operation > Modify. (Figure 6-3 Modifying the host properties)
- Set Host Access Mode to Load balancing and click OK. (Figure 6-4 Modifying the host access mode)
- Confirm the information and click OK. (Figure 6-5 Confirming the operation)
- For details about the AIX versions, see the Huawei Storage Interoperability Navigator.
- If a LUN has already been mapped to a host, you must restart the host for a change to Host Access Mode to take effect. If you are mapping the LUN for the first time, a restart is not required.
- To change a LUN mapping on the storage system (including but not limited to changing the host LUN ID, changing the online status of a port, and removing or adding a LUN), follow the instructions in How Can I Change LUN Mappings When Non-Huawei Multipathing Software Is Used? Otherwise, services may be interrupted.
- When data is migrated from other Huawei storage systems (including Dorado V3, OceanStor V3, and OceanStor V5) to 6.x and V700R001 series storage systems, configure the storage system by following instructions in Recommended Configurations for 6.x and V700R001 Series Storage Systems for Taking Over Data from Other Huawei Storage Systems When the Host Uses the OS Native Multipathing Software.
Host Configuration
Operating Environment Requirements
- In SAN Boot mode, the virtual LUN running the host's operating system must be a common virtual LUN. You can change a common virtual LUN to a HyperMetro virtual LUN only after ODM is installed on the host and the host is restarted.
- When NPIV coupled with VIOS is used, the requirements of NPIV on hardware and software must be met.
If AIX SAN Boot is used, the recommended configuration procedure is as follows:
- Use a single path to install the AIX SAN Boot system and start the operating system through a single-path disk.
- Install the multipathing software, restart the host for the multipathing software to take effect, and take over the SAN Boot disk.
- Finally, connect multiple paths.
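Between the steps above, the path count of the boot disk can be verified with lspath. This is a sketch: hdisk0 as the SAN Boot disk is an assumption, so confirm the actual boot disk first (for example with lspv and bootlist).

```shell
# Identify the boot disk (hdisk0 is assumed below; verify with these).
bootlist -m normal -o
lspv

# Before the remaining fabric connections are added, the boot disk
# should show exactly one path; after step 3, all paths should appear.
lspath -l hdisk0
```

If more than one path is visible before the multipathing software is installed, disconnect or unzone the extra paths before proceeding.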
Installing and Enabling Multipathing Software
AIX native MPIO can take over Huawei storage disks only if the AIX ODM package has been installed. After AIX ODM has been installed, the fc_err_recov and dyntrk parameters of the FC HBAs must be set. For details on how to install AIX ODM, see the AIX ODM for MPIO User Guide.
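The exact values for fc_err_recov and dyntrk are left to the AIX ODM for MPIO User Guide. A commonly recommended combination on AIX MPIO hosts, offered here as an assumption rather than a value taken from this guide, is fast I/O failure with dynamic tracking enabled on each fscsi device:

```shell
# Hedged sketch: typical FC HBA settings for MPIO failover
# (fast_fail/yes are assumed values; confirm against the
# AIX ODM for MPIO User Guide for your package version).
# -P writes the change to the ODM only; it takes effect after reboot.
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P
chdev -l fscsi1 -a fc_err_recov=fast_fail -a dyntrk=yes -P
```

Repeat for every fscsiN instance connected to the storage system, then restart the host so the -P changes are applied.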
Run the following command to verify that MPIO has taken over the disks from Huawei storage.
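The verification can be done by listing the disk devices and checking their device type. This is a sketch assuming the Huawei ODM package labels its LUNs as MPIO disks in the lsdev output; the exact type string depends on the ODM package version.

```shell
# List all disk devices. After the Huawei AIX ODM package is installed,
# hdisks backed by Huawei storage should report an MPIO device type
# instead of "Other FC SCSI Disk Drive".
lsdev -Cc disk
```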
Configuring Multipathing Software
The default I/O policy is fail_over, under which I/Os are delivered on only one path. To deliver I/Os on multiple paths of a controller, run the following command to change the I/O policy to round_robin. In UltraPath 31.0.2 and later, the default path selection algorithm of AIX ODM for MPIO has been changed from fail_over to round_robin.
- When native MPIO is used, services must be suspended before you change the I/O policy for hdisk.
- In AIX 6.1 TL9 and later or AIX 7.1 TL5 and later, if the disk type is not SCSI-2 reserves, use the shortest_queue path selection algorithm to maximize SAN resource usage. Under light load, shortest_queue behaves like round_robin; once a path becomes congested, the system automatically directs more I/Os to other, lightly loaded paths. The queue_depth parameter can be adjusted based on the customer's host service configuration.
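The shortest_queue and queue_depth changes described above can be applied with chdev. This is a sketch: queue_depth=64 is an illustrative value, not a recommendation from this guide, and services on the disk must be suspended first as noted.

```shell
# Hedged sketch: switch hdisk1 to shortest_queue and adjust queue_depth
# (64 is an example value; size queue_depth to the actual workload).
# Suspend services on the disk before running this.
chdev -l hdisk1 -a algorithm=shortest_queue -a queue_depth=64

# Confirm the new settings.
lsattr -El hdisk1 -a algorithm -a queue_depth
```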
bash-3.2# chdev -l hdisk1 -a algorithm=round_robin
hdisk1 changed
bash-3.2# lsattr -EHl hdisk1
attribute       value               description                      user_settable
PCM             PCM/friend/MPIOpcm  Path Control Module              False
PR_key_value    none                Persistant Reserve Key Value     True
algorithm       round_robin         Algorithm                        True
clr_q           no                  Device CLEARS its Queue on error True
dist_err_pcnt   0                   Distributed Error Percentage     True
dist_tw_width   50                  Distributed Error Sample Time    True
hcheck_cmd      test_unit_rdy       Health Check Command             True
hcheck_interval 30                  Health Check Interval            True
hcheck_mode     nonactive           Health Check Mode                True
location                            Location Label                   True
lun_id          0x1000000000000     Logical Unit Number ID           False
lun_reset_spt   yes                 LUN Level Reset                  True
max_transfer    0x40000             Maximum TRANSFER Size            True
node_name       0x2100010203040509  FC Node Name                     False
pvid            none                Physical volume identifier       False
q_err           yes                 Use QERR bit                     True
q_type          simple              Queuing TYPE                     True
queue_depth     32                  Queue DEPTH                      True
reassign_to     120                 REASSIGN time out value          True
reserve_policy  no_reserve          Reserve Policy                   True
rw_timeout      30                  READ/WRITE time out value        True
scsi_id         0x10400             SCSI ID                          False
start_timeout   60                  START unit time out value        True
timeout_policy  fail_path           Timeout Policy                   True
ww_name         0x2991010203040509  FC World Wide Name               False
bash-3.2#
Verification
Run the following command to check the path status and the number of paths. Before service provisioning, all paths are in the Clo (closed) state. The total number of paths equals the number of configured logical paths (eight in this example).
bash-3.2# lsmpio -l hdisk1
name   path_id status  path_status parent connection
hdisk1 0       Enabled Clo         fscsi0 2991010203040509,1000000000000
hdisk1 1       Enabled Clo         fscsi0 2811010203040509,1000000000000
hdisk1 2       Enabled Clo         fscsi0 2992010203040509,1000000000000
hdisk1 3       Enabled Clo         fscsi0 2812010203040509,1000000000000
hdisk1 4       Enabled Clo         fscsi1 2991010203040509,1000000000000
hdisk1 5       Enabled Clo         fscsi1 2811010203040509,1000000000000
hdisk1 6       Enabled Clo         fscsi1 2992010203040509,1000000000000
hdisk1 7       Enabled Clo         fscsi1 2812010203040509,1000000000000
After service provisioning, all paths are in the Sel (selected) state, indicating that all paths are carrying I/Os.
bash-3.2# lsmpio -l hdisk1
name   path_id status  path_status parent connection
hdisk1 0       Enabled Sel         fscsi0 2991010203040509,1000000000000
hdisk1 1       Enabled Sel         fscsi0 2811010203040509,1000000000000
hdisk1 2       Enabled Sel         fscsi0 2992010203040509,1000000000000
hdisk1 3       Enabled Sel         fscsi0 2812010203040509,1000000000000
hdisk1 4       Enabled Sel         fscsi1 2991010203040509,1000000000000
hdisk1 5       Enabled Sel         fscsi1 2811010203040509,1000000000000
hdisk1 6       Enabled Sel         fscsi1 2992010203040509,1000000000000
hdisk1 7       Enabled Sel         fscsi1 2812010203040509,1000000000000
bash-3.2#
Only AIX 6.1 TL9, AIX 7.1 TL3, and later versions support the lsmpio command. If your OS version does not support lsmpio, use the lspath command to query path information.
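On the older releases without lsmpio, the equivalent check can be done as follows; healthy paths report "Enabled" in the lspath output.

```shell
# Query the state of every path to hdisk1 on AIX levels
# that do not support lsmpio; all paths should show "Enabled".
lspath -l hdisk1

# Alternatively, list the paths of all disks at once.
lspath
```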