Performance Monitoring Guide

OceanStor Dorado V3 Series V300R002

This document describes performance monitoring of storage systems, including the monitoring method, indicator planning, configuration monitoring, and problem diagnosis.
Understanding the LUN Performance

Understanding how local access to LUNs affects performance helps you identify and locate problems in the storage system.

Local Access

Local access to a LUN means that I/Os destined for the LUN are delivered directly to its owning controller. As shown in Figure 4-7, a host is physically connected to controller A; the owning controller of LUN 1 is controller A, and that of LUN 2 is controller B.

  • When the host attempts to access LUN 1, controller A directly delivers the access requests to LUN 1. Such a LUN access mode is called local access.
  • When the host attempts to access LUN 2, the access requests are first delivered to controller A. Then, controller A forwards them to controller B through the mirror channel between controllers A and B. Finally, controller B delivers the access requests to LUN 2. Such a LUN access mode is called peer access.
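The routing difference between the two access modes can be sketched as follows. This is a hypothetical model for illustration only (the controller map and function names are assumptions, not Huawei code): local access reaches the owning controller in one hop, while peer access adds a forwarding hop over the inter-controller mirror channel.

```python
# Hypothetical model of I/O routing to a LUN's owning controller.
# The LUN-to-owner mapping mirrors the Figure 4-7 example.
OWNING_CONTROLLER = {"LUN1": "A", "LUN2": "B"}

def route_io(host_connected_controller: str, lun: str) -> list[str]:
    """Return the controller path an I/O takes from the host to the LUN."""
    owner = OWNING_CONTROLLER[lun]
    if host_connected_controller == owner:
        return [owner]  # local access: the I/O goes straight to the owner
    # Peer access: the receiving controller forwards the I/O over the
    # mirror channel to the owning controller.
    return [host_connected_controller, owner]

print(route_io("A", "LUN1"))  # ['A']       -> local access
print(route_io("A", "LUN2"))  # ['A', 'B']  -> peer access
```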
Figure 4-7 Network diagram

Peer access involves the mirror channel between controllers, and the channel's limited bandwidth degrades LUN read/write performance. To prevent peer access, ensure that each host is physically connected to both controller A and controller B. If a host is physically connected to only one controller, set the owning controller of the LUN to the controller connected to the host.

To check whether the owning and working controllers of a LUN are the same, run show lun general on the CLI. If they differ, you are advised to check whether the links between the host and the storage system are up and whether the multipathing software is working properly.

admin:/>show lun general lun_id=0 

  ID                              : 0 
  Name                            : LUN001 
  Pool ID                         : 0 
  Capacity                        : 10.000GB 
  Subscribed Capacity             : 0.000B 
  Protection Capacity             : 0.000B 
  Sector Size                     : 512.000B 
  Health Status                   : Normal 
  Running Status                  : Online 
  Type                            : Thin 
  IO Priority                     : Low 
  WWN                             : 6e0979610048d172004d3d7700XXXXXX 
  Exposed To Initiator            : No 
  Data Distributing               : -- 
  Write Policy                    : Write Back 
  Running Write Policy            : Write Back 
  Prefetch Policy                 : None 
  Read Cache Policy               : -- 
  Write Cache Policy              : -- 
  Cache Partition ID              : -- 
  Prefetch Value                  : -- 
  Owner Controller                : 0A 
  Work Controller                 : 0A 
  Snapshot ID(s)                  : -- 
  LUN Copy ID(s)                  : -- 
  Remote Replication ID(s)        : -- 
  Split Clone ID(s)               : -- 
  Relocation Policy               : -- 
  Initial Distribute Policy       : -- 
  SmartQoS Policy ID              : -- 
  Protection Duration(days)       : -- 
  Has Protected For(h)            : -- 
  Estimated Data To Move To Tier0 : -- 
  Estimated Data To Move To Tier1 : -- 
  Estimated Data To Move To Tier2 : -- 
  Is Add To Lun Group             : No 
  Smart Cache Partition ID        : -- 
  DIF Switch                      : No 
  Remote LUN WWN                  : -- 
  Disk Location                   : Internal 
  LUN Migration                   : -- 
  Progress(%)                     : -- 
  Smart Cache Cached Size         : -- 
  Smart Cache Hit Rage(%)         : -- 
  Mirror Type                     : -- 
  Thresholds Percent(%)           : 90 
  Thresholds Switch               : Off 
  Usage Type                      : Internal 
  HyperMetro ID(s)                : 
  Dedup Enabled                   : Yes 
  Compression Enabled             : Yes     
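The check above can be automated by parsing the command output. A minimal sketch, assuming only the key/value layout shown in the output above (the function name and sample text are illustrative, not part of the product):

```python
# Sketch: parse `show lun general` output and flag a LUN whose working
# controller differs from its owning controller (a sign of peer access
# or a multipathing/link problem).
def check_controllers(cli_output: str) -> bool:
    """Return True when Owner Controller and Work Controller match."""
    fields = {}
    for line in cli_output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    owner = fields.get("Owner Controller")
    work = fields.get("Work Controller")
    return owner is not None and owner == work

sample = """
  Owner Controller                : 0A
  Work Controller                 : 0A
"""
print(check_controllers(sample))  # True: controllers match, no peer access
```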

Ping-Pong Effect

In a clustered multipathing network environment, UltraPath is able to automatically switch over the working controller of a LUN. When two application servers attempt to access the same LUN whose owning controller is controller A:

  1. If a link connected to application server 1 fails as shown in Figure 4-8, the UltraPath running on application server 1 switches the working controller of the LUN to controller B.
  2. If the two links connected to application server 2 work properly, UltraPath on application server 2 still sees controller A as the LUN's owning controller and switches the working controller back to controller A. UltraPath on application server 1 then switches it to controller B again. As a result, the two application servers keep switching the working controller back and forth.

This is called the Ping-Pong effect. It reduces LUN access performance and makes I/O timeouts on application servers more likely.
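The recurring switchover can be illustrated with a small simulation. This is a hypothetical sketch of the behavior described above, not UltraPath logic: host 1 can only reach controller B, host 2 prefers the owning controller A, and each switchover undoes the other's.

```python
# Hypothetical simulation of the Ping-Pong effect between two hosts
# sharing a LUN whose owning controller is A.
def simulate_switchovers(rounds: int) -> list[str]:
    working = "A"  # working controller of the shared LUN
    history = []
    for _ in range(rounds):
        if working != "B":
            working = "B"                # host 1 (link to A down) switches to B
            history.append("host1->B")
        if working != "A":
            working = "A"                # host 2 (both links up) switches back to A
            history.append("host2->A")
    return history

print(simulate_switchovers(2))
# ['host1->B', 'host2->A', 'host1->B', 'host2->A'] -- the switching recurs
```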

Figure 4-8 Schematic diagram of the Ping-Pong effect

If the Ping-Pong effect occurs in a storage system, take the following measures:

  1. Disable the automatic LUN switchover function of UltraPath. For details, see the UltraPath User Guide of the corresponding version.
  2. Recover the disconnected link as soon as possible. Ensure that the link between each node and each storage controller is up.
Updated: 2019-07-17

Document ID: EDOC1100049152
