Low read/write performance on the S2600 and other in-house developed disk arrays

Publication Date:  2012-11-23
Issue Description
The read/write performance of the service is low and does not reach the planned specification.

Alarm Information
None
Handling Process
The common causes of low read/write performance and the corresponding solutions are as follows:
1. The LUN's write policy is set to write-through, or a system fault (for example, a battery failure) has forced the LUN's running status to write-through:

Solution:
Rectify the system fault (for example, the battery failure); the LUN's write policy then changes back to write-back automatically. Alternatively, set the LUN's write policy to forced write-back (this carries a risk of data loss if the power fails abnormally).
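
If the LUN status can be exported as text from the management interface, the check can be scripted. The following minimal Python sketch assumes a hypothetical listing format; it is not the actual S2600 CLI output and only illustrates the idea:

# Minimal sketch: flag LUNs whose running write policy has fallen back to
# write-through. The listing format and LUN names below are hypothetical,
# not actual S2600 CLI output; adapt the parsing to the real LUN listing.
SAMPLE_LUN_LISTING = """\
LUN0001  configured policy: write back   running policy: write through
LUN0002  configured policy: write back   running policy: write back
"""

def luns_in_write_through(listing):
    """Return the names of LUNs whose running policy is write-through."""
    return [line.split()[0] for line in listing.splitlines()
            if "running policy: write through" in line]

if __name__ == "__main__":
    for lun in luns_in_write_through(SAMPLE_LUN_LISTING):
        # A LUN running in write-through usually points to a fault, such as a
        # failed battery, that should be rectified first.
        print(f"{lun} is running in write-through; check the battery and controller state")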

2. When the front-end link is iSCSI, the negotiated speed between the host's service network interface and the disk array is not 1000 Mbit/s, or the network is unstable (ping shows packet loss);
Solution:
Check whether the network cable is connected to a 1000 Mbit/s port on the switch; if the traffic passes through multiple switches, check whether every port along the path runs at 1000 Mbit/s.
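
Both checks can also be scripted on a Linux host. The following minimal Python sketch assumes a Linux host; the interface name and array IP address are placeholders:

# Minimal sketch (Linux host side): verify that the iSCSI link negotiated
# 1000 Mbit/s and that ping to the array shows no packet loss.
# The interface name and array IP below are placeholders.
import re
import subprocess

IFACE = "eth1"               # hypothetical: the host's iSCSI service interface
ARRAY_IP = "192.168.0.100"   # hypothetical: the disk array's iSCSI portal IP

def link_speed_mbps(iface):
    # /sys/class/net/<iface>/speed reports the negotiated speed in Mbit/s.
    with open(f"/sys/class/net/{iface}/speed") as f:
        return int(f.read().strip())

def ping_loss_percent(host, count=20):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(m.group(1)) if m else 100.0

if __name__ == "__main__":
    if link_speed_mbps(IFACE) < 1000:
        print(f"{IFACE}: link speed is below 1000 Mbit/s; check the cable and switch port")
    loss = ping_loss_percent(ARRAY_IP)
    if loss > 0:
        print(f"ping to {ARRAY_IP}: {loss}% packet loss; the network is unstable")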

3. There is a bottleneck disk (slow disk) in the RAID group. On the disk array's command line, run "iostat" to check the I/O status of the member disks. If one disk's average service time is noticeably larger than the others' over a given period, or its utilization (%util) is close to 100% while the other disks are comparatively idle, that disk may be a bottleneck disk and needs further analysis; disk "sddg" in the following output is an example:
Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sddh         0.50   2.00 19.00 51.00 2496.00 6784.00  1248.00  3392.00   132.57     2.15   30.66   8.79  61.50
sddg         0.50  22.00 22.00 32.00 2944.00 7040.00  1472.00  3520.00   184.89    74.34 1437.00  18.54 100.10
sddf         0.00   4.50 20.00 44.00 2560.00 6144.00  1280.00  3072.00   136.00     2.48   37.17   9.41  60.20
sdde         0.50   1.50 15.50 46.50 2048.00 6144.00  1024.00  3072.00   132.13     1.37   21.03   9.05  56.10
sddd         0.00   1.50 17.50 45.50 2240.00 6016.00  1120.00  3008.00   131.05     1.21   19.11   9.02  56.80
sddc         0.00   2.00 15.00 47.00 1920.00 6272.00   960.00  3136.00   132.13     1.64   26.15   9.71  60.20
sddb         0.00   1.00 15.50 50.00 1984.00 6528.00   992.00  3264.00   129.95     2.00   30.85  10.11  66.20
sdda         0.00   1.00 15.50 45.50 1984.00 5952.00   992.00  2976.00   130.10     1.18   19.11   8.75  53.35
Solution:
Replace the bottleneck disk.
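
The comparison can be scripted as well. The following minimal Python sketch parses extended iostat output in the column layout shown above and flags disks whose await or %util stands out from the rest of the group; the thresholds are illustrative only:

# Minimal sketch: parse extended iostat output in the column layout shown
# above and flag disks whose await or %util stands out from the rest of the
# RAID group (as sddg does in the example). Thresholds are illustrative.
import sys
from statistics import median

def parse_iostat(text):
    """Return {device: (await, util)} from 14-column extended iostat lines."""
    disks = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 14 and parts[0] != "Device:":
            disks[parts[0]] = (float(parts[11]), float(parts[13]))  # await, %util
    return disks

def bottleneck_candidates(disks, await_factor=3.0, util_limit=95.0):
    med_await = median(a for a, _ in disks.values())
    return [dev for dev, (a, util) in disks.items()
            if util >= util_limit or a > await_factor * med_await]

if __name__ == "__main__":
    stats = parse_iostat(sys.stdin.read())      # e.g. pipe in: iostat -x 2 2
    for dev in bottleneck_candidates(stats):
        print(f"{dev} looks like a bottleneck disk; analyze it further or replace it")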

4. When the host uses a file system and disk space utilization is too high (>= 85%), the file system becomes heavily fragmented, which degrades read/write performance.
Solution:
1. When a file system is used, keep the host's disk space utilization below 85%;
2. Defragment the file system regularly.
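
The utilization check can be automated on the host, for example with the following minimal Python sketch; the mount points are placeholders for the file systems built on the array's LUNs:

# Minimal sketch: warn when a file system built on the array reaches the 85%
# space-utilization threshold mentioned above. The mount points are placeholders.
import shutil

THRESHOLD = 0.85
MOUNT_POINTS = ["/mnt/lun0", "/mnt/lun1"]   # hypothetical host mount points

for mp in MOUNT_POINTS:
    usage = shutil.disk_usage(mp)
    ratio = usage.used / usage.total
    if ratio >= THRESHOLD:
        print(f"{mp}: {ratio:.0%} used; free up space and defragment to avoid "
              f"fragmentation-related slowdown")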

5. (This item applies only to the S2600 V100R001.) When the disk array is in an abnormal state, it enables flow control. Run "tgt showparam" in the MML to check the status. In the normal state, the "current flow level" is "NONE"; any other value indicates the corresponding flow-control level (for example, "MAJOR").

Solution:
Analyze the specific cause, for example, whether the CPU or memory usage is too high. If the condition does not recover, restart the disk array.
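
If the output of "tgt showparam" is captured to a text file, the flow-control state can be checked in a script. The following minimal Python sketch assumes a line format for illustration; only the field name "current flow level" and values such as NONE and MAJOR come from the procedure above:

# Minimal sketch (S2600 V100R001): read a captured "tgt showparam" output and
# report the flow-control state. The exact line format is an assumption; only
# the field name "current flow level" and values such as NONE and MAJOR come
# from the procedure above.
import re

def current_flow_level(showparam_text):
    m = re.search(r"current flow level\s*[:=]?\s*(\w+)", showparam_text, re.IGNORECASE)
    return m.group(1).upper() if m else "UNKNOWN"

if __name__ == "__main__":
    sample = "current flow level : MAJOR"    # illustrative capture, not real output
    level = current_flow_level(sample)
    if level != "NONE":
        print(f"flow control is active at level {level}; check CPU and memory "
              "usage before considering a restart of the disk array")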
6. For other conditions, contact technical support.



Root Cause
None
Suggestions
None

END