
Facing performance issue in 5800 V3

Publication Date: 2017-07-24
Issue Description
 

On May 30, 2017, the customer reported that the latency of disks mapped to hosts MED31 and MED33 had become longer since May 26, while that of disks mapped to MED22 remained normal. Later that day, the customer reported that disks mapped to some other hosts also had longer latency.

The following table lists the IDs, names, and operating system types of the hosts involved in this document.

Host ID   Host Name    OS Type
15        MED31_HOST   Solaris
56        MED33_HOST   Solaris
16        MED22_HOST   Solaris
8         SPB01_HOST   Linux
9         SPB02_HOST   Linux

Handling Process
 

Storage Performance Data Analysis

 

The following provides performance data for May. Times in the figures are Beijing Time (UTC+8:00); the site's local time is UTC+5:30.
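
When mapping timestamps in the figures to site local time, subtract 2 hours 30 minutes. A minimal Python sketch of the conversion (the "YYYY-MM-DD HH:MM" timestamp format is an assumption, not taken from the storage tooling):

    from datetime import datetime, timedelta, timezone

    BEIJING = timezone(timedelta(hours=8))            # time zone used in the figures
    LOCAL = timezone(timedelta(hours=5, minutes=30))  # site local time

    def to_local(ts_beijing: str) -> str:
        # Parse a Beijing Time timestamp and re-express it in site local time.
        dt = datetime.strptime(ts_beijing, "%Y-%m-%d %H:%M").replace(tzinfo=BEIJING)
        return dt.astimezone(LOCAL).strftime("%Y-%m-%d %H:%M")

    print(to_local("2017-05-27 03:00"))  # -> 2017-05-27 00:30 at the site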

 

Host Bandwidth

The read/write services on hosts MED31 and MED22 have been regular over the past month. The services have changed slightly since May 26, but the service load has not been heavy.

Host MED33 started carrying a large number of read/write services at 14:00 on May 29; it had hardly carried any services before.
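
A step change like MED33's can be flagged programmatically from exported per-host bandwidth samples. A sketch of one way to do it, assuming per-minute MB/s samples and an arbitrary 3x threshold (neither comes from the storage system's tooling):

    from statistics import mean

    def bandwidth_jumped(samples, recent_minutes=1440, factor=3.0):
        """samples: per-minute MB/s values, oldest first.
        True if the last day's mean is far above the earlier baseline."""
        if len(samples) <= recent_minutes:
            return False                      # not enough history to compare
        baseline = mean(samples[:-recent_minutes])
        recent = mean(samples[-recent_minutes:])
        return recent > factor * max(baseline, 1.0)  # floor damps near-idle noise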

 

Read/Write Latency of Hosts

The write latency of the hosts is about 1 ms to 3 ms, indicating that the storage system processes write requests normally. The read latency of the hosts is usually below 10 ms but increased to about 20 ms at 14:00 on May 29. The latency of the storage device reading data from disks is normal.

Note: The statistics item shows the latency of the storage device processing read/write requests only.

 

Read/Write Latency of LUNs Mapped to Hosts

The write latency of LUNs mapped to the hosts has increased since 3:00 on May 27. The overall read latency of the hosts increased to about 20 ms. (When there were fewer read requests, the latency increase was not obvious.)

Note: The read latency in this statistics item covers only the storage device's processing of read requests, while the write latency covers the device's processing of write requests plus the queuing latency of link transmission.
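
Taken together with the previous note, this lets you estimate the link/queuing contribution by subtracting the host-level write latency (device processing only) from the LUN-level write latency (processing plus queuing). A sketch with hypothetical numbers; only the 1-3 ms device latency comes from this case:

    def link_queuing_ms(lun_write_ms: float, device_write_ms: float) -> float:
        """Approximate the queuing latency of link transmission as the gap
        between the LUN-level write latency (processing + queuing) and the
        host-level write latency (processing only)."""
        return max(lun_write_ms - device_write_ms, 0.0)

    # Hypothetical: device write latency ~2 ms (within the 1-3 ms seen here);
    # a LUN write latency of 12 ms would put ~10 ms in link queuing.
    print(link_queuing_ms(12.0, 2.0))  # -> 10.0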

 

FC Port Bandwidth

The storage device has eight FC ports, of which CTE0.A7.P0, CTE0.A7.P2, CTE0.B7.P0, and CTE0.B7.P2 have small service volumes.

Since 3:00 on May 27, service volumes on CTE0.A7.P1, CTE0.A7.P3, CTE0.B7.P1, and CTE0.B7.P3 increased abruptly to 300 MB/s or above (exceeding the upper limit). This point in time coincides with the increase in disk write latency.

At about 2:00 on May 31, service volumes on the four lightly loaded ports increased, while those on the four busy ports decreased but remained high.

Note: The figures show historical performance. The statistics period is one minute, so the average bandwidth is slightly lower than the actual bandwidth.
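
A toy illustration of that note: averaging per-second throughput over a 60 s window hides bursts, so a port can report an average below the 300 MB/s level even while it is saturated for part of the minute. The sample values below are hypothetical:

    from statistics import mean

    per_second = [500.0] * 20 + [100.0] * 40   # a 20 s burst inside one minute
    minute_avg = mean(per_second)              # what a 1-minute statistic reports

    print(f"peak {max(per_second):.0f} MB/s, 1-min average {minute_avg:.1f} MB/s")
    print("average above 300 MB/s?", minute_avg >= 300.0)  # False despite the burst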

 

Bandwidth of Other Hosts

Read service volumes on hosts SPB01_HOST and SPB02_HOST increased abruptly to about 500 MB/s starting at 3:00 on May 27. This is the main cause of the front-end ports' service volumes reaching the limit.

 

 

Root Cause
 

Storage services at the site are concentrated on ports CTE0.A7.P1, CTE0.A7.P3, CTE0.B7.P1, and CTE0.B7.P3. After service volumes exceeded the upper limit, I/O queuing on these FC ports caused the long latency.
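
The nonlinear effect of saturation can be illustrated with a textbook M/M/1 queueing model (not taken from this document): residence time grows as service_time / (1 - utilization), so latency stays flat until load nears the port limit and then climbs steeply. The ~320 MB/s effective per-port capacity and 0.5 ms base service time below are assumptions:

    def mm1_wait_ms(load_mb_s: float, capacity_mb_s: float = 320.0,
                    service_ms: float = 0.5) -> float:
        """Mean residence time W = S / (1 - rho) for an M/M/1 queue."""
        rho = load_mb_s / capacity_mb_s
        if rho >= 1.0:
            return float("inf")          # the queue grows without bound
        return service_ms / (1.0 - rho)

    for load in (100, 200, 280, 310):    # MB/s per port
        print(load, "MB/s ->", round(mm1_wait_ms(load), 2), "ms")

In this model, latency roughly doubles between 200 MB/s and 280 MB/s and then quadruples again by 310 MB/s, which matches the observed pattern of ports running near 300 MB/s while host latency jumped.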

END