
Performance Difference Between the Primary and Secondary Nodes of the OceanStor N8500 (V200R001) CIFS Service Is Large

Publication Date:  2019-06-18  |   Views:  180  |   Downloads:  0  |   Document ID:  EKB1100018744


Issue Description

With the N8000 cluster mode set to CTDB, eight clients simultaneously read files created on the N8000 engine. When the clients read, through the primary node, files created on the primary node and files created on the secondary node, the read performance for files created on the primary node is much higher than for files created on the secondary node.

Handling Process

In CTDB mode, share the file system fs_ctdb as a normal CIFS share.
N8000.CIFS> share show
ShareName    FileSystem    ShareOptions
share_ctdb   fs_ctdb       owner=root,group=root,fs_mode=1777,rw,guest,inherit permissions=yes
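
For reference, those share options map roughly onto a Samba share section like the one below. This is a minimal sketch, not the actual configuration generated on the N8000; in particular, the mount point /vx/fs_ctdb is an assumption.

[share_ctdb]
    # hypothetical mount point; the real path on the N8000 may differ
    path = /vx/fs_ctdb
    # "rw" in the share options above
    read only = no
    # "guest" allows guest access
    guest ok = yes
    inherit permissions = yes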

Map 8 network drives on the 10GE client using the 8 service IP addresses of the 10GE NIC on the primary node. Open 8 SANergy pages, associate each page with one of the drive letters, and write a 25 GB file to each drive letter with a block size of 1024 KB (1 MB). Run the sar command; the traffic reaches about 390 MB/s.
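
The per-interface counters below come from sysstat's network device report. An invocation along the following lines produces this kind of output; the 2-second sampling interval is an assumption.

# report per-interface traffic every 2 seconds; rxkB/s and txkB/s are kilobytes per second
sar -n DEV 2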

20:58:56        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
20:58:58           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      switch0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      switch1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58         eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58         eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58     priveth1     22.00     11.00     11.20      1.42      0.00      0.00      0.00
20:58:58     priveth0     64.50     27.50     13.57      2.85      0.00      0.00      0.00
20:58:58      pubeth4      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      pubeth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      pubeth2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      pubeth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58        eth10      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58        eth11      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      pubeth0     99.00      6.50     10.35      3.65      0.00      0.00      0.00
20:58:58        eth13      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58      pubeth5  22954.50  10188.00 390685.67    840.00      0.00      0.00      0.00
20:58:58      pubeth6      0.00      0.00      0.00      0.00      0.00      0.00      0.00
20:58:58         sit0      0.00      0.00      0.00      0.00      0.00      0.00      0.00

The 8 SANergy pages then read the file at the same time. Run the sar command; the traffic can exceed 700 MB/s, and the traffic on the heartbeat links is small.


22:09:59        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
22:10:01           lo     27.14     27.14      3.31      3.31      0.00      0.00      0.00
22:10:01      switch0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01      switch1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01         eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01         eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01     priveth1     15.58      6.03      1.86      0.32      0.00      0.00      0.00
22:10:01     priveth0     28.14     25.63      2.57      2.34      0.00      0.00      0.00
22:10:01      pubeth4      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01      pubeth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01      pubeth2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01      pubeth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01        eth10      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01        eth11      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01      pubeth0     96.98     52.76      8.58      8.62      0.00      0.00      0.50
22:10:01        eth13      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01      pubeth5  93073.37 486847.74   5797.59 705602.43      0.00      0.00      0.00
22:10:01      pubeth6      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:10:01         sit0      0.00      0.00      0.00      0.00      0.00      0.00      0.00

Map one network drive on the 10GE client using a service IP address on the secondary node, and use SANergy to generate a 25 GB file with a 1 MB block size. Then read that file simultaneously through the 8 drive letters mapped via the primary node using SANergy. The performance is only about 360 MB/s, and the traffic on the heartbeat links is heavy.


22:18:55        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
22:18:57           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      switch0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      switch1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57         eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57         eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57     priveth1   7608.54   7586.93   1819.99   2703.38      0.00      0.00      0.00
22:18:57     priveth0   9293.47   9297.99   2227.32   3312.04      0.00      0.00      0.00
22:18:57      pubeth4      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      pubeth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      pubeth2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      pubeth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57        eth10      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57        eth11      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      pubeth0     85.43      7.04      8.06      3.69      0.00      0.00      0.50
22:18:57        eth13      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57      pubeth5  30979.40 260904.52   2074.67 378135.81      0.00      0.00      0.00
22:18:57      pubeth6      0.00      0.00      0.00      0.00      0.00      0.00      0.00
22:18:57         sit0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
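
As a sanity check on the throughput figures quoted above, the peak kB/s values on pubeth5 in the three sar samples convert roughly as follows (decimal units assumed, 1 MB/s = 1,000 kB/s):

390,685 rxkB/s ≈ 390 MB/s (write test on the primary node)
705,602 txkB/s ≈ 705 MB/s (read test, file generated on the primary node)
378,135 txkB/s ≈ 378 MB/s (read test, file generated on the secondary node)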

Root Cause

In CTDB mode, when the primary node serves reads of files generated on a secondary node, cluster communication and file-lock traffic over the heartbeat links consumes a large amount of bandwidth; in the last sar sample, priveth0 and priveth1 jump from tens of packets per second to roughly 9,300 and 7,600 packets per second. This overhead considerably reduces read performance compared with reading files generated on the primary node.

Solution

If this performance bottleneck occurs, take the following measures to work around the problem:

1. Log in to the N8000 as the support user.

2. On all nodes, set the strict locking parameter in the /etc/samba/smb.conf file to no. To ensure that the modification still takes effect after the system restarts, also add the same setting to the /opt/VRTSnasgw/conf/smbglobal.conf file (see the sketch after this list).

3. Run the support command and then run the service smb restart command to restart the SMB service.
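
For reference, Samba's strict locking option controls whether the server checks byte-range locks on every read and write request; when it is set to no, locks are checked only when a client explicitly requests a check, which reduces lock traffic over the cluster heartbeat. A minimal sketch of the change, assuming the parameter is placed in the [global] section as on a stock Samba configuration:

[global]
    # check byte-range locks only when a client requests it,
    # not on every read/write
    strict locking = no

After updating the configuration on all nodes, restart the SMB service as described in step 3:

service smb restart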

Suggestions

1. Modifying the file lock parameter in the smb.conf file may cause data inconsistency during concurrent access. Therefore, this modification is not applicable to NLE scenarios.

2. After the strict locking parameter is changed to no, the performance of reading files generated on the secondary node can be close to that of reading files generated on the primary node.