OceanStor 9000 V300R006C00 File System Feature Guide

Application on the RHEL Client

This section describes how to install DFSClient on the RHEL client and mount an NFS share.

Installing DFSClient

This topic describes how to install DFSClient on the RHEL client.

Prerequisites

On the RHEL client, DFSClient supports kernel version kernel-2.6.32-279.el6.x86_64. If the kernel is a version upgraded from 279, proceed as follows:
  • If the kernel version is kernel-2.6.32-279.14.1.el6.x86_64 or later, contact technical support engineers to obtain a suitable DFSClient.
  • If the kernel version is earlier than kernel-2.6.32-279.8.1.el6.x86_64, upgrade it to kernel-2.6.32-279.14.1.el6.x86_64 or later to prevent a kernel panic caused by a TCP defect in those kernel versions, and then contact technical support engineers to obtain a suitable DFSClient. For details, see Kernel Panic on the RHEL Client.
NOTE:

You can run uname -a to query the kernel version of the Linux client.
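
For example (sample output; the exact kernel string varies by system, and the kernel version field is what matters here):

[root@localhost ~]# uname -a
Linux localhost.localdomain 2.6.32-279.el6.x86_64 #1 SMP <build date> x86_64 x86_64 x86_64 GNU/Linux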

The installation package of DFSClient has been obtained. Table 12-13 describes the installation package.
Table 12-13  Installation package list

  • File Name: OceanStor_9000_VxxxRxxxCxx_InfoTurbo.zip
    Description: The installation package of DFSClient.
    How to Obtain: For enterprise users, log in to http://support.huawei.com/enterprise; for carrier users, log in to http://support.huawei.com. Enter OceanStor 9000 in the search box and click the path displayed below the search box to open the product page. Enterprise users click Software Download; carrier users click Software. Search for and download the software of the correct version together with its *.asc digital certificate file.
  • File Name: Digital certificate verification tool
    Description: Used to verify package integrity.
    How to Obtain: Visit http://support.huawei.com/enterprise and choose Technical Support > Tools > All Tools > Software digital signature validation tool (PGP Verify). Select the latest software version and download all the files, including the tool usage description documents.
Table 12-14 lists the software tools required for installation.
Table 12-14  Tools

  • Tool Name: FTP file upload tool (such as FileZilla)
    Description: Used to upload files to servers. When FileZilla is used, only the FileZilla client needs to be installed; the FileZilla server is not required.

Procedure

  1. Use the digital certificate verification tool to verify package integrity.

    NOTE:

    For details, see the description document of the tool.
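
    For reference, on a Linux host with GnuPG installed, a detached-signature check of the package typically looks like the following (a sketch: it assumes the package and its *.asc signature file are in the current directory and that the signer's public key has been imported; the PGP Verify tool's usage document is authoritative):

    [root@localhost home]# gpg --verify OceanStor_9000_VxxxRxxxCxx_InfoTurbo.zip.asc OceanStor_9000_VxxxRxxxCxx_InfoTurbo.zip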

  2. Decompress the OceanStor_9000_VxxxRxxxCxx_InfoTurbo.zip package to obtain the RHEL client installation package OceanStor_9000-VxxxRxxxCxx_DFSClient_Redhat6.3.x86_64.rpm. (For CentOS clients, the installation package is OceanStor_9000-DFSClient_CentOS6.5-xx.x86_64.rpm.)
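
    For example, if the package is decompressed on a Linux host, the unzip tool can be used:

    [root@localhost home]# unzip OceanStor_9000_VxxxRxxxCxx_InfoTurbo.zip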
  3. Use the FTP file upload tool (such as FileZilla) to upload OceanStor_9000-VxxxRxxxCxx_DFSClient_Redhat6.3.x86_64.rpm to the installation path, such as /home, of the RHEL client.

    For details on how to upload a file and create a directory, see How to Use FileZilla to Upload the Installation Package to the Client?

  4. Run rpm -ivh /home/OceanStor_9000-VxxxRxxxCxx_DFSClient_Redhat6.3.x86_64.rpm to install DFSClient.

    [root@localhost home]# rpm -ivh /home/OceanStor_9000-V300R005C00_DFSClient_Redhat6.3.x86_64.rpm
    preparing...          ######################################################[100%]
       1:OceanStor_9000   ######################################################[100%]
    [root@localhost home]#
    NOTE:

    The default installation path of DFSClient on the Linux client is /usr/local/xnfs. Do not change the installation directory.

  5. Configure the IP address routing policy for the RHEL client. The following is an example of configuring routing tables for four network ports.

    NOTE:

    Configure one routing table record for each network port of the client. In this manner, data can be transmitted through different network ports using different routing tables.

    1. Run service NetworkManager status to check whether NetworkManager has been installed on the client. If it has, run service NetworkManager stop to stop the service, and run chkconfig NetworkManager off to prevent the service from starting when the client starts.

      [root@localhost home]# service NetworkManager status
      NetworkManager (pid  2480) is running...
      [root@localhost home]# service NetworkManager stop
      Stopping NetworkManager daemon:                            [ OK ]
      [root@localhost home]# chkconfig NetworkManager off
      NOTE:

      Do not use NetworkManager when DFSClient is working. Otherwise, service performance will be compromised when a network port becomes faulty.

    2. Run vim /etc/iproute2/rt_tables. Press I to enter the editing mode and enter the routing table numbers. Press Esc to exit the editing mode, enter :wq, and press Enter to save the rt_tables file.

      [root@localhost home]#vi /etc/iproute2/rt_tables
      1002 net2
      1003 net3
      1004 net4
      1005 net5
      ~
      ~
      --INSERT--
      NOTE:

      The entered routing table numbers must not be the same as existing ones in the rt_tables file.
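
      To check which numbers are already in use, you can list the current contents of the file first. A freshly installed system typically contains only the reserved entries:

      [root@localhost home]# cat /etc/iproute2/rt_tables
      #
      # reserved values
      #
      255     local
      254     main
      253     default
      0       unspec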

    3. Run vim /etc/sysconfig/network-scripts/route-ethx. Press I to enter the editing mode and enter to xx.xx.0.0/16 dev ethx table netx. Press Esc to exit the editing mode, enter :wq, and press Enter to save the file.

      [root@localhost home]# vim /etc/sysconfig/network-scripts/route-eth2
      to 192.168.0.0/16 dev eth2 table net2 
      ~ 
      :wq 
      [root@localhost home]# vim /etc/sysconfig/network-scripts/route-eth3
      to 192.168.0.0/16 dev eth3 table net3 
      ~ 
      :wq 
      [root@localhost home]# vim /etc/sysconfig/network-scripts/route-eth4
      to 192.168.0.0/16 dev eth4 table net4 
      ~ 
      :wq 
      [root@localhost home]# vim /etc/sysconfig/network-scripts/route-eth5
      to 192.168.0.0/16 dev eth5 table net5 
      ~ 
      :wq 
      Parameter description:
      • xx.xx.0.0/16: network segment where the network port IP address resides.
      • ethx: network port ID of the client. You can run the ifconfig command to view the ID.
      • netx: routing table name configured in the rt_tables file.

    4. Run vim /etc/sysconfig/network-scripts/rule-ethx. Press I to enter the editing mode and enter from xx.xx.xx.xx/32 pref 1000 table netx. Press Esc to exit the editing mode, enter :wq, and press Enter to save the file.

      [root@localhost home]# vim /etc/sysconfig/network-scripts/rule-eth2
      from 192.168.60.133/32 pref 1000 table net2 
      ~ 
      :wq 
      [root@localhost home]# vim /etc/sysconfig/network-scripts/rule-eth3
      from 192.168.60.134/32 pref 1000 table net3 
      ~ 
      :wq 
      [root@localhost home]# vim /etc/sysconfig/network-scripts/rule-eth4
      from 192.168.60.135/32 pref 1000 table net4 
      ~ 
      :wq 
      [root@localhost home]# vim /etc/sysconfig/network-scripts/rule-eth5
      from 192.168.60.136/32 pref 1000 table net5 
      ~ 
      :wq 
      Parameter description:
      • netx: routing table name configured in the rt_tables file.
      • xx.xx.xx.xx/32: IP address of the network port.

    5. Run the service network restart command to restart the network to make the routing policy take effect.
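
      On RHEL 6, the restart typically prints output similar to the following (a sample only; interface names and order vary by system):

      [root@localhost home]# service network restart
      Shutting down interface eth2:                              [  OK  ]
      Shutting down loopback interface:                          [  OK  ]
      Bringing up loopback interface:                            [  OK  ]
      Bringing up interface eth2:                                [  OK  ]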
    6. Optional: Run ip route and ip rule to query routing table settings.

      [root@localhost home]#ip route
      192.168.0.0/16 dev eth2  proto kernel  scope link  src 192.168.60.133 
      192.168.0.0/16 dev eth3  proto kernel  scope link  src 192.168.60.134 
      192.168.0.0/16 dev eth4  proto kernel  scope link  src 192.168.60.135 
      192.168.0.0/16 dev eth5  proto kernel  scope link  src 192.168.60.136 
      10.10.0.0/16 dev eth2  scope link  metric 1004 
      10.10.0.0/16 dev eth3  scope link  metric 1005 
      10.10.0.0/16 dev eth4  scope link  metric 1006 
      10.10.0.0/16 dev eth5  scope link  metric 1007 
      default via 192.168.0.1 dev eth2
      [root@Client2 ~]# ip rule 
      0: from all lookup local 
      1000: from 192.168.60.133 lookup net2 
      1000: from 192.168.60.134 lookup net3 
      1000: from 192.168.60.135 lookup net4 
      1000: from 192.168.60.136 lookup net5 
      32766: from all lookup main 
      32767: from all lookup default 

Follow-up Procedure

Uninstall DFSClient.

  1. Optional: Run mount to check whether any directory is mounted through DFSClient on the client. (If you can ensure that no such directory is mounted, go directly to 2.)
    • If yes, run umount.xnfs /local_path to unmount the directory and then go to 2 to uninstall DFSClient. For details about how to unmount the directory, see the directory unmounting steps in the follow-up procedure of Mounting an NFS Share.
    • If no, go to 2 to uninstall DFSClient.
  2. Run rpm -e OceanStor_9000-VxxxRxxxCxx_DFSClient_Redhat6.3.x86_64 to uninstall DFSClient.
  3. Optional: Run rpm -qa | grep DFS to check whether DFSClient has been uninstalled successfully.
[root@localhost ~]# rpm -e OceanStor_9000-V300R005C00_DFSClient_Redhat6.3.x86_64
[root@localhost ~]# rpm -qa | grep DFS
[root@localhost ~]#
NOTE:

If there is a directory to which DFSClient is mounted on the client, DFSClient cannot be uninstalled.

Upgrade DFSClient.

  1. Optional: Run mount to check whether any directory is mounted through DFSClient on the client. (If you can ensure that no such directory is mounted, go directly to 2.)
    • If yes, run umount.xnfs /local_path to unmount the directory and then go to 2 to upgrade DFSClient. For details about how to unmount the directory, see the directory unmounting steps in the follow-up procedure of Mounting an NFS Share.
    • If no, go to 2 to upgrade DFSClient.
  2. Obtain the latest installation package using the same method for the installation, verify the package, and upload the package to the client.
  3. Run rpm -Uvh /home/OceanStor_9000-VxxxRxxxCxx_DFSClient_Redhat6.3.x86_64.rpm to upgrade DFSClient.
[root@localhost home]#rpm -Uvh /home/OceanStor_9000-V300R005C00_DFSClient_Redhat6.3.x86_64.rpm
preparing...    #########################################################[100%]
   1:OceanStor_9000 ######################################################[100%]
[root@localhost home]#
NOTE:

If there is a directory to which DFSClient is mounted on the client, DFSClient cannot be upgraded.

Configuring the DNS Server

If you want to mount a shared directory using a domain name, ensure that the IP address of the DNS server has been configured on the client.

Prerequisites

  • When there is an external DNS server:
    • The client communicates properly with the front-end service network of OceanStor 9000 and with the external DNS server, and the DNS server communicates properly with the front-end service network of OceanStor 9000.
    • A conditional forwarder has been configured for the DNS server. The DNS server can forward the InfoEqualizer domain name request sent from the client to OceanStor 9000, and OceanStor 9000 sends the front-end service IP address of the node to the client. For details, see InfoEqualizer > Configuration and Management > Connecting to the External DNS Server in OceanStor 9000 Feature Guide.
  • When there is no external DNS server, the client must communicate properly with the front-end service network of OceanStor 9000.

Context

Before mounting an NFS shared directory using a domain name, you must configure the IP address of the DNS server on the client. There are two scenarios as follows:

  • Connecting to an external DNS server: Set the IP address of the DNS server on the client to the IP address of the external DNS server.
  • Not connecting to an external DNS server: Set the IP address of the DNS server on the client to the InfoEqualizer DNS IP address of OceanStor 9000.

Procedure

  1. Log in to the RHEL client as the root user.
  2. On the client, configure the IP address of the DNS server.
    1. Run the setup command.
    2. Press Enter. The Network configuration item is displayed, as shown in Figure 12-19.

      Figure 12-19  Network configuration item

    3. Press Enter. The DNS configuration item is displayed, as shown in Figure 12-20.

      Figure 12-20  DNS configuration item

    4. Enter the IP address of the DNS server. Select OK. Press Enter to return to the previous interface, as shown in Figure 12-21.

      NOTE:
      If an external DNS server is used, enter both the IP address of the external DNS server and the InfoEqualizer DNS IP address in the DNS configuration of the client node.
      Figure 12-21  Entering the IP address of the DNS server

    5. Select Save&Quit and press Enter to return. Select Quit and press Enter to exit.
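
      Alternatively, on RHEL 6 the setup tool writes the DNS entries to /etc/resolv.conf, so the same configuration can be made by editing that file directly. A minimal sketch, using a hypothetical DNS server IP address of 192.168.70.100:

      [root@localhost ~]# vim /etc/resolv.conf
      nameserver 192.168.70.100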

Mounting an NFS Share

This section describes how to use DFSClient to mount the created NFS shared directory to the local RHEL client and verify that the directory is accessible.

Prerequisites

  • An NFS share has been created in OceanStor 9000. For details about how to create an NFS shared directory, see OceanStor 9000 File System Administrator Guide.
    NOTE:
    • In NFS file sharing, to prevent users' usage of a directory from being affected by UNIX permissions, select Read, Write, and Execute for User, User Group, and Others in the Advanced window of the directory. You can choose Provisioning > Resource Manager and select the corresponding directory to view permission configurations of its attributes.
    • To allow user root to access shared directories and enhance the data read and write performance on a client, click Advanced on the Add Client page, set Root Permission Constraint to no_root_squash, and set Write Mode to Asynchronous. You can choose Provisioning > Share > NFS (Linux/UNIX/MAC) to select a shared directory. On the Client Information page, choose the corresponding client to view its attribute configurations.
    • During the process of creating an NFS share on DeviceManager, if you want to specify IP addresses of a client in Name or IP Address, you must add all IP addresses of the client. Otherwise, the directory may fail to be mounted. You can choose Provisioning > Share > NFS (Linux/UNIX/MAC) to select a shared directory. On the Client Information page, choose the corresponding client to view its attribute configurations.
  • The RHEL client communicates properly with the front-end service network of the OceanStor 9000.
  • To avoid errors in permissions for directories shared using NFSv3, ensure that the rpc.statd software version of the Linux-based client is later than 1.2.6. The recommended version is 1.2.9. You can run the following command to query the current rpc.statd version. To upgrade rpc.statd, either upgrade the operating system on the client or go to the official website to download and install the latest nfs-utils.
    Client-178:~ # rpc.statd --version
    rpc.statd version 1.2.9
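
    For example, on a client with a configured yum repository, nfs-utils (the package that provides rpc.statd) can typically be upgraded as follows (a sketch; the available version depends on the repository):

    [root@localhost ~]# yum update nfs-utils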
    

Procedure

  1. Log in to the RHEL client as the root user.
  2. Optional: Run showmount -e xxx.com to check the NFS share created in OceanStor 9000.

    xxx.com indicates the dynamic domain name or zone domain name configured in InfoEqualizer.

  3. Optional: Run mkdir /local_path to create the local directory /local_path to which the share will be mounted.
  4. Mount the NFS shared directory.
    • By using the mount.xnfs command: mount.xnfs -o vers=3,tcp,nolock,rsize=xxxxxxx,wsize=xxxxxxx,sip=xx.xx.xx.xx_xx.xx.xx.xx_xx.xx.xx.xx xxx.com:/share_path /local_path,

      where:

      • vers=3: mounts using NFSv3, the only supported version. This parameter must be specified.
      • tcp: uses TCP, the only supported transport protocol. This parameter must be specified.
      • nolock: does not use the NFS distributed lock. This parameter must be specified for better performance.
      • rsize: read I/O granularity, in bytes. The default value is the maximum, 1048576 bytes.
      • wsize: write I/O granularity, in bytes. The default value is the maximum, 1048576 bytes.
        NOTE:

        The unit for the read and write granularity on the RHEL client is bytes. When the default values are used, the unit can be omitted.

      • sip: IP addresses of the RHEL client. The maximum number of IP addresses is 8. To ensure system performance, specify at least 2 IP addresses.
      • /share_path: indicates the shared path in DeviceManager.
    • By creating the configuration file:
      1. Optional: Run the touch /home/File.conf command to create the configuration file File.conf in the /home directory.
        [root@Client2 ~]# touch /home/File.conf
        
      2. Run the vim /home/File.conf command to edit the file. Press I to enter the editing mode and enter the related information in the file based on the following format:
        [mount_entry]
        service_ip=xx.xxx.com
        export=/share_path
        mount_point=/local_path
        mount_option=vers=3,rsize=xxxk,wsize=xxxk,sip=xx.xx.xx.xx_xx.xx.xx.xx_xx.xx.xx.xx
        [/mount_entry]
        
        The parameters are described as follows:
        • service_ip: the mount domain name
        • export: the directory to be exported
        • mount_point: the local directory to be mounted
        • mount_option: the mount parameters
        NOTE:

        The information must be entered in the specified format and excessive spaces are not allowed. Otherwise, the mounting will fail.

      3. Press Esc to exit the editing mode, enter :wq, and press Enter to save the file.
      4. Run mount.xnfs -c /home/File.conf to invoke the configuration file to mount the shared directory.
  5. Optional: Run the mount command to query details about links and mounted directories.

    NOTE:

    If there is only one IP address in buddy and no multi-connection is set up, see section When an External DNS Server Is Used, the Linux Client Can Obtain Only One IP Address.

  6. Run cd /local_path to enter the shared directory.

Example

[root@Client2 ~]# showmount -e s.hw.com
Export list for s.hw.com:
/xnfs * 
[root@Client2 ~]# mkdir /mnt/DFSclient 
[root@Client2 ~]# mount.xnfs -o rsize=1048576,wsize=1048576,vers=3,sip=192.168.60.133_192.168.60.134 s.hw.com:/xnfs /mnt/DFSclient
[root@Client2 ~]# mount
/dev/mapper/vg_client2-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sdg1 on /boot type ext4 (rw)
/dev/mapper/vg_client2-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
s.hw.com:/xnfs on /mnt/DFSclient type xnfs (rw,rsize=1048576,wsize=1048576,sip=192.168.60.133_192.168.60.134,addr=192.168.70.46,buddy=192.168.70.45_192.168.70.47)
[root@Client2 ~]#cd /mnt/DFSclient
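
The same mount can also be performed through a configuration file. The following sketch reuses the values from the example above and assumes the file format described in the procedure:

[root@Client2 ~]# vim /home/File.conf
[mount_entry]
service_ip=s.hw.com
export=/xnfs
mount_point=/mnt/DFSclient
mount_option=vers=3,rsize=1048576,wsize=1048576,sip=192.168.60.133_192.168.60.134
[/mount_entry]
[root@Client2 ~]# mount.xnfs -c /home/File.conf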

Follow-up Procedure

Unmount a directory.

  1. Optional: Run lsof /local_path to view the processes running on the mounted directory. (If you can ensure that no services are running on the directory, go directly to 2.)
    • If the process is a system process, go to 2 to unmount the directory.
    • If the process belongs to other application software, exit the software and then go to 2 to unmount the directory.
    huaweis-2:~ huawei# lsof /mnt/DFSclient
    COMMAND  PID  USER  FD  TYPE  DEVICE  SIZE/OFF  NODE    NAME
    mds      53   root  15r DIR   45,3     4096    54953988 /mnt/DFSclient
    
  2. Run umount.xnfs /local_path (see the example below). When the directory is unmounted, the link information in xnfs_client.conf of DFSClient is also deleted, and the directory will not be mounted automatically when the client restarts.
NOTE:
  • Before unmounting the directory, ensure that no services are running on it; otherwise, the directory may fail to be unmounted.
  • The umount /local_path command also deletes the link information from the xnfs_client.conf configuration file.
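
For example, unmounting the directory mounted earlier:

[root@Client2 ~]# umount.xnfs /mnt/DFSclient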