OceanStor 9000 V300R006C00 File System Feature Guide 12

Verifying Interconnection Status

This section explains how to verify whether the interconnection is successful.


  1. Log in to FusionInsight Hadoop Manager and check the status of Hadoop components.

    As shown in Figure 14-14, if the status of all components is Good, the components are running normally.

    Figure 14-14  Checking the status of Hadoop components

  2. Check whether HDFS Plugin has been installed.
    1. Log in to the first Hadoop node as user root.
    2. Run the following command:

      node1:/ # df -h                                                      
      Filesystem                Size  Used Avail Use% Mounted on                     
      ...   33T   12G   33T   1% /mnt/nfsdata0   
      node1:/ # cd /mnt/nfsdata0                                                 
      node1:/mnt/nfsdata0 # ll                                                   
      total 68                                                                        
      -rw-r--r--  1 root root    21 Oct 23  2015 -root.Meta~$%^                       
      drwxrwxrwx  2 root root  4096 Oct 23  2015 encryption                           
      -rwxrwxrwx  1 root root    17 Oct 23  2015 encryption.Meta~$%^                  
      drwxrwxrwx  2 root root  4096 Oct 23  2015 flume                                
      -rwxrwxrwx  1 root root    18 Oct 23  2015 flume.Meta~$%^                       
      drwxrwxrwx  6 root root  4096 Oct 23  2015 hbase                                
      -rwxrwxrwx  1 root root    18 Oct 23  2015 hbase.Meta~$%^                       
      drwxrwxrwx  4 root root  4096 Oct 23  2015 mr-history                           
      -rwxrwxrwx  1 root root    19 Oct 23  2015 mr-history.Meta~$%^                  
      drwxrwxrwx  2 omm  wheel 4096 Oct 23  2015 sparkJobHistory                      
      -rwxrwxrwx  1 omm  wheel   21 Oct 23  2015 sparkJobHistory.Meta~$%^             
      drwxrwxrwx  6 root root  4096 Oct 23  2015 tmp                                  
      -rwxrwxrwx  1 root root    17 Oct 23  2015 tmp.Meta~$%^                         
      drwxrwxrwx 10 root root  4096 Oct 23  2015 user                                 
      -rwxrwxrwx  1 root root    17 Oct 23  2015 user.Meta~$%^                        

      If the /mnt/nfsdata0 directory exists and contains multiple data directories and metadata files (whose names include .Meta), HDFS Plugin has interconnected with OceanStor 9000.

    3. Run the following command:

      node1:/mnt/nfsdata0 # find /opt/huawei/Bigdata -name Share*.jar

      If Share*.jar files are found, the .jar files have been copied to the local host. Records for components that are not interconnected may not be displayed.

    4. Repeat 2.a to 2.c to check whether HDFS Plugin is installed on other Hadoop nodes.
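Steps 2.b and 2.c can be sketched as a single per-node check. This is a hypothetical helper, not part of the product: the default paths match the examples above, and the output strings (OK, NO_MOUNT, NO_META, NO_JARS) are illustrative.

```shell
#!/bin/sh
# Hypothetical per-node check for steps 2.b and 2.c: confirm that the
# plugin data directory contains ".Meta" metadata files and that the
# Share*.jar files exist under the installation path.
check_plugin_node() {
    mount_point="${1:-/mnt/nfsdata0}"        # example mount point from the text
    install_path="${2:-/opt/huawei/Bigdata}" # example install path from the text
    [ -d "$mount_point" ] || { echo "NO_MOUNT"; return 1; }
    # Step 2.b: at least one metadata file whose name includes ".Meta"
    meta=$(find "$mount_point" -maxdepth 1 -name '*.Meta*' | wc -l)
    [ "$meta" -gt 0 ] || { echo "NO_META"; return 1; }
    # Step 2.c: the Share*.jar files copied during plugin installation
    jars=$(find "$install_path" -name 'Share*.jar' 2>/dev/null | wc -l)
    [ "$jars" -gt 0 ] || { echo "NO_JARS"; return 1; }
    echo "OK"
}
```

Running `check_plugin_node` on each Hadoop node (for example over SSH) covers the repetition described in step 2.d.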
  3. Log in to a Hadoop client.
    1. Log in to any Hadoop client as user root.
    2. Go to the installation path of the Hadoop client (for example, /opt/client) by running cd /opt/client.
    3. Run source bigdata_env to configure the environment variables of the client.
    4. If FusionInsight Hadoop is deployed in security mode, run kinit admin and enter the login password of the client user.

      If FusionInsight Hadoop is deployed in non-security mode, skip this step.
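The client login sequence in step 3 can be wrapped as follows. This is a sketch under stated assumptions: the wrapper function, its arguments, and the READY marker are hypothetical; /opt/client and the admin principal are the examples used in the text.

```shell
#!/bin/sh
# Hypothetical wrapper for step 3: enter the client directory, load its
# environment, and (in security mode only) authenticate with Kerberos.
setup_client() {
    client_dir="$1"  # e.g. /opt/client
    mode="$2"        # "security" or "non-security"
    cd "$client_dir" || return 1
    # Configure the client's environment variables
    . ./bigdata_env || return 1
    if [ "$mode" = "security" ]; then
        # kinit prompts for the client user's login password
        kinit admin || return 1
    fi
    echo "READY"
}
```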

  4. Try to create a directory on the Hadoop client and check whether the service status is normal.
    1. Run the following command:

      client:/opt/ficlient/HDFS # hadoop fs -mkdir /test
      client:/opt/ficlient/HDFS # hadoop fs -ls /
      Found 8 items
      drwxr-x---   - hdfs   hadoop           4096 2015-10-23 18:00 /encryption
      drwxr-x---   - flume  hadoop           4096 2015-10-23 18:00 /flume
      drwx------   - hbase  hadoop           4096 2015-10-23 18:49 /hbase
      drwxr-xr-x   - hdfs   hadoop           4096 2015-10-23 22:59 /test
      drwxrwxrwx   - mapred hadoop           4096 2015-10-23 18:00 /mr-history
      drwxrwxrwx   - spark  supergroup       4096 2015-10-23 18:54 /sparkJobHistory
      drwxrwxrwx   - hdfs   hadoop           4096 2015-10-23 18:54 /tmp
      drwxrwxrwx   - hdfs   hadoop           4096 2015-10-23 22:24 /user

      If the command is successfully executed and the /test directory is displayed, the service status is normal.

      Run the following command to clear test data:

      client:/opt/ficlient/HDFS # hadoop fs -rmdir /test

    2. View the capacity information.

      Run hadoop fs -df -h /.


      The current version does not support querying the OceanStor 9000 capacity information on the GUI.
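The create/list/clean-up check in step 4.a can be combined into one script. This is an illustrative sketch: the function name, the HADOOP_CMD override (useful for dry runs), and the OK/NOT_LISTED markers are assumptions; the hadoop fs subcommands themselves are those shown above.

```shell
#!/bin/sh
# Hypothetical wrapper for step 4.a: create a test directory, confirm it
# appears in the root listing, then clear the test data.
HADOOP_CMD="${HADOOP_CMD:-hadoop}"  # allows substituting a stub for dry runs

hdfs_smoke_test() {
    dir="$1"  # e.g. /test
    "$HADOOP_CMD" fs -mkdir "$dir" || { echo "MKDIR_FAILED"; return 1; }
    if "$HADOOP_CMD" fs -ls / | grep -q "$dir"; then
        echo "OK"
    else
        echo "NOT_LISTED"
    fi
    # Clear the test data, as in the text
    "$HADOOP_CMD" fs -rmdir "$dir"
}
```

For example, `hdfs_smoke_test /test` reproduces the mkdir, ls, and rmdir sequence shown above in one call.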

Updated: 2019-06-27

Document ID: EDOC1000122519
