OceanStor 9000 V300R005C00 File System Feature Guide

Verifying Interconnection Status

This section describes how to verify whether HDFS Plugin has successfully interconnected with OceanStor 9000.

Procedure

  1. Log in to Cloudera Manager and check the Hadoop cluster status.

    If the cluster status is similar to that shown in Figure 14-22, the cluster is running properly.

    Figure 14-22  Checking the Hadoop cluster status
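    Alternatively, the cluster status can be queried from the command line through the Cloudera Manager REST API. The following is a minimal sketch only; the host name cm-host, port 7180, the admin/admin credentials, the API version v10, and the cluster name cluster1 are assumed values that must be adapted to your environment.

    # Minimal sketch (host, port, credentials, API version, and cluster name are assumptions):
    # list the clusters managed by Cloudera Manager
    curl -s -u admin:admin 'http://cm-host:7180/api/v10/clusters'
    # check the health of all services in the cluster; each healthSummary should be GOOD
    curl -s -u admin:admin 'http://cm-host:7180/api/v10/clusters/cluster1/services' | grep healthSummary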

  2. Check whether HDFS Plugin has been installed.
    1. Log in to the first Hadoop node as user root.
    2. Run the following command:

      node1:/ # df -h                                                      
      Filesystem                Size  Used Avail Use% Mounted on                     
      ...
      example.com:/hadoop_dir   33T   12G   33T   1% /mnt/nfsdata0   
      node1:/ # cd /mnt/nfsdata0                                                 
      node1:/mnt/nfsdata0 # ll                                                   
      total 24                                                                        
      -rwxrwxrwx 1 root   root     21 Oct 24 16:33 -root.Meta~$%^                     
      drwxrwxrwx 7 hbase  hbase  4096 Oct 24 16:35 hbase                              
      -rwxrwxrwx 1 hbase  hbase    16 Oct 21 13:38 hbase.Meta~$%^                     
      drwxrwxrwx 4 mapred mapred 4096 Oct 21 13:48 tmp                                
      -rwxrwxrwx 1 mapred mapred   18 Oct 21 13:37 tmp.Meta~$%^                       
      drwxrwxrwx 5 root   root   4096 Oct 24 11:26 user                               
      -rwxrwxrwx 1 root   root      0 Oct 23 18:12 user.Meta~$%^                      

      If the /mnt/nfsdata0 directory exists and contains multiple data directories and metadata files (whose names include .Meta), HDFS Plugin has interconnected with OceanStor 9000.
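      If the mount is missing, you can also verify from the Hadoop node that the share is still exported by the OceanStor 9000 front-end node. This is only a sketch; example.com and /hadoop_dir are the values shown in the output above and must match your actual configuration.

      # Sketch: list the NFS exports of the OceanStor 9000 front-end node
      showmount -e example.com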

    3. Run the following command:

      node1:/mnt/nfsdata0 # find /opt/cloudera/parcels -name Share*.jar
      /opt/cloudera/parcels/CDH-5.4.1-1.cdh5.4.1.p0.6/lib/hadoop-mapreduce/lib/ShareNASFileSystem.jar
      /opt/cloudera/parcels/CDH-5.4.1-1.cdh5.4.1.p0.6/lib/hadoop-yarn/lib/ShareNASFileSystem.jar
      /opt/cloudera/parcels/CDH-5.4.1-1.cdh5.4.1.p0.6/lib/hadoop-hdfs/lib/ShareNASFileSystem.jar
      /opt/cloudera/parcels/CDH-5.4.1-1.cdh5.4.1.p0.6/lib/spark/lib/ShareNASFileSystem.jar
      /opt/cloudera/parcels/CDH-5.4.1-1.cdh5.4.1.p0.6/lib/hbase/lib/ShareNASFileSystem.jar
      /opt/cloudera/parcels/CDH-5.4.1-1.cdh5.4.1.p0.6/lib/hive/lib/ShareNASFileSystem.jar

      If records similar to the preceding ones are displayed, the ShareNASFileSystem.jar file has been copied to the local host. Records for components that have not been interconnected may not be displayed.

    4. Repeat 2.a to 2.c to check whether HDFS Plugin is installed on other Hadoop nodes.
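      The checks in 2.a to 2.c can also be scripted across all nodes. The following is only a sketch; it assumes passwordless SSH as user root and a file /tmp/hadoop_nodes.txt listing one Hadoop node name per line (both are assumptions, not part of the standard procedure).

      # Sketch: run the mount, metadata-file, and .jar checks on every Hadoop node
      for node in $(cat /tmp/hadoop_nodes.txt); do
        echo "=== ${node} ==="
        ssh root@"${node}" 'df -h | grep nfsdata0'
        ssh root@"${node}" 'ls /mnt/nfsdata0 | grep -c "\.Meta"'
        ssh root@"${node}" 'find /opt/cloudera/parcels -name "Share*.jar" | wc -l'
      done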
  3. Create a test directory on the Hadoop client to check whether the service status is normal.
    1. Log in to any Hadoop node as user root.
    2. Run the following command:

      client:/opt/ficlient/HDFS # hadoop fs -mkdir /test
      ...
      client:/opt/ficlient/HDFS # hadoop fs -ls /
      ...
      Found 4 items                                                                   
      drwxr-xr-x   - hbase  hbase        4096 2015-10-24 16:35 /hbase                 
      drwxr-xr-x   - root   sfcb         4096 2015-10-26 09:42 /test            
      drwxr-xr-x   - mapred hadoop       4096 2015-10-21 13:48 /tmp                   
      drwxrwxrwx   - root   sfcb         4096 2015-10-24 11:26 /user                  

      If the command is successfully executed and the /test directory is displayed, the service status is normal. A fuller read/write check is sketched after this procedure.

      Run the following command to clear test data:

      client:/opt/ficlient/HDFS # hadoop fs -rmdir /test

    3. View the capacity information.

      Run hadoop fs -df -h /.

      NOTE:

      The current version does not support query of the OceanStor 9000 capacity information on the GUI.
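A quick end-to-end read/write test on the Hadoop client further confirms that services can use the OceanStor 9000 share through HDFS Plugin. The following is a minimal sketch that extends the directory test in step 3; the file name plugin_check.txt and the /test directory are arbitrary test values, not part of the standard procedure.

    # Sketch: write a small file to the share, read it back, then clean up
    echo "interconnection test" > /tmp/plugin_check.txt
    hadoop fs -mkdir -p /test
    hadoop fs -put /tmp/plugin_check.txt /test/
    hadoop fs -cat /test/plugin_check.txt     # should print "interconnection test"
    hadoop fs -rm /test/plugin_check.txt
    hadoop fs -rmdir /test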
