OceanStor 9000 V300R005C00 File System Feature Guide

Installing HDFS Plugin for Cloudera Hadoop Nodes

This section describes how to install HDFS Plugin on all Cloudera Hadoop nodes. The OceanStor 9000 provides a batch deployment script that deploys HDFS Plugin to all Hadoop nodes in one operation.

Prerequisites

  • All Hadoop nodes can communicate with the front-end service network of OceanStor 9000.

  • The OceanStor_9000_version_HDFS.tar.gz installation package has been obtained and its integrity has been verified (a sketch of such a check follows these prerequisites). In the file name, version indicates the version number.

NOTE:

For details on how to obtain and verify the software package, see the OceanStor 9000 Software Installation Guide.
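
A typical integrity check compares the package checksum against the value published with the release. The following is a minimal sketch; the authoritative procedure is in the OceanStor 9000 Software Installation Guide.

    # Compute the SHA-256 checksum of the package and compare it manually
    # with the published value for the release.
    sha256sum OceanStor_9000_version_HDFS.tar.gz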

Procedure

  1. Record the IP address list of Node Manager.

    Log in to Cloudera Manager and choose Yarn > NodeManager. Click each host and record its IP, as shown in Figure 14-19.

    Figure 14-19  Viewing the IP address list of Node Manager

  2. Record the IP address list of ZooKeeper.

    Click Home and choose ZooKeeper > Server. Click each host name and record its IP address. (Alternatively, the Cloudera Manager REST API sketched below can list all hosts and their IP addresses in one call.)
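
    If you prefer not to click through the UI, the Cloudera Manager REST API can return all hosts with their IP addresses. This is a sketch only: the host name cm-host, port 7180, API version v19, and admin credentials are assumptions that depend on your deployment.

      # List all hosts known to Cloudera Manager and filter for names and IPs.
      curl -s -u admin:admin 'http://cm-host:7180/api/v19/hosts' | grep -E '"hostname"|"ipAddress"'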

  3. Back up and modify the interconnection configuration files as shown in Table 14-14.

    Before modifying the files, make sure that the Hadoop cluster is stopped.

    Table 14-14  Interconnection configuration files

    Components: HBase, HDFS, Hive, Yarn, and MapReduce
    Configuration file: core-site.xml
    Content: Set fs.nas.task.nodes to the NodeManager IP addresses recorded in Step 1, separated by commas (,):

    <property>
    <name>fs.defaultFS</name>
    <value>nas:///</value>
    </property>
    <property>
    <name>fs.nas.mount.dir</name>
    <value>/mnt/nfsdata0</value>
    </property>
    <property>
    <name>fs.nas.impl</name>
    <value>com.huawei.nasfilesystem.ShareNASFileSystem</value>
    </property>
    <property>
    <name>fs.AbstractFileSystem.nas.impl</name>
    <value>com.huawei.nasfilesystem.WushanFs</value>
    </property>
    <property>
    <name>fs.nas.task.nodes</name>
    <value>10.10.10.11,10.10.10.12</value>
    </property>
    <property>
    <name>default.fs.name</name>
    <value>nas:///</value>
    </property>

    Component: HBase
    Configuration file: hbase-site.xml
    Content:

    <property>
    <name>hbase.rootdir</name>
    <value>nas:/hbase</value>
    </property>

    The following uses the core-site.xml configuration file of HDFS as an example:

    1. Log in to Cloudera Manager.
    2. In the component list, click HDFS, then Configuration, and then Advanced in the lower-left area.
    3. Copy the name and contents of the configuration file from the text box and back them up on the local host.
    4. Copy the target content in Table 14-14 to the text box on the right of core-site.xml, as shown in Figure 14-20.

      Figure 14-20  Modifying the interconnection configuration file

    5. Enter the reason for the modification in the text box, and then click Save Changes in the upper area of the page. An optional local well-formedness check is sketched below.
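
      Before pasting, you can optionally verify that the property snippet is well-formed XML on any host with xmllint installed. This is a sketch under assumptions: xmllint availability and the snippet file name core-site-snippet.xml are both illustrative.

        # Wrap the property fragments in a root element and validate;
        # no output from xmllint means the XML is well-formed.
        ( echo '<configuration>'; cat core-site-snippet.xml; echo '</configuration>' ) | xmllint --noout -
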
  4. Use an FTP tool to upload OceanStor_9000_version_HDFS.tar.gz to the /temp/tools directory of the first Hadoop node (an scp alternative is sketched below).

    You can upload the file as user root.

    NOTE:

    You can also upload the file to another Hadoop node. For convenience, this document uses the first Hadoop node as an example.
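
    If SSH is available, scp is an alternative to FTP. A minimal sketch, assuming the first Hadoop node's IP address is 10.10.10.11 (illustrative):

      # Upload the package as user root to the /temp/tools directory.
      scp OceanStor_9000_version_HDFS.tar.gz root@10.10.10.11:/temp/tools/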

  5. Decompress the installation package.
    1. Log in to the Hadoop node as user root.
    2. Run the following command to decompress the installation package:

      node1:~ # chmod 755 /temp/tools -R                                 
      node1:~ # cd /temp/tools                                                 
      node1:/temp/tools # tar xvf OceanStor_9000_version_HDFS.tar.gz  
      tools/                                                                        
      tools/pluginPkg/                                                              
      tools/pluginPkg/ShareNASFileSystem.jar                                        
      tools/deployTools.sh                                                          
      tools/pluginConf/                                                             
      ......
      node1:/temp/tools # cd tools/                                            
      

    3. Use the vi editor to modify the values in /temp/tools/tools/deploy.properties. Table 14-15 describes the parameters, and a sample edited file follows the table.

      NOTE:

      You can also download the file to the local host, change the parameter values using a text editor such as WordPad, and then upload it to overwrite the original file.

      Table 14-15  deploy.properties parameters

      HADOOP_NODES
        IP addresses or host names of all Hadoop nodes on which HDFS Plugin is to be installed. Separate multiple entries with commas (,). If a node has multiple IP addresses that can communicate with the first Hadoop node, enter only one of them.

      ACTIVE_OM_NODE
        This parameter does not need to be set.

      WHICH_HADOOP
        Set this parameter to CDH.

      NEED_UPDATE_CLUSTER
        Use the default value true. Do not change it.

      NEED_DO_MOUNT
        Use the default value true. Do not change it.

      NEED_DO_PRECREATE_DIR
        Use the default value true. Do not change it.

      FS_DNS_IP
        InfoEqualizer DNS IP address. To view this value, log in to DeviceManager and choose Settings > Cluster Settings > InfoEqualizer > Network Settings.

      FS_DOMAIN_NAME
        Dynamic domain name configured during file system initialization. To view this value, log in to DeviceManager and choose Settings > Cluster Settings > InfoEqualizer > Basic Information.
        If no dynamic domain name is configured, enable InfoEqualizer by referring to "Initializing the System" in the OceanStor 9000 File System Administrator Guide, configure a dynamic domain name, and add dynamic front-end IP addresses to the storage nodes. Then use the configured dynamic domain name here.

      FS_SHARE_DIR
        Data analysis directory of the file system, that is, the NFS shared directory, for example, /hadoop_dir.

      CDH parameters:

      HADOOP_HOME
        Delete the # at the start of the line so that the parameter takes effect. Set the path according to the actual CDH version.

      CM_LIB_HOME
        Delete the # at the start of the line so that the parameter takes effect. Set the path according to the actual CDH version.

      LIB_LOCATIONS
        Delete the # at the start of the line so that the parameter takes effect. Set the path according to the actual CDH version.

      FusionInsight parameters:

      HADOOP_HOME
        Add a # to the start of the line so that the parameter does not take effect.

      LIB_LOCATIONS
        Add a # to the start of the line so that the parameter does not take effect.

      Hadoop client parameters:

      NEED_UPDATE_CLIENT
        Set this parameter to false; the following three parameters then do not need to be set.

      HADOOP_CLIENT_NODE
        This parameter does not need to be set.

      HADOOP_CLIENT_HOME
        This parameter does not need to be set.

      HADOOP_CLIENT_LIB_LOCATIONS
        This parameter does not need to be set.
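
      For orientation, a sample edited deploy.properties for a CDH deployment might look like the following. All IP addresses, domain names, and paths are illustrative assumptions; use the values from your own environment.

        # Hadoop nodes that receive HDFS Plugin (illustrative addresses)
        HADOOP_NODES=10.10.10.11,10.10.10.12
        WHICH_HADOOP=CDH
        NEED_UPDATE_CLUSTER=true
        NEED_DO_MOUNT=true
        NEED_DO_PRECREATE_DIR=true
        # InfoEqualizer DNS IP address and dynamic domain name (illustrative)
        FS_DNS_IP=10.10.10.100
        FS_DOMAIN_NAME=hadoop.oceanstor.example.com
        # NFS shared directory used for data analysis
        FS_SHARE_DIR=/hadoop_dir
        # CDH section: leading # removed so these take effect;
        # the paths below are typical for CDH parcels but depend on your version
        HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
        CM_LIB_HOME=/usr/share/cmf/lib
        LIB_LOCATIONS=/opt/cloudera/parcels/CDH/lib/hadoop/lib
        # FusionInsight section: kept commented out for a CDH deployment
        #HADOOP_HOME=...
        #LIB_LOCATIONS=...
        # Hadoop client is not updated
        NEED_UPDATE_CLIENT=false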

  6. Run the following command to install HDFS Plugin:

    node1:/temp/tools/tools # chmod 755 deployTools.sh                       
    node1:/temp/tools/tools # dos2unix deployTools.sh                        
    dos2unix: converting file deployTools.sh to UNIX format ...                   
    node1:/temp/tools/tools # ./deployTools.sh deploy      

    Enter the password of user root as prompted. When Completed is displayed, the software has been installed successfully. An optional verification is sketched below.
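
    As a quick post-install check on a Hadoop node, you can verify that the plugin jar was copied into the Hadoop library directory and that the share was mounted. The jar name comes from the package contents listed in Step 5; the library path and mount point below are assumptions based on LIB_LOCATIONS and fs.nas.mount.dir.

      # Confirm that the plugin jar is in place (path depends on LIB_LOCATIONS).
      ls /opt/cloudera/parcels/CDH/lib/hadoop/lib/ShareNASFileSystem.jar
      # NEED_DO_MOUNT=true should have mounted the share at fs.nas.mount.dir.
      mount | grep /mnt/nfsdata0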

  7. Clear ZooKeeper configurations.
    1. Log in to Cloudera Manager and choose ZooKeeper > Actions > Start.
    2. On the first Hadoop node, run the following command:

      Replace each zkserverX in the command with a ZooKeeper IP address recorded in Step 2.

      node1:/temp/tools/tools # zookeeper-client -server zkserver1,zkserver2,zkserver3,...:2181
      ...
      WATCHER::
      WatchedEvent state:...
      # Press Enter.
      # View existing directories.
      ls /
      ...
      # Delete the HBase root directory.
      rmr /hbase
      quit

  8. Start the Hadoop cluster and configure clients.
    1. Log in to Cloudera Manager and start the Hadoop cluster, as shown in Figure 14-21.

      Figure 14-21  Starting the Hadoop cluster

    2. Select Deploy Client Configuration. In the dialog box that is displayed, click Deploy Client Configuration.
  9. Disable unnecessary health check items after the OceanStor 9000 is interconnected, to prevent related alarms.
    1. Log in to Cloudera Manager. Click Hive and then Configuration.
    2. In the search text box, enter Hive Metastore Canary.
    3. Deselect the check box in the row where Property is Hive Metastore Canary Health Test, enter the reason for the modification, and click Save Changes in the upper right part of the page.
    4. Click Home.
    5. Click HDFS and then Configuration.
    6. In the search text box, enter SecondaryNameNode Process Health Test.
    7. Deselect the check box in the row where Property is SecondaryNameNode Process Health Test, enter the reason for the modification, and click Save Changes in the upper right part of the page.
    8. In the search text box, enter Filesystem Checkpoint.
    9. Click the cell on the right of Filesystem Checkpoint Age Monitoring Thresholds, and set both Warning and Critical to Never.
    10. Click the cell on the right of Filesystem Checkpoint Transactions Monitoring Thresholds, and set both Warning and Critical to Never.
    11. Enter the reason for the modification and click Save Changes in the upper right part of the page.
  10. Restart the Hadoop cluster.

    Click Home, and then choose Restart from the drop-down menu next to the cluster name. After the restart completes, a quick smoke test is sketched below.
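
    Once the cluster is running again, a simple smoke test (a sketch, assuming fs.defaultFS is set to nas:/// as in Table 14-14) is to list the root of the NAS-backed file system from any Hadoop node:

      # With fs.defaultFS = nas:///, this lists the OceanStor 9000 shared
      # directory instead of a local HDFS namespace.
      hadoop fs -ls /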
