OceanStor 9000 V300R006C00 File System Feature Guide

Installing HDFS Plugin for FusionInsight Hadoop Nodes and Clients

The OceanStor 9000 provides a batch deployment script that installs HDFS Plugin on all Hadoop nodes and clients in a single operation.

Prerequisites

  • All Hadoop nodes and clients can communicate with the front-end service network of OceanStor 9000.

  • The OceanStor_9000_version_HDFS.tar.gz installation package has been obtained and its integrity has been verified. In the file name, version indicates the version number.

NOTE:

For details on how to obtain and verify the software package, see the OceanStor 9000 Software Installation Guide.
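For example, if a SHA-256 checksum file is provided alongside the package, integrity can be checked on any Linux host with sha256sum. The checksum file name below is hypothetical; the Software Installation Guide describes the authoritative verification procedure.

  # Hypothetical checksum file name; replace "version" with the actual version number.
  node1:~ # sha256sum -c OceanStor_9000_version_HDFS.tar.gz.sha256
  OceanStor_9000_version_HDFS.tar.gz: OK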

Procedure

  1. Record the business IP addresses of NodeManager.

    Log in to FusionInsight Hadoop Manager and choose Services > Yarn > Instances. Record the IP addresses of all NodeManager instances listed in the Business IP column, as shown in Figure 14-12.

    Figure 14-12  Viewing the business IP addresses of the NodeManager service

  2. Record the business IP addresses of ZooKeeper.

    Choose Services > ZooKeeper > Instances and record the IP addresses of all ZooKeeper instances listed in the Business IP column.

  3. Use an FTP tool to upload OceanStor_9000_version_HDFS.tar.gz to the /temp/tools directory of the first Hadoop node.

    You can upload the file as user root.

    NOTE:

    You can also upload the file to another Hadoop node. For convenience, this document uses the first Hadoop node as an example.
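    If you prefer the command line to an FTP tool, scp can also copy the package. This is a sketch; the local path and node IP address (10.10.10.11) are placeholders for your environment:

      # Run on the host where the installation package was downloaded.
      # Create the target directory on the node first if it does not exist.
      host:~ # ssh root@10.10.10.11 "mkdir -p /temp/tools"
      host:~ # scp /path/to/OceanStor_9000_version_HDFS.tar.gz root@10.10.10.11:/temp/tools/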

  4. Decompress the installation package.
    1. Log in to the Hadoop node as user root.
    2. Run the following command to decompress the installation package:

      node1:~ # chmod 755 /temp/tools -R                                 
      node1:~ # cd /temp/tools                                                 
      node1:/temp/tools # tar xvf OceanStor_9000_version_HDFS.tar.gz  
      tools/                                                                        
      tools/pluginPkg/                                                              
      tools/pluginPkg/ShareNASFileSystem.jar                                        
      tools/deployTools.sh                                                          
      tools/pluginConf/                                                             
      ......
      node1:/temp/tools # cd tools/                                            
      

    3. Run the following command to obtain the value of ACTIVE_OM_NODE.

      node1:/temp/tools # /opt/huawei/Bigdata/om-0.0.1/sbin/status-oms.sh
      ...
      NodeName      HostName   HaVersion    StartTime             HAActive   HAallResOK  HARunPhase
      10-10-10-11   node1      V100R001C01  2015-10-23 10:09:50   active     normal      Actived
      10-10-10-12   node2      V100R001C01  2015-10-13 20:01:55   standby    normal      Deactived

      In the row where HAActive is active, record the value of HostName. This is the value to set for ACTIVE_OM_NODE in the next step.
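      Equivalently, the active host name can be extracted in one line. This is a convenience sketch that assumes the column layout shown above (StartTime occupies two whitespace-separated fields, so HAActive is field 6):

        node1:/temp/tools # /opt/huawei/Bigdata/om-0.0.1/sbin/status-oms.sh | awk '$6 == "active" {print $2}'
        node1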

    4. Use the vi editor to change the parameter values in /temp/tools/tools/deploy.properties. Table 14-13 describes the related parameters.

      NOTE:

      You can also download the file to the local host, edit the parameter values with a text editor such as WordPad, and upload the file to overwrite the original.

      Table 14-13  Parameters in deploy.properties

      HADOOP_NODES
        IP addresses or host names of all Hadoop nodes on which HDFS Plugin is to be installed. Separate multiple records with commas (,). If a node has multiple IP addresses that can communicate with the first Hadoop node, enter only one of them.

      ACTIVE_OM_NODE
        Set this parameter to the HostName value recorded in 4.c.

      WHICH_HADOOP
        Set this parameter to FI.

      NEED_UPDATE_CLUSTER
        Retain the default value true. Do not change it.

      NEED_DO_MOUNT
        Retain the default value true. Do not change it.

      NEED_DO_PRECREATE_DIR
        Retain the default value true. Do not change it.

      FS_DNS_IP
        IP address of the InfoEqualizer DNS. To view the value, log in to DeviceManager and choose Settings > Cluster Settings > InfoEqualizer > Network Settings.

      FS_DOMAIN_NAME
        Dynamic domain name configured during file system initialization. To view the value, log in to DeviceManager and choose Settings > Cluster Settings > InfoEqualizer > Basic Information.
        If no dynamic domain name is configured, enable InfoEqualizer by referring to "Initializing the System" in the OceanStor 9000 File System Administrator Guide, configure Dynamic Domain Name, and add dynamic front-end IP addresses to the storage nodes. Then use the configured dynamic domain name.

      FS_SHARE_DIR
        Data analysis directory of the file system, that is, the NFS shared directory. For example: /hadoop_dir.

      HADOOP_HOME
        Retain the default value. Do not change it.

      LIB_LOCATIONS
        Retain the default value. Do not change it.

      NEED_UPDATE_CLIENT
        Set this parameter to true if the Hadoop client is installed. If the Hadoop client is not installed, set it to false; in that case, the following three parameters do not need to be set.

      HADOOP_CLIENT_NODE
        IP addresses or host names of all Hadoop clients on which HDFS Plugin is to be installed. Separate multiple records with commas (,). If a client has multiple IP addresses that can communicate with the first Hadoop node, enter only one of them.

      HADOOP_CLIENT_HOME
        Installation directory specified during Hadoop client installation.

      HADOOP_CLIENT_LIB_LOCATIONS
        Retain the default value. Do not change it.
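      For reference, an edited deploy.properties might look as follows. All IP addresses, host names, and directories are hypothetical placeholders; HADOOP_HOME, LIB_LOCATIONS, and HADOOP_CLIENT_LIB_LOCATIONS keep their defaults and are omitted here:

        node1:/temp/tools/tools # cat deploy.properties
        HADOOP_NODES=10.10.10.11,10.10.10.12,10.10.10.13
        ACTIVE_OM_NODE=node1
        WHICH_HADOOP=FI
        NEED_UPDATE_CLUSTER=true
        NEED_DO_MOUNT=true
        NEED_DO_PRECREATE_DIR=true
        FS_DNS_IP=192.168.10.100
        FS_DOMAIN_NAME=oceanstor9000.example.com
        FS_SHARE_DIR=/hadoop_dir
        NEED_UPDATE_CLIENT=true
        HADOOP_CLIENT_NODE=10.10.10.21
        HADOOP_CLIENT_HOME=/opt/client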

  5. Run the following command to install HDFS Plugin:

    node1:/temp/tools/tools # chmod 755 deployTools.sh                       
    node1:/temp/tools/tools # dos2unix deployTools.sh                        
    dos2unix: converting file deployTools.sh to UNIX format ...                   
    node1:/temp/tools/tools # ./deployTools.sh deploy      

    Enter the password of user root as prompted. When Completed is displayed, the installation has succeeded.
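    Optionally, spot-check that the plugin jar was deployed. The exact destination depends on LIB_LOCATIONS, so this find command is a generic sketch rather than the guaranteed path:

      node1:/temp/tools/tools # find /opt/huawei -name ShareNASFileSystem.jar 2>/dev/null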

  6. Restart the OMS.
    1. Log in to the node to which the HostName value recorded in 4.c corresponds.
    2. Run the following command:

      node1:/temp/tools # cd /opt/huawei/Bigdata/om-0.0.1/sbin
      node1:/opt/huawei/Bigdata/om-0.0.1/sbin # ./restart-oms.sh
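      To confirm that the OMS came back up, you can rerun the status script used in 4.c and check that one node is active again:

        node1:/opt/huawei/Bigdata/om-0.0.1/sbin # ./status-oms.sh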

  7. Back up and change the interconnection parameters listed in Table 14-14.

    Make sure that the Hadoop cluster is stopped. If it is running, log in to FusionInsight Hadoop Manager and choose Services > More Actions > Stop cluster.

    Table 14-14  Interconnection parameters

    HDFS
      fs.defaultFS: Set to nas:///.
      fs.nas.impl: Retain the default value com.huawei.nasfilesystem.ShareNASFileSystem.
      fs.AbstractFileSystem.nas.impl: Retain the default value com.huawei.nasfilesystem.WushanFs.
      fs.nas.task.nodes: Set to the business IP addresses of NodeManager recorded in 1. Separate multiple records with commas (,).
      fs.nas.mount.dir: Retain the default value /mnt/nfsdata0.

    HBase
      fs.defaultFS: Set to nas:///.
      hbase.rootdir: Set to nas:/hbase.

    Spark
      fs.defaultFS: Set to nas:///.
      spark.eventLog.dir (in JDBCServer and SparkResource): Set to nas:/sparkJobHistory.
      SPARK_EVENTLOG_DIR: Set to nas:/sparkJobHistory.

    Yarn
      fs.defaultFS: Set to nas:///.
      yarn.nodemanager.container-executor.class: Set to DefaultContainerExecutor.

    Hive
      fs.defaultFS: Set to nas:///.

    Solr
      fs.defaultFS: Set to nas:///.

    Mapreduce
      fs.defaultFS: Set to nas:///.

    The following example shows how to change the value of fs.defaultFS for the HDFS component:

    1. Log in to FusionInsight Hadoop Manager.
    2. Choose Services > HDFS > Configuration.
    3. In Type, select All. In Search, enter fs.defaultFS.
    4. Manually copy the name and current value of fs.defaultFS and back them up on the local host.
    5. In fs.defaultFS, enter nas:///, as shown in Figure 14-13.

      Figure 14-13  Changing the value of the interconnection parameter

      Do not select Restart the service in the dialog box. The Hadoop cluster will be restarted only after all subsequent operations are complete.

    6. Click Save Configuration and click OK in the dialog box that is displayed.
  8. Download and install the Hadoop client software.

    As instructed by the FusionInsight Hadoop documents, log in to FusionInsight Hadoop Manager, download the client software and configuration file, and install the software.

    NOTE:

    The Hadoop client software can be installed on Hadoop nodes or on external servers; OceanStor 9000 can interconnect with either.

    If the Hadoop client software was installed before interconnecting with OceanStor 9000, uninstall it first.

  9. Execute the HDFS Plugin installation script to interconnect with the Hadoop client.
    1. Log in to the first Hadoop node as user root.
    2. Use the vi editor to check the values of NEED_UPDATE_CLIENT, HADOOP_CLIENT_NODE, and HADOOP_CLIENT_HOME in /temp/tools/tools/deploy.properties. Set NEED_UPDATE_CLUSTER to false.

      Table 14-13 describes the related parameters. Change the parameter values if necessary.
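      For example, the value can be switched with a quick in-place sed edit (a sketch; verify the file afterwards):

        node1:/temp/tools/tools # sed -i 's/^NEED_UPDATE_CLUSTER=.*/NEED_UPDATE_CLUSTER=false/' /temp/tools/tools/deploy.properties
        node1:/temp/tools/tools # grep -E '^(NEED_UPDATE_CLUSTER|NEED_UPDATE_CLIENT|HADOOP_CLIENT_NODE|HADOOP_CLIENT_HOME)=' /temp/tools/tools/deploy.properties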

    3. Execute the following installation script:

      node1:/temp/tools/tools # cd /temp/tools/tools
      node1:/temp/tools/tools # ./deployTools.sh deploy      

      Enter the password of user root as prompted. When Completed is displayed, the installation has succeeded.

  10. (Optional) Delete the ZooKeeper configurations.

    If the Hadoop cluster has not been started, skip this step.

    1. Log in to FusionInsight Hadoop Manager, choose Services > ZooKeeper > Status > Start Service, and click OK.
    2. On the Hadoop client, run the following command:

      Replace zkserverX in the command with the business IP addresses of ZooKeeper recorded in 2.

      # Go to the installation path of the Hadoop client (for example, /opt/client) and configure the environment variables.
      node1:~ # cd /opt/client
      node1:/opt/client # source bigdata_env
      # If FusionInsight Hadoop is in security mode, run kinit hbase and enter the login password.
      
      node1:/opt/client # zkCli.sh -server zkserver1:24002,zkserver2:24002,zkserver3:24002,...
      ...
      WATCHER::
      WatchedEvent state:...
      
      # Press Enter.
      # View the existing directories.
      ls /
      ...
      # Delete the HBase root directory.
      deleteall /hbase
      quit

  11. Start the Hadoop cluster.

    Log in to FusionInsight Hadoop Manager and choose Services > More Actions > Start cluster.
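    After the cluster starts, you can optionally verify the interconnection from the Hadoop client. A minimal smoke test, assuming the client installation path /opt/client used earlier, is to list the file system root and confirm that the command succeeds against the nas:/// file system:

      node1:~ # cd /opt/client
      node1:/opt/client # source bigdata_env
      # In security mode, authenticate first, for example with kinit.
      node1:/opt/client # hadoop fs -ls /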
