OceanStor 9000 V300R006C00 File System Feature Guide

Adding a Hadoop Node

This section explains how to add one or more Hadoop nodes to the Hadoop cluster.


Prerequisite: OceanStor 9000 has been interconnected with the Hadoop cluster.


  1. As instructed by the Hadoop product documentation, log in to FusionInsight Hadoop Manager or Cloudera Manager and add nodes and services. Do not start the added nodes or services.
  2. On DeviceManager, add the IP addresses of the nodes to be added to the NFS client list.
    1. Log in to DeviceManager.
    2. Choose Provisioning > Share > NFS (Linux/UNIX/MAC).
    3. Select the data analysis directory. In Client Information, click Add.
    4. In Name or IP Address, enter the IP address or IP address segment of the node to be added. Set Permission to Read-write and select Asynchronous, no_all_squash, and no_root_squash.

      If multiple IP addresses of one Hadoop node communicate with the front-end service network of OceanStor 9000, all those IP addresses need to be added to the client list.

    5. Click OK and then Close.
  3. Check the interconnection parameters.

  4. Change parameter values in the interconnection configuration file.
    1. Log in to the first Hadoop node as user root.
    2. Use the vi editor to change the parameter values in the configuration file under /temp/tools/tools/

      The requirements are as follows:

      • HADOOP_NODES: Set the parameter value to the IP addresses of all Hadoop nodes. The IP addresses are separated by commas (,). If one node has multiple IP addresses that can communicate with the network where the first Hadoop node resides, you only need to enter one IP address.

      • NEED_UPDATE_CLUSTER: Set the parameter value to true.

      • NEED_DO_MOUNT: Set the parameter value to true.

      • NEED_DO_PRECREATE_DIR: Set the parameter value to false.

      • NEED_UPDATE_CLIENT: Set the parameter value to false.

      There is no need to change other parameter values.

  5. Run the following command to install HDFS Plugin:

    node1:~ # cd /temp/tools/tools
    node1:/temp/tools/tools # ./deploy

  6. Restart the Hadoop cluster.

    • FusionInsight Hadoop: Log in to FusionInsight Hadoop Manager and choose Services > More Actions > Stop cluster. Then select Start cluster.

    • Cloudera Hadoop: Log in to Cloudera Manager and choose > Restart.
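The note in step 2 warns that every IP address through which a Hadoop node reaches the OceanStor 9000 front-end service network must appear in the NFS client list. That cross-check can be sketched in plain shell; the node IPs and the client list below are hypothetical examples, not values from this guide:

```shell
#!/bin/sh
# Hypothetical values: front-end IPs of the Hadoop nodes being added, and the
# comma-separated client list configured on the NFS share in DeviceManager.
NODE_IPS="192.0.2.11 192.0.2.12"
CLIENT_LIST="192.0.2.11,192.0.2.12,192.0.2.13"

# Flag any node IP that is missing from the configured client list.
for ip in $NODE_IPS; do
  case ",$CLIENT_LIST," in
    *",$ip,"*) echo "$ip: present" ;;
    *)         echo "$ip: MISSING" ;;
  esac
done
```

If you added an IP address segment on DeviceManager rather than individual addresses, a literal comparison like this is not sufficient; verify the segment in the share's client list instead.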
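Applied to step 4, the edited parameter block would look roughly like the following fragment. The parameter names come from the steps above; the IP addresses are placeholders only:

```shell
# Sketch of the parameter values set in step 4 (IP addresses are examples only)
HADOOP_NODES=192.0.2.11,192.0.2.12,192.0.2.13  # one reachable IP per Hadoop node, comma-separated
NEED_UPDATE_CLUSTER=true
NEED_DO_MOUNT=true
NEED_DO_PRECREATE_DIR=false
NEED_UPDATE_CLIENT=false
# Leave all other parameters at their existing values.
```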
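The deploy tool in step 5 acts on every node listed in HADOOP_NODES. Splitting such a comma-separated list is ordinary shell; this sketch only illustrates the split, and the echoed line stands in for whatever the tool actually does on each node:

```shell
#!/bin/sh
# Hypothetical HADOOP_NODES value, as set in step 4
HADOOP_NODES="192.0.2.11,192.0.2.12,192.0.2.13"

# Split the comma-separated list on "," and visit each node once
OLD_IFS=$IFS
IFS=,
for node in $HADOOP_NODES; do
  echo "deploy HDFS Plugin on $node"
done
IFS=$OLD_IFS
```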

Updated: 2019-06-27

Document ID: EDOC1000122519
