OceanStor 9000 V300R006C00 File System Feature Guide 12

(Optional) Migrating Service Data


If Cloudera Hadoop has HDFS service data before it is interconnected with the OceanStor 9000, migrate the service data to the OceanStor 9000. If it has no HDFS service data, skip this operation.

Prerequisites

Cloudera Hadoop has interconnected with the OceanStor 9000.

To ensure a successful data migration, the relevant service personnel should confirm data correctness and integrity during verification.

If any of the HDFS, HBase, or Hive components has no service data, data migration is not required for that component.

Procedure

  1. Record the IP addresses of the NameNodes.

    Log in to Cloudera Manager and choose HDFS > NameNode. Click each host and record its IP, as shown in Figure 14-23.

    Figure 14-23  Viewing IP addresses of Active NameNode

  2. Log in to any Hadoop node as user root.
  3. Migrate HDFS data.

    Migrate all HDFS data. The following example migrates data in hdfs://namenode:8020/hdfs, where namenode is the IP address of the active NameNode.

    hadoop fs -get hdfs://namenode:8020/hdfs /mnt/nfsdata0

  4. Stop the HBase and Hive services.
    1. Log in to Cloudera Manager. Click the icon on the right of HBase. Select Stop and click Stop.
    2. Click the icon on the right of Hive. Select Stop and click Stop.
  5. Migrate HBase data.

    Migrate all HBase data. The following example migrates data in hdfs://namenode:8020/hbase, where namenode is the IP address of the active NameNode.

    hadoop fs -get hdfs://namenode:8020/hbase /mnt/nfsdata0

  6. Migrate Hive data.

    Migrate all Hive data. The following example migrates data in hdfs://namenode:8020/user/hive/warehouse, where namenode is the IP address of the active NameNode.

    hadoop fs -get hdfs://namenode:8020/user/hive/warehouse /mnt/nfsdata0

  7. Change the parameter values of the Hive database.
    1. Log in as user root to the Hadoop node where Hive resides.
    2. Log in to the Hive metastore database.
    3. Execute the following SQL statements:

      update DBS set DB_LOCATION_URI='nas:/user/hive/warehouse';
      update SDS set LOCATION='nas:/user/hive/warehouse/lineitem';
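      The statements above use two example locations (lineitem is an example Hive table). In a real metastore, each row of SDS carries its own LOCATION, so the HDFS prefix must be rewritten per row rather than overwritten with a single path. A minimal sketch of that prefix rewrite, as a hypothetical shell helper (the function name is an assumption, not a Hadoop or Hive command):

```shell
# Hypothetical helper: convert an HDFS location into the corresponding
# nas: location by stripping the hdfs://host:port/ authority prefix.
to_nas_location() {
  # shortest-match strip of "hdfs://<host>:<port>/", then re-add the "/"
  printf 'nas:/%s\n' "${1#hdfs://*:*/}"
}

to_nas_location "hdfs://namenode:8020/user/hive/warehouse"
# prints: nas:/user/hive/warehouse
```

      The same rewrite can usually be expressed directly in SQL with the metastore database's string-replace function, but the exact statement depends on which database backs the metastore.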

  8. Start the HBase and Hive services.
    1. Log in to Cloudera Manager. Click the icon on the right of HBase. Select Start and click Start.
    2. Click the icon on the right of Hive. Select Start and click Start.
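
The three migration steps (3, 5, and 6) follow the same pattern and can be sketched as one pass. This is a sketch only, assuming the same NameNode address and NAS mount point used in the examples above; migrate_all is a hypothetical wrapper, not a Hadoop command:

```shell
# Hypothetical wrapper around the three "hadoop fs -get" calls above.
# Returns at the first failed copy so a partial migration is noticed.
migrate_all() {
  namenode="$1"   # active NameNode IP address, e.g. namenode
  dest="$2"       # NAS mount point, e.g. /mnt/nfsdata0
  for dir in /hdfs /hbase /user/hive/warehouse; do
    hadoop fs -get "hdfs://${namenode}:8020${dir}" "$dest" || return 1
  done
}

# Example invocation on a Hadoop node:
# migrate_all namenode /mnt/nfsdata0
```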

Verification Procedure

Log in to any Hadoop node as user root and run hadoop fs -ls / to verify data correctness and integrity.
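
A hadoop fs -ls listing only confirms that paths exist. A slightly stronger check is to compare the file count of each HDFS directory with its migrated copy on the NAS mount. A sketch under those assumptions (check_counts is a hypothetical helper; hadoop fs -count prints DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME):

```shell
# Hypothetical check: compare the HDFS file count of a directory with
# the number of regular files in its migrated copy on the NAS mount.
check_counts() {
  src="$1"    # e.g. hdfs://namenode:8020/hdfs
  dst="$2"    # e.g. /mnt/nfsdata0/hdfs
  hdfs_files=$(hadoop fs -count "$src" | awk '{print $2}')
  nas_files=$(find "$dst" -type f | wc -l)
  [ "$hdfs_files" -eq "$nas_files" ]
}

# Example: check_counts hdfs://namenode:8020/hdfs /mnt/nfsdata0/hdfs
```

A count match does not prove byte-for-byte integrity; for critical data, service personnel should additionally compare checksums or sizes of individual files.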

Updated: 2019-06-27

Document ID: EDOC1000122519
