

FusionInsight HD 6.5.0 Administrator Guide 02

Restoring HBase or OMS Data from a Third-Party Server



Restore OMS or HBase data in a cluster in security mode by using backup files stored on a third-party server.

Impact on the System

Some services of the cluster are unavailable during restoration.

Prerequisites
  • The Loader service must be installed in the cluster and is running properly.
  • You have contacted the system administrator and obtained the address for accessing the Loader WebUI.
  • You have obtained the password of user hbase for accessing the Loader WebUI. Change the password upon the first login.

The preceding prerequisites are not required if only OMS data needs to be restored.


Restoring OMS data:

  1. Copy the OMS backup file (for example, X.X.X_OMS_20170422110055.tar.gz) from the third-party server to a directory on the active management node.
  2. Use the LocalDir mode to restore data of Manager (including OMS and LDAP) and DBService. For details, see Recovering Data.
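Before starting the LocalDir restoration, it can be worth confirming that the copied archive is intact. A minimal sketch; the source host and paths below are placeholders, not values from this guide, and the stand-in archive is created only to demonstrate the check:

```shell
# Step 1 in practice (hypothetical host and paths -- replace with your own):
#   scp root@backup-server:/srv/backup/X.X.X_OMS_20170422110055.tar.gz /tmp/

# Create a stand-in archive so the check below can be demonstrated;
# with a real backup, skip this and check the copied file instead.
mkdir -p /tmp/oms_demo && echo "oms data" > /tmp/oms_demo/data.txt
tar -czf /tmp/X.X.X_OMS_20170422110055.tar.gz -C /tmp oms_demo

# List the archive contents; a non-zero exit status indicates a corrupt
# or truncated file, in which case the copy should be repeated.
tar -tzf /tmp/X.X.X_OMS_20170422110055.tar.gz
```

If the listing fails, repeat the copy from the third-party server before attempting the restoration.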

Restoring HBase data: On the Loader WebUI, create a job for importing data to HDFS.

  1. Enter the address for accessing the Loader WebUI and log in to the Loader WebUI as user hbase.
  2. Click New Job, and set the job property on the job configuration page, as shown in Figure 13-5.

    1. Set Name to the job name.
    2. Set Job type to Import.
    3. Click Add on the right of Connection to create a connection, as shown in Figure 13-6.
    4. Click OK. The new connection is created.
    5. Click Add next to Group, and create a Loader job group for saving the current Loader jobs. For example, in Name, enter hbase_loader, and click OK.
    6. Set Queue to the default Yarn queue default.
    7. Set Priority to NORMAL.
    8. Click Next. The job property is set.
    Figure 13-5 Setting the job type

    Figure 13-6 Creating a connection between Loader and data source

  3. On the 2. From page, configure the backup data information. Set Input path, File split type, File filter type, File filter, Encode type, Suffix name, and Compression.

    1. Set Input path to the SFTP import path. If the connector includes multiple paths, use semicolons (;) to separate the paths.
    2. Set File split type to the file split mode, including FILE and SIZE. FILE indicates splitting by file and SIZE indicates splitting by size. Files are split by the specified mode and become Map input files.
    3. Set File filter type to WILDCARD.
    4. Set File filter to the filter criterion for the import path. The wildcard character (*) is supported, and only files that match the criterion are imported. Enter *. If there are multiple filter criteria, use semicolons (;) to separate them.
    5. Set Encode type to the encoding format of the imported files. This parameter does not need to be set when restoring HBase data from a third-party server.
    6. Leave Suffix name empty, which indicates that no suffix name is added to an imported file at the data source.
    7. Set Compression to specify whether data is compressed during SFTP transmission. Select False.
    Figure 13-7 Setting the import file

  4. On the 3. Transform page, configure data transformation and click Next.
  5. On the 4. To page, configure the storage information for the restored data. Set Storage type, File type, Compression format, Output directory, File operate type, and Extractors.

    1. Set Storage type to HDFS, which indicates that data is saved on HDFS.
    2. Set File type to BINARY_FILE.
    3. Set Compression format to the compression method. If you do not want to use compression, retain the default value.
    4. Set Output directory to the export HDFS directory.
    5. Set File operate type to the action taken when a file with the same name already exists during the export. Set this parameter to ERROR.
    6. Set Extractors to 10. This parameter specifies the number of map tasks used to run the MapReduce job. Extractors and Extractor size cannot be set at the same time.
    Figure 13-8 Configuring restored data storage

  6. After the configuration is complete, click Save and run.
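The FILE and SIZE options under File split type in step 3 can be made concrete with a toy sketch. The following is an illustration of the two grouping strategies only, not Loader's actual implementation; the file names and sizes are invented:

```shell
# Four hypothetical input files with their sizes in bytes.
FILES="a.bin:40 b.bin:70 c.bin:20 d.bin:90"

# FILE mode: every input file becomes its own Map input.
echo "FILE mode:"
for f in $FILES; do
  echo "  map input: ${f%%:*}"
done

# SIZE mode: files are grouped so that each Map input holds
# roughly CHUNK bytes of data.
CHUNK=100
echo "SIZE mode (~$CHUNK bytes per map input):"
total=0
group=""
for f in $FILES; do
  name=${f%%:*}
  size=${f##*:}
  group="$group $name"
  total=$((total + size))
  if [ "$total" -ge "$CHUNK" ]; then
    echo "  map input:$group"
    group=""
    total=0
  fi
done
[ -n "$group" ] && echo "  map input:$group"
```

With the sample sizes above, FILE mode yields four Map inputs while SIZE mode groups the files into two, which is the trade-off the File split type parameter controls.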

Restoring HBase data: Restore the HDFS backup table data to HBase.

  1. Log in to the node where the client is installed as the hbase user and run the following command to view the imported backup file:

    hdfs dfs -ls /output

  2. Run the following command to log in to the HBase shell client:

    hbase shell

  3. Run the following command to restore data:


    The backup data storage path parameter is optional. If it is left empty, backup data is obtained from the default path.

    • No user table exists in HBase:

      restore 'backup label', {DATA_PATH => 'backup data storage path'}

      For example: restore 'full_19700101080000_20140618111903', {DATA_PATH => '/output'}.

    • User tables exist in HBase:

      restore 'backup label', {DATA_PATH => 'backup data storage path', FAIL_IF_TBL_EXIST => false}

      For example: restore 'full_19700101080000_20140618111903', {DATA_PATH => '/output', FAIL_IF_TBL_EXIST => false}.
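The interactive restore steps above can also be scripted: hbase shell accepts a command file as an argument, so a sketch like the following runs the restore non-interactively on the client node. The script path is a placeholder, and the label and DATA_PATH are the example values used above:

```shell
# Write the restore commands to a script file (path is a placeholder).
cat > /tmp/restore_hbase.txt <<'EOF'
restore 'full_19700101080000_20140618111903', {DATA_PATH => '/output', FAIL_IF_TBL_EXIST => false}
exit
EOF

# On a node where the HBase client is installed, run it non-interactively:
#   hbase shell /tmp/restore_hbase.txt

# Show the generated script for review before running it.
cat /tmp/restore_hbase.txt
```

Scripting the command this way makes the restore repeatable and easier to review before execution.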

Updated: 2019-05-17

Document ID: EDOC1100074522
