FusionInsight HD V100R002C60SPC200 Product Description 06

HBase

Secondary Index

HBase is a distributed KeyValue storage database. Data in a table is sorted in lexicographic order by RowKey. If you query data by a specified RowKey or scan data within a specified RowKey range, HBase can quickly locate the data to be read, which improves query efficiency.

In most real-world scenarios, however, you need to query data whose column value matches a given value. HBase provides the Filter feature for such queries: all data is scanned in RowKey order and then matched against the specified column value until the required data is found. Because the Filter feature scans a large amount of unnecessary data to obtain the required data, it cannot meet the requirements of frequent queries with high performance standards.
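As a reference, the open-source HBase client API expresses such a column-value query with a SingleColumnValueFilter, which still drives a full scan in RowKey order. The following is a minimal sketch; the table name, column family, qualifier, and value are examples only.

  // Minimal sketch of the Filter-based query described above, using the
  // open-source HBase client API. Names and values are examples only.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.*;
  import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class FilterScanExample {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(conf);
               Table table = connection.getTable(TableName.valueOf("user_table"))) {
              // Match rows whose column info:city equals "shenzhen".
              SingleColumnValueFilter filter = new SingleColumnValueFilter(
                  Bytes.toBytes("info"), Bytes.toBytes("city"),
                  CompareOp.EQUAL, Bytes.toBytes("shenzhen"));
              Scan scan = new Scan();
              scan.setFilter(filter);
              // Without a secondary index, this scan still reads rows in RowKey
              // order and discards the ones that do not match the filter.
              try (ResultScanner scanner = table.getScanner(scan)) {
                  for (Result result : scanner) {
                      System.out.println(Bytes.toString(result.getRow()));
                  }
              }
          }
      }
  }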

To address this issue, the HBase secondary index was introduced. A secondary index enables HBase to query data efficiently based on specific column values.

Figure 4-2 Secondary index

Removing Padding from Rowkeys

An index rowkey is composed of the start key of the index region, the index name, the indexed column value(s), and the user table rowkey.

In earlier versions, each of these fields had a fixed length, and the length reserved for the indexed column value(s) was determined by the longest value of that column. In scenarios where 90% of the column values are short and 10% are long, this wastes a large amount of storage space. To use space efficiently, the fields are now separated by separators instead of being padded to a fixed length.
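The following is a simplified illustration of the two layouts; the actual on-disk index rowkey format, field order, and separator byte are internal to the FusionInsight implementation and are assumptions here.

  // Simplified illustration only: the real index rowkey layout is internal to
  // the FusionInsight HBase implementation. The separator byte and helper
  // methods below are assumptions made for this example.
  import org.apache.hadoop.hbase.util.Bytes;

  public class IndexRowKeySketch {
      private static final byte SEPARATOR = 0x00; // assumed field separator

      // Old style: the indexed column value is padded to a fixed maximum length.
      static byte[] paddedKey(byte[] regionStartKey, byte[] indexName,
                              byte[] columnValue, byte[] userRowKey, int maxValueLen) {
          byte[] padded = new byte[maxValueLen];   // short values waste the remaining bytes
          System.arraycopy(columnValue, 0, padded, 0, columnValue.length);
          return Bytes.add(Bytes.add(regionStartKey, indexName),
                           Bytes.add(padded, userRowKey));
      }

      // New style: variable-length fields joined by separators, no padding.
      static byte[] separatedKey(byte[] regionStartKey, byte[] indexName,
                                 byte[] columnValue, byte[] userRowKey) {
          return Bytes.add(
              Bytes.add(regionStartKey, new byte[]{SEPARATOR}, indexName),
              Bytes.add(new byte[]{SEPARATOR}, columnValue, new byte[]{SEPARATOR}),
              userRowKey);
      }
  }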

Multi-point Division

When users create tables that are pre-divided into regions in HBase, they may not know the data distribution trend, so the initial region division may be inappropriate. After the system runs for a period of time, regions need to be divided again to achieve better performance. Only empty regions can be divided in this way.

The region division function delivered with HBase splits a region only when it reaches the size threshold. This is called single-point division.

To achieve better performance when regions are divided based on user requirements, multi-point division, also called dynamic division, was developed. Multi-point division pre-divides an empty region into multiple regions, preventing the performance deterioration caused by insufficient region space.

Figure 4-3 Multi-point division
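For comparison, the open-source HBase client API already supports pre-dividing a table into multiple regions at creation time by passing split keys to Admin.createTable; multi-point division extends this idea to empty regions of an existing table. The table name and split points below are examples only.

  // Open-source HBase pre-splits a table into multiple regions at creation time
  // when split keys are supplied. Names and split points are examples only.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.util.Bytes;

  public class PreSplitExample {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(conf);
               Admin admin = connection.getAdmin()) {
              HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("pre_split_table"));
              desc.addFamily(new HColumnDescriptor("info"));
              // Split points chosen up front; if the data distribution turns out to
              // be different, regions have to be divided again later.
              byte[][] splitKeys = {
                  Bytes.toBytes("20000000"),
                  Bytes.toBytes("40000000"),
                  Bytes.toBytes("60000000"),
                  Bytes.toBytes("80000000")
              };
              admin.createTable(desc, splitKeys);
          }
      }
  }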

Connection Limitation

Too many sessions mean that too many query and MapReduce tasks are running on HBase, which degrades HBase performance or even causes denial of service. You can configure parameters to limit the maximum number of sessions that can be established between a client and the HBase server, providing HBase overload protection.
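The exact parameter names are defined by the FusionInsight HD configuration reference and are not listed here; the following hbase-site.xml fragment is a purely hypothetical illustration of how such a session limit would be expressed.

  <!-- Hypothetical illustration only: the real parameter name and default value
       are defined by the FusionInsight HD release and should be taken from its
       configuration reference. -->
  <property>
    <name>hbase.connection.max.sessions</name>   <!-- placeholder name -->
    <value>1000</value>                          <!-- example limit -->
  </property>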

Disaster Recovery

The disaster recovery (DR) capability between the active and standby clusters enhances the high availability (HA) of HBase data. The active cluster provides data services, and the standby cluster backs up the data. When the active cluster is faulty, the standby cluster takes over data services. Compared with the open-source Replication function, the following enhancements are provided:

  1. The standby cluster whitelist function ensures that data is pushed only to the specified standby cluster IP addresses.
  2. In the open-source version, replication is based on the WAL, and data backup is implemented by replaying the WAL in the standby cluster. Bulk loads do not generate WAL entries, so bulk-loaded data is not replicated to the standby cluster. In the enhanced version, bulk load operations are recorded in the WAL and synchronized to the standby cluster, so the standby cluster can read the bulk load operation records from the WAL and load the HFiles from the active cluster to implement data backup.
  3. In the open-source version, HBase filters out ACL entries during replication, so ACL information is not synchronized to the standby cluster. By adding a filter (org.apache.hadoop.hbase.replication.SystemTableWALEntryFilterAllowACL), ACL information can be synchronized to the standby cluster. You can configure hbase.replication.filter.sytemWALEntryFilter to enable the filter and implement ACL synchronization (see the configuration sketch after this list).
  4. The standby cluster is restricted to read-only access: only super users within the standby cluster can modify the HBase data of the standby cluster, and HBase clients outside the standby cluster can only read from it.
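Based on the parameter named in item 3 above, a configuration sketch for enabling the ACL filter might look as follows; the exact value format should be verified against the FusionInsight configuration reference.

  <!-- Sketch based on the parameter named above; verify the exact value format
       against the product configuration reference. -->
  <property>
    <name>hbase.replication.filter.sytemWALEntryFilter</name>
    <value>org.apache.hadoop.hbase.replication.SystemTableWALEntryFilterAllowACL</value>
  </property>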

HBase MOB

In actual application scenarios, data of various sizes needs to be stored, for example, image data and documents. Data smaller than 10 MB can be stored in HBase, but HBase delivers the best read and write performance for data smaller than 100 KB. If the data stored in HBase is larger than 100 KB, or even approaches 10 MB, inserting the same number of files results in a much larger total data volume, causing frequent compactions and splits, high CPU consumption, heavy disk I/O, and low performance.

MOB data (from 100 KB to 10 MB in size) is stored in a file system (for example, HDFS) in HFile format. The expiredMobFileCleaner and Sweeper tools manage these HFiles, and only the address and size information of the HFiles is saved as values in the HBase store. This greatly reduces the compaction and split frequency in HBase and improves performance.

In Figure 4-4, MOB indicates the mobstore stored on an HRegion. The mobstore stores key-value pairs, where the key is the corresponding key in HBase and the value is the reference address and data offset in the file system. When reading data, the mobstore uses its own scanner to read the key-value objects and then uses the address and size information in the value to obtain the target data from the file system.

Figure 4-4 MOB data storage principle
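As a sketch, assuming an HBase version in which the MOB feature is exposed through the standard client API (for example, Apache HBase 2.0 or later), a MOB-enabled column family with a 100 KB threshold can be created as follows; table and family names are examples only.

  // Sketch of creating a MOB-enabled column family, assuming an HBase version
  // in which the MOB feature is available through the standard client API.
  // Table and family names are examples only.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class MobTableExample {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(conf);
               Admin admin = connection.getAdmin()) {
              HColumnDescriptor mobFamily = new HColumnDescriptor("mob_data");
              mobFamily.setMobEnabled(true);      // store large cells as MOB files
              mobFamily.setMobThreshold(102400L); // cells above 100 KB go to MOB storage
              HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mob_table"));
              desc.addFamily(mobFamily);
              admin.createTable(desc);
          }
      }
  }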

HFS

HBase FileStream (HFS) is an independent HBase file storage module. It is used by FusionInsight HD upper-layer applications and encapsulates HBase and HDFS interfaces to provide these applications with functions such as file storage, reading, and deletion.

In the Hadoop ecosystem, HDFS and HBase have difficulty with massive file storage in some scenarios:

  • If massive numbers of small files are stored in HDFS, the NameNode comes under great pressure.
  • Some large files cannot be stored directly in HBase because of HBase interface and internal mechanism limitations.

HFS was developed for the mixed storage of massive numbers of small files and some large files in Hadoop: massive numbers of small files (smaller than 10 MB) and some large files (larger than 10 MB) need to be stored in HBase tables.

For such scenarios, HFS provides unified operation interfaces that are similar to the HBase function interfaces, as sketched below.
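The real HFS interfaces belong to the FusionInsight HD client and are not reproduced here. The following purely hypothetical sketch only illustrates the mixed-storage idea described in this section: files smaller than 10 MB are kept inside an HBase table, while larger files are written to HDFS with a reference row stored in HBase.

  // Hypothetical sketch only: this is NOT the HFS API. It illustrates the
  // mixed-storage idea: small files live in an HBase table, large files live
  // in HDFS with a reference row in HBase.
  import java.io.IOException;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MixedFileStoreSketch {
      private static final long SMALL_FILE_LIMIT = 10L * 1024 * 1024; // 10 MB
      private static final byte[] FAMILY = Bytes.toBytes("file");

      private final Connection connection;
      private final FileSystem fs;

      public MixedFileStoreSketch(Connection connection, FileSystem fs) {
          this.connection = connection;
          this.fs = fs;
      }

      public void storeFile(String fileName, byte[] content) throws IOException {
          try (Table table = connection.getTable(TableName.valueOf("file_table"))) {
              Put put = new Put(Bytes.toBytes(fileName));
              if (content.length <= SMALL_FILE_LIMIT) {
                  // Small file: keep the content itself in the HBase table.
                  put.addColumn(FAMILY, Bytes.toBytes("content"), content);
              } else {
                  // Large file: write to HDFS and store only a reference in HBase.
                  Path path = new Path("/hfs_data/" + fileName);
                  try (FSDataOutputStream out = fs.create(path)) {
                      out.write(content);
                  }
                  put.addColumn(FAMILY, Bytes.toBytes("hdfs_path"),
                          Bytes.toBytes(path.toString()));
              }
              table.put(put);
          }
      }
  }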
