OceanStor 9000 V300R006C00 Object Storage Service (Compatible with OpenStack Swift APIs) Administrator Guide 07

Erasure Code

The OceanStor 9000 object storage service supports Erasure Code with different protection levels such as N+1, N+2, N+3, N+2:1, and N+3:1.

Protection Levels

Table 1-3 describes the protection levels.

Table 1-3  Protection levels

| Protection Level | Definition | Disk Effective Usage | Min. Nodes Needed | Alternative Protection Level (When Nodes Are Insufficient) |
|---|---|---|---|---|
| N+1 | Each data stripe contains N original data strips and one redundant data strip, tolerating one disk failure or one node failure. | 66% to 95% | 3 | - |
| N+2 | Each data stripe contains N original data strips and two redundant data strips, tolerating two disk failures or two node failures. | 66% to 89% | 6 | N+2:1 |
| N+3 | Each data stripe contains N original data strips and three redundant data strips, tolerating three disk failures or three node failures. | 66% to 84% | 9 | N+3:1 |
| N+2:1 | Each data stripe contains N original data strips and two redundant data strips, tolerating two disk failures or one node failure. | 66% to 90% | 3 | - |
| N+3:1 | Each data stripe contains N original data strips and three redundant data strips, tolerating three disk failures or one node failure. | 57% to 84% | 3 | - |

Note 1: A protection level is expressed in the format of N+M or N+M:B, where N indicates the original data count (ODC), M indicates the redundant data count (RDC), and B indicates the number of redundant data nodes (RDNs).

Note 2: To prevent data loss caused by a dual-disk failure, N+1 is not recommended.

Note 3: Disk usage remains unchanged or improves as the number of nodes in a node pool increases. A node pool is a logical group of multiple nodes.

Note 4: For C72 nodes, the minimum number of nodes shown in the table is halved and rounded up.

Note 5: N is automatically calculated by the system based on M.

Note 6: If multiple nodes in the cluster are disconnected, and the total number of disconnected nodes plus the nodes whose data has not been completely reconstructed exceeds the maximum number of faulty nodes allowed by the protection level, some data becomes unavailable.
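The disk effective usage ranges in Table 1-3 follow from the stripe layout: only N of the N+M strips hold original data. The following Python sketch is illustrative only, not OceanStor 9000 internals, and the N values used are assumptions; it simply shows how ranges such as "66% to 89%" for N+2 can arise.

```python
# A minimal sketch (not OceanStor 9000 internals): for an N+M protection level,
# disk effective usage is the share of strips holding original data, N / (N + M).
def effective_usage(n: int, m: int) -> float:
    return n / (n + m)

# N is chosen by the system based on M (Note 5); the N values below are only
# illustrative, but they show how a range like "66% to 89%" for N+2 can arise.
for n in (4, 8, 16):
    print(f"{n}+2 stripe: {effective_usage(n, 2):.1%} effective usage")
# 4+2  -> 66.7%
# 8+2  -> 80.0%
# 16+2 -> 88.9%
```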

Setting a Protection Level

In the object storage service, Accounts are resource owners; you can use the service only with an Account. A protection level is set per Account, and all Objects in that Account use this protection level.

The protection level is set during software installation based on the data redundancy ratio specified in the configuration file and cannot be changed after being set.
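As a rough illustration of how a chosen protection level relates to the minimum node counts in Table 1-3, the following sketch is a hypothetical validation helper: the function names are invented and this is not the actual installation tooling, but the limits simply restate Table 1-3 and Note 4.

```python
# Hypothetical helper (not the actual installation tooling): validate a
# protection-level string against the minimum node counts from Table 1-3.
MIN_NODES = {"N+1": 3, "N+2": 6, "N+3": 9, "N+2:1": 3, "N+3:1": 3}

def min_nodes(level: str, c72: bool = False) -> int:
    """Minimum node count; halved and rounded up for C72 nodes (Note 4)."""
    base = MIN_NODES[level]
    return (base + 1) // 2 if c72 else base

def check_protection_level(level: str, node_count: int, c72: bool = False) -> None:
    if level not in MIN_NODES:
        raise ValueError(f"unsupported protection level: {level}")
    needed = min_nodes(level, c72)
    if node_count < needed:
        raise ValueError(f"{level} requires at least {needed} nodes, only {node_count} available")

check_protection_level("N+2:1", 3)       # passes: N+2:1 needs only 3 nodes
try:
    check_protection_level("N+3", 5)     # N+3 needs 9 nodes (5 if they are C72 nodes)
except ValueError as err:
    print(err)
```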

Basic Principle

Unlike conventional RAID technology, which couples redundancy calculation with data storage, Erasure Code allows the file system to flexibly distribute data for load balancing and reliability.

Figure 1-5 shows the working principle of Erasure Code.

Figure 1-5  Working principle of Erasure Code
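To make the stripe principle concrete, the following toy example shows the simplest case, an N+1 stripe protected by XOR parity. This is only a sketch of the idea; the product uses a more general erasure code that tolerates M failures, and this is not the algorithm OceanStor 9000 implements.

```python
# Toy illustration of the stripe principle: a file is split into N original
# strips and M redundant strips are computed from them. For M = 1 a byte-wise
# XOR parity suffices; real N+M codes (e.g. Reed-Solomon style) generalize this.
def encode_n_plus_1(strips: list[bytes]) -> bytes:
    """Compute one redundant strip as the XOR of N equal-length original strips."""
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)

def recover_lost_strip(surviving: list[bytes]) -> bytes:
    """Any single lost strip (data or parity) is the XOR of all survivors."""
    return encode_n_plus_1(surviving)

data = [b"AAAA", b"BBBB", b"CCCC"]     # N = 3 original strips
parity = encode_n_plus_1(data)         # 1 redundant strip -> a 3+1 stripe
assert recover_lost_strip([data[0], data[2], parity]) == data[1]
```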

Data Discretization and Fault Domain Isolation

Data discretization is used to improve system reliability. Its working principles are as follows:

  • When the number of nodes in a node pool is not less than N+M, the N+M disks of a disk group reside on N+M different nodes.

  • When the nodes in a node pool span no fewer than N+M cabinets, the N+M disks reside in N+M different cabinets (see the placement sketch after this list).
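The following placement sketch illustrates the two discretization rules under an assumed data model (node and cabinet identifiers, greedy selection); it is not OceanStor 9000 code.

```python
# Hypothetical placement sketch for the discretization rules above; the data
# model and selection logic are assumptions, not OceanStor 9000 code.
def place_stripe(nodes, n_plus_m):
    """nodes: list of (node_id, cabinet_id) tuples; returns nodes chosen for one stripe."""
    cabinets = {cab for _, cab in nodes}
    chosen, used_nodes, used_cabinets = [], set(), set()
    for node_id, cab in nodes:
        if node_id in used_nodes:
            continue
        # If the pool spans at least N+M cabinets, also keep cabinets distinct.
        if len(cabinets) >= n_plus_m and cab in used_cabinets:
            continue
        chosen.append((node_id, cab))
        used_nodes.add(node_id)
        used_cabinets.add(cab)
        if len(chosen) == n_plus_m:
            return chosen
    raise ValueError("node pool too small for this protection level")

pool = [("node1", "cab1"), ("node2", "cab1"), ("node3", "cab2"), ("node4", "cab2")]
print(place_stripe(pool, 4))   # only 2 cabinets, so strips are spread across 4 distinct nodes
```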

Fault domain isolation is used to minimize the adverse impact of faults. Its working principles are as follows:

  • Data in a chunk is stored only on a disk group consisting of N+M disks.
  • When a disk or node in a node pool fails, node pool division ensures that the reliability of data in other node pools is not affected and that the I/O performance of other node pools is not compromised during data reconstruction (a simple availability check in the spirit of Note 6 is sketched after this list).
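The fault-tolerance limits in Table 1-3 and Note 6 can be summarized as a simple availability check. The parsing logic and the max_faulty_nodes helper below are assumptions for illustration, not product code.

```python
# Assumed semantics of Note 6: within one node pool, data stays available as long
# as disconnected nodes plus nodes not yet reconstructed do not exceed the maximum
# node failures the protection level tolerates (M for N+M, B for N+M:B).
def max_faulty_nodes(level: str) -> int:
    n_m, _, b = level.partition(":")
    m = int(n_m.split("+")[1])
    return int(b) if b else m

def data_available(level: str, disconnected: int, not_reconstructed: int) -> bool:
    return disconnected + not_reconstructed <= max_faulty_nodes(level)

print(data_available("N+2", disconnected=1, not_reconstructed=1))    # True: tolerates 2 node failures
print(data_available("N+3:1", disconnected=1, not_reconstructed=1))  # False: tolerates only 1 node failure
```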