OceanStor 2600 V3 Storage System V300R005 HyperMetro Feature Guide 06

"This document describes the implementation principles and application scenarios of theHyperMetro feature. Also, it explains how to configure and manage HyperMetro."
HyperMetro I/O Processing Mechanism

HyperMetro uses the dual-write and data change log (DCL) technologies to synchronize data changes between two data centers, ensuring data consistency. In addition, HyperMetro enables the storage arrays in the two data centers to provide services for hosts concurrently.

Basic Concepts

You are advised to read about the key concepts of HyperMetro before reading about the I/O processing mechanism. For details, see Terminology.

Write I/O Process

  • Dual-write and DCL are the two technologies used to synchronize data while services are running. Dual-write delivers I/O requests from application servers to both the local and remote caches, ensuring data consistency between them. If the storage system in one data center malfunctions, the DCL records the data changes made in the surviving data center. After the faulty storage system recovers, the recorded changes are synchronized to it, restoring data consistency across data centers.
  • Two storage systems with HyperMetro enabled can process I/O requests concurrently. A locking mechanism prevents the conflicts that would occur if different hosts wrote the same data at the same time: a storage system can write data only after the locking mechanism grants it write permission. If permission is not granted, the storage system must wait until the previous I/O is complete and the lock is released, and then obtain write permission.
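The locking behavior described above can be sketched as follows. This is a minimal, hypothetical model using a per-address lock, not Huawei's implementation: concurrent writes from different hosts to the same address are serialized, so the data never ends up in a mixed state.

```python
import threading

class WriteLockManager:
    """Hypothetical sketch of the HyperMetro write-lock idea: a writer must
    hold the lock for a LUN address (LBA) before writing, so concurrent
    writes to the same address are serialized."""

    def __init__(self):
        self._locks = {}           # one lock per LBA
        self._guard = threading.Lock()
        self.data = {}             # LBA -> last value written

    def _lock_for(self, lba):
        # Lazily create the per-address lock under a guard lock.
        with self._guard:
            return self._locks.setdefault(lba, threading.Lock())

    def write(self, lba, value):
        # Wait until the previous writer releases the lock, then write.
        with self._lock_for(lba):
            self.data[lba] = value

# Four "hosts" race to write LBA 0; the lock serializes them, so the
# final value is exactly one of the four writes, never a mixture.
mgr = WriteLockManager()
threads = [threading.Thread(target=mgr.write, args=(0, v)) for v in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```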

Figure 1-3 shows the write I/O process when an application host delivers an I/O request that changes data.


In the following figure, the write I/O accesses the local storage system, and the local storage system writes data to the remote storage system for dual-write purposes.

Figure 1-3 Write I/O process

  1. An application host delivers a write I/O to the HyperMetro management module.
  2. A log is recorded.
  3. HyperMetro writes the write I/O to both the local and remote caches concurrently.
  4. The local and remote caches return the write I/O result to HyperMetro.
  5. The system determines whether dual-write is successful.
    • If writing to both caches is successful, the log is deleted.
    • If writing to either cache fails, the system:
      1. Converts the log into a DCL, which records the differential data between the local and remote LUNs. After conversion, the original log is cleared.
      2. Suspends the HyperMetro pair. The status of the HyperMetro pair becomes To be synchronized. I/Os are only written to the successful storage system. The failed storage system stops providing host services.

    In the background, the storage systems use the DCL to synchronize data. Once the data on the local and remote LUNs is identical, HyperMetro services are restored.

  6. The HyperMetro management module returns the write result to the host.
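The six steps above can be sketched in code. This is a minimal model with a hypothetical Cache stand-in and a DCL represented as a set of changed addresses; the real product logic (pair state machine, background synchronization) is far richer.

```python
class Cache:
    """Toy stand-in for one storage system's cache (hypothetical)."""
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.blocks = {}

    def write(self, lba, data):
        if not self.healthy:
            return False          # simulate a failed write
        self.blocks[lba] = data
        return True

def hypermetro_write(lba, data, local, remote, dcl):
    """Sketch of steps 2-6 of the write I/O process."""
    log = (lba, data)                     # step 2: record a log
    ok_local = local.write(lba, data)     # step 3: dual-write to both caches
    ok_remote = remote.write(lba, data)   # step 4: both caches return results
    if ok_local and ok_remote:            # step 5: dual-write successful?
        log = None                        #   yes: delete the log
        return "success"
    dcl.add(lba)                          #   no: convert the log into a DCL entry
    log = None                            #   and clear the original log
    return "to_be_synchronized"           #   pair is suspended for background sync

# Healthy pair: both writes succeed.
ok_result = hypermetro_write(1, b"abc", Cache(), Cache(), set())

# Remote cache fails: the LBA is recorded in the DCL for later sync.
dcl = set()
fail_result = hypermetro_write(7, b"abc", Cache(), Cache(healthy=False), dcl)
```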

Read I/O Process

Data on the LUNs of both storage systems is synchronized in real time and both are accessible to hosts. If the storage system in one data center malfunctions, the storage system in the other data center continues providing host services alone.

  • UltraPath is recommended for HyperMetro. Huawei has optimized UltraPath to meet HyperMetro requirements: it can identify host locations so that hosts access the nearest storage array, reducing cross-site accesses and latency while improving access efficiency and storage performance.
  • If the customer needs to use non-Huawei multipathing software on the application server, the Uses third-party multipath software option must be enabled for the initiators on the Huawei storage system. After this option is enabled, the third-party multipathing software can identify and aggregate the LUNs presented to the servers, ensuring normal operation of server services.

Figure 1-4 shows the read I/O process.

Figure 1-4 Read I/O process

  1. An application host applies for read permission from the HyperMetro management module.

    If the link between the storage arrays in the two data centers is down, the quorum server determines which storage array continues providing services for hosts.

  2. The HyperMetro management module enables the local storage system to respond to the read I/O request made by the host.
  3. If the local storage system is operating properly, it returns the requested data to the HyperMetro management module.
  4. If the local storage system is not operating properly, the HyperMetro management module redirects the read request to the remote storage array, which returns the data to the HyperMetro management module.
  5. The read I/O request made by the host is processed successfully.
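The read path above amounts to a local-first read with failover to the remote array. The following sketch uses a hypothetical StorageArray stand-in to show that decision; the quorum arbitration in step 1 is out of scope here.

```python
class StorageArray:
    """Toy stand-in for a storage array in one data center (hypothetical)."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.blocks = {}

def hypermetro_read(lba, local, remote):
    """Sketch of steps 2-4: serve the read locally when the local array is
    healthy, otherwise redirect the read to the remote array."""
    if local.healthy:
        return local.blocks.get(lba)     # step 3: local array returns the data
    return remote.blocks.get(lba)        # step 4: fail over to the remote array

# Local array is down; the read is served by the remote array, whose
# LUN holds identical data thanks to real-time synchronization.
local = StorageArray("DC-A", healthy=False)
remote = StorageArray("DC-B")
remote.blocks[0] = b"data"
result = hypermetro_read(0, local, remote)
```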
Updated: 2018-09-03

Document ID: EDOC1000106183
