
OceanStor BCManager 6.5.0 eReplication User Guide 02

Active-Passive DR Solution


This section describes the characteristics, DR technologies, and principles of the active-passive DR solution.

Solution Introduction

This solution applies to both local and remote DR. The production center and DR center work in active-passive mode: if a disaster occurs, the DR center quickly takes over services from the production center, ensuring service continuity. For remote DR, a replication relationship is established between the storage systems that host services use. In addition, this solution can protect HyperVault snapshots of protected objects for data rollback, preventing data loss and data corruption caused by viruses.

Context
As IT services are increasingly used, the continuous and secure running of IT systems becomes critical for normal enterprise operation. IT systems in use face many risks:
  • Device faults: IT systems cannot work properly. Replacing devices may interrupt services and result in a long recovery period.
  • Regional outages: The entire IT system stops working, interrupting all services.
  • Data center fires: All IT systems are damaged, data is lost, and all services are interrupted.
  • Natural disasters: Earthquakes, mudflows, and similar disasters can cause irreparable damage to IT systems, interrupting all services and destroying all data.
To enhance the IT system availability to address the preceding problems, Huawei launched the active-passive DR solution.
Solution Description
This solution enhances IT system availability by establishing a same-city or remote DR center. If a disaster occurs at the production center, the DR center quickly takes over production services to minimize service downtime, thereby reducing customer losses.
  • Host service DR: The production center and DR center work in active-passive mode. If a disaster occurs, the DR center quickly takes over services from the production center. It is applicable to local and remote DR.
  • VM DR: The underlying storage interworks with the upper-layer virtualization platform to implement VM DR.
The following lists the supported DR technologies:
  • Synchronous replication (SAN)
  • Asynchronous replication (SAN)
  • Asynchronous replication (NAS)
  • Asynchronous replication (VIS)
  • Mirroring (VIS) and asynchronous replication (SAN)
  • Asynchronous replication of integrated SAN and NAS
  • Host replication
  • HyperVault
NOTE:

Storage replication is applied in synchronous replication (SAN), asynchronous replication (SAN), asynchronous replication (NAS), mirroring (VIS) and asynchronous replication (SAN), and asynchronous replication of integrated SAN and NAS.

Deployment Modes
Centralized and distributed deployment modes are supported:
  • Centralized deployment: One eReplication Server is deployed on a server at the DR center to provide DR management for production and DR resources. The eReplication Server must communicate with the production center properly over networks.
  • Distributed deployment: Two eReplication Servers are respectively deployed on the servers at the production and DR centers. Each provides DR management for resources of the owning center. The network between the two eReplication Servers must be connected properly.
NOTE:
Compared with centralized deployment, distributed deployment is more secure and provides shorter latency during daily production center maintenance. Up to two eReplication Servers can be deployed.
Both local and remote DR modes are supported.
  • Local DR: The production and DR centers are located in the same data center and deployed in centralized mode.
  • Remote DR: The production and DR centers are located in different data centers and can be deployed in either centralized or distributed mode.
Array-Based Replication DR

DR solutions using this technology can implement minute-level Recovery Point Objective (RPO) and hour-level Recovery Time Objective (RTO). The specific RPO and RTO vary with site conditions.

This technology utilizes the remote replication function provided by storage systems to replicate application data from the production center to the DR center, thereby protecting the production center's data. For details about the implementation principles of array-based replication, see Array-Based Replication DR Principles.

Application scenarios:
  • Huawei SAN is used and production and DR centers are connected over an IP network.
  • The distance between production and DR centers is flexible.
  • VMs are migrated across different centers as planned.
  • Service continuity must be ensured.
  • VMs are restored according to their priorities or dependencies.
  • DR tests have higher priorities.

The typical DR networks in distributed mode are shown in the following figures.

  • Synchronous replication (SAN) DR and asynchronous replication (SAN) DR use the same network, though their implementation principles differ. Figure 1-2 shows the network.
    Figure 1-2  Asynchronous replication (SAN) DR in distributed mode
  • Figure 1-3 shows a network of mirroring (VIS) and asynchronous replication (SAN) DR applicable to application hosts.
    Figure 1-3  Mirroring (VIS) and asynchronous replication (SAN) DR in distributed mode
  • Figure 1-4 shows a network of asynchronous replication (SAN) DR applicable to VMs.
    Figure 1-4  VM DR in distributed mode
Host-Based Replication DR

Host-based replication DR, used by production application host VMs, provides an RPO in seconds and an RTO in minutes. The specific RPO and RTO vary with site conditions.

Host replication DR uses the host-based replication function to remotely replicate VM data from storage devices at the production center to the DR center. It also uses the production application host to replicate VM configuration data (including VM CPU, memory, network adapter, and disk attributes) and to manage DR plans. For details about its working principles, see Host-Based Replication DR Principles.

The host-based replication DR applies to non-key services of small and medium-sized enterprises (SMEs), such as Enterprise Resource Planning (ERP), email servers, and desktop clouds.

Figure 1-5 shows the network of host-based replication (SAN) DR in distributed mode.

Figure 1-5  Host replication DR in distributed mode used by FusionSphere VMs
HyperVault DR

The HyperVault DR provides RPO in seconds and RTO in minutes. The specific RPO and RTO vary with site conditions.

This technology utilizes the snapshot and remote replication functions provided by storage systems to replicate application data from the production center to the DR center, thereby protecting the production center's data.

Figure 1-6 shows a network of HyperVault DR in distributed mode.

Figure 1-6  HyperVault DR in distributed mode
DR Management
The production application host provides the following functions:
  • Automatically identify storage LUNs and remote replication relationships by detecting applications on hosts and the storage systems that the applications use.
  • Automatically detect VMs added to the virtualization infrastructure (FusionManager or FusionCompute).
  • Protect service hosts and VMs based on preset DR policies.
  • Support one-click tests to verify data availability and estimate the recovery time objective (RTO) in routine O&M.
  • If a planned event (such as an outage or system maintenance) is about to happen:
    • Migrate services from the production center to the DR center through one-click scheduled migration.
    • Use the re-protection function to protect data at the DR center after the service switchover is complete.
  • If an unplanned disaster (such as an earthquake, fire, or flood) occurs, restore services with one click by using backup data in the remote DR center.

Array-Based Replication DR Principles

The array-based replication technology uses the remote replication function provided by the storage systems to replicate the service data from the production data center (DC) to a remote disaster recovery (DR) DC.

Array-Based Replication Principles

Storage replication uses the synchronous/asynchronous remote replication functions provided by the storage systems to replicate service data from the production DC to a remote DR DC, thereby implementing replication and protection for the production DC data.

Protection Implemented by Array-Based Replication
Figure 1-7 shows the protection implemented by array-based replication.
Figure 1-7  Protection implemented by array-based replication
  1. At the production DC, create logical unit numbers (LUNs) and plan the to-be-DR-protected primary LUN.
  2. Migrate the to-be-protected application data to the planned primary LUN through the storage system.
  3. At the DR site, create LUNs and plan a secondary LUN with the same size as the primary LUN.
  4. Configure a connection between the two storage systems.

    Configure the LUNs' remote replication relationship and consistency groups.

  5. Configure site information.

    Configure the mapping between inter-site resources.

  6. Register storage devices to discover remote LUNs and consistency groups.
  7. Create a storage protected group, select the to-be-protected hosts, and specify a protection policy.
  8. Synchronize the DR protection-related configurations.
  9. Create a recovery plan according to the protection policy.
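The configuration workflow above can be sketched in code. The following Python sketch models steps 3, 7, and 9 (planning a matching secondary LUN, grouping replication pairs under a protection policy, and deriving a recovery plan). All class, function, and field names here are illustrative assumptions, not the eReplication API.

```python
# Illustrative model of the array-based protection workflow; names are
# assumptions, not the actual eReplication or OceanStor interfaces.

class ReplicationPair:
    """A primary/secondary LUN remote replication pair (steps 1-4)."""
    def __init__(self, primary_lun, secondary_lun):
        # Step 3: the secondary LUN must have the same size as the primary.
        if primary_lun["size"] != secondary_lun["size"]:
            raise ValueError("secondary LUN must match the primary LUN size")
        self.primary = primary_lun
        self.secondary = secondary_lun

def build_protected_group(pairs, policy):
    """Step 7: group the replication pairs under one protection policy."""
    return {"pairs": pairs, "policy": policy}

def create_recovery_plan(group):
    """Step 9: derive a recovery plan from the protection policy."""
    return {
        "policy": group["policy"],
        # Services recover onto the secondary LUNs at the DR site.
        "targets": [p.secondary["name"] for p in group["pairs"]],
    }

pair = ReplicationPair({"name": "LUN_P", "size": 100},
                       {"name": "LUN_S", "size": 100})
group = build_protected_group([pair], policy="sync-every-30min")
plan = create_recovery_plan(group)
print(plan["targets"])  # ['LUN_S']
```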

Storage-based replication encompasses several DR technologies. Synchronous replication (SAN), asynchronous replication (SAN), and mirroring (VIS) combined with asynchronous replication (SAN) are described in this section. The principles of the remaining technologies, asynchronous replication (NAS), asynchronous replication (VIS), and asynchronous replication of integrated SAN and NAS, are similar to those of asynchronous replication (SAN) and are not described separately.

Synchronous Replication (SAN)
Synchronous replication (SAN) principles are described as follows:
  1. After a remote synchronous replication relationship is set up between a primary LUN at the production site and a secondary LUN at the DR site, an initial synchronization is implemented to replicate all the data from the primary LUN to the secondary LUN.
  2. If the primary LUN receives a write request from the host during the initial synchronization, the storage system checks the synchronization progress of the target data block.
    • If the data block has not been synchronized to the secondary LUN, data is written to the primary LUN and the primary LUN returns a completion response to the host. Later, a synchronization task is performed to synchronize the entire data block to the secondary LUN.
    • If the data block has been synchronized, the new data block must be written to both the primary and secondary LUNs.
    • If the data block is being synchronized, the new data block will not be written to the primary and secondary LUNs until the data block is completely copied.
  3. After the initial synchronization is completed, data on the primary LUN and on the secondary LUN are the same. If the primary LUN receives a write request from the production host later, the I/O will be processed as shown in Figure 1-8.
    Figure 1-8  I/O processing during synchronous replication
    1. The primary LUN receives a write request from a production host and sets the differential log value to Differential for the I/O-specific data block.
    2. The data of the write request is written to both the primary and secondary LUNs. When writing data to the secondary LUN, the production site sends the data to the DR site over a preset link.
    3. If data is successfully written to both the primary and secondary LUNs, the corresponding differential log value is changed to Non-differential. Otherwise, the value remains Differential, and the data block will be copied again in the next synchronization.
    4. The primary LUN returns a write completion acknowledgement to the production host.
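The four-step write path above can be sketched as follows. This is a minimal Python model of the differential log, assuming an in-memory dictionary stands in for each LUN; none of the names reflect the storage system's actual interfaces.

```python
# Minimal sketch of the synchronous-replication write path with a
# per-block differential log. All names are illustrative assumptions.

DIFFERENTIAL, NON_DIFFERENTIAL = "Differential", "Non-differential"

def sync_write(block_id, data, primary, secondary, diff_log, link_up=True):
    """Write one block to both LUNs, tracking state in the differential log."""
    diff_log[block_id] = DIFFERENTIAL          # step 1: mark before writing
    primary[block_id] = data                   # step 2: dual write begins
    if link_up:
        secondary[block_id] = data             # data sent to the DR site
        diff_log[block_id] = NON_DIFFERENTIAL  # step 3: both writes succeeded
    # Step 4: acknowledge the host either way; a block left Differential
    # is copied again in the next synchronization.
    return "ack"

primary, secondary, diff_log = {}, {}, {}
sync_write(7, b"new-data", primary, secondary, diff_log)
print(diff_log[7])  # Non-differential: both copies hold the data

sync_write(8, b"more", primary, secondary, diff_log, link_up=False)
print(diff_log[8])  # Differential: block 8 awaits the next sync
```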
Asynchronous Replication (SAN)
Asynchronous replication (SAN) principles are described as follows:
  1. After a remote asynchronous replication relationship is set up between a primary LUN at the production site and a secondary LUN at the DR site, an initial synchronization is implemented.
  2. If the primary LUN receives a write request from a production host during the initial synchronization, data is written only to the primary LUN.
  3. After the initial synchronization, the secondary LUN's data status changes to Synchronized or Consistent (if the host sends no write request during the initial synchronization, the status is Synchronized; otherwise, it is Consistent). Then, I/O requests are processed as shown in Figure 1-9.
    Figure 1-9  I/O processing during asynchronous replication
    1. The primary LUN receives a write request from a production host.
    2. After data is written to the primary LUN, a write completion response is immediately returned to the host.
    3. Incremental data is automatically synchronized from the primary LUN to the secondary LUN every synchronization period (If the synchronization type is Manual, users need to trigger the synchronization manually). Before the synchronization, the storage system creates snapshots for the primary and secondary LUNs.
      • The snapshot generated for the primary LUN ensures data consistency between the primary LUN and the secondary LUN during data synchronization.
      • The snapshot for the secondary LUN provides a backup for the data on the secondary LUN before synchronization, thereby preventing the data on the secondary LUN from becoming unusable upon an exception during synchronization.
    4. During the synchronization, data is read from the snapshot of the primary LUN and copied to the secondary LUN.
    5. After the synchronization is completed, the snapshots of the primary and secondary LUNs are removed, and later the next synchronization period starts.
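One synchronization period from the steps above can be sketched as follows, with in-memory dictionaries standing in for LUNs and their snapshots. The names are illustrative assumptions, not storage-system APIs.

```python
# Sketch of one asynchronous-replication period: snapshot both LUNs,
# copy incremental data from the primary snapshot, then clean up.

def run_sync_period(primary, secondary):
    """Replicate incremental changes from the primary to the secondary LUN."""
    # The primary snapshot freezes a consistent copy source; the secondary
    # snapshot is the rollback point if this period fails midway.
    primary_snap = dict(primary)
    secondary_snap = dict(secondary)
    try:
        for block, data in primary_snap.items():
            if secondary.get(block) != data:   # incremental data only
                secondary[block] = data
    except Exception:
        # On an exception during synchronization, restore the secondary
        # LUN from its pre-sync snapshot so its data stays usable.
        secondary.clear()
        secondary.update(secondary_snap)
        raise
    # On success both snapshots are removed (garbage-collected here),
    # and the next synchronization period starts later.

primary = {1: "a", 2: "b"}
secondary = {1: "a"}
run_sync_period(primary, secondary)
print(secondary)  # {1: 'a', 2: 'b'}
```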
Mirroring (VIS) and Asynchronous Replication (SAN)
Mirroring (VIS) and asynchronous replication (SAN) functions are available only if a VIS6600T is deployed at the production site. Mirroring (VIS) principles are shown in Figure 1-10.
Figure 1-10  Mirroring (VIS) principles
  1. The production host writes new data to the VIS6600T.
  2. The VIS6600T simultaneously writes data to the production volume and the mirrored volume.
  3. After data is successfully written to the production volume and the mirrored volume, the two volumes each return a write success response to the VIS6600T.
  4. The VIS6600T returns a write completion response to the host.
  5. A remote asynchronous replication relationship is set up between the primary LUN at the production site and the secondary LUN at the DR site. For asynchronous replication principles, refer to the heading Asynchronous Replication (SAN) in this section.
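The mirrored write path in steps 1 through 4 can be sketched as follows. The function and volume names are assumptions for illustration, not the VIS6600T interface.

```python
# Sketch of the VIS mirrored write: the host is acknowledged only after
# both the production and mirrored volumes confirm the write.

def mirrored_write(block_id, data, production_vol, mirror_vol):
    """Steps 2-4: dual-write, collect both responses, then ack the host."""
    responses = []
    for volume in (production_vol, mirror_vol):   # step 2: write to both
        volume[block_id] = data
        responses.append("success")               # step 3: per-volume response
    if all(r == "success" for r in responses):
        return "write-complete"                   # step 4: ack to the host

production, mirror = {}, {}
print(mirrored_write(3, "payload", production, mirror))  # write-complete
print(production[3] == mirror[3])  # True: both copies are identical
```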

Host-Based Replication DR Principles

Host replication uses the host I/O replication function to duplicate the virtual machine (VM) data stored in the production data center to the remote disaster recovery (DR) center.

Host-Based Replication Principles

Host replication has the following key replication principles:

  • Uses the host I/O replication function to replicate VM data from the production center to the DR center, thereby providing data replication and protection for the DR VMs.
  • Uses the DR management software to register the VMs that are on the DR storage system at the DR site with the virtualization platform and to automatically start VMs.
Protection Implemented by Host-Based Replication
Figure 1-11 shows the protection implemented by host-based replication.
Figure 1-11  Protection implemented by host-based replication

Protection is implemented as follows.

  1. Create the to-be-protected VMs and deploy the required software and data.
  2. Configure site information.

    Configure the mapping between inter-site resources.

  3. Configure the to-be-replicated VMs on the virtual replication gateway (VRG).
  4. Placeholder VMs will be automatically generated to back up DR VMs' data so that the data can be used for data recovery upon a DR VM fault.
  5. Create a VM protected group, select the to-be-protected VMs, and specify a protection policy.
  6. Synchronize the DR protection-related configurations.
  7. Create a recovery plan according to the protection policy and configure the VM startup sequence.
  8. Create VM snapshots for the placeholder VMs according to the protection policy.
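Steps 3 and 4 above can be sketched as follows. The VRG class and the configuration fields copied to the placeholder VM are assumptions for illustration, not the actual virtual replication gateway interface.

```python
# Illustrative model of registering a VM with a virtual replication
# gateway (VRG) and generating its placeholder VM at the DR site.

class VRG:
    """Virtual replication gateway tracking to-be-replicated VMs (step 3)."""
    def __init__(self):
        self.replicated = []

    def add_vm(self, vm):
        self.replicated.append(vm)
        # Step 4: a placeholder VM mirrors the source VM's configuration
        # so its data can be used for recovery upon a DR VM fault.
        return {"name": vm["name"] + "-placeholder",
                "cpu": vm["cpu"], "memory": vm["memory"]}

vrg = VRG()
placeholder = vrg.add_vm({"name": "erp01", "cpu": 4, "memory": 16})
print(placeholder["name"])  # erp01-placeholder
```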

HyperVault DR Principles

HyperVault works together with snapshot and remote replication to implement DR protection.

HyperVault can quickly back up data to the production center or DR center by snapshot or remote replication. If data in the production center becomes unavailable, snapshots can be used for recovery.
  • If the file system data in the production center is damaged, HyperVault first selects a local snapshot to implement local recovery. If no local snapshot is available, HyperVault selects a remote backup snapshot from the DR center to implement full recovery.
  • HyperVault implements second-level local recovery using the file system snapshot rollback technology, and remote full recovery using the file system remote replication technology, ensuring data reliability and accuracy.
Figure 1-12 shows the DR protection principles of HyperVault.
Figure 1-12  DR principles of HyperVault
  1. Based on the specified policy, a new snapshot of the protected file system is created in the local storage system and the old one is deleted at the same time.
  2. Differential data between the latest two snapshots are obtained and copied to the storage system in the DR center.
  3. A snapshot is created and saved in the storage system in the DR center.
  4. A remote snapshot at a specified time point is obtained from the storage system in the DR center.
  5. The snapshot is copied to the storage system in the production center. In this manner, data in the file system is recovered to the data at the specified time point.
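Steps 1 through 3 of the backup cycle above can be sketched as follows. The two-snapshot retention and all names are assumptions for illustration, not HyperVault's actual implementation.

```python
# Sketch of one HyperVault backup cycle: rotate a local snapshot, then
# ship only the delta between the two newest snapshots to the DR center.

def hypervault_backup(fs, local_snaps, dr_snaps, keep=2):
    """One backup cycle for a protected file system."""
    local_snaps.append(dict(fs))           # step 1: new local snapshot...
    if len(local_snaps) > keep:
        local_snaps.pop(0)                 # ...and the old one is deleted
    if len(local_snaps) >= 2:
        old, new = local_snaps[-2], local_snaps[-1]
        # Step 2: differential data between the latest two snapshots.
        delta = {k: v for k, v in new.items() if old.get(k) != v}
    else:
        delta = dict(local_snaps[-1])      # first cycle: full copy
    dr_snaps.append(delta)                 # step 3: snapshot saved at DR

fs = {"f1": "v1"}
local, dr = [], []
hypervault_backup(fs, local, dr)
fs["f2"] = "v2"
hypervault_backup(fs, local, dr)
print(dr[-1])  # only the differential data: {'f2': 'v2'}
```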
Updated: 2019-05-21

Document ID: EDOC1100075861