OceanStor BCManager 6.5.0 eReplication User Guide 02

DR (HyperMetro Expansion)

This section describes how to expand the HyperMetro DC Solution (VIS or SAN) into the Geo-Redundant Solution for DR.

Test Report Management of Recovery Plans

This section describes how to view the test reports of recovery plans to know the test details.

Context

NOTE:

HyperMetro (NAS) + asynchronous replication (NAS) does not support the test operation.

Procedure

  1. Perform either of the following operations to go to the Recovery Plan Test Report page.

    • On the menu bar, choose Monitor > Report > Recovery Plan Test Report.
    • On the home page, click More in the lower right corner of the Recovery Plan Test Duration (Latest 3 Months) or Recovery Plan Test Status (Latest 3 Months) area.

  2. Select the recovery plan whose details you want to view. Table 6-10 lists the report parameters.

    Table 6-10  Parameters of a recovery plan test report

    Parameter

    Description

    Name

    Indicates the name of the recovery plan.

    Production Site

    Indicates the site where the protected objects in the group reside (the protected site), on which enterprise service systems run.

    DR Site

    Indicates the site running the DR system. This site provides DR backup services for the protected objects. If a disaster occurs, it can be used to recover services.

    Protected Object Type

    Indicates the types of protected objects in a protected group of the recovery plan. eReplication enables you to create protected groups based on the following applications:
    • Oracle
    • IBM DB2
    • Microsoft SQL Server
    • Microsoft Exchange Server
    • Local File System
    • NAS File System
    • VMware VM
    • LUN
    • FusionSphere VM

    Avg. Test Time

    Indicates the average time required by tests of the recovery plan. For example, if two tests have been conducted on the recovery plan, the value is the average of the time required by the two tests.

    If reprotection has been performed on the recovery plan, only time values collected after the latest reprotection are used.

    Max. Test Time

    Indicates the maximum time required by a test of the recovery plan. For example, if two tests have been conducted on the recovery plan, the value is the longer of the time required by the two tests.

    If reprotection has been performed on the recovery plan, only time values collected after the latest reprotection are used.

    Min. Test Time

    Indicates the minimum time required by a test of the recovery plan. For example, if two tests have been conducted on the recovery plan, the value is the shorter of the time required by the two tests.

    If reprotection has been performed on the recovery plan, only time values collected after the latest reprotection are used.

    Execution Result

    Indicates the execution result of a recovery plan test.

  3. Optional: Click Export All to export the test reports and save them to a local computer.

Self-defining Startup Parameters for a Protected Object

If upper-layer services have special requirements on the startup sequence of databases or VMs, or on the recovery network of VMs, modify the startup parameters based on the existing network conditions before performing DR. This ensures that databases or VMs can start services after DR. This operation can be performed only in the DR management system at the DR site.

Prerequisites

  • You have logged in to the remote DR management system at the DR site.
  • The status of the recovery plan is Ready, Planned migration failed, Fault recovery failed, Reprotection completed, Clearing completed, Rollback failed, or Rollback completed.

Procedure

  1. On the menu bar, select Utilization > Data Recovery.
  2. When the protected objects are Oracle, IBM DB2, or Microsoft SQL Server databases, you can define the startup sequence of databases during DR.
    1. Select a recovery plan whose protected objects are databases and click the Protected Object tab.
    2. Click Startup Settings.

    3. If you choose to start databases during DR testing, fault recovery, or planned migration, you can set the database startup sequence.

      When multiple databases in a protected group depend on each other, or upper-layer services have requirements on the database startup sequence, you can set the startup sequence of databases. The default startup sequence number is 10. A smaller sequence number indicates a higher startup priority, so a database with a smaller number starts earlier. Databases with the same priority start in random order.

    4. Click OK.
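
The priority rule above can be sketched in a few lines of Python. This is an illustration only, not part of eReplication; the database names and sequence numbers are hypothetical.

```python
import random

def startup_order(databases, default_seq=10):
    """Return database names in DR startup order.

    A smaller sequence number means a higher priority and an earlier start
    (the default is 10); databases sharing a number start in random order.
    """
    items = list(databases.items())
    # Shuffle first so that ties resolve randomly, then sort stably
    # by sequence number (None falls back to the default of 10).
    random.shuffle(items)
    items.sort(key=lambda kv: default_seq if kv[1] is None else kv[1])
    return [name for name, _ in items]

# Hypothetical protected group: CRM must start before ERP and the rest.
print(startup_order({"CRM": 1, "ERP": 5, "HR": None, "BI": None}))
```
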
  3. When the protected objects are VMware VMs or FusionSphere VMs, you can define the startup sequence and DR network of VMs.
    1. Select a recovery plan whose protected objects are VMs and click the Protected Object tab.
    2. Click Startup Settings.



    3. Choose whether to start VMs during DR testing, fault recovery, or planned migration.

      • If you select Yes, you can set the VM startup sequence. The default startup sequence number is 10, and the number for a VM ranges from 1 to 512. A smaller sequence number indicates a higher startup priority, and VMs with higher priorities start earlier. VMs with the same priority start in random order. Template VMs cannot be started during recovery; by default, their startup settings cannot be enabled or modified.
      • If you select No, the VMs cannot be started during DR testing, fault recovery, or planned migration.

    4. If you want to modify the configurations for VM DR network, select a VM to be modified and click Test Network Settings in the Operation column of the VM. In the Test Network Settings dialog box that is displayed, modify the configurations.

      • FusionSphere VMs

        If you specify a recovery network when recovering a VM, the system automatically assigns this network to the VM. If you do not specify a recovery network, the VM keeps original network settings after being recovered.

        • In the FusionManager scenario:



        • In the FusionCompute scenario:



        NOTE:
        • Before configuring this parameter for a VM, ensure that the VM at the production site has been configured with specifications attributes, or the template for creating the VM has been configured with specifications attributes. Only a VM that meets these requirements supports IP address customization. For details, see sections Configuring Specifications Attribute Customization for a Linux VM and Configuring Specifications Attribute Customization for a Windows VM in the document based on the actual version of FusionSphere.
          • If the version of FusionSphere is FusionSphere V100R005C00, see FusionCompute V100R005C00 Virtual Machine Management Guide.

          • If the version of FusionSphere is FusionSphere V100R005C10, see FusionCompute V100R005C10 Virtual Machine Management Guide.

          • If the version of FusionSphere is FusionSphere V100R006C00, see FusionSphere Product Documentation (Server Virtualization, FusionCompute V100R006C00U1).

        • In the FusionCompute scenario, you can modify configurations of the VM network in a batch.
          1. Click Export IP Configuration to export the configuration template.
          2. Modify the configurations of the VM network. If Enable IPv4 Address Settings is set to N, IPv4 address settings are disabled. For details about how to modify the configuration information, see Note in the configuration template.
          3. Click Import IP Configuration to import the configuration template.


        • Currently, IPv4 and IPv6 address settings are supported.
        • Before configuring the DR network for a Linux VM, go to /etc/init.d and check whether the passwd_conf and setpasswd.sh files exist. If yes, delete the files and then select the network adapter whose recovery network you want to configure and configure an IPv4 address or elastic IP address.
        • Before configuring the DR network for a Windows VM, go to the root directory of drive C and check whether the setpass.vbs and passwd.inf files exist. If yes, delete the files and then select the network adapter whose recovery network you want to configure and configure an IPv4 address or elastic IP address.
      • VMware VMs

        When recovering a VM, if you have specified a recovery network, the system automatically assigns this network to the VM. If you do not specify a recovery network, the VM keeps original network settings after being recovered.

        NOTE:

        Only Windows and Linux VMware VMs can be configured with the network recovery function. For details about supported versions of the two operating systems, see the Guest OS Customization Support Matrix, which can be obtained from the VMware official website.

        • For VMs running Windows, first enter the administrator authentication information about the VM operating system, then select a network adapter whose recovery network you want to configure, and choose to manually specify or automatically obtain the IP address/DNS server addresses based on your requirements.

          If you choose the latter, the system automatically assigns the IP address/DNS server addresses of the production network to the DR network.



        • For VMs running Linux, first select a network adapter whose recovery network you want to configure, then choose to manually specify or automatically obtain the IP address/DNS server addresses based on your requirements, and finally configure the global DNS server address.

          If you choose the latter, the system automatically assigns the IP address/DNS server addresses of the production network to the DR network.
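
The batch network-configuration template described in the FusionCompute scenario above can also be edited programmatically before it is imported. The sketch below assumes a simple CSV layout; only the Enable IPv4 Address Settings column name comes from this guide, and the other column names and values are hypothetical.

```python
import csv
import io

def disable_ipv4(template_csv):
    """Set 'Enable IPv4 Address Settings' to N for every VM in an exported
    template. The CSV layout here is an assumption for illustration."""
    rows = list(csv.DictReader(io.StringIO(template_csv)))
    for row in rows:
        row["Enable IPv4 Address Settings"] = "N"
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Hypothetical exported template with two VMs.
template = (
    "VM Name,Enable IPv4 Address Settings,IPv4 Address\n"
    "vm01,Y,192.168.1.10\n"
    "vm02,Y,192.168.1.11\n"
)
print(disable_ipv4(template))
```
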



DR Testing

This section describes how to test data in the remote DR center. During DR tests, snapshot mappings in the DR center are used to verify usability of data or snapshots replicated to the DR center. The test process has no adverse impact on the production site. After the test is complete, the test data generated at the DR site needs to be cleared and resources need to be restored to the status before the test to facilitate future DR or planned migration.

Prerequisites

  • The production center communicates with the same-city and remote DR centers properly.
  • At least one recovery plan has been created on eReplication.
  • For details about requirements on storage licenses for the tested recovery plan, see Checking License Files.
  • For database applications, ensure that all check items related to DR environments are passed. For details about the check items, see Oracle, IBM DB2, and Microsoft SQL Server.
  • If the protected objects are FusionSphere VMs:
    • If storage array-based replication is used for DR, secondary LUNs of the remote replication pairs of the recovery plan have been mapped to VMs at the DR site.
    • If networks of the production site and DR site are not isolated, you can configure recovery IP addresses different from those of the production VMs for the test VMs on the Protected Object tab page to avoid IP address conflicts and ensure service continuity at the production site.
    • After adding or removing disks for a protected VM, promptly refresh the information about the VM and manually enable DR for the protected group where the VM resides.
  • VMware VMs:
    • The name of a protected VM cannot contain pound signs (#). If the VM name contains such signs, the VM configuration file cannot be modified during the recovery plan test.
    • If the networks of the ESXi clusters or hosts at the production site and DR site are not isolated, you can configure different recovery IP addresses for test VMs and production VMs on the Protected Object tab page of the recovery plan to avoid IP address conflicts and ensure service continuity at the production site.
  • If application data is automatically replicated by storage instead of being periodically replicated based on the schedule specified upon the protected group creation, the data replication must be stopped before you start the DR test to prevent possible test failures. You can use either of the following methods to stop storage-based replication on the device management software:
    • When the remote replication pair for the protected applications is in the synchronized state and the data is consistent, split the remote replication pair to temporarily stop data replication.
    • Change the remote replication policy to manual synchronization for the protected applications.
  • If the information about storage devices, hosts, or VMs is modified at the production or DR site, manually refresh the information. For details, see Refreshing Resource Information.
NOTE:

HyperMetro (NAS) + asynchronous replication (NAS) does not support DR testing.

Context

Data testing in the DR center is used to check data availability.
  • You are advised to configure application-based protection policies for Oracle, SQL Server, DB2, VMware vSphere VMs, and FusionSphere VMs to support one-click testing.
  • You are advised to configure LUN-based protection policies for other applications to enable automatic test configuration on the storage system. In this case, you need to start and test applications manually or by using self-defined scripts.

In the DR test, snapshots at the DR site can be mapped only to initiators.

Procedure

  1. Log in to Fibre Channel switches one by one, check the information about each Fibre Channel port, and calculate the BER. If the BER is larger than 0.1%, check links and rectify link faults.

    BER = Total number of errors/(In bytes + Out bytes) x 100%

    NOTE:
    A large BER may result in a remote replication failure in a specified window or an unexpected remote replication disconnection.
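
As a quick check of the formula above, the following sketch computes the BER from hypothetical port counters and applies the 0.1% threshold:

```python
def ber_percent(total_errors, in_bytes, out_bytes):
    """Bit error rate as a percentage, per the formula above:
    BER = total number of errors / (in bytes + out bytes) x 100%."""
    return total_errors / (in_bytes + out_bytes) * 100

# Hypothetical counters read from one Fibre Channel port.
ber = ber_percent(total_errors=120, in_bytes=40_000, out_bytes=60_000)
print(f"BER = {ber:.3f}%")  # 0.120%
if ber > 0.1:
    print("BER exceeds 0.1%: check links and rectify faults")
```
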


  2. Log in to the eReplication DR management server in the remote DR center.
  3. Test the recovery plan.
    1. On the menu bar, select Utilization > Data Recovery.
    2. Select the recovery plan to be tested, and click Test in the Operation area.
    3. Click the appropriate button in the Operation area based on the type of protected objects.

      NOTE:
      If Huawei UltraPath has been installed on the Linux-based DR host, ensure that I/O suspension time is not 0 and all virtual devices generated by UltraPath have corresponding physical devices. For details, see the OceanStor UltraPath for Linux xxx User Guide.
      • If the protected object type is Oracle, IBM DB2, or Microsoft SQL Server, perform the following:
        1. Select a DR site.
        2. Select a DR host or host group.
          NOTE:
          • If a T series V2 or later storage array is deployed at the DR site, the selected host that you want to recover can only belong to one host group on the storage array, and the host group can only belong to one mapping view. Moreover, the storage LUN used by protected applications and its corresponding remote replication secondary LUN must belong to one LUN group, and the LUN group must reside in the same mapping view as the host group. If the storage array version is 6.5.0, deselect the option for enabling hosts' in-band commands.
          • If the storage array is T series V2R2 or later, or 18000 series, automatic host adding and storage mapping are provided. Ensure that the storage is connected to hosts' initiators properly. In this manner, the system can automatically create hosts, host groups, LUN groups, and mapping views on the storage.
        3. Click Test.
        4. In the Warning dialog box that is displayed, read the content of the dialog box carefully and select I have read and understood the consequences associated with performing this operation.
      • If the type of protected objects is VMware VM, perform the following steps:
        1. Select a test cluster.

          VMs will be recovered in the test cluster. Select Test Site, Test vCenter, and Test Cluster.

          NOTE:

          Upon the first test network selection, you need to set the test cluster information.

        2. Select a test network.

          The default test network is the network for resource mapping. If you want to change the default network, plan or select another network based on site requirements.

          NOTE:

          If Production Resource and DR Resource are not paired, select Production Resource and DR Resource, and click Add to the mapping view to pair them.

        3. Select non-critical VMs.

          In the Available VMs list, select non-critical VMs to stop them to release computing resources.

        4. Click Test.
        5. In the Warning dialog box that is displayed, read the content of the dialog box carefully and select I have read and understood the consequences associated with performing this operation.
        6. Click OK.
      • If the type of protected objects is FusionSphere VM (non-OpenStack architecture), perform the following steps:
        1. Select a cluster to be tested.

          VMs will be recovered in the test cluster. Set Test Site.

          NOTE:

          Upon the first test network selection, you need to set the test cluster information.

        2. Select a test network.

          The default test network is the network for resource mapping. If you want to change the default network, plan or select another network based on site requirements.

        3. Select an available powered-on host.

          The available powered-on host can provide resources for VMs.

        4. Select non-critical VMs.

          In the Available VMs list, select non-critical VMs you want to stop to release computing resources.

        5. Click Test.
        6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and select I have read and understood the consequences associated with performing this operation.
        7. Click OK.

  4. After a test is complete, verify that applications are started in the remote DR center.

    Verify that applications are started and accessed successfully. If an application fails to be started or cannot execute read and write operations, contact Huawei technical support.

    • If the protection policies are based on applications, check whether the applications are started successfully and data can be read and written correctly.
    • If the protection policies are based on LUNs, log in to the application host in the DR center, scan for disks, and start applications. Then check whether the applications are started successfully and data can be read and written correctly.
      NOTE:
      You can use self-developed scripts to scan for disks, start applications, and test applications.
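
As an illustration of such a self-developed script on a Linux DR host, the sketch below triggers a SCSI bus rescan through the standard /sys interface and then starts an application service. The systemd unit name is hypothetical; adapt the script to your environment and run it as root.

```python
import glob
import subprocess

def rescan_scsi_hosts(dry_run=False):
    """Trigger a SCSI bus rescan by writing '- - -' to every
    /sys/class/scsi_host/host*/scan file (standard Linux interface)."""
    scan_files = sorted(glob.glob("/sys/class/scsi_host/host*/scan"))
    for path in scan_files:
        if not dry_run:
            with open(path, "w") as f:
                f.write("- - -\n")
    return scan_files

def start_application(service="oracle-db", dry_run=False):
    """Start the DR application via systemd (hypothetical unit name)."""
    cmd = ["systemctl", "start", service]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd

# Dry run: show what would be done without touching the system.
print(rescan_scsi_hosts(dry_run=True))
print(start_application(dry_run=True))
```
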

  5. The test data generated at the DR site needs to be cleared and resources need to be restored to the status before the test to facilitate future DR or planned migration.

    • If the protected group contains FusionSphere VMs and deleting a datastore fails during test data clearing, check whether non-DR VMs or disks exist on the datastore. If non-DR VMs or disks exist, migrate or delete them from the datastore.
    • If the information about storage devices, hosts, or VMs is modified at the production or DR site, manually refresh the information. For details, see Refreshing Resource Information.

    1. Select the recovery plan whose data needs to be cleared, and click More > Clear on the Operation list.
    2. Click OK.

Performing Planned Service Migration from a HyperMetro Data Center to the Remote DR Center

To implement a DR drill, migrate services from a HyperMetro data center to the remote DR center based on the recovery plan.

Prerequisites

  • Recovery plans have been created for protected groups.
  • Data has been tested and the data generated during tests has been cleared in the remote DR center.

Context

During a planned migration, services in a HyperMetro data center are migrated to the DR center by one click after the HyperMetro data center stops working. Then reprotection is performed for the services. Planned migration must be implemented if data or applications in the HyperMetro data center need to be migrated to the DR center for non-disaster reasons such as a power failure, an upgrade, or maintenance. After the HyperMetro data center recovers, the services must be switched back to it.
  • You are advised to configure application-based protection policies for Oracle, SQL Server, DB2, VMware vSphere VMs, FusionSphere VMs, and NAS File System to support one-click Planned Migration.
  • You are advised to configure LUN-based protection policies for other applications to enable automatic Planned Migration configuration on the storage system. In this case, you need to start and test the applications manually or by using customized scripts.
Figure 6-7 shows the state of data replication between storage arrays before the planned migration.
Figure 6-7  Data replication before the planned migration

Procedure

  1. Perform pre-migration configurations.

    • When the protected objects are databases, pre-migration configurations can be performed in the following two modes.
      • Method one: Manually stop applications and delete mapping views.
        1. Manually stop the service system and database applications and uninstall disks on the production host.
        2. Delete LUN mappings (to application hosts at the production sites) from the storage array in the production center.
          NOTE:
          • In the SQL Server cluster, the cluster must be in Maintenance mode before deleting the mapped LUNs.
          • In the asynchronous replication scenario of the HANA database, you need to log in to the database to create a snapshot for restoring the database before stopping the service system and database applications. For details, see Creating a HANA Snapshot File.
        3. Log in to eReplication at the DR site, click Resources, select the site and the storage device, and refresh the storage device status, to ensure that all LUN mappings are removed.
      • Method two: Edit planned migration procedures. Before the planned migration, applications on hosts are automatically stopped and mapping views are automatically deleted.
        1. Log in to eReplication at the DR site. On the menu bar, select Utilization > Data Recovery.
        2. Select the recovery plan and click the Procedure tab.
        3. Click Edit.
        4. From the drop-down list, select Planned Migration.
        5. Click Stop production service and click Start the step.
        6. Click Apply.
    • When the type of protected objects is VMware VMs, perform the following configurations:
      • If any VM names contain pound signs (#), change the names to remove them. Otherwise, VM configuration files cannot be modified during the planned migration.
      • By default, the VM IP address used for the planned migration is the same as that in the production center. You can configure a different one on the Protected Object tab page of the recovery plan. For details, see Defining Startup Parameters for a Protected Object.
      • When the HyperMetro (NAS) and asynchronous replication (NAS) DR solution is deployed, you need to create a share and configure permissions on DeviceManager of the storage array at the DR site. Permissions must be the same as those in the production center.
        NOTE:

        If you fail to create a share and configure permissions, the planned migration will fail.

  2. Perform planned migration.
    1. On the menu bar, select Utilization > Data Recovery.
    2. Select the recovery plan used to perform the planned migration and click More > Planned Migration on the Operation list.
    3. Perform the planned migration based on protected objects.

      NOTE:
      If Huawei UltraPath has been installed on the Linux-based DR host, ensure that I/O suspension time is not 0 and all virtual devices generated by UltraPath have corresponding physical devices. For details, see the OceanStor UltraPath for Linux xxx User Guide.
      • If the protected object type is Oracle, IBM DB2, or Microsoft SQL Server, perform the following operations:
        1. Select a DR site.
        2. Select a DR host or host group.
          NOTE:
          • If a T series V2 or later storage array is deployed at the DR site, the selected host that you want to recover can only belong to one host group on the storage array, and the host group can only belong to one mapping view. Moreover, the storage LUN used by protected applications and its corresponding remote replication secondary LUN must belong to one LUN group, and the LUN group must reside in the same mapping view as the host group. If the storage array version is 6.5.0, deselect the option for enabling hosts' in-band commands.
          • If the storage array is T series V2R2 or later, or 18000 series, automatic host adding and storage mapping are provided. Ensure that the storage is connected to hosts' initiators properly. In this manner, the system can automatically create hosts, host groups, LUN groups, and mapping views on the storage.
        3. Click Planned Migration.
        4. In the Warning dialog box that is displayed, read the content of the dialog box carefully and select I have read and understood the consequences associated with performing this operation. Click OK.
      • If the type of protected objects is VMware VM, perform the following steps:
        1. Select a recovery cluster.

          VMs will be recovered to the recovery cluster. Select DR Site, DR vCenter, and DR Cluster.

          NOTE:

          Upon the first network recovery, you need to set the cluster information.

        2. Select a recovery network.

          The recovery network is used to access recovered VMs.

          NOTE:

          If Production Resource and DR Resource are not paired, select Production Resource and DR Resource, and click Add to the mapping view to pair them.

        3. Set the Access Settings parameter.

          Set Logical Port IP Address to recover hosts in the cluster to access DR file systems over the logical port.

          NOTE:

          In scenarios where the HyperMetro (NAS) and asynchronous replication (NAS) DR solution is deployed, you need to set Access Settings.

        4. Stop non-critical VMs when executing recovery.

          In the Available VMs list, select non-critical VMs to stop them to release computing resources.

        5. Click Planned Migration.
      • If the protected object type is FusionSphere VM or NAS File System, read the content of the Warning dialog box that is displayed carefully and select I have read and understood the consequences associated with performing this operation. Then click OK.

    Stop services in the HyperMetro data center and migrate them to the remote DR center. Figure 6-8 shows the state of data replication between storage arrays after the migration.

    Figure 6-8  Data replication after the planned migration

  3. After the migration is complete, verify that applications are started and data is consistent in the DR center.

    After the planned migration is complete, check whether the applications and data are normal. If an application or data encounters an exception, contact Huawei technical support.

    • Note the following when checking the startup status of applications.
      • If the protection policies are based on applications, check whether the applications are started successfully and data can be read and written correctly.
      • If the protection policies are based on LUNs, log in to the application host in the DR center, scan for disks, and start applications. Then check whether the applications are started successfully and data can be read and written correctly.
        NOTE:
        You can use self-developed scripts to scan for disks, start applications, and test applications.
    • You can check data consistency by viewing the last entry of data written to the production and DR centers. If the last entry of data written to the production and DR centers is the same, the data consistency is ensured.
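
The consistency check described above can be expressed as a small sketch. The record values are hypothetical; in practice you would query the last written entry from each center's application or database.

```python
def data_consistent(production_records, dr_records):
    """Data is considered consistent when the last entry written at the
    production center matches the last entry at the DR center."""
    if not production_records or not dr_records:
        return False
    return production_records[-1] == dr_records[-1]

# Hypothetical last rows fetched from each center.
prod = [("order-1001", "2024-01-01"), ("order-1002", "2024-01-02")]
dr   = [("order-1001", "2024-01-01"), ("order-1002", "2024-01-02")]
print(data_consistent(prod, dr))  # True
```
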

  4. Delete data after migration.

    If storage array-based remote replication DR is used, snapshots are created automatically on the storage array at the DR site to back up DR data during the planned migration. If snapshots are not automatically deleted after the planned migration is complete, manually delete them to release storage space.

    NOTE:

    In HyperMetro (NAS) and asynchronous replication (NAS) scenarios, the operation of deleting data after migration is not supported.

    NOTE:

    A snapshot name is a string of 31 characters named in the following format: DRdata_LUNID_YYYYMMDDHHMMSS_BAK, where YYYYMMDDHHMMSS is the backup time and LUNID may be the snapshot ID (a number ranging from 1 to 65535). This naming format enables you to quickly find the snapshots that you want to delete from the storage array at the DR site.
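
The naming format above can be used to locate leftover DR backup snapshots programmatically. The sketch below is illustrative only; the snapshot names are hypothetical, and the deletion itself must be performed on the storage array.

```python
import re
from datetime import datetime

# Pattern derived from the documented format DRdata_LUNID_YYYYMMDDHHMMSS_BAK,
# where the ID is a number from 1 to 65535.
SNAPSHOT_RE = re.compile(r"^DRdata_(\d{1,5})_(\d{14})_BAK$")

def dr_backup_snapshots(names):
    """Return (name, lun_id, backup_time) tuples for snapshots that follow
    the documented naming format, so they can be located for deletion."""
    found = []
    for name in names:
        m = SNAPSHOT_RE.match(name)
        if m:
            ts = datetime.strptime(m.group(2), "%Y%m%d%H%M%S")
            found.append((name, int(m.group(1)), ts))
    return found

# Hypothetical snapshot list read from the DR storage array.
names = ["DRdata_17_20240102030405_BAK", "daily_backup_01"]
for snap in dr_backup_snapshots(names):
    print(snap)
```
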

  5. Check the environment before performing reprotection.

    • Databases:

      Ensure that underlying storage links, remote replication pairs, and consistency groups have been recovered.

    • VMware VMs:
      • Ensure that all VM names contain no number sign (#). Otherwise, the VM configuration file cannot be modified during the testing of the recovery plan.
      • Ensure that underlying storage links, remote replication pairs, and consistency groups have been recovered.
      • On the original production array, unmap the volumes corresponding to the applications to be restored.
    • FusionSphere VMs:
      • FusionCompute, FusionManager, storage devices, and VRGs at the production site are in the normal state, and the production site and DR site communicate with each other correctly.
      • In scenarios that use storage array-based replication, ensure that storage configurations between the original production and DR sites are correct.
        • Ensure that all underlying storage units corresponding to protected groups have remote replication pairs.
        • Ensure that all remote replication pairs have secondary LUNs that belong to storage devices at the original production site.
        • If there are multiple remote replication pairs, they must belong to the same consistency group.
      • After adding or removing disks for a protected VM, promptly refresh the information about the VM and manually enable DR for the protected group where the VM resides.
    • NAS file system:

      Ensure that underlying storage links and remote replication have been recovered.

  6. Perform reprotection to protect services migrated to the remote DR center.

    After the planned migration is complete, the service system is working in the remote DR center and protected groups become Invalid. You must perform reprotection to recover the replication relationship between the remote DR center and the HyperMetro data center and synchronize data from the remote DR center to the HyperMetro data center.

    To ensure the normal running of protected groups and recovery plans after reprotection, the system automatically clears protected and recovered configurations, including startup configurations of protection policies and recovery plans, self-defined execution scripts, and self-defined execution steps. In addition, re-configuration of protection and recovery policies is recommended to ensure the continuity of DR services.

    NOTE:
    In active-active (VIS) and asynchronous replication (SAN) scenarios, reprotection is not supported after services are migrated to the remote DR center.

    1. On the menu bar, select Utilization > Data Recovery.
    2. Select the recovery plan and click More > Reprotection on the Operation list.

      If the protected objects are VMware VMs and services are recovered through a planned migration, perform the following steps to clear redundant and incorrect data in the virtualization environment before and after the reprotection.

      1. If reprotection has been performed, use vSphere Client to log in to vCenter at site B. In the Storage list, click Rescan All for each storage device to ensure that no datastore remains on the ESXi hosts. Otherwise, skip this step.
      2. On eReplication, perform reprotection.
      3. Log in to the vCenter server at site A using vSphere Client. In the Storage list, right-click each storage device and select Rescan All from the drop-down list to ensure that no residual data exists on the ESXi hosts in the cluster where the migrated VMs reside.
      4. Return to eReplication, and update vCenter servers and storage resources of site A to obtain the latest VM environment information.

    3. Carefully read the content of the Confirm dialog box that is displayed and click OK to confirm the information.

      NOTE:

      If Save user configuration data is selected, self-defined protection policies and recovery settings, such as self-defined recovery steps, will be retained. Ensure that the configuration data has no adverse impact on service running after reprotection.
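The effect of the Save user configuration data option described in the note above can be modeled as a simple filter over the stored settings. The key names below (`startup_config`, `custom_scripts`, `custom_steps`) are hypothetical labels for the configuration items the guide says are cleared; this is a sketch of the behavior, not the product's actual data model.

```python
def reprotect_settings(settings: dict, save_user_configuration_data: bool) -> dict:
    """Model which settings survive reprotection, per the guide: startup
    configurations, self-defined scripts, and self-defined steps are cleared
    unless 'Save user configuration data' was selected. Keys are illustrative."""
    cleared_keys = {"startup_config", "custom_scripts", "custom_steps"}
    if save_user_configuration_data:
        return dict(settings)
    return {k: v for k, v in settings.items() if k not in cleared_keys}

settings = {"startup_config": "auto", "custom_scripts": ["pre.sh"],
            "replication_target": "site-C"}
print(reprotect_settings(settings, save_user_configuration_data=False))
# → {'replication_target': 'site-C'}
```

The example shows why the option matters: without it, any self-defined recovery steps must be re-created by hand after reprotection.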

Result

Figure 6-9 shows the state of data replication between storage arrays after the reprotection.
Figure 6-9  Data replication after the reprotection

Performing Planned Failback of Services in the Remote DR Center

After services are migrated from an HyperMetro data center to the remote DR center through planned migration, the services must be migrated back to the HyperMetro data center based on a DR testing plan.

Prerequisites

  • Planned migration and reprotection have been successfully performed for service systems according to a recovery plan.
  • The production hosts are working properly and the service system in the original production center is in the standby state.

Context

Figure 6-10 shows the state of data replication between storage arrays before failback.
Figure 6-10  Data replication before failback

Procedure

  • In active-active (VIS) and asynchronous replication (SAN) scenarios, reprotection using eReplication is not supported after services are migrated to the remote DR center. You need to manually migrate services back to the original active-active data centers as follows:
    1. Synchronize service data from storage arrays in the remote DR center to those in active data center B.

      1. Log in to a DR host, shut down the service system, and unmount disks.
      2. Log in to a storage array in the remote DR center and select consistency groups of asynchronous remote replication created for the service system.
      3. Synchronize the consistency groups of asynchronous remote replication.
      4. After synchronization, split the consistency groups of asynchronous remote replication.
      5. Deselect Enable write protection for secondary LUN of the consistency groups of asynchronous remote replication.
      6. Perform primary/secondary switchover for the consistency groups of asynchronous remote replication.

    2. Log in to the VIS in an active-active data center and delete original mirrored volumes.
    3. Log in to a storage array in active data center B and set up a mapping view between service LUNs and the VIS.
    4. Log in to the VIS in the active-active data center and restore the mapping relationship between the VIS and production hosts.

      1. Scan for logical disks and obtain service LUNs mapped from the storage array in active data center B.
      2. Use service LUNs mapped from the storage array in active data center B as the source LUNs and create mirrored volumes.
      3. In Mirror Management, add mirrored LUNs (the LUNs mapped by the storage array in data center A to the VIS) for the service LUNs.
      4. Map the mirrored volumes to the production hosts.

    5. Log in to a production host, mount disks, and start the service system.
    6. Perform service data read and write operations to test services.
    7. Synchronize the service data from the active-active data centers to the remote DR center.

      1. Log in to the storage array in active data center B.
      2. Select the consistency groups of asynchronous remote replication and enable write protection for the secondary LUNs.
      3. Synchronize the consistency groups of asynchronous remote replication.
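The manual consistency-group sequence above (synchronize, split, deselect secondary write protection, switch primary/secondary) is strictly ordered. The toy model below only enforces that ordering; the method names are illustrative and do not correspond to any real storage CLI or API.

```python
class AsyncReplicationCG:
    """Toy model of the consistency-group operations; enforces ordering only."""
    def __init__(self):
        self.synchronized = False
        self.split = False
        self.secondary_write_protected = True
        self.primary = "remote-DR"   # data currently flows remote DR -> DC B

    def synchronize(self):
        self.synchronized = True

    def split_group(self):
        if not self.synchronized:
            raise RuntimeError("split only after synchronization completes")
        self.split = True

    def disable_secondary_write_protection(self):
        if not self.split:
            raise RuntimeError("deselect write protection only after the split")
        self.secondary_write_protected = False

    def switch_primary_secondary(self):
        if self.secondary_write_protected:
            raise RuntimeError("swap roles only after write protection is off")
        self.primary = "DC-B"

cg = AsyncReplicationCG()
cg.synchronize()
cg.split_group()
cg.disable_secondary_write_protection()
cg.switch_primary_secondary()
print(cg.primary)   # → DC-B
```

Skipping any step raises an error in the model, mirroring why the guide insists on performing the operations in the listed order.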

  • In HyperMetro (SAN) and asynchronous replication (SAN) scenarios, you can use eReplication to migrate back services as follows:
    1. On eReplication in the remote DR center, select a recovery plan used to perform a failback, test the recovery plan, and clear data generated during the test.

      Before services are migrated back, perform a DR test to verify data availability and improve the service failback success rate. After the test, clear the test data to avoid failback failures caused by residual test data. For details, see DR Testing.

    2. On eReplication in the remote DR center, select the recovery plan to perform a planned migration.

      Migrate services back to the HyperMetro data centers. After the migration, check data and clear test data. For details, see 1 to 4 in Performing Planned Service Migration from an HyperMetro Data Center to the Remote DR Center.

      NOTE:
      • When the protected object type is Oracle, IBM DB2, or Microsoft SQL Server, select hosts or host groups in the original production center as recovery targets.
      • If the protected object type is VMware VM, select vCenter and clusters in the original production center as recovery targets.
      • You can select Synchronize HyperMetro to enable HyperMetro synchronization.
      Figure 6-11 shows the state of data replication between storage arrays after the migration.
      Figure 6-11  Data replication after the migration

    3. On eReplication at the remote DR center, select the recovery plan and perform reprotection.

      After the planned migration is complete, application systems are running in the HyperMetro data centers and the protected groups become Invalid. To ensure that services migrated back to the original HyperMetro data centers can be recovered at the remote DR center after a planned or unplanned event, perform reprotection to restore the replication relationship from the HyperMetro data centers to the remote DR center, so that data generated in the HyperMetro data centers is synchronized to the remote DR site and services remain protected. For details, see 5 to 6 in Performing Planned Service Migration from an HyperMetro Data Center to the Remote DR Center.

      Figure 6-12 shows the state of data replication between storage arrays after the reprotection.
      Figure 6-12  Data replication after the reprotection

    4. Ensure that communication among sites is normal.

      1. Log in to the storage array management software at active data center B, and ensure that the status of the HyperMetro pair between active data center B and active data center A is normal and that asynchronous replication links between active data center B and site C are normal.
      2. Log in to eReplication at active data center B, and ensure that the status of HyperMetro (SAN) and asynchronous replication (SAN) protected groups is normal and the topology has been restored to the original networking status.

  • In HyperMetro (NAS) and asynchronous replication (NAS) scenarios, you can use eReplication to migrate back services as follows:

    1. On eReplication in the remote DR center, select the recovery plan to perform a planned migration.

      Migrate services back to the HyperMetro data centers. After the migration, check data and clear test data. For details, see 1 to 4 in Performing Planned Service Migration from an HyperMetro Data Center to the Remote DR Center.

      Figure 6-13 shows the state of data replication between storage arrays after the migration.
      Figure 6-13  Data replication after the migration
    2. On eReplication at the remote DR center, select the recovery plan and perform reprotection.

      After the planned migration is complete, application systems are running in the HyperMetro data centers and the protected groups become Invalid. To ensure that services migrated back to the original HyperMetro data centers can be recovered at the remote DR center after a planned or unplanned event, perform reprotection to restore the replication relationship from the HyperMetro data centers to the remote DR center, so that data generated in the HyperMetro data centers is synchronized to the remote DR site and services remain protected. For details, see 5 to 6 in Performing Planned Service Migration from an HyperMetro Data Center to the Remote DR Center.

      Figure 6-14 shows the state of data replication between storage arrays after the reprotection.
      Figure 6-14  Data replication after the reprotection
    3. Ensure that communication among sites is normal.
      1. Log in to the storage array management software at active data center B, and ensure that the status of the HyperMetro pair between active data center B and active data center A is normal and that asynchronous replication links between active data center B and site C are normal.
      2. Log in to eReplication at active data center B, and ensure that the status of HyperMetro (NAS) and asynchronous replication (NAS) protected groups is normal and the topology has been restored to the original networking status.

Migrating Services to the Remote DR Center upon a Fault Occurring in HyperMetro Data Centers

If data or applications in HyperMetro data centers become unavailable due to disasters or faults, quickly recover them to the remote DR center and start services there.

Prerequisites

  • Application-based protection policies and recovery plans have been configured for database applications or VMware vSphere and FusionSphere deployed on physical machines.
  • LUN-based protection policies and recovery plans have been configured for other applications.
  • For protected groups that use asynchronous remote replication, there is at least one copy of intact service data in the remote DR center.

Context

When data or applications become unavailable due to disasters or faults in the HyperMetro data centers, you need to quickly migrate services to and start them in the remote DR center. Figure 6-15 shows the state of data replication between storage arrays before the migration.
Figure 6-15  Data replication before the migration

Procedure

  1. Log in to eReplication in the remote DR center.
  2. Perform pre-recovery configurations.

    • When the type of protected objects is FusionSphere VMs, perform the following configurations:
      • Without configuration, the VM IP address for fault recovery is the same as that in the production center. You can change it for the planned VM migration on the Protected Object tab page of the recovery policy. For details, see Defining Startup Parameters for a Protected Object.
      • After adding or removing disks for a protected VM, refresh the VM information and promptly re-enable DR manually for the protected group where the VM resides.
    • When the type of protected objects is VMware VMs, perform the following configurations:
      • If any VM names contain number signs (#), remove the number signs from the names. Otherwise, VM configuration files cannot be modified during fault recovery.
      • Without configuration, the VM IP address for fault recovery is the same as that in the production center. You can configure one for fault recovery on the Protected Object tab page of the recovery plan. For details, see Defining Startup Parameters for a Protected Object.
      • When the HyperMetro (NAS) and asynchronous replication (NAS) DR solution is deployed, you need to create a share and configure permissions on DeviceManager of the storage array at the DR site. Permissions must be the same as those in the production center.
        NOTE:

        If the share is not created and permissions are not configured, fault recovery cannot be performed.

  3. Perform a fault recovery.

    NOTE:
    If Huawei UltraPath has been installed on the Linux-based DR host, ensure that I/O suspension time is not 0 and all virtual devices generated by UltraPath have corresponding physical devices. For details, see the OceanStor UltraPath for Linux xxx User Guide.

    1. On the menu bar, select Utilization > Data Recovery.
    2. Select the recovery plan used for fault recovery and click More > Fault Recovery on the Operation list.
    3. Perform fault recovery based on the protected object type.

      • If the protected object type is Oracle, IBM DB2, or Microsoft SQL Server, perform the following:
        1. Select a DR site.
        2. Select a DR host or host group.
          NOTE:
          • If a T series V2 or later storage array is deployed at the DR site, the selected host that you want to recover can only belong to one host group on the storage array, and the host group can only belong to one mapping view. Moreover, the storage LUN used by protected applications and its corresponding remote replication secondary LUN must belong to one LUN group, and the LUN group must reside in the same mapping view as the host group. If the storage array version is 6.5.0, deselect the option for enabling hosts' in-band commands.
          • If the storage array is T series V2R2 or later, or 18000 series, automatic host adding and storage mapping are provided. Ensure that the storage is connected to hosts' initiators properly. In this manner, the system can automatically create hosts, host groups, LUN groups, and mapping views on the storage.
        3. Click Fault Recovery.
        4. In the Warning dialog box that is displayed, carefully read the content, select I have read and understood the consequences associated with performing this operation, and click OK.
      • If the type of protected objects is VMware VM, perform the following steps:
        1. Select a recovery cluster.

          VMs will be recovered to the cluster. Select DR Site, DR vCenter, and DR Cluster.

          NOTE:

          Upon the first network recovery, you need to set the cluster information.

        2. Select a recovery network.

          The network is used to access recovered VMs.

          NOTE:

          If Production Resource and DR Resource are not paired, select Production Resource and DR Resource, and click Add to the mapping view to pair them.

        3. Set the Access Settings parameter.

          Set Logical Port IP Address so that the recovered hosts in the cluster can access DR file systems over the logical port.

          NOTE:

          In scenarios where the HyperMetro (NAS) and asynchronous replication (NAS) DR solution is deployed, you need to set Access Settings.

        4. Stop non-critical VMs when executing recovery.

          In the Available VMs list, select non-critical VMs and stop them to release computing resources.

        5. Click Fault Recovery.
      • If the protected object type is FusionSphere VM or NAS file system, carefully read the content of the Warning dialog box that is displayed, select I have read and understood the consequences associated with performing this operation, and then click OK.

  4. In the DR center, check the application startup status.

    After the fault recovery is complete, check whether the applications and data are normal. If an application or data encounters an exception, contact Huawei technical support.

    • Note the following when checking the startup status of applications.
      • If the protection policies are based on applications, check whether the applications are started successfully and data can be read and written correctly.
      • If the protection policies are based on LUNs, you need to log in to the application host in the disaster recovery center, scan for disks, and start applications. Then check whether the applications are started successfully and data can be read and written correctly.
        NOTE:
        You can use self-developed scripts to scan for disks, start applications, and test applications.
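The note above suggests self-developed scripts for scanning disks and starting applications during LUN-based recovery. Below is a minimal, hedged sketch for a Linux DR host: it uses the standard `/sys/class/scsi_host/<host>/scan` rescan mechanism, and `start_app_cmd` is a placeholder you must replace with your own service startup command. The function only assembles the commands (a dry run); executing them is left to your script.

```python
import glob

def build_recovery_commands(scan_paths=None, start_app_cmd="systemctl start myapp"):
    """Assemble the shell commands a LUN-based recovery script would run on a
    Linux DR host: rescan every SCSI host via /sys/class/scsi_host/<host>/scan,
    then start the application. `start_app_cmd` is a placeholder."""
    if scan_paths is None:
        scan_paths = sorted(glob.glob("/sys/class/scsi_host/host*/scan"))
    commands = [f'echo "- - -" > {path}' for path in scan_paths]
    commands.append(start_app_cmd)
    return commands

# Dry run against two fake SCSI hosts -- nothing is executed here.
for cmd in build_recovery_commands(["/sys/class/scsi_host/host0/scan",
                                    "/sys/class/scsi_host/host1/scan"]):
    print(cmd)
```

After the rescan, the script would verify that the expected disks are visible (for example, by checking multipath devices) before starting and testing the application, matching the check described in this step.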

Result

Figure 6-16 shows the state of data replication between storage arrays after the migration.
Figure 6-16  Data replication after the migration

Migrating Data from the DR Center back to HyperMetro Data Centers

After HyperMetro data centers recover from a fault, services will be migrated back to them.

Prerequisites

  • Application-based protection policies and recovery plans have been configured for database applications or VMware vSphere and FusionSphere deployed on physical machines.
  • LUN-based protection policies and recovery plans have been configured for other applications.
  • For protected groups that use asynchronous remote replication, the DR center has at least one copy of intact service data.
NOTE:

In HyperMetro (NAS) and asynchronous replication (NAS) scenarios, the reprotection operation is not supported on eReplication after services are switched to the remote DR center.

Context

Services are migrated from HyperMetro data centers to the DR center due to a recoverable fault such as an unexpected power failure. After the HyperMetro data centers recover from the fault, you must synchronize data generated during the DR from the DR center to the HyperMetro data centers and migrate services back to the HyperMetro data centers. Figure 6-17 shows the state of data replication between storage arrays before failback.
Figure 6-17  Data replication before failback

Procedure

  • In active-active (VIS) and asynchronous replication (SAN) scenarios, reprotection using eReplication is not supported after services are migrated to the remote DR center. You need to manually migrate services back to the original active-active data centers as follows:
    1. Log in to storage arrays in the remote DR center and reconstruct remote replication and protected consistency groups between the DR storage and production storage.
    2. Perform 1 to 7 in Performing Planned Failback of Services in the Remote DR Center.
    3. Verify that services are running properly on production hosts.
  • In HyperMetro (SAN) and asynchronous replication (SAN) scenarios, you can use eReplication to migrate back services as follows:
    1. Check the environment before starting the reprotection.

      1. Log in to a service host in an HyperMetro data center and stop services.
        • For databases: Manually stop applications in the production center and unmount host disks. Then log in to the storage array management software of the HyperMetro data centers, and remove mappings of LUNs used by applications from hosts.
        • For VMware VMs: Use vSphere Client to connect to VMware vCenter in the production center. Power off protected VMs, remove them from the inventory, and unmount the datastores used by the VMs.
        • For FusionSphere VMs: Log in to FusionCompute, power off protected VMs, and unmount the datastores used by the VMs.
      2. Suspend HyperMetro pairs of storage arrays in the HyperMetro data centers.
        Figure 6-18 shows the state of data replication between storage arrays before the planned migration.
        Figure 6-18  Data replication before the planned migration

    2. Perform reprotection to protect services migrated to the remote DR center.

      After the planned migration is complete, the application system is working at the remote DR site and protected groups become Invalid. You must perform reprotection to recover the replication status and synchronize data from the remote DR center to the HyperMetro data centers.

      To ensure the normal running of protected groups and recovery plans after reprotection, the system automatically clears protection and recovery configurations, including startup configurations of protection policies and recovery plans, self-defined execution scripts, and self-defined execution steps. Reconfiguring the protection and recovery policies afterwards is recommended to ensure the continuity of DR services.

      1. On the menu bar, select Utilization > Data Recovery.
      2. Select the recovery plan and click More > Reprotection on the Operation list.
        If the protected objects are VMware VMs and services are recovered from site A to site B, perform the following steps to clear redundant and incorrect data in the virtualization environment before and after the reprotection:
        1. Log in to the vCenter server at site A using vSphere Client.
        2. Power off and unregister all VMs that were recovered to site B by the recovery plan.
        3. On the ESXi hosts in the cluster from which the VMs are migrated, unmount the datastores used by the VMs one by one.
        4. Detach the LUNs that were used by the unmounted datastores.
        5. Click Storage. All storage devices are displayed. In the row where each storage device resides, click Rescan All to ensure that no datastore exists on ESXi hosts.
        6. Return to eReplication, and refresh vCenter servers and storage resources on both sites to obtain the latest VM environment information.
        7. Perform reprotection.
      3. Carefully read the content in the Confirm dialog box that is displayed and click OK to confirm the information.
        NOTE:

        If Save user configuration data is selected, self-defined protection policies and recovery settings, such as self-defined recovery steps, will be retained. Ensure that the configuration data has no adverse impact on service running after reprotection.

      Figure 6-19 shows the state of data replication between storage arrays after the reprotection.
      Figure 6-19  Data replication after the reprotection

    3. Test recovery plans and clear data generated during tests.

      Before services are migrated back, perform a DR test to verify data availability and improve the service failback success rate. After the test, clear the test data to avoid failback failures caused by residual test data. For details, see DR Testing.

    4. Perform planned migration.

      Migrate services back to the HyperMetro data centers. After the migration, check data and clear test data. For details, see 1 to 4 in Performing Planned Service Migration from an HyperMetro Data Center to the Remote DR Center.

      NOTE:
      • When the protected object type is Oracle, IBM DB2, or Microsoft SQL Server, select hosts or host groups in the original production center as recovery targets.
      • If the protected object type is VMware VM, select vCenter and clusters in the original production center as recovery targets.
      • You can select Synchronize HyperMetro to enable HyperMetro synchronization.
      Figure 6-20 shows the state of data replication between storage arrays after the planned migration.
      Figure 6-20  Data replication after the planned migration

    5. Perform reprotection again.

      After the planned migration is complete, application systems are running in the HyperMetro data centers and the protected groups become Invalid. To ensure that services migrated back to the original HyperMetro data centers can be recovered at the remote DR center after a planned or unplanned event, perform reprotection to restore the replication relationship from the HyperMetro data centers to the remote DR center, so that data generated in the HyperMetro data centers is synchronized to the remote DR site and services remain protected. For details, see 5 to 6 in Performing Planned Service Migration from an HyperMetro Data Center to the Remote DR Center.

      Figure 6-21 shows the state of data replication between storage arrays after the reprotection.
      Figure 6-21  Data replication after the reprotection
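The failback flow above is strictly sequential: check the environment, reprotect, test and clear data, perform the planned migration, then reprotect again. A sketch of a runner that refuses to execute a stage before its predecessors complete is shown below; the stage names paraphrase the guide's steps, and the runner itself is illustrative rather than part of eReplication.

```python
FAILBACK_STAGES = ["check_environment", "reprotect", "test_and_clear_data",
                   "planned_migration", "reprotect_again"]

class FailbackRunner:
    """Refuses to run a stage until all earlier stages have completed."""
    def __init__(self):
        self.completed = []

    def run(self, stage):
        expected = FAILBACK_STAGES[len(self.completed)]
        if stage != expected:
            raise RuntimeError(f"expected {expected!r}, got {stage!r}")
        self.completed.append(stage)

runner = FailbackRunner()
for stage in FAILBACK_STAGES:
    runner.run(stage)
print(runner.completed == FAILBACK_STAGES)   # → True
```

Encoding the ordering this way makes the dependency explicit: for example, attempting the planned migration before the DR test would be rejected, which is exactly the failure mode the guide warns against.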

Updated: 2019-05-21

Document ID: EDOC1100075861