OceanStor BCManager 6.5.0 eReplication User Guide 02

Checking the DR Environment

Before configuring DR services, check the application and storage environments at the production and DR sites where the protected objects reside, to ensure that subsequent configuration jobs run properly.

Oracle

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where Oracle databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End
The following configuration items must be checked and configured on databases at both the production and DR ends.

When Oracle runs in an RHCS cluster at the production end and logical volumes are managed by CLVM, standalone deployment is not supported at the DR end.

NOTE:

If the host runs AIX and the rendev command has been used at the production end to rename a physical volume so that its name starts with /dev/hdisk, the device name at the production end may change after a DR test, planned migration, or fault recovery.

  1. When an Oracle database runs Windows, check whether the password of the Oracle database meets the following requirement:

    A password contains only letters, digits, tildes (~), underscores (_), hyphens (-), pounds (#), dollar signs ($), and asterisks (*).

  2. When the host where an Oracle database resides runs Linux, AIX, or HP-UX, the mount points of the file system used by the database cannot be nested. For example, mount points /testdb/ and /testdb/database1/ are nested.
  3. When a database environment is configured at the production end where hosts running Red Hat Linux are deployed and the hosts are connected to storage arrays through Fibre Channel links, check whether systool is installed on the production host and DR host.

    Run the rpm -qa | grep sysfsutil command to check whether the systool software has been installed on the host.

    • If the command output contains sysfsutils, for example, sysfsutils-2.0.0-6, the systool software has been installed on the host.
    • If the command output does not contain sysfsutils, the systool software is not installed on the host. In this case, obtain the sysfsutils software package from the installation CD-ROM of Red Hat Linux and install it.
      NOTE:

      RedHat Linux 6.3 for x86_64 and the sysfsutils software package sysfsutils-2.1.0-6.1.el6.x86_64.rpm are used as examples. The installation command is subject to the actual environment.
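
      The exact package name and installation command depend on the environment. A minimal sketch, assuming the example package above has been copied to the host:

      # Install the sysfsutils package copied from the Red Hat installation media
      rpm -ivh sysfsutils-2.1.0-6.1.el6.x86_64.rpm
      # Verify that the package and the systool command are now available
      rpm -qa | grep sysfsutils
      which systool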

  4. Check and configure environment variables of the databases.

    The eReplication Agent protects the data consistency of an Oracle database only after you have configured environment variables for the Oracle database. Configure the environment variables before installing the eReplication Agent; otherwise, restart the eReplication Agent after configuring them so that they take effect. For details about how to restart the eReplication Agent, see Starting the eReplication Agent.

    • To configure environment variables for an Oracle database in Windows, perform the following steps:
      1. Log in to the application server as an administrator.
      2. Check whether environment variables have been configured.
        Right-click My Computer and choose Properties from the shortcut menu. On the Advanced tab page, click Environment Variables. If environment variables have been configured, a dialog box similar to Figure 5-7 is displayed. Check whether the PATH variable contains the bin directory of the Oracle database.
        Figure 5-7  Environment variables
    • To configure environment variables for an Oracle database running a non-Windows operating system, perform the following steps:
      1. The following uses the Oracle 12gR1 database as an example to describe how to configure environment variables in Linux. Log in to the application server as user root.
      2. Run the su - oracle command to switch to user oracle.
      3. Run the echo $ORACLE_BASE command to ensure that ORACLE_BASE has been configured.
        oracle@linux:~> echo $ORACLE_BASE 
        /u01/app/oracle
      4. Run the echo $ORACLE_HOME command to ensure that ORACLE_HOME has been configured.
        oracle@linux:~> echo $ORACLE_HOME 
        /u01/app/oracle/product/12.1.0/dbhome_1
      5. Run the echo $PATH command to ensure that ORACLE_HOME/bin has been added to the PATH variable.
        oracle@linux:~> echo $PATH
        /u01/app/oracle/product/12.1.0/dbhome_1/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin
      6. Run the exit command to log user oracle out.
      7. Run the su - grid command to switch to user grid.
      8. Run the echo $ORACLE_HOME command to ensure that ORACLE_HOME has been configured.
        grid@linux:~> echo $ORACLE_HOME 
        /u01/app/12.1.0/grid/
        NOTE:
        In Oracle 11g R2 or later, ORACLE_HOME must be configured in the environment variables of user grid.
      9. Run the echo $PATH command to ensure that ORACLE_HOME/bin has been added to the PATH variable.
        grid@linux:~> echo $PATH
        /u01/app/12.1.0/grid//bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin
      10. Run the exit command to log user grid out.
      11. Run the su - rdadmin command to switch to user rdadmin.
      12. Run the echo $SHELL command to view the default shell type of user rdadmin.
        The shell type can be bash, ksh, or csh. Table 5-76 describes configuration files that need to be modified based on shell types of operating systems.
        NOTE:

        Before configuring environment variables, check the default shell type of user rdadmin on the eReplication Agent. If the shell type is bash, modify the .profile or .bash_profile file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory. This document uses shell type bash as an example.

        Table 5-76  Configuration files that need to be modified for different shell types of operating systems

        Linux
        • sh/bash (default, recommended): SUSE/Rocky uses .profile; Red Hat uses .bash_profile. Configuration approach: export name=value
        • csh: .cshrc. Configuration approach: setenv name value
        • ksh: .profile. Configuration approach: export name=value

        AIX/Solaris
        • sh/ksh (default, recommended): .profile. Configuration approach: export name=value
        • csh: .cshrc. Configuration approach: setenv name value
        • bash: .profile. Configuration approach: export name=value

        HP-UX
        • sh/POSIX Shell (default, recommended): .profile. Configuration approach: export name=value
        • csh: .cshrc. Configuration approach: setenv name value
        • ksh: .profile. Configuration approach: export name=value
        • bash: .profile. Configuration approach: export name=value

      13. Run the vi ~/.profile command to open the .profile file in the rdadmin home directory.
      14. Press i to enter the editing mode and edit the .profile file.
      15. Add the following content to the .profile file. Table 5-77 describes the related parameters.
        ORA_CRS_HOME=/opt/oracle/product/10g/cluster
        PATH=$PATH:/bin:/sbin:/usr/sbin 
        export ORA_CRS_HOME PATH 
        Table 5-77  System variables

        • ORA_CRS_HOME (value: /opt/oracle/product/10g/cluster): installation directory of the Oracle CRS software. This environment variable needs to be configured only when the RAC environment runs a version earlier than Oracle 11g R2 and Clusterware has been installed.
        • PATH (value: PATH=$PATH:/bin:/sbin:/usr/sbin): command directories of the operating system on the host where the Oracle database resides.

      16. After you have modified the file, press Esc and enter :wq! to save the changes and close the file.
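
        To confirm that the new variables are visible to user rdadmin, you can re-read the profile in the current shell and print them. A minimal sketch (the ORA_CRS_HOME value is the example used above):

        # Re-read the modified profile in the current shell
        . ~/.profile
        # Print the variables that were added to ~/.profile
        echo $ORA_CRS_HOME
        echo $PATH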

  5. If the host where the Oracle database resides runs Linux, check the disk mapping mode using udev.

    When udev is used to map disks, the restrictions on disk mapping modes are as follows:
    • Disks can only be mapped through disk partitioning.
    • When udev is used to map ASM disks, use udev disks to serve as the member disks of the ASM disk group.
    • The udev configuration rules at the production and DR ends must be in the 99-oracle-asmdevices.rules rule file (save path: /etc/udev/rules.d).
      Two mapping modes are available. The following uses SUSE Linux 10 (disks asm1 and asm2 involved) as an example to describe how to configure both mapping modes:
      • Renaming disks
        KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", NAME="asm1", OWNER="oracle", GROUP="dba", MODE="0660"
        or
        KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", NAME="asm2", OWNER="oracle", GROUP="dba", MODE="0660"
      • Creating disk links
        KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", SYMLINK+="asm1", OWNER="oracle", GROUP="dba", MODE="0660"
        or
        KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", SYMLINK+="asm2", OWNER="oracle", GROUP="dba", MODE="0660"

        The KERNEL parameter must be specified using wildcards (for example, KERNEL=="sd*1" or KERNEL=="sd?1") and cannot be a fixed device partition name (for example, KERNEL=="sda"). Otherwise, the udev configuration rules do not take effect.
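
        After editing 99-oracle-asmdevices.rules, reload the rules so that they apply to the devices. The commands below are only a sketch for distributions that provide udevadm; older releases such as SUSE Linux 10 use distribution-specific commands instead:

        # Reload the rule files under /etc/udev/rules.d, including 99-oracle-asmdevices.rules
        udevadm control --reload-rules
        # Re-trigger device events so that the renamed disks or symbolic links are created
        udevadm trigger
        # Check that the expected device or link exists
        ls -l /dev/asm1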

  6. Check the authentication mode of the database.

    During the creation of the Oracle protected group, you can specify different authentication modes for different protected objects and RAC hosts. Currently, database authentication and operating system authentication are supported. Table 5-78 describes the configuration details.

    Table 5-78  Configuration requirements for database authentication modes

    Database authentication
    • The authentication modes at the production and DR ends must be the same.
    • In a cluster, the authentication modes of all hosts must be the same.
    • In a protected group, the authentication modes of all protected objects must be the same.
    • During protected group creation, the authentication mode specified in eReplication must be the same as that used by the database.

    Operating system authentication
    • In an Oracle RAC cluster deployed on ASM, operating system authentication must be enabled so that the cluster at the DR end can be started properly during DR.
    • For Oracle single-instance databases deployed on ASM, the following requirements must be met:
      • For Unix-like operating systems, if the password files of the Oracle databases are stored in an ASM disk group, operating system authentication must be enabled. Alternatively, save the password files on the local file system. If neither method is adopted, the recovery plan corresponding to the Oracle protected group cannot be executed for testing, planned migration, or fault recovery.
      • For Windows, operating system authentication must be enabled. Otherwise, the recovery plan corresponding to the Oracle protected group cannot be executed for testing, planned migration, or fault recovery.

Production End
  1. Check the running mode of databases at the production end.

    The eReplication Agent can ensure consistency between Oracle databases only when the databases are running in archive mode. Perform the following steps to check the operation mode of a database.

    • To check the running mode of an Oracle database in Windows, perform the following steps:
      1. Run the sqlplus command to log in to the Oracle database. In the example provided here, the user name is sys, the password is abc, and the instance name is db01.

        The following shows the command format and output:

        C:\Documents and Settings\Administrator>sqlplus /nolog
        
        SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 3 18:09:00 2010
        
        Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
        
        SQL> conn sys/abc@db01 as sysdba
        Connected.
      2. Run the archive log list command to check whether the database is in archive mode.

        The following shows the command format and output:

        SQL> archive log list;
        Database log mode            No Archive Mode
        Automatic archival           Enabled
        Archive destination          G:\oracle10g\product\10.2.0\db_1\RDBMS
        Oldest online log sequence   7
        Current log sequence         9
        
        NOTE:
        • If Database log mode is Archive Mode, the database is in archive mode.
        • If Database log mode is No Archive Mode, follow instructions in related Oracle database documents to change the operation mode to archive.
    • To check the running mode of an Oracle database running a non-Windows operating system, perform the following steps:
      1. Run the sqlplus command to log in to the Oracle database. In the example provided here, the user name is sys, the password is oracle, and the instance name is verify.

        The following shows the command format and output:

        [oracle@rhcs218 ~]$ sqlplus /nolog
        
        SQL*Plus: Release 11.2.0.3.0 - Production on Fri Oct 23 10:30:34 2015
        
        Copyright (c) 1982, 2002, Oracle.  All rights reserved.
        
        SQL> conn sys/oracle@verify as sysdba
        Connected.
      2. Run the archive log list command to check whether the database is in archive mode.

        The following shows the command format and output:

        SQL> archive log list;
        Database log mode             No Archive Mode
        Automatic archival            Enabled
        Archive destination           /oracle/archive
        Oldest online log sequence    7793
        Next log sequence to archive  7795  
        Current log sequence          7795
        
        NOTE:
        • If Database log mode is Archive Mode, the database is in archive mode.
        • If Database log mode is No Archive Mode, follow instructions in related Oracle database documents to change the operation mode to archive.
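
        If the database is not in archive mode, it can typically be switched by restarting the instance in mount state and enabling archiving, as in the following SQL*Plus sketch (follow the Oracle documentation for your version and environment; the commands below are illustrative only):

        SQL> shutdown immediate;
        SQL> startup mount;
        SQL> alter database archivelog;
        SQL> alter database open;
        SQL> archive log list;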

  2. Check the database files at the production end.

    Check that the data files, log files, and control files of the database are stored on LUNs. If these files are not stored on LUNs, DR cannot be performed for the database. Temporary tablespaces can be stored either on the same LUNs as the data files, log files, and control files, or on separate LUNs.
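
    To list where the data files, log files, and control files of the database actually reside before checking the underlying LUNs, you can query the dynamic performance views, for example:

    SQL> select name from v$datafile;
    SQL> select member from v$logfile;
    SQL> select name from v$controlfile;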

DR End
  1. Check the database environment on the DR end.

    NOTE:

    If the database environments at the DR and production ends are different, modify the database environment at the DR end to be the same as that at the production end.

    Table 5-79 lists the environmental requirements of databases at the DR end.

    Table 5-79  Environmental requirements of databases at the DR end

    Oracle installation
    • The hosts where the Oracle databases reside at the DR and production ends must run the same operating system and operating system version.
    • The Oracle database versions at the DR and production ends must be the same.
    • The installation location of the Oracle database at the DR end must be the same as that at the production end.

    Database
    • The database authentication modes at the DR and production ends must be the same.
    • The names, user names, and passwords of the Oracle databases at the DR and production ends must be the same.

    Non-Windows operating system (file system)
    • The user IDs and group IDs of the Oracle users at the DR and production ends must be the same.

    Oracle RAC (Oracle 11g R2)
    • The SCAN name of the SCAN IP at the DR end must be the same as that at the production end.

  2. Set up the database DR environment at the DR end.

    The following describes two methods to set up the database DR environment after the Oracle database software has been installed on the DR hosts.

    Table 5-80 shows database directories in Windows and UNIX-like operating systems.
    Table 5-80  Oracle database directories in different operating systems of hosts

    Operating System of the Host

    Root Directory of the Database

    Installation Directory of the Database

    Query Method

    Windows

    %ORACLE_BASE%

    %ORACLE_HOME%

    • Run the echo %ORACLE_BASE% command to query the root directory of the Oracle database.
    • Run the echo %ORACLE_HOME% command to query the installation directory of the Oracle database.

    UNIX-like operating system

    $ORACLE_BASE

    $ORACLE_HOME

    • Run the echo $ORACLE_BASE command to query the root directory of the Oracle database.
    • Run the echo $ORACLE_HOME command to query the installation directory of the Oracle database.
    • Method 1: Manually replicate necessary files from the production end to the DR end (suppose the database instance name is db001).
      NOTE:

      To ensure the file permissions on the production and DR ends are the same, replicate necessary files from the Oracle user's directory on the production center to that at the DR end.

      1. Replicate db001-related files under the Oracle user-specified directory on the production center to the corresponding directory on the DR center.
        • In Windows, the specific directory is %ORACLE_HOME%\database.
        • In a Unix-like operating system, the specific directory is $ORACLE_HOME/dbs.
      2. Under the specific directory at the DR end, create folder db001. Under the created folder, create the directory structure according to that on the production end.
        • In Windows, the specific directory is %ORACLE_BASE%\ADMIN.
          • For Oracle 10g, create the alert_%SIDNAME%.ora file under the %ORACLE_BASE%\ADMIN\%DBName%\bdump directory.
          • For Oracle 11g or later, create level-2 directories under the %ORACLE_BASE%\diag\rdbms according to the directory structure on the production end, for example, %ORACLE_BASE%\diag\rdbms\level-1 directory\level-2 directory.
        • In a Unix-like operating system, the specific directory is $ORACLE_BASE/admin.
          • For Oracle 10g, create the alert_%SIDNAME%.ora file under the $ORACLE_BASE/admin/$dbName/bdump directory.
          • For Oracle 11g or later, create level-2 directories under the $ORACLE_BASE/diag/rdbms directory according to the directory structure on the production end, for example, $ORACLE_BASE/diag/rdbms/level-1 directory/level-2 directory.
      3. If databases at the DR end are clustered, you need to run the following two commands to register the databases and instances.
        • srvctl add database -d {DATABASENAME} -o {ORACLE_HOME}
        • srvctl add instance -d {DATABASENAME} -i {INSTANCENAME} -n {HOSTNAME}

        DATABASENAME is the database name, ORACLE_HOME is the installation directory of the database, INSTANCENAME is the database instance name, and HOSTNAME is the name of the host where the database resides.
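
        For example, for a hypothetical database db001 with instance db0011 on host drhost01 and the ORACLE_HOME shown earlier, the registration commands might look as follows (all names are illustrative only):

        srvctl add database -d db001 -o /u01/app/oracle/product/12.1.0/dbhome_1
        srvctl add instance -d db001 -i db0011 -n drhost01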

      4. You need to configure the pfile file in the following scenarios. For details about how to configure the file, see Oracle Configuration (Configuring the pfile File).
        • If an Oracle RAC runs at the production end but only a standalone Oracle database runs at the DR end, copy the pfile of the production Oracle database after you have created the databases at the DR end, replace the cluster configuration information in the copy with the deployment information of the standalone database at the DR end, and then use the modified pfile to generate an spfile stored on a local disk at the DR end. (A SQL*Plus example is provided after Method 2.)
        • If the spfile save paths at the production and DR ends are the same and the storage devices holding both spfile files are under DR protection (for example, the spfile and the data files of the database are saved in the same ASM disk group), data synchronization will overwrite the spfile at the DR end with the spfile from the production end, and some data at the DR end will be lost. Therefore, you are advised to save the spfile at the DR end to a file system on the DR host.
        NOTE:
        • In Windows, the spfile file is under the %ORACLE_HOME%\database directory.
        • In a Unix-like operating system, the spfile file is under the $ORACLE_HOME/dbs directory.
    • Method 2: Create databases to generate files necessary for DR at the DR end, by performing the following steps at the DR end:
      1. Map empty LUNs to DR hosts and scan for mapped LUNs on DR hosts.
      2. If the databases are installed on a file system at the production end, create a file system the same as the one at the production end and mount it. If the production databases use ASM, create an ASM disk group the same as that at the production end.
      3. Create Oracle databases the same as those at the production end.
      4. After the databases are created, shut them down and remove the mapping relationships of the file systems or ASM.
        • If the databases use ASM, shut down the ASM instances after you shut down the databases.
        • If the DR end is in an RAC environment, run a command to shut down the cluster, including the database instances and ASM instances. Then check whether the database instances and ASM instances on each node in the cluster have been shut down. The binding of raw devices must be canceled.
      5. Remove LUN mappings from the DR end.
      6. You need to configure the pfile file in the following scenarios. For details about how to configure the file, see Oracle Configuration (Configuring the pfile File).
        • If an Oracle RAC runs at the production end but only a standalone Oracle database runs at the DR end, copy the pfile of the production Oracle database after you have created the databases at the DR end, replace the cluster configuration information in the copy with the deployment information of the standalone database at the DR end, and then use the modified pfile to generate an spfile stored on a local disk at the DR end.
        • If the spfile save paths at the production and DR ends are the same and the storage devices holding both spfile files are under DR protection (for example, the spfile and the data files of the database are saved in the same ASM disk group), data synchronization will overwrite the spfile at the DR end with the spfile from the production end, and some data at the DR end will be lost. Therefore, you are advised to save the spfile at the DR end to a file system on the DR host.
        NOTE:
        • In Windows, the spfile file is under the %ORACLE_HOME%\database directory.
        • In a Unix-like operating system, the spfile file is under the $ORACLE_HOME/dbs directory.
      7. After creating a database at the DR end, check the names of the spfile files at both the production and DR ends.

        If the names are not the same, change the name of the spfile file at the DR end to that at the production end or save the spfile file at the DR end to a local disk, in order to prevent the database at the DR end from being overwritten after DR. For details, see Oracle Configuration (Configuring the pfile File).
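
        A minimal SQL*Plus sketch of generating an spfile from an adjusted pfile, as described in both methods above (the file paths are hypothetical; see Oracle Configuration (Configuring the pfile File) for the authoritative procedure):

        SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfiledb001.ora' from pfile='/home/oracle/initdb001_dr.ora';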

Storage End

In the active-active (VIS) and asynchronous replication (SAN) DR scenario, if multiple groups of archive logs are configured for a database, at least one group of archive logs must use storage resources configured with remote replication or a remote replication consistency group, and that consistency group must be different from the one used by the data files, control files, and online log files.

In the HyperMetro (SAN) and asynchronous replication (SAN) DR scenario, the archive logs and database files must be configured in a HyperMetro consistency group. If multiple groups of archive logs are configured for a database, at least one group of archive logs must use storage resources configured with remote replication or a remote replication consistency group, and that consistency group must be different from the one used by the data files, control files, and online log files.

For the HyperMetro (SAN) + asynchronous replication (SAN) DR technology with DR Star, the archive logs and database files must be configured in one HyperMetro consistency group and one DR Star. If multiple groups of archive logs are configured for the database, remote replication or remote replication consistency groups must be configured for the storage resources used by at least one group of archive logs, and those storage resources cannot be in the same remote replication consistency group as the data files, control files, and online log files.

In the HyperMetro (SAN) and synchronous replication (SAN) DR scenario, the archive logs and database files must be configured with a HyperMetro consistency group.

Table 5-81 describes the storage requirements of disaster recovery solutions.
Table 5-81  Requirements of disaster recovery solutions on storage

Technology: parallel replication (synchronous and asynchronous); cascading replication (synchronous and asynchronous); parallel replication (asynchronous and asynchronous); cascading replication (asynchronous and asynchronous)

Restrictions and requirements:
  • All requirements of geo-redundant mode disaster recovery are met for the corresponding technology: parallel replication (synchronous and asynchronous), cascading replication (synchronous and asynchronous), parallel replication (asynchronous and asynchronous), or cascading replication (asynchronous and asynchronous).
  • Before using remote replication to protect data, create remote replications and, where required, consistency groups on the storage arrays:
    • If the protected data resides on a storage array, ensure that remote replications have been created for all LUNs and that the remote replications are in the normal state.
    • If the protected data uses multiple LUNs, ensure that the remote replications of all LUNs reside in the same consistency group. If only one LUN is used, the remote replication does not need to be added to a consistency group.
    • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping view are created on the storage device automatically.
    • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
    • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, which belongs to only one mapping view. The remote replication secondary LUN corresponding to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.
    • If T series V200R001C00 is deployed, do not select Enable Inband Command when modifying the properties of a mapping view after it is created.
    • In the DR Star network scenario, LUNs, pairs, or consistency groups for which remote replication relationships have been established must be added to DR Star.
  • After setting up the application environment on the DR end, delete the mappings between the host and the storage LUNs or volumes that store the data files, control files, and log files of the application.
  • Check whether all file systems used by the database to be recovered are unmounted from the DR host.
  • On DeviceManager, check that the secondary LUNs of the remote replications used by the DR databases are not mapped to any host, host group, or mapping view.
  • For an AIX DR host, check the following items:
    • Check whether all logical volumes used by the database to be recovered, and the volume groups to which they belong, have been deleted from the DR host.
    • Check whether all physical volumes used by the database to be recovered, and the devices (hdiskx) corresponding to them, have been deleted from the DR host.

Technology: active-active (VIS) and asynchronous replication (SAN)

Restrictions and requirements:
  • All requirements of active-active disaster recovery are met.
  • Before using remote replication to protect data, create remote replications and, where required, consistency groups on the storage arrays:
    • If the protected data resides on a storage array, ensure that remote replications have been created for all LUNs and that the remote replications are in the normal state.
    • If the protected data uses multiple LUNs, ensure that the remote replications of all LUNs reside in the same consistency group. If only one LUN is used, the remote replication does not need to be added to a consistency group.
    • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping view are created on the storage device automatically.
    • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
    • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, which belongs to only one mapping view. The remote replication secondary LUN corresponding to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.
    • If T series V200R001C00 is deployed, do not select Enable Inband Command when modifying the properties of a mapping view after it is created.
  • After setting up the application environment on the DR end, delete the mappings between the host and the storage LUNs or volumes that store the data files, control files, and log files of the application.
  • Check whether all file systems used by the database to be recovered are unmounted from the DR host.
  • On DeviceManager, check that the secondary LUNs of the remote replications used by the DR databases are not mapped to any host, host group, or mapping view.
  • For an AIX DR host, check the following items:
    • Check whether all logical volumes used by the database to be recovered, and the volume groups to which they belong, have been deleted from the DR host.
    • Check whether all physical volumes used by the database to be recovered, and the devices (hdiskx) corresponding to them, have been deleted from the DR host.
Technology: HyperMetro (SAN) and asynchronous replication (SAN); HyperMetro (SAN) and synchronous replication (SAN)

Restrictions and requirements:
  • All requirements of HyperMetro disaster recovery are met.
  • Before using remote replication to protect data, create remote replications and, where required, consistency groups on the storage arrays:
    • If the protected data resides on a storage array, ensure that remote replications have been created for all LUNs and that the remote replications are in the normal state.
    • If the protected data uses multiple LUNs, ensure that the remote replications of all LUNs reside in the same consistency group. If only one LUN is used, the remote replication does not need to be added to a consistency group.
    • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping view are created on the storage device automatically.
    • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
    • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, which belongs to only one mapping view. The remote replication secondary LUN corresponding to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.
    • If T series V200R001C00 is deployed, do not select Enable Inband Command when modifying the properties of a mapping view after it is created.
    • In the DR Star network scenario, LUNs, pairs, or consistency groups for which remote replication relationships have been established must be added to DR Star.
  • After setting up the application environment on the DR end, delete the mappings between the host and the storage LUNs or volumes that store the data files, control files, and log files of the application.
  • Check whether all file systems used by the database to be recovered are unmounted from the DR host.
  • On DeviceManager, check that the secondary LUNs of the remote replications used by the DR databases are not mapped to any host, host group, or mapping view.
  • For an AIX DR host, check the following items:
    • Check whether all logical volumes used by the database to be recovered, and the volume groups to which they belong, have been deleted from the DR host.
    • Check whether all physical volumes used by the database to be recovered, and the devices (hdiskx) corresponding to them, have been deleted from the DR host.

IBM DB2

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where DB2 databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End

Check the database environments of the production end and the DR end where DB2 databases reside.

When DB2 runs in an RHCS cluster at the production end and logical volumes are managed by CLVM, standalone deployment is not supported at the DR end.

  1. When a database environment is configured at the production end where hosts running Linux, AIX or HP-UX are deployed, the mount points of the file system used by the database cannot be nested. For example, mount points /testdb/ and /testdb/database1/ are nested.
  2. Before configuring a DB2 database, you must obtain the instance name, user name, and password of the DB2 database. The instance name is the DB2 database instance name. The user name is that of the system user of the DB2 database instance and is usually the same as the instance name. The password is that of the system user. The DB2 database password cannot contain the special characters !;"'(),·=\'. Otherwise, database authentication fails when a protected group is created.
  3. For a DB2 PowerHA cluster, ensure that the host names of the hosts where the production and DR DB2 databases reside are the same as the node names specified in the PowerHA cluster. Otherwise, protected groups cannot be created.
  4. In AIX, the logical volumes used by the file systems of DB2 databases and those used by the raw devices of tablespaces cannot belong to the same volume group.
  5. Check and configure environment variables.

    The eReplication Agent starts to protect the data consistency of DB2 databases only after you have configured environment variables for the DB2 databases at the production end. You are advised to configure environment variables before installing the eReplication Agent. If you have installed the eReplication Agent, restart it after configuring the environment variables. For details, see Starting the eReplication Agent.

    NOTE:
    Before configuring environment variables, check the default shell type of user rdadmin on eReplication Agent.
    • In AIX, if the shell type is bash, modify the .profile file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory.
    • In Linux, if the shell type is bash, modify the .bashrc file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory.
    • In HP-UX, if the shell type is bash, modify the .profile file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory.

    This document uses shell type bash in AIX to modify the .profile file as an example.

    1. Use PuTTY to log in as root to the application server where the eReplication Agent is installed.
    2. Run the TMOUT=0 command to prevent PuTTY from exiting due to session timeout.
      NOTE:

      After you run this command, the system remains logged in even when no operations are performed, which poses security risks. For security purposes, you are advised to run the exit command to log out after completing your operations.

    3. Run the su - rdadmin command to switch to user rdadmin.
    4. Run the vi ~/.profile command to open the .profile file under the rdadmin directory.
    5. Press i to go to the edit mode and edit the .profile file.
    6. Add the following variables to the .profile file. Table 5-82 describes related environment variables.
      DB2_HOME=/home/db2inst1/sqllib
      PATH=$PATH:$DB2_HOME/bin:/usr/sbin:/sbin
      DB2INSTANCE=db2inst1
      INSTHOME=/home/db2inst1
      export DB2_HOME PATH DB2INSTANCE INSTHOME
      
      NOTE:

      In Linux, you need to add the VCS script path to the PATH variable, for example, PATH=$PATH:$DB2_HOME/bin:/usr/sbin:/opt/VRTS/bin:/sbin.

      Table 5-82  System Variables

      • DB2_HOME: installation directory of a DB2 instance, for example, DB2_HOME=/home/db2inst1/sqllib
      • PATH: the bin folder under the home directory of the DB2 instance user, for example, PATH=$PATH:$DB2_HOME/bin:/usr/sbin:/sbin
      • DB2INSTANCE: name of a DB2 instance, for example, DB2INSTANCE=db2inst1
      • INSTHOME: home directory of the DB2 instance user, for example, INSTHOME=/home/db2inst1

    7. After you have successfully modified the .profile file, press Esc and enter :wq! to save the changes and close the file.
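
      To confirm that the variables are visible in the rdadmin shell, re-read the profile and print them, for example:

      # Re-read the modified profile in the current shell
      . ~/.profile
      # Print the variables that were added
      echo $DB2_HOME $DB2INSTANCE $INSTHOME
      echo $PATH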

Production End
  1. Check the database configuration at the production end.

    eReplication of the current version supports DR for DB2 databases on AIX, Linux and HP-UX. To enable eReplication to perform DR for DB2 databases, the DB2 databases must meet the following requirements:

    • Installation directories of DB2 instances must reside on local disks or independent storage devices and cannot reside on the same devices as databases for which DR is intended.
    • Data and log files in the DB2 databases for which DR and protection are intended must be stored on LUNs provided by Huawei storage devices.
    • If remote replication is performed on a DB2 database, LUNs used by the database must be in the same consistency group. If the database uses only one LUN, the remote replication of the LUN does not need to be added to a consistency group.
    • If the DR Star network is used, add the storage used for creating the database to DR Star when the preceding requirements are met.

DR End
  1. Check the database environment at the DR end.

    NOTE:

    If the database environments at the DR and production ends are different, modify the database environment at the DR end to be the same as that at the production end.

    Table 5-83 lists the configuration requirements.

    Table 5-83  Configuration Requirement of the Database on the DR End

    Installation
    • The operating system and operating system version of the hosts running the DB2 databases are the same as those at the production end.
    • The DB2 database versions are the same as those at the production end.

    Configuration
    • The cluster where the DB2 databases reside at the DR end has the same configuration as that at the production end, including the same resource configuration and dependencies among resources.
    • The environment variable configurations are the same as those at the production end.

    Instance
    • The instance names, user names, and passwords of the databases are the same as those at the production end.
    • The user groups to which the DB2 database instance users belong are the same as those at the production end.
    • The installation directories of DB2 instances at the DR end reside on local disks or independent storage devices.
    • The DB2 databases to be recovered are created under the DB2 instances at the DR end, and the created databases meet the following requirements:
      • The database names are the same at the DR and production ends.
      • The storage paths (file systems) or device names (raw devices) used by the databases are the same at the DR and production ends.
      • The names of logical volumes used by the databases and the volume groups to which the logical volumes belong are the same at the DR and production ends.
      • The tablespaces, log names, and storage configurations used by the databases are the same at the DR and production ends. For example, if tablespace tp1 in production database DB1 uses /db2data/db1 (specified when the database is created) and tablespace tp2 uses /dev/rtdd1, the same tablespaces tp1 and tp2 must be created under /db2data/db1 and /dev/rtdd1 in DR database DB1, respectively.

  2. Close databases on the DR end.

    After checking the database environments, close databases on the DR end before creating a recovery plan.
    • For a standalone DB2 database, close the database directly.
    • For a DB2 cluster, take the cluster resources or resource groups of the database offline.
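
    For a standalone database, this is typically done from the instance owner's shell. A minimal sketch, assuming the instance db2inst1 and a database named db001 from the earlier examples (for a cluster, take the resource group offline with the cluster management tool instead):

    # Switch to the DB2 instance owner
    su - db2inst1
    # Deactivate the database and end the current connection context
    db2 deactivate database db001
    db2 terminate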

Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • Check the storage configurations of the DR end.
    1. Check whether all file systems used by the database to be recovered are unmounted from the DR host.
    2. On the DeviceManager, check that secondary LUNs of the remote replications used by DR databases are not mapped to any host, host group, or mapping view.
    3. For the AIX or HP-UX DR end host, you need to check the following items:

      • Check whether all logical volumes used by the database to be recovered and volume groups to which the logical volumes belong are deleted from the DR host.
      • Check whether all physical volumes used by the database to be recovered and devices (hdiskx) corresponding to the physical volumes are deleted on the DR host.

    4. For a Linux DR host, check whether all volume groups of the logical volumes used by the database to be recovered are deactivated and exported from the DR host.
    5. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
      • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping view are created on the storage device automatically.
      • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, which belongs to only one mapping view. The remote replication secondary LUN corresponding to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

Microsoft SQL Server

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where SQL Server databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End

Check the database environments of the production end and the DR end where databases reside.

  1. For a SQL Server cluster, set different SQL Server network names when creating the production cluster and the DR cluster to ensure that the names are unique.
  2. In a WSFC cluster, you need to add the Authenticated Users group as a login for the SQL Server database.
    1. On the database management page, choose Security > Logins, right-click, and choose New Login.
    2. In the dialog box that is displayed, click Search.
    3. In the Select User or Group dialog box that is displayed, select Authenticated Users.
    4. Click OK.
Production End
  1. Check the authentication method of SQL Server database at the production end.

    Mixed authentication (SQL Server and Windows Authentication mode) must be enabled for the SQL Server database; otherwise, the connection to the database fails.
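
    One way to confirm the authentication mode is to query the server property from a command prompt on the database host; 0 indicates mixed mode and 1 indicates Windows-only authentication. A sketch using sqlcmd (the server name is illustrative):

    sqlcmd -S localhost -E -Q "SELECT SERVERPROPERTY('IsIntegratedSecurityOnly')"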

  2. Ensure that the name, user name, and password of the SQL Server database at the production end meet the following requirements:

    • The database name can contain only letters, digits, and special characters including _ - @ # $ *
    • The database user name can contain only letters, digits, and special characters including _ - @ # $ *
    • The database password can contain only letters, digits, and special characters including ~ ! % _ - @ # $ *

  3. Check the VSS service status of the application host where the SQL Server database at the production end resides.

    eReplication Agent uses the VSS service to ensure consistency between SQL Server databases. Therefore, the VSS service must be enabled when eReplication Agent is working.

  4. Check database files at the production end.

    • Data files and log files in databases must be stored on storage array LUNs.
    • The disk resources where the database files reside must be in Maintenance mode in the cluster manager before a database test or recovery; otherwise, the disk resources may fail to be mounted when the database starts.

  5. Grant the connect permission to database user guest.

    NOTE:

    The permission granting is applicable only to the production environment where SQL Server 2012 (both single node and cluster) is in use.

    1. On the database management page, select the database that you want to configure and click Properties.
    2. In the Database Properties dialog box that is displayed, click Permissions.
    3. Click Search.
    4. In the Select Users or Roles dialog box that is displayed, enter guest and click Check Names to check the name validity.
    5. Click OK.
    6. On the Explicit tab page, select Connect.
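
    Equivalently, the connect permission can be granted to guest with a short T-SQL statement executed against the database in question; a sketch for a hypothetical database db001:

    sqlcmd -S localhost -E -d db001 -Q "GRANT CONNECT TO guest"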
DR End
  1. Check the database environment at the DR end.

    NOTE:

    If the database environments at the DR and production ends are different, modify the database environment at the DR end to be the same as that at the production end.

    Table 5-84 lists the configuration requirements.

    Table 5-84  Configuration Requirement of the Database on the DR End

    Installation
    • The operating system and operating system version of the hosts running the SQL Server databases are the same as those at the production end.
    • The SQL Server database versions are the same as those at the production end.

    Database
    • The names, instance names, user names, and passwords of the databases are the same as those at the production end.
    • The locations where the data files and log files of the databases are stored are the same as those at the production end.

    SQL Server cluster
    • The Always On failover cluster instance is running.
    • The names of the resource group and the disks in the resource group at the DR end are the same as those at the production end.
    • Failover clusters with the same name do not exist on the same network.
    • The disk resources where the database files reside must be in Maintenance mode in the cluster manager before a database test or recovery; otherwise, the disk resources may fail to be mounted when the database starts.
    • After the DR cluster host restarts or resets, or the cluster services reset, check the status of all resources in the failover cluster manager. For disk resources whose status is Offline and database disk resources for which database DR will be performed, set the status to Maintenance.

  2. Set databases on the DR end offline.

    Do not close instances to which the databases belong.
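
    Taking the DR databases offline without stopping the instance can also be done with T-SQL; a sketch for a hypothetical database db001:

    sqlcmd -S localhost -E -Q "ALTER DATABASE db001 SET OFFLINE WITH ROLLBACK IMMEDIATE"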

Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • Check the storage configurations of the DR end.
    1. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.

      NOTE:

      In a SQL Server cluster, the cluster disk resources must be in Maintenance mode before the mapped LUNs are deleted.

    2. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
      • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping view are created on the storage device automatically.
      • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, which belongs to only one mapping view. The remote replication secondary LUN corresponding to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

Microsoft Exchange Server

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where Exchange email databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End
  1. Ensure that the name of the Exchange email database at the production end meets naming requirements.

    The database name can contain only letters, digits, and special characters including _ - @ # $ *.

  2. Check the VSS service status of the application host where the Exchange email database at the production end resides.

    eReplication Agent uses the VSS service to ensure consistency between Exchange applications. Therefore, the VSS service must be enabled when eReplication Agent is working.

  3. Check Exchange data files at the production end.

    • Data files and log files in databases used by Exchange must be stored on storage array LUNs.
    • The Exchange 2013 email database cannot include public folders.

  4. The protected databases must be in the mount state to ensure consistency.
DR End
  1. Check the Exchange environment at the DR end.

    NOTE:

    If the Exchange environments at the DR and production ends are different, modify the Exchange environment at the DR end to be the same as that at the production end.

    1. Check whether the operating system and its edition run by Exchange 2007, Exchange 2010, or Exchange 2013 are the same as that at the production end.
    2. Check whether the versions of Exchange 2007, Exchange 2010, or Exchange 2013 are the same as that at the production end.
    3. Check whether names of storage groups used by Exchange 2007 are the same as that at the production end.
    4. Check whether the locations that data files and log files in databases used by Exchange 2007 are stored are the same as that at the production end.
  2. Before performing a DR test or recovery, you need to disable the Exchange service at the DR end for Exchange 2007 and enable the Exchange service at the DR end for Exchange 2010 and Exchange 2013.
Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • Check the storage environments of the DR end.
    1. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.
    2. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
      • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping view are created on the storage device automatically.
      • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, which belongs to only one mapping view. The remote replication secondary LUN corresponding to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

VMware VMs

Before configuring DR services, check the virtualization environments of the production end and the DR end, and the storage end environment. If any environment does not meet requirements, modify its configuration.

Common Check Items

Check and configure the following items in the VMware VM environment on the production and DR ends.

  1. Check the version of vCenter Server installed at the production end and the DR end.

    eReplication is compatible with VMware vSphere 5.0, 5.1, 5.5, 6.0, and 6.5. Check whether the vCenter Server version is applicable before configuring the DR. Install the same version of vCenter Server on both production and DR ends, because different versions of vCenter Server may cause incompatibility with the preinstalled VMware Tools or Open VM Tools, resulting in unexpected failures.

    1. Double-click the vSphere Client icon and enter vCenter Server IP address and its administrator account. Click Login.

      NOTE:
      You will be prompted by an alarm about installing a certificate upon the login. Install a certificate as instructed or ignore the alarm.
      An administrator account of the vSphere vCenter Server is required for logging in to and later locating the vSphere vCenter Server. For vSphere vCenter Server 5.0 and 5.1, the default account is administrator. For vSphere vCenter Server 5.5, 6.0, and 6.5, after vSphere SSO is installed, the default account is administrator@vsphere.local.


    2. On the menu bar, choose Help > About VMware vSphere and view the vCenter Server version.
  2. When an ESXi cluster is used, ensure that Distributed Resource Scheduler (DRS) is enabled and that the DRS automation level is not Manual (see the pre-check sketch after this list).
    1. Choose Inventory > Hosts and Clusters. In the navigation tree on the left, right-click the ESXi cluster and choose Edit and Configure from the shortcut menu.
    2. Click Cluster function and select Open VMware DRS.
    3. Click VMware DRS and do not select Manual in Automation level.
  3. When recovering VMware VMs from a disaster, ensure that the VM names do not contain pound signs (#).

    Otherwise, you may fail to modify the VM configuration files when testing a recovery plan, executing a planned migration, or executing fault recovery.

  4. Confirm that virtual disks used by the protected VM are not in the independent mode (persistent or non-persistent).
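The following pyVmomi sketch shows how items 1 to 4 above could be checked in a scripted way instead of through the vSphere Client. It is a minimal example for reference only and is not part of eReplication; it assumes that the pyvmomi Python package is installed and uses placeholder values for the vCenter Server address and credentials.

  # vmware_dr_precheck.py -- an illustrative pyVmomi sketch; not part of eReplication.
  # Assumes "pip install pyvmomi"; the vCenter address and credentials are placeholders.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "password"

  def get_objects(content, vimtype):
      """Return all managed objects of the given type via a container view."""
      view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
      objects = list(view.view)
      view.DestroyView()
      return objects

  def main():
      ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
      si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
      try:
          content = si.RetrieveContent()

          # Item 1: vCenter Server version (compare the output of both ends manually).
          print("vCenter Server:", content.about.fullName)

          # Item 2: DRS must be enabled and its automation level must not be manual.
          for cluster in get_objects(content, vim.ClusterComputeResource):
              drs = cluster.configuration.drsConfig
              print(f"Cluster {cluster.name}: DRS enabled={drs.enabled}, "
                    f"automation level={drs.defaultVmBehavior}")

          # Items 3 and 4: VM names must not contain '#';
          # virtual disks must not be in independent mode.
          for vm in get_objects(content, vim.VirtualMachine):
              if "#" in vm.name:
                  print(f"WARNING: VM name contains '#': {vm.name}")
              if vm.config is None:  # skip inaccessible VMs
                  continue
              for dev in vm.config.hardware.device:
                  if isinstance(dev, vim.vm.device.VirtualDisk) and \
                          "independent" in (getattr(dev.backing, "diskMode", "") or ""):
                      print(f"WARNING: {vm.name} uses an independent disk: {dev.deviceInfo.label}")
      finally:
          Disconnect(si)

  if __name__ == "__main__":
      main()

Run the sketch against the production and DR vCenter Servers separately and compare the reported versions and cluster settings.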
Production End

Before performing DR protection for or recovering VMs, ensure that VMware Tools or Open VM Tools has been installed on the VMware VMs. The VMware Tools or Open VM Tools version must match the vCenter Server version; unexpected failures may occur if an incompatible version is used. For details about how to obtain and install VMware Tools or Open VM Tools, see the related VMware documentation.

  1. Check whether VMware Tools or Open VM Tools has been installed on the VMs that require DR protection at the production end and whether the VMware Tools or Open VM Tools version is correct.

    Choose Inventory > Hosts and Clusters. In the navigation tree, click a VM that requires DR protection. In the function pane, click the Summary tab and view information about VMware Tools in the General area. If VMware Tools or Open VM Tools is not installed on the VM, install it. If the VMware Tools or Open VM Tools version is not correct, install a correct version. A scripted check of the Tools status is sketched after the figures below.

    • If VMware Tools or Open VM Tools is not installed, information shown in Figure 5-8 or Figure 5-9 is displayed.
      Figure 5-8  VMware Tools not installed

      Figure 5-9  Open VM Tools not installed

    • If VMware Tools or Open VM Tools has been installed but its version is incorrect, information shown in Figure 5-10 is displayed.
      Figure 5-10  Incorrect version of VMware Tools or Open VM Tools installed

    • If VMware Tools has been installed and its version is correct, information shown in Figure 5-11 or Figure 5-12 is displayed.
      Figure 5-11  Correct version of VMware Tools installed

      Figure 5-12  Correct version of Open VM Tools installed
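As an alternative to checking each VM in the vSphere Client, the VMware Tools status of all VMs can be listed with a short pyVmomi sketch such as the one below. It is for reference only and not part of eReplication; the vCenter address and credentials are placeholders.

  # vmware_tools_check.py -- illustrative pyVmomi sketch; not part of eReplication.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "password"

  def main():
      si = SmartConnect(host=VCENTER, user=USER, pwd=PWD,
                        sslContext=ssl._create_unverified_context())
      try:
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.VirtualMachine], True)
          for vm in view.view:
              # toolsStatus is one of: toolsOk, toolsOld, toolsNotRunning, toolsNotInstalled
              print(f"{vm.name}: {vm.guest.toolsStatus}")
          view.DestroyView()
      finally:
          Disconnect(si)

  if __name__ == "__main__":
      main()

VMs that report toolsNotInstalled or toolsOld correspond to the situations shown in Figure 5-8 to Figure 5-10 and require VMware Tools or Open VM Tools to be installed or upgraded before DR protection is configured.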

Storage End
Table 5-85 describes the storage requirements of DR solutions.
Table 5-85  Requirements of DR solutions on storage

Technology:
  • Cascading replication (synchronous and asynchronous)
  • Parallel replication (synchronous and asynchronous)
  • Cascading replication (asynchronous and asynchronous)
  • Parallel replication (asynchronous and asynchronous)

Restriction and Requirement:
  • Operating systems of VMs can only be installed on virtual disks.
  • Disks, except those where operating systems of VMs reside, can be RDM disks. If VMs use disks of the virtual RDM type, you can create quiesced snapshots for VMs to ensure VM consistency and recover VMs using the snapshots.
  • Check whether remote replication pairs or consistency groups of the arrays are correctly set up. If datastores used by a VM occupy multiple LUNs, ensure that the remote replication pairs of all LUNs reside in the same consistency group. In the DR Star network scenario, the remote replication or consistency group must be added to one DR Star.
  • Check whether information about initiators of production/DR ESXi hosts and test hosts (clusters) is displayed on the array.

Technology:
  • HyperMetro (SAN) and asynchronous replication (SAN)
  • HyperMetro (SAN) and synchronous replication (SAN)

Restriction and Requirement:
  • VMs cannot use RDM virtual disks.
  • Clients' operating systems cannot be deployed on the RDM storage.
  • Check whether remote replication pairs or consistency groups of the arrays are correctly set up. If datastores used by a VM occupy multiple LUNs, ensure that the remote replication pairs of all LUNs reside in the same consistency group.
  • Check whether information about initiators of production/DR ESXi hosts and test hosts (clusters) is displayed on the array.
  • Check whether Huawei storage devices provide datastores for VMs in DR protection. In the DR Star network scenario, the remote replication or consistency group must be added to one DR Star.
  • Check whether Huawei storage devices meet the networking requirements for HyperMetro (SAN).

FusionSphere VMs

Before configuring disaster recovery (DR) services, check the FusionManager and FusionCompute environments of the production end and the DR end, and the storage end environment. If the FusionManager and FusionCompute environments do not meet requirements, modify the FusionManager and FusionCompute configurations.

Production End and DR End
NOTE:
  • In the FusionCompute VM environment, FusionCompute must be able to create datastores using the storage resources provided by FC SAN, IP SAN, Advanced SAN, FusionStorage, or NAS devices, create VMs, provide cluster or host resources and port group resources, and make the northbound interfaces available to users.
  • In the FusionManager VM environment, FusionManager must be able to create VMs together with the FusionCompute environment and provide network and security group resources for VPC users.
  • The FusionManager and FusionCompute environment configurations include the cascading configuration of FusionManager and FusionCompute, the configuration of FusionManager to provide network and security group resources, the interconnection between FusionCompute and storage arrays, and the configuration of FusionCompute to provide cluster or host resources and port group resources. For details, see the FusionCompute V100R005Cxx Product Documentation and FusionManager V100R005Cxx Product Documentation.
  1. Check and configure the FusionCompute environments at the production end and the DR end.

    After FusionCompute is installed, you need to configure the FusionCompute environment, for example, clusters and hosts in FusionCompute, the interconnection between FusionCompute and storage arrays, and network resources in FusionCompute.

    NOTE:
    All the following configurations are basic configurations of FusionCompute. For details, see the FusionCompute V100R005Cxx Product Documentation.

    1. Check the clusters and hosts in FusionCompute.

      1. Log in to FusionCompute as an administrator.
      2. Check whether any cluster or host resource is available. If no such resource exists, create one.
      NOTE:
      If you want to use a cluster to manage resources, ensure that the cluster contains available hosts.

    2. Check the interconnection between FusionCompute and the storage arrays.

      1. Log in to FusionCompute as an administrator.
      2. Check whether any IP SAN, FC SAN, Advanced SAN, NAS or FusionStorage storage is available. If no such storage system exists, add a storage system.
      3. In the host resources of FusionCompute, select the ports to be connected so that service ports are added for service communication between the storage and FusionCompute.

    3. Optional: Check the storage plane configuration of the virtualized SAN.

      NOTE:

      If there is no storage plane, you need to disable the function.

      1. Log in to FusionCompute as an administrator.
      2. On the menu, choose Storage Pool > Data Store.
      3. Click Configure Virtualized SAN Storage Plane, go to the Configure Virtualized SAN Storage Plane page and check that the storage plane configuration of the virtualized SAN has been disabled.

    4. Check network resources in FusionCompute.

      1. Log in to FusionCompute as an administrator.
      2. Check whether any port group is available in the network pool and ensure that the network pool is associated with the host. If no such port group exists, create one.

    5. Check interconnection users of FusionCompute.

      1. Log in to FusionCompute as an administrator.
      2. Check whether an interface interconnection user exists. If no such user exists, create one and ensure that the administrator role is selected.

  2. Check and configure the FusionManager environments at the production end and the DR end.

    Create a FusionCompute VM and install FusionManager on the VM. After installing FusionManager, install Tools and configure the FusionManager primary node.

    NOTE:
    All the following configurations are basic configurations of FusionManager. For details, see the FusionManager V100R005Cxx Product Documentation.

    1. Configure virtual data center (VDC) and virtual private cloud (VPC) users.

      1. Log in to FusionManager as an administrator.
      2. Create a VDC and add users to it.
      3. Log in as the newly created VDC user and add a VPC.
      4. Optional: In the VPC list, add routers to the desired VPC user to establish a routed network.
      5. Add direct, internal, and routed networks to the desired VPC user.
      6. Create a security group for the desired VPC user.

    2. Open the northbound simple object access protocol (SOAP) interface.

      The northbound SOAP interface of the FusionManager VM is disabled by default. You need to open it so that the DR service can work properly (a scripted example follows this procedure).

      1. Use PuTTY to log in to the FusionManager VM as user galaxmanager.
      2. Run TMOUT=0 to prevent PuTTY from exiting automatically due to timeout.
        NOTE:

        After you run this command, the session does not time out even when no operation is performed, which poses a security risk. For security purposes, run exit to log out after the operation is complete.

      3. Run the north soap open command to open the northbound SOAP interface.
      4. Run the uportal restart command to restart the interface so that the change takes effect.
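      The same steps can be scripted, for example with the Python paramiko library, as in the following sketch. It is for reference only and is not part of eReplication; the FusionManager address and password are placeholders, while the commands themselves are the ones listed in the procedure above. If the commands require an interactive shell in your environment, run them manually as described above instead.

        # fm_open_soap.py -- illustrative paramiko sketch (pip install paramiko); not part of eReplication.
        # The FusionManager VM address and password below are placeholders.
        import paramiko

        FM_HOST, FM_USER, FM_PWD = "192.168.0.10", "galaxmanager", "password"

        def run(client: paramiko.SSHClient, cmd: str) -> None:
            """Run one command on the FusionManager VM and print its output."""
            stdin, stdout, stderr = client.exec_command(cmd)
            print(f"$ {cmd}\n{stdout.read().decode()}{stderr.read().decode()}")

        def main():
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
            client.connect(FM_HOST, username=FM_USER, password=FM_PWD)
            try:
                run(client, "north soap open")   # open the northbound SOAP interface
                run(client, "uportal restart")   # restart the interface so the change takes effect
            finally:
                client.close()

        if __name__ == "__main__":
            main()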

Production End
  1. Check whether datastores have been created on FusionCompute at the production end.

    NOTE:
    When creating a datastore, use a virtualized method and ensure that formatting has been performed.

    1. Log in to FusionCompute as an administrator.
    2. Check whether datastores have been created. If no datastore has been created, perform the following steps to create one:
    3. Click Add Data Store to go to the datastore addition page.
    4. Select IP SAN to create datastores.
  2. Check whether VMs have been created on FusionCompute at the production end.

    • Check whether VMs have been created on FusionCompute.
      1. Log in to FusionCompute as an administrator.
      2. Choose VM and Template > VM and check whether VMs that require DR protection have been created. If no VMs are created, follow the wizard to select an appropriate customized datastore to create VMs.
    • Check whether VMs have been created on FusionManager.
      1. Log in to FusionManager as the VPC user.
      2. Follow the wizard to create FusionManager VMs.
      3. Log in to FusionCompute as an administrator and migrate the newly created VMs to desired customized datastores.
    NOTE:
    Ensure that virtualized datastores are configured for disaster recovery VMs.

Storage End
Table 5-86 describes the storage requirements of disaster recovery solutions.
NOTE:
Before using eReplication, refer to the remote replication configuration section in the related storage device document.

During the initial synchronization of remote replication, pay attention to the time spent on the synchronization. The larger the amount of data, the longer the synchronization takes. If a large amount of data appears to be synchronized within a very short time (for example, 20 seconds), the data has probably not been synchronized. In this case, delete the remote replication and start the initial synchronization again.

Table 5-86  Requirements of disaster recovery solutions on storage

Technology:
  • Active-active (VIS) and asynchronous replication (SAN)

Restriction and Requirement:
  • Check whether all requirements of active-active (VIS) disaster recovery are met.
  • Check whether remote replication pairs or consistency groups have been created on the arrays used by the VIS devices. The VIS devices are allowed to use only one of the two arrays to create remote replication pairs or consistency groups.
  • Check whether secondary LUNs on the arrays are correctly mapped to production/DR hosts (groups). Especially for a cluster consisting of CNAs, secondary LUNs must be mapped only to CNA hosts in the cluster and to only one host or host group.
  • Plan VMs that need DR protection on the same datastore if possible.
  • If a datastore cannot provide sufficient storage for the VMs that need DR protection, create the VMs on datastores of different LUNs on the same storage array.
  • If the datastores of the protected VMs are on different LUNs, use the corresponding LUNs of the datastores as primary LUNs to create remote replication pairs, and then add the remote replication pairs to the same consistency group. During remote replication creation, select Initial synchronization when setting Pair Properties.
  • Ensure that the LUNs planned for DR at the DR end are not used to create datastores. Otherwise, the DR environment may become unavailable. During DR, the system will automatically create datastores using the DR LUNs at the DR end.

Technology:
  • HyperMetro (SAN) and asynchronous replication (SAN)
  • HyperMetro (SAN) and synchronous replication (SAN)

Restriction and Requirement:
  • Check whether Huawei storage devices provide datastores for VMs in disaster recovery protection.
  • Check whether all requirements of active-active (VIS) disaster recovery are met.
  • Check whether remote replication pairs or consistency groups of the arrays are correctly set up. If the datastores used by VMs occupy multiple LUNs, ensure that the remote replication pairs of all LUNs reside in the same consistency group. In the DR Star network scenario, the remote replication or consistency group must be added to one DR Star.
  • Check whether secondary LUNs on the arrays are correctly mapped to production/DR hosts (groups). Especially for a cluster consisting of CNAs, secondary LUNs must be mapped only to CNA hosts in the cluster and to only one host or host group.
  • If T series V200R001C00 is deployed, after creating a mapping view, you cannot select Enable Inband Command to modify the properties of the mapping view.
  • Plan VMs that need DR protection on the same datastore if possible.
  • If a datastore cannot provide sufficient storage for the VMs that need DR protection, create the VMs on datastores of different LUNs on the same storage array.
  • If the datastores of the protected VMs are on different LUNs, use the corresponding LUNs of the datastores as primary LUNs to create remote replication pairs, and then add the remote replication pairs to the same consistency group. During remote replication creation, select Initial synchronization when setting Pair Properties.
  • Ensure that the LUNs planned for DR at the DR end are not used to create datastores. Otherwise, the DR environment may become unavailable. During DR, the system will automatically create datastores using the DR LUNs at the DR end.

Technology:
  • Cascading replication (synchronous and asynchronous)
  • Parallel replication (synchronous and asynchronous)
  • Cascading replication (asynchronous and asynchronous)
  • Parallel replication (asynchronous and asynchronous)

Restriction and Requirement:
  • Check whether Huawei storage devices provide datastores for VMs in disaster recovery protection.
  • Check whether remote replication pairs or consistency groups of the arrays are correctly set up. If the datastores used by VMs occupy multiple LUNs, ensure that the remote replication pairs of all LUNs reside in the same consistency group. In the DR Star network scenario, the remote replication or consistency group must be added to one DR Star.
  • Check whether secondary LUNs on the arrays are correctly mapped to production/DR hosts (groups). Especially for a cluster consisting of CNAs, secondary LUNs must be mapped only to CNA hosts in the cluster and to only one host or host group.
  • If T series V200R001C00 is deployed, after creating a mapping view, you cannot select Enable Inband Command to modify the properties of the mapping view.
  • Plan VMs that need DR protection on the same datastore if possible.
  • If a datastore cannot provide sufficient storage for the VMs that need DR protection, create the VMs on datastores of different LUNs on the same storage array.
  • If the datastores of the protected VMs are on different LUNs, use the corresponding LUNs of the datastores as primary LUNs to create remote replication pairs, and then add the remote replication pairs to the same consistency group. During remote replication creation, select Initial synchronization when setting Pair Properties.
  • Ensure that the LUNs planned for DR at the DR end are not used to create datastores. Otherwise, the DR environment may become unavailable. During DR, the system will automatically create datastores using the DR LUNs at the DR end.

Local File Systems (NTFS)

Before configuring disaster recovery (DR) services, check the server environments of the production end and the DR end where NTFS local file systems reside. If the server environments do not meet requirements, modify server configurations.

Check Items of NTFS File Systems

Check and configure the following items for the file systems at the production end.

  1. On the production end, file systems must be created using simple volumes (a scripted check is sketched after this list).
  2. On the production end, file systems must be created on storage array LUNs.
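A minimal sketch for reviewing the volume types on a Windows production host is shown below. It is for reference only and is not part of eReplication; it assumes administrator rights and relies on the output of diskpart's list volume command, in which simple volumes on dynamic disks are reported with the type Simple.

  # ntfs_volume_check.py -- illustrative only; not part of eReplication.
  # Assumes a Windows production host and administrator rights (required by diskpart).
  import os
  import subprocess
  import tempfile

  def list_volumes() -> str:
      """Run 'diskpart /s <script>' with a one-line 'list volume' script and return the output."""
      with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
          script.write("list volume\n")
          path = script.name
      try:
          result = subprocess.run(["diskpart", "/s", path], capture_output=True, text=True)
          return result.stdout
      finally:
          os.remove(path)

  if __name__ == "__main__":
      # Review the Type column of the output (for example, Simple or Partition).
      print(list_volumes())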
Storage End
Table 5-87 describes the storage requirements of disaster recovery solutions.
Table 5-87  Requirements of disaster recovery solutions on storage

Technology:
  • Cascading replication (synchronous and asynchronous)
  • Parallel replication (synchronous and asynchronous)
  • Cascading replication (asynchronous and asynchronous)
  • Parallel replication (asynchronous and asynchronous)

Restriction and Requirement:
  • Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • If multiple NTFS file systems are created on the primary LUN of the same remote replication or consistency group (the NTFS file systems are created using simple volumes), the NTFS file systems must be created in the same protected group.
  • In the DR Star network scenario, if multiple NTFS file systems exist in one DR Star, these NTFS file systems must be created in one protected group.
  • Check the server environment of the DR end. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.

NAS File Systems

Before configuring disaster recovery (DR) services, check the storage environments of the production end and the DR end where NAS file systems reside. If the storage environments do not meet requirements, modify storage configurations.

Common Check Items
  1. Ensure that all NAS file systems of the vStores are paired in a HyperMetro relationship.
  2. Ensure that a HyperMetro vStore pair has been created for the local vStore and remote vStore and the pair is normal.
  3. Ensure that the NAS file systems on the storage array are working properly.

LUN

Before configuring disaster recovery (DR) services, check the storage environments of the production end and the DR end where LUNs reside. If the storage environments do not meet requirements, modify storage configurations.

Production End and DR End

Check the storage environments of the production end and the DR end.

Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.

DR End
  1. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.
  2. Check the storage configurations of the DR end.

    • If the DR storage is T series V2R2 or later or the 18000 series, the storage system supports automatic creation of host-to-storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are created on the storage device automatically.
    • If the DR storage is T series V2 or later or 18800 series, the DR host can belong to only one host group that belongs to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.
