OceanStor BCManager 6.5.0 eReplication User Guide 02

Checking the DR Environment

Before configuring DR services, check the application and storage environments at the production and DR sites where the protected objects reside, to ensure that subsequent configuration jobs run properly.

Oracle

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where Oracle databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End
The following configuration items must be checked and configured on databases at both the production and DR ends.

When Oracle RHCS is used at the production end and the logical volumes are managed by CLVM, the standalone deployment mode is not supported at the DR end.

NOTE:

If the host runs AIX and the rendev command is used at the production end to rename a physical volume so that its name starts with /dev/hdisk, the device name at the production end may change after a DR test, planned migration, or fault recovery.

  1. When an Oracle database runs Windows, check whether the password of the Oracle database meets the following requirement:

    The password can contain only letters, digits, tildes (~), underscores (_), hyphens (-), pound signs (#), dollar signs ($), and asterisks (*).

  2. When the host where an Oracle database resides runs Linux, AIX, or HP-UX, the mount points of the file system used by the database cannot be nested. For example, mount points /testdb/ and /testdb/database1/ are nested.
  3. When a database environment is configured at the production end where hosts running Red Hat Linux are deployed and the hosts are connected to storage arrays through Fibre Channel links, check whether systool is installed on the production host and DR host.

    Run the rpm -qa | grep sysfsutil command to check whether the systool software has been installed on the host.

    • If the command output contains sysfsutils, for example, sysfsutils-2.0.0-6, the systool software has been installed on the host.
    • If the command output does not contain sysfsutils, the systool software is not installed on the host. In this case, obtain the sysfsutils software package from the Red Hat Linux installation CD-ROM.
      NOTE:

      Red Hat Linux 6.3 for x86_64 and the sysfsutils software package sysfsutils-2.1.0-6.1.el6.x86_64.rpm are used as examples. The installation command depends on the actual environment.
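
    The exact commands depend on your distribution. As a hedged illustration using the example package above (the package file name is an example only), the check and installation might look as follows:

      # Check whether the sysfsutils package, which provides systool, is installed
      rpm -qa | grep sysfsutils
      # If no output is returned, install the package from the Red Hat Linux installation media
      rpm -ivh sysfsutils-2.1.0-6.1.el6.x86_64.rpm
      # Verify that the systool command is now available (Fibre Channel hosts)
      systool -c fc_host -v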

  4. Check and configure environment variables of the databases.

    The eReplication Agent protects the data consistency of an Oracle database only after environment variables have been configured for the database. Configure the environment variables before installing the eReplication Agent; otherwise, restart the eReplication Agent after configuring them so that they take effect. For details about how to restart the eReplication Agent, see Starting the eReplication Agent.

    • To configure environment variables for an Oracle database in Windows, perform the following steps:
      1. Log in to the application server as an administrator.
      2. Check whether environment variables have been configured.
        Right-click My Computer and choose Properties from the shortcut menu. On the Advanced tab page, click Environment Variables. If environment variables have been configured, a dialog box similar to Figure 5-25 is displayed. Check whether the PATH variable contains the bin directory of the Oracle database.
        Figure 5-25  Environment variables
    • To configure environment variables for an Oracle database running a non-Windows operating system, perform the following steps:
      1. The following uses an Oracle 12gR1 database in Linux as an example to describe how to configure environment variables. Log in to the application server as user root.
      2. Run the su - oracle command to switch to user oracle.
      3. Run the echo $ORACLE_BASE command to ensure that ORACLE_BASE has been configured.
        oracle@linux:~> echo $ORACLE_BASE 
        /u01/app/oracle
      4. Run the echo $ORACLE_HOME command to ensure that ORACLE_HOME has been configured.
        oracle@linux:~> echo $ORACLE_HOME 
        /u01/app/oracle/product/12.1.0/dbhome_1
      5. Run the echo $PATH command to ensure that ORACLE_HOME/bin has been added to the PATH variable.
        oracle@linux:~> echo $PATH
        /u01/app/oracle/product/12.1.0/dbhome_1/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin
      6. Run the exit command to log user oracle out.
      7. Run the su - grid command to switch to user grid.
      8. Run the echo $ORACLE_HOME command to ensure that ORACLE_HOME has been configured.
        grid@linux:~> echo $ORACLE_HOME 
        /u01/app/12.1.0/grid/
        NOTE:
        In Oracle 11g R2 or later, ORACLE_HOME must be configured in the environment variables of user grid.
      9. Run the echo $PATH command to ensure that ORACLE_HOME/bin has been added to the PATH variable.
        grid@linux:~> echo $PATH
        /u01/app/12.1.0/grid//bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin
      10. Run the exit command to log user grid out.
      11. Run the su - rdadmin command to switch to user rdadmin.
      12. Run the echo $SHELL command to view the default shell type of user rdadmin.
        The shell type can be bash, ksh, or csh. Table 5-186 describes configuration files that need to be modified based on shell types of operating systems.
        NOTE:

        Before configuring environment variables, check the default shell type of user rdadmin on the eReplication Agent. If the shell type is bash, modify the .profile or .bash_profile file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory. This document uses shell type bash as an example.

        Table 5-186  Configuration files that need to be modified for different shell types of operating systems

        Linux
        • sh/bash (default, recommended): SUSE/Rocky uses .profile; Red Hat uses .bash_profile. Configuration approach: export name=value
        • csh: .cshrc. Configuration approach: setenv name value
        • ksh: .profile. Configuration approach: export name=value

        AIX/Solaris
        • sh/ksh (default, recommended): .profile. Configuration approach: export name=value
        • csh: .cshrc. Configuration approach: setenv name value
        • bash: .profile. Configuration approach: export name=value

        HP-UX
        • sh/POSIX Shell (default, recommended): .profile. Configuration approach: export name=value
        • csh: .cshrc. Configuration approach: setenv name value
        • ksh: .profile. Configuration approach: export name=value
        • bash: .profile. Configuration approach: export name=value

      13. Run the vi ~/.profile command to open the .profile file in the rdadmin home directory.
      14. Press i to enter the editing mode and edit the .profile file.
      15. Add the following content to the .profile file. Table 5-187 describes the related parameters.
        ORA_CRS_HOME=/opt/oracle/product/10g/cluster
        PATH=$PATH:/bin:/sbin:/usr/sbin 
        export ORA_CRS_HOME PATH 
        Table 5-187  System variables

        • ORA_CRS_HOME=/opt/oracle/product/10g/cluster
          Indicates the installation directory of the CRS software of the Oracle database. This environment variable needs to be configured only when the RAC environment version is earlier than Oracle 11g R2 and Clusterware has been installed.
        • PATH=$PATH:/bin:/sbin:/usr/sbin
          Indicates the command directory of the operating system on the host where the Oracle database resides.

      16. After you have modified the file, press Esc and enter :wq! to save the changes and close the file.
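
      To confirm that the new variables take effect for user rdadmin, log in again and print them. This is a minimal sanity check using the example values from Table 5-187:

        su - rdadmin
        echo $ORA_CRS_HOME    # expected: /opt/oracle/product/10g/cluster
        echo $PATH            # should now include /bin, /sbin, and /usr/sbin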

  5. If the host where the Oracle database resides runs Linux, check the disk mapping mode using udev.

    When udev is used to map disks, the restrictions on disk mapping modes are as follows:
    • Disks can only be mapped through disk partitioning.
    • When udev is used to map ASM disks, the udev disks must serve as the member disks of the ASM disk group.
    • The udev configuration rules at the production and DR ends must be defined in the 99-oracle-asmdevices.rules rule file (saved in /etc/udev/rules.d).
      Two mapping modes are available. The following uses SUSE Linux 10 (disks asm1 and asm2 involved) as an example to describe how to configure both mapping modes:
      • Renaming disks
        KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", NAME="asm1", OWNER="oracle", GROUP="dba", MODE="0660"
        or
        KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", NAME="asm2", OWNER="oracle", GROUP="dba", MODE="0660"
      • Creating disk links
        KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", SYMLINK+="asm1", OWNER="oracle", GROUP="dba", MODE="0660"
        or
        KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36200bc71001f375519f5d2c0000000f9", SYMLINK+="asm2", OWNER="oracle", GROUP="dba", MODE="0660"

        The KERNEL parameter must be specified using wildcards (for example, KERNEL=="sd*1" or KERNEL=="sd?1") and cannot be a fixed device or partition name (for example, KERNEL=="sda"). Otherwise, the udev configuration rules cannot take effect.
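
    The RESULT value in each rule is the WWID of the LUN partition. As a hedged illustration on SUSE Linux 10 (the device name sdb is an example; newer distributions ship scsi_id under /lib/udev and use -d /dev/sdb instead of -s /block/sdb), you can query the WWID and then activate the rules as follows:

      # Query the WWID of the device to obtain the RESULT value for the rule
      /sbin/scsi_id -g -u -s /block/sdb
      # On recent distributions, reload and re-trigger the udev rules after editing them
      udevadm control --reload-rules
      udevadm trigger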

  6. Check the authentication mode of the database.

    During the creation of the Oracle protected group, you can specify different authentication modes for different protected objects and RAC hosts. Currently, database authentication and operating system authentication are supported. Table 5-188 describes the configuration details.

    Table 5-188  Configuration requirements for database authentication modes

    • Database authentication
      • Authentication modes at the production and DR ends must be the same.
      • In a cluster, authentication modes of all hosts must be the same.
      • In a protected group, authentication modes of all protected objects must be the same.
      • During protected group creation, the authentication mode specified on eReplication must be the same as that used by the database.
    • Operating system authentication
      • In an Oracle RAC cluster deployed on ASM, operating system authentication must be enabled so that the cluster at the DR end can be started normally upon DR.
      • For Oracle single-instance databases deployed on ASM, the following requirements must be met:
        • For Unix-like operating systems, operating system authentication must be enabled if the password files of the Oracle databases are saved in the ASM disk group. Alternatively, you can save the password files on the local file system. If neither method is adopted, the recovery plan corresponding to the Oracle protected group cannot be executed for testing, planned migration, or fault recovery.
        • For Windows, operating system authentication must be enabled. Otherwise, the recovery plan corresponding to the Oracle protected group cannot be executed for testing, planned migration, or fault recovery.
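
    A quick way to confirm which authentication mode works on a host is to try both connection styles in SQL*Plus. This is an illustrative sketch only; the credentials sys/abc@db01 are the example values used later in this section:

      # Operating system authentication (no password; relies on OS user and group membership)
      sqlplus / as sysdba
      # Database authentication
      sqlplus sys/abc@db01 as sysdba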

Production End
  1. Check the running mode of databases at the production end.

    The eReplication Agent can ensure the consistency of Oracle databases only when the databases run in archive mode. Perform the following steps to check the running mode of a database.

    • To check the running mode of an Oracle database in Windows, perform the following steps:
      1. Run the sqlplus command to log in to the Oracle database. In the example provided here, the user name is sys, the password is abc, and the instance name is db01.

        The following shows the command format and output:

        C:\Documents and Settings\Administrator>sqlplus /nolog
        
        SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 3 18:09:00 2010
        
        Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
        
        SQL> conn sys/abc@db01 as sysdba
        Connected.
      2. Run the archive log list command to check whether the database is in archive mode.

        The following shows the command format and output:

        SQL> archive log list;
        Database log mode            No Archive Mode
        Automatic archival           Enabled
        Archive destination          G:\oracle10g\product\10.2.0\db_1\RDBMS
        Oldest online log sequence   7
        Current log sequence         9
        
        NOTE:
        • If Database log mode is Archive Mode, the database is in archive mode.
        • If Database log mode is No Archive Mode, follow instructions in related Oracle database documents to change the operation mode to archive.
    • To check the running mode of an Oracle database running a non-Windows operating system, perform the following steps:
      1. Run the sqlplus command to log in to the Oracle database. In the example provided here, the user name is sys, the password is oracle, and the instance name is verify.

        The following shows the command format and output:

        [oracle@rhcs218 ~]$ sqlplus /nolog
        
        SQL*Plus: Release 11.2.0.3.0 - Production on Fri Oct 23 10:30:34 2015
        
        Copyright (c) 1982, 2002, Oracle.  All rights reserved.
        
        SQL> conn sys/oracle@verify as sysdba
        Connected.
      2. Run the archive log list command to check whether the database is in archive mode.

        The following shows the command format and output:

        SQL> archive log list;
        Database log mode             No Archive Mode
        Automatic archival            Enabled
        Archive destination           /oracle/archive
        Oldest online log sequence    7793
        Next log sequence to archive  7795  
        Current log sequence          7795
        
        NOTE:
        • If Database log mode is Archive Mode, the database is in archive mode.
        • If Database log mode is No Archive Mode, follow instructions in related Oracle database documents to change the operation mode to archive.
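
    If the database is in No Archive Mode and needs to be switched, the typical sequence for a single-instance database is shown below as a hedged sketch. Always follow the Oracle documentation for your version, and in an RAC environment stop and start the database with srvctl instead:

      SQL> shutdown immediate
      SQL> startup mount
      SQL> alter database archivelog;
      SQL> alter database open;
      SQL> archive log list;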

  2. Check the database files at the production end.

    Check that the data files, log files, and control files of a database are stored on LUNs. If these files are not stored on LUNs, DR cannot be performed for the database. You are advised to store temporary tablespaces either on the same LUNs as the data files, log files, and control files, or on LUNs separate from those used by other database files.
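
    To see where these files reside, you can query the standard dynamic performance views in SQL*Plus. This is a minimal sketch; run it as sysdba:

      SQL> select name from v$datafile;
      SQL> select member from v$logfile;
      SQL> select name from v$controlfile;
      SQL> select name from v$tempfile;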

DR End
  1. Check the database environment on the DR end.

    NOTE:

    If the database environments at the DR and production ends are different, modify the database environment at the DR end to be the same as that at the production end.

    Table 5-189 lists the environmental requirements of databases at the DR end.

    Table 5-189  Environmental requirements of databases at the DR end

    • Oracle installation
      • The Oracle databases at the DR and production ends must run the same operating system and operating system version.
      • The Oracle database versions at the DR and production ends must be the same.
      • The installation location of the Oracle database at the DR end must be consistent with that at the production end.
    • Database
      • The database authentication modes at the DR and production ends must be the same.
      • The names, user names, and passwords of the Oracle databases at the DR and production ends must be the same.
    • Non-Windows operating system (file system)
      • The user IDs and group IDs of the Oracle users at the DR and production ends must be the same.
    • Oracle RAC (Oracle 11g R2)
      • The SCAN name of the SCAN IP address at the DR end must be consistent with that at the production end.

  2. Set up the database DR environment at the DR end.

    The following describes two methods to set up the database DR environment after the Oracle database software has been installed on the DR hosts.

    Table 5-190 shows the database directories in Windows and UNIX-like operating systems.
    Table 5-190  Oracle database directories in different operating systems of hosts

    • Windows
      • Root directory of the database: %ORACLE_BASE%. Run the echo %ORACLE_BASE% command to query it.
      • Installation directory of the database: %ORACLE_HOME%. Run the echo %ORACLE_HOME% command to query it.
    • UNIX-like operating system
      • Root directory of the database: $ORACLE_BASE. Run the echo $ORACLE_BASE command to query it.
      • Installation directory of the database: $ORACLE_HOME. Run the echo $ORACLE_HOME command to query it.
    • Method 1: Manually replicate necessary files from the production end to the DR end (suppose the database instance name is db001).
      NOTE:

      To ensure that the file permissions at the production and DR ends are the same, replicate the necessary files from the Oracle user's directory at the production end to the corresponding directory at the DR end.

      1. Replicate the db001-related files under the Oracle user-specified directory at the production end to the corresponding directory at the DR end.
        • In Windows, the specific directory is %ORACLE_HOME%\database.
        • In a Unix-like operating system, the specific directory is $ORACLE_HOME/dbs.
      2. Under the specific directory at the DR end, create the folder db001. Under the created folder, create the same directory structure as that at the production end.
        • In Windows, the specific directory is %ORACLE_BASE%\ADMIN.
          • For Oracle 10g, create the alert_%SIDNAME%.ora file under the %ORACLE_BASE%\ADMIN\%DBName%\bdump directory.
          • For Oracle 11g or later, create level-2 directories under the %ORACLE_BASE%\diag\rdbms according to the directory structure on the production end, for example, %ORACLE_BASE%\diag\rdbms\level-1 directory\level-2 directory.
        • In a Unix-like operating system, the specific directory is $ORACLE_BASE/admin.
          • For Oracle 10g, create the alert_%SIDNAME%.ora file under the $ORACLE_BASE/admin/$dbName/bdump directory.
          • For Oracle 11g or later, create level-2 directories under $ORACLE_BASE/diag/rdbms according to the directory structure at the production end, for example, $ORACLE_BASE/diag/rdbms/level-1 directory/level-2 directory.
      3. If databases at the DR end are clustered, you need to run the following two commands to register the databases and instances.
        • srvctl add database -d {DATABASENAME} -o {ORACLE_HOME}
        • srvctl add instance -d {DATABASENAME} -i {INSTANCENAME} -n {HOSTNAME}

        DATABASENAME is the database name, ORACLE_HOME is the installation directory of the database, INSTANCENAME is the database instance name, and HOSTNAME is the name of the host where the database resides.
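
        For example, using the instance name db001 from this section and the ORACLE_HOME path shown earlier (the instance name db0011 and the host name drnode1 are hypothetical placeholders):

          srvctl add database -d db001 -o /u01/app/oracle/product/12.1.0/dbhome_1
          srvctl add instance -d db001 -i db0011 -n drnode1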

      4. You need to configure the pfile file in the following scenarios. For details about how to configure the file, see Oracle Configuration (Configuring the pfile File).
        • If there is an Oracle RAC at the production end but only a standalone Oracle database at the DR end, copy the pfile of the Oracle database at the production end after you have created the databases at the DR end, and replace the cluster configuration information in the copy with the deployment information of the standalone Oracle database at the DR end. Then use the copied pfile to generate an spfile and place it on a local disk at the DR end.
        • If the spfile save paths at the production and DR ends are the same and the storage devices used by both spfiles are under DR protection (for example, the spfile and the data files of the database are saved in the same ASM disk group), synchronizing data will cause the spfile at the DR end to be overwritten by the spfile at the production end. As a result, some data at the DR end will be lost. Therefore, you are advised to save the spfile at the DR end on the file system of the DR host.
        NOTE:
        • In Windows, the spfile file is under the %ORACLE_HOME%\database directory.
        • In a Unix-like operating system, the spfile file is under the $ORACLE_HOME/dbs directory.
    • Method 2: Create databases to generate files necessary for DR at the DR end, by performing the following steps at the DR end:
      1. Map empty LUNs to DR hosts and scan for mapped LUNs on DR hosts.
      2. If the databases at the production end are installed on a file system, create the same file system as that at the production end and mount it. If the production databases use ASM, create an ASM disk group the same as that at the production end.
      3. Create Oracle databases the same as those at the production end.
      4. After the databases are created, shut them down and remove the mapping relationships of the file systems or ASM.
        • If the databases use ASM, shut down the ASM instances after you shut down the databases.
        • If the DR end is in an RAC environment, run a command to shut down the cluster, including the database instances and ASM instances. Then check that the database instances on each node in the cluster and the ASM instances have been shut down. The binding of raw devices must be canceled.
      5. Remove LUN mappings from the DR end.
      6. You need to configure the pfile file in the following scenarios. For details about how to configure the file, see Oracle Configuration (Configuring the pfile File).
        • If there is an Oracle RAC at the production end but only a standalone Oracle database at the DR end, copy the pfile of the Oracle database at the production end after you have created the databases at the DR end, and replace the cluster configuration information in the copy with the deployment information of the standalone Oracle database at the DR end. Then use the copied pfile to generate an spfile and place it on a local disk at the DR end.
        • If the spfile save paths at the production and DR ends are the same and the storage devices used by both spfiles are under DR protection (for example, the spfile and the data files of the database are saved in the same ASM disk group), synchronizing data will cause the spfile at the DR end to be overwritten by the spfile at the production end. As a result, some data at the DR end will be lost. Therefore, you are advised to save the spfile at the DR end on the file system of the DR host.
        NOTE:
        • In Windows, the spfile file is under the %ORACLE_HOME%\database directory.
        • In a Unix-like operating system, the spfile file is under the $ORACLE_HOME/dbs directory.
      7. After creating a database at the DR end, check the names of the spfile files at both the production and DR ends.

        If the names are not the same, change the name of the spfile file at the DR end to that at the production end or save the spfile file at the DR end to a local disk, in order to prevent the database at the DR end from being overwritten after DR. For details, see Oracle Configuration (Configuring the pfile File).
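
        As a hedged sketch of the spfile handling described above, an spfile can be regenerated from a pfile with the standard SQL*Plus statement below; both file paths are hypothetical placeholders:

          SQL> create spfile='/u01/app/oracle/dbs_local/spfiledb001.ora' from pfile='/home/oracle/pfile_db001.ora';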

Storage End
  • Check the storage environments of the production end and the DR end.
    1. Before using remote replication to protect data, on the DeviceManager, check that the remote replication is normal.

      • If the storage system is T series V2R2 or later or 18000 series, it supports automatic host and storage mapping. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are created on the storage device automatically.
      • If the DR storage is T series V2 or later or 18000 series, the DR host can belong to only one host group, and that host group can belong to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.
      • Before using remote replication to protect the data of the Oracle database at the production end, set up remote replication between the production end and the DR end.
      • If the storage array is FusionStorage V100R006C30SPC100, a linked clone volume of a FusionStorage snapshot can be mounted to another FusionStorage storage device only when the two storage devices use the same block storage client.

    2. Check the remote replication between the production end and the DR end.

      Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.

    3. Check the storage environments of the DR end.

      1. After setting up an application environment on the DR end, delete the mappings between the host and the storage LUNs or volumes that store the data files, control files, and log files of the application.
      2. Check whether all file systems used by the database to be recovered are unmounted from the DR host.
      3. For an AIX host at the DR end, check the following items:
        • Check whether all logical volumes used by the database to be recovered and the volume groups to which the logical volumes belong are deleted from the DR host.
        • Check whether all physical volumes used by the database to be recovered and the devices (hdiskx) corresponding to the physical volumes are deleted on the DR host.
      4. If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.

IBM DB2

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where DB2 databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End

Check the database environments of the production end and the DR end where DB2 databases reside.

When DB2 RHCS is used at the production end and the logical volumes are managed by CLVM, the standalone deployment mode is not supported at the DR end.

  1. When a database environment is configured at the production end where hosts running Linux, AIX or HP-UX are deployed, the mount points of the file system used by the database cannot be nested. For example, mount points /testdb/ and /testdb/database1/ are nested.
  2. Before configuring a DB2 database, you must obtain the instance name, user name, and password of the DB2 database. The instance name is the DB2 database instance name. The user name is that of the system user of the DB2 database instance; usually, the user name is the same as the instance name. The password is that of the system user. The DB2 database password cannot contain the special characters !;"'(),·=\'. Otherwise, database authentication will fail when a protected group is created.
  3. For a DB2 PowerHA cluster, ensure that the host names of the hosts where the production and DR end DB2 databases reside are the same as the node names specified in the PowerHA cluster. Otherwise, protected groups will fail to be created.
  4. In AIX, the logical volumes used by the file systems of DB2 databases and those used by raw devices of tablespace cannot belong to one volume group.
  5. Check and configure environment variables.

    The eReplication Agent protects the data consistency of DB2 databases only after you have configured environment variables for the DB2 databases at the production end. You are advised to configure environment variables before installing the eReplication Agent. If you have already installed the eReplication Agent, restart it after configuring the environment variables. For details, see Starting the eReplication Agent.

    NOTE:
    Before configuring environment variables, check the default shell type of user rdadmin on eReplication Agent.
    • In AIX, if the shell type is bash, modify the .profile file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory.
    • In Linux, if the shell type is bash, modify the .bashrc file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory.
    • In HP-UX, if the shell type is bash, modify the .profile file under the rdadmin home directory. If the shell type is csh, modify the .cshrc file under the rdadmin home directory.

    This document uses shell type bash in AIX to modify the .profile file as an example.

    1. Use PuTTY to log in as root to the application server where the eReplication Agent is installed.
    2. Run the TMOUT=0 command to prevent PuTTY from exiting due to session timeout.
      NOTE:

      After you run this command, the system continues to run even when no operation is performed, which poses a security risk. For security purposes, you are advised to run the exit command to exit the system after completing your operations.

    3. Run the su - rdadmin command to switch to user rdadmin.
    4. Run the vi ~/.profile command to open the .profile file under the rdadmin directory.
    5. Press i to go to the edit mode and edit the .profile file.
    6. Add the following variables to the .profile file. Table 5-191 describes related environment variables.
      DB2_HOME=/home/db2inst1/sqllib
      PATH=$PATH:$DB2_HOME/bin:/usr/sbin:/sbin
      DB2INSTANCE=db2inst1
      INSTHOME=/home/db2inst1
      export DB2_HOME PATH DB2INSTANCE INSTHOME
      
      NOTE:

      In Linux, you need to add the VCS script path to the PATH variable, for example, PATH=$PATH:$DB2_HOME/bin:/usr/sbin:/opt/VRTS/bin:/sbin.

      Table 5-191  System variables

      • DB2_HOME: installation directory of a DB2 instance. Example: DB2_HOME=/home/db2inst1/sqllib
      • PATH: the bin folder under the home directory of a DB2 instance user. Example: PATH=$PATH:$DB2_HOME/bin:/usr/sbin:/sbin
      • DB2INSTANCE: name of a DB2 instance. Example: DB2INSTANCE=db2inst1
      • INSTHOME: home directory of a DB2 instance user. Example: INSTHOME=/home/db2inst1

    7. After you have successfully modified the .profile file, press Esc and enter :wq! to save the changes and close the file.
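
    To confirm that the variables take effect for user rdadmin, log in again and print them. This is a minimal check using the example values above:

      su - rdadmin
      echo $DB2_HOME        # expected: /home/db2inst1/sqllib
      echo $DB2INSTANCE     # expected: db2inst1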

Production End
  1. Check the database configuration at the production end.

    eReplication of the current version supports DR for DB2 databases on AIX, Linux and HP-UX. To enable eReplication to perform DR for DB2 databases, the DB2 databases must meet the following requirements:

    • Installation directories of DB2 instances must reside on local disks or independent storage devices and cannot reside on the same devices as databases for which DR is intended.
    • Data and log files in the DB2 databases for which DR and protection are intended must be stored on LUNs provided by Huawei storage devices.
    • If remote replication is performed on a DB2 database, LUNs used by the database must be in the same consistency group. If the database uses only one LUN, the remote replication of the LUN does not need to be added to a consistency group.
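
    The file locations can be checked from the DB2 command line. The sketch below is illustrative only; db001 is a hypothetical database name:

      # Connect to the database and list its tablespaces
      db2 connect to db001
      db2 list tablespaces
      # Show the log file settings, including the path to the log files
      db2 get db cfg for db001 | grep -i "log file"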

DR End
  1. Check the database environment at the DR end.

    NOTE:

    If the database environments at the DR end and the production end are not the same, modify the database environment at the DR end to be the same as that at the production end.

    Table 5-192 lists the configuration requirements.

    Table 5-192  Configuration requirements of the database at the DR end

    • Installation
      • The operating system and its edition run by the DB2 databases are the same as those at the production end.
      • The DB2 database versions are the same as those at the production end.
    • Configuration
      • The cluster where the DB2 databases reside at the DR end has the same configuration as that at the production end. The two clusters must have the same resource configuration and the same dependencies among resources.
      • The environment variable configurations are the same as those at the production end.
    • Instance
      • The instance names, user names, and passwords of the databases are the same as those at the production end.
      • The user groups to which the users of the DB2 database instances belong are the same as those at the production end.
      • The installation directories of the DB2 instances at the DR end reside on local disks or independent storage devices.
      • The DB2 databases to be recovered are created under the DB2 instances at the DR end, and the created databases meet the following requirements:
        • The database names are the same at the DR and production ends.
        • The storage paths (file systems) used by the databases or the names of the devices (raw devices) are the same at the DR and production ends.
        • The names of the logical volumes used by the databases and of the volume groups to which the logical volumes belong are the same at the DR and production ends.
        • The tablespaces used by the databases, the log names, and the storage configurations are the same at the DR and production ends. For example, if tablespace tp1 in production database DB1 uses /db2data/db1 (specified when the database is created) and tablespace tp2 uses /dev/rtdd1, the same tablespaces tp1 and tp2 must be created on /db2data/db1 and /dev/rtdd1 respectively in DR database DB1.

  2. Close databases on the DR end.

    After checking the database environments, close databases on the DR end before creating a recovery plan.
    • For a single DB2 device, close the database directly.
    • For a DB2 cluster, take the cluster resources or resource groups of the database offline.
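
    As a hedged example for a standalone DB2 server (db001 is a hypothetical database name; for a cluster, take the resource group offline with the commands of your cluster software instead):

      su - db2inst1
      db2 deactivate database db001
      db2 terminate
      db2stop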

Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • Check the storage configurations of the DR end.
    1. Check whether all file systems used by the database to be recovered are unmounted from the DR host.
    2. On the DeviceManager, check that secondary LUNs of the remote replications used by DR databases are not mapped to any host, host group, or mapping view.
    3. For the AIX or HP-UX DR end host, you need to check the following items:

      • Check whether all logical volumes used by the database to be recovered and volume groups to which the logical volumes belong are deleted from the DR host.
      • Check whether all physical volumes used by the database to be recovered and devices (hdiskx) corresponding to the physical volumes are deleted on the DR host.

    4. For a Linux host at the DR end, check whether all volume groups of the logical volumes used by the database to be recovered are deactivated and exported from the DR host.
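
      The sketch below illustrates this check with LVM commands on the DR host; vg_db2 is a hypothetical volume group name:

        # Deactivate the volume group and export it from the DR host
        vgchange -a n vg_db2
        vgexport vg_db2
        # Confirm that the volume group is no longer active
        vgs
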
    5. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
      • If the storage system is T series V2R2 or later or 18000 series, it supports automatic host and storage mapping. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are created on the storage device automatically.
      • If the DR storage is T series V2 or later or 18000 series, the DR host can belong to only one host group, and that host group can belong to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

Microsoft SQL Server

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where SQL Server databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End

Check the database environments of the production end and the DR end where databases reside.

  1. For a SQL Server cluster, set different SQL Server network names when creating the production cluster and the DR cluster to ensure uniqueness.
  2. In a WSFC cluster, you need to add the Authenticated Users permission to the SQL Server database.
    1. On the database management page, choose Security > Logins, right-click, and choose New Login.
    2. In the dialog box that is displayed, click Search.
    3. In the Select User or Group dialog box that is displayed, select Authenticated Users.
    4. Click OK.
Production End
  1. Check the authentication method of SQL Server database at the production end.

    Hybrid (mixed mode) authentication must be adopted for the SQL Server database; otherwise, the connection to the database fails.
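
    A hedged way to verify this from the command line is to query the server property that indicates Windows-only authentication; 0 means mixed (hybrid) mode is enabled, 1 means Windows authentication only:

      sqlcmd -S localhost -E -Q "SELECT SERVERPROPERTY('IsIntegratedSecurityOnly')"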

  2. Ensure that the name, user name, and password of the SQL Server database at the production end meet the following requirements:

    • The database name can contain only letters, digits, and special characters including _ - @ # $ *
    • The database user name can contain only letters, digits, and special characters including _ - @ # $ *
    • The database password can contain only letters, digits, and special characters including ~ ! % _ - @ # $ *

  3. Check the VSS service status of the application host where the SQL Server database at the production end resides.

    eReplication Agent uses the VSS service to ensure consistency between SQL Server databases. Therefore, the VSS service must be enabled when eReplication Agent is working.
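
    A quick check of the VSS service state from a Windows command prompt (and starting the service if it is stopped) might look like this:

      sc query VSS
      net start VSS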

  4. Check database files at the production end.

    • Data files and log files in databases must be stored on storage array LUNs.
    • The disk resources where database files reside must be in Maintenance mode in the cluster manager before a database test or recovery; otherwise, the disk resources may fail to be mounted when the database starts.

  5. Grant the connect permission to database user guest.

    NOTE:

    The permission granting is applicable only to the production environment where SQL Server 2012 (both single node and cluster) is in use.

    1. On the database management page, select the database that you want to configure and click Properties.
    2. In the Database Properties dialog box that is displayed, click Permissions.
    3. Click Search.
    4. In the Select Users or Roles dialog box that is displayed, enter guest and click Check Names to check the name validity.
    5. Click OK.
    6. On the Explicit tab page, select Connect.
DR End
  1. Check the database environment at the DR end.

    NOTE:

    If the database environments at the DR end and the production end are not the same, modify the database environment at the DR end to be the same as that at the production end.

    Table 5-193 lists the configuration requirements.

    Table 5-193  Configuration requirements of the database at the DR end

    • Installation
      • The operating system and its edition run by the SQL Server databases are the same as those at the production end.
      • The SQL Server database versions are the same as those at the production end.
    • Database
      • The names, instance names, user names, and passwords of the databases are the same as those at the production end.
      • The locations where the data files and log files of the databases are stored are the same as those at the production end.
    • SQL Server cluster
      • The Always On failover cluster instance is running.
      • The names of the resource group and of the disks in the resource group at the DR end are the same as those at the production end.
      • Failover clusters of the same name do not exist on the same network.
      • The disk resources where database files reside must be in Maintenance mode in the cluster manager before a database test or recovery; otherwise, the disk resources may fail to be mounted when the database starts.
      • After the DR cluster host restarts or resets, or cluster services reset, check the status of all resources in the failover cluster manager. For disk resources whose status is Offline and database disk resources for which database DR will be performed, set the status to Maintenance.

  2. Set databases on the DR end offline.

    Do not close instances to which the databases belong.

Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • Check the storage configurations of the DR end.
    1. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.

      NOTE:

      In a SQL Server cluster, the cluster disk resources must be in Maintenance mode before the mapped LUNs are deleted.

    2. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
      • If the storage system is T series V2R2 or later or 18000 series, it supports automatic host and storage mapping. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are created on the storage device automatically.
      • If the DR storage is T series V2 or later or 18000 series, the DR host can belong to only one host group, and that host group can belong to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

Microsoft Exchange Server

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where Exchange email databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End
  1. Ensure that the name of the Exchange email database at the production end meets naming requirements.

    The database name can contain only letters, digits, and special characters including _ - @ # $ *.

  2. Check the VSS service status of the application host where the Exchange email database at the production end resides.

    eReplication Agent uses the VSS service to ensure consistency between Exchange applications. Therefore, the VSS service must be enabled when eReplication Agent is working.

  3. Check Exchange data files at the production end.

    • Data files and log files in databases used by Exchange must be stored on storage array LUNs.
    • The Exchange 2013 email database cannot include public folders.

  4. The protected databases must be in the mounted state to ensure consistency.
DR End
  1. Check the Exchange environment at the DR end.

    NOTE:

    If the Exchange 2007 environments at the DR end and the production end are not the same, modify the Exchange environment at the DR end to be the same as that at the production end.

    1. Check whether the operating system and its edition run by Exchange 2007, Exchange 2010, or Exchange 2013 are the same as those at the production end.
    2. Check whether the version of Exchange 2007, Exchange 2010, or Exchange 2013 is the same as that at the production end.
    3. Check whether the names of the storage groups used by Exchange 2007 are the same as those at the production end.
    4. Check whether the locations where the data files and log files of the databases used by Exchange 2007 are stored are the same as those at the production end.
  2. Before performing a DR test or recovery, you need to disable the Exchange service at the DR end for Exchange 2007 and enable the Exchange service at the DR end for Exchange 2010 and Exchange 2013.
Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.
  • Check the storage environments of the DR end.
    1. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.
    2. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage. In a cluster, add the logical host to the host group.
      • If the storage system is T series V2R2 or later or 18000 series, it supports automatic host and storage mapping. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are created on the storage device automatically.
      • If the DR storage is T series V2 or later or 18000 series, the DR host can belong to only one host group, and that host group can belong to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

VMware VMs

Before configuring DR services, check the virtualization environments of the production end and the DR end, and the storage end environment. If any environment does not meet requirements, modify its configuration.

Common Check Items

Check and configure the following items in the VMware VM environment on the production and DR ends.

  1. Check the version of vCenter Server installed at the production end and the DR end.

    eReplication is compatible with VMware vSphere 5.0, 5.1, 5.5, 6.0, and 6.5. Check whether the vCenter Server version is applicable before configuring the DR. Install the same version of vCenter Server on both production and DR ends, because different versions of vCenter Server may cause incompatibility with the preinstalled VMware Tools or Open VM Tools, resulting in unexpected failures.

    1. Double-click the vSphere Client icon and enter vCenter Server IP address and its administrator account. Click Login.

      NOTE:
      You will be prompted by an alarm about installing a certificate upon the login. Install a certificate as instructed or ignore the alarm.
      An administrator account of the vSphere vCenter Server is required for logging in to and later locating the vSphere vCenter Server. For vSphere vCenter Server 5.0 and 5.1, the default account is administrator. For vSphere vCenter 5.5, 6.0, 6.5, after vSphere SSO is installed, the default account is administrator@vsphere.local.


    2. On the menu bar, choose Help > About VMware vSphere and view the vCenter Server version.
  2. When an ESXi cluster is used, ensure that Distributed Resource Scheduler (DRS) is enabled and that the DRS automation level is not Manual.
    1. Choose Inventory > Hosts and Clusters. In the navigation tree on the left, right-click the ESXi cluster and choose Edit Settings from the shortcut menu.
    2. Click Cluster Features and select Turn On VMware DRS.
    3. Click VMware DRS and do not set Automation Level to Manual.
  3. When recovering VMware VMs from a disaster, ensure that the VM names do not contain pound signs (#).

    Otherwise, you may fail to modify the VM configuration file when testing a recovery plan, executing a planned migration, or performing fault recovery.

  4. Confirm that virtual disks used by the protected VM are not in the independent mode (persistent or non-persistent).
Production End

Before performing DR protection for or recovering VMs, ensure that VMware Tools or Open VM Tools has been installed on the VMware VMs. The VMware Tools version or Open VM Tools needs to match the vSphere vCenter version. Unexpected failures may occur if an incompatible version is used. For details about how to obtain and install VMware Tools or Open VM Tools, see the related VMware documentation.

  1. Check whether VMware Tools or Open VM Tools has been installed on the VMs that require DR protection at the production end and whether the VMware Tools or Open VM Tools version is correct.

    Choose Inventory > Hosts and Clusters. In the navigation tree, click a VM that requires DR protection. In the function pane, click the Summary tab and view information about VMware Tools in the General area. If VMware Tools or Open VM Tools is not installed on the VM, install it. If the VMware Tools or Open VM Tools version is not correct, install a correct version.

    • If VMware Tools or Open VM Tools is not installed, information shown in Figure 5-26 or Figure 5-27 is displayed.
      Figure 5-26  VMware Tools not installed

      Figure 5-27  Open VM Tools not installed

    • If VMware Tools or Open VM Tools has been installed but its version is incorrect, information shown in Figure 5-28 is displayed.
      Figure 5-28  Incorrect version of VMware Tools or Open VM Tools installed

    • If VMware Tools has been installed and its version is correct, information shown in Figure 5-29 or Figure 5-30 is displayed.
      Figure 5-29  Correct version of VMware Tools installed

      Figure 5-30  Correct version of Open VM Tools installed

Storage End
Table 5-194 describes the storage requirements of DR solutions.
Table 5-194  Requirements of disaster recovery solutions on storage

Technology

Restriction and Requirement

Snapshot (SAN)

  • VMs cannot use RDM virtual disks.
  • Check whether the datastores used by the VMs that require DR protection are created on LUNs of Huawei storage arrays.

Snapshot (NAS)

  • VMs cannot use RDM virtual disks.
  • A VM's virtual disks cannot reside on different storage devices, and the datastores used by the VMs must be created on Huawei NFS file systems.
  • File systems can be shared only over NFSv3.
  • Check whether the remote replication pairs or consistency groups of the arrays are correctly set up for asynchronous replication DR.
  • Check whether the shared file systems on the arrays have the following permissions:
    • Read and write
    • no_all_squash
    • root_squash

Local File Systems (NTFS)

Before configuring disaster recovery (DR) services, check the server environments of the production end and the DR end where NTFS local file systems reside. If the server environments do not meet requirements, modify server configurations.

Check Items of NTFS File Systems

The following items must be checked and configured on file systems at the production end.

  1. At the production end, file systems must be created using simple volumes.
  2. At the production end, file systems must be created on storage array LUNs.
Storage End
Table 5-195 describes the storage requirements of disaster recovery solutions.
Table 5-195  Requirements of disaster recovery solutions on storage

Technology

Restriction and Requirement

Snapshot (SAN)

Check the server environment of the DR end. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.

LUN

Before configuring disaster recovery (DR) services, check the storage environments of the production end and the DR end where LUNs reside. If the storage environments do not meet requirements, modify storage configurations.

Production End and DR End

Check the storage environments of the production end and the DR end.

Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication is normal between the production end and the DR end.

DR End
  1. Delete the LUNs mapped to the DR host or test host at the DR end and ensure that the corresponding drive letters are not occupied by other LUNs.
  2. Check the storage configurations of the DR end.

    • If the storage system is T series V2R2 or later or 18000 series, it supports automatic host and storage mapping. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are created on the storage device automatically.
    • If the DR storage is T series V2 or later or 18800 series, the DR host can belong to only one host group, and that host group can belong to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.

SAP HANA

Before configuring disaster recovery (DR) services, check the database environments of the production end and the DR end where SAP HANA databases reside, and the storage end environment. If the database environments do not meet requirements, modify database configurations.

Production End and DR End

Check the database environments of the production end and the DR end where SAP HANA databases reside.

  1. When a database environment is configured at the production end where hosts running SUSE 11, SUSE 12, or Red Hat 7 are deployed, the mount points of the file system used by the database cannot be nested. For example, mount points /testdb/ and /testdb/database1/ are nested.
  2. Before configuring a SAP HANA database, you must obtain the instance name, user name, and password of the SAP HANA database.

    The instance name is the SAP HANA database instance name. The user name is that of the system user of the SAP HANA database instance. Usually, the user name is the same as the instance name. The password is that of the system user.

  3. Check whether the password of the SAP HANA database meets the requirements for input characters. If other special characters are entered, the database will fail to be authenticated when a protected group is created.

    Supported characters: only letters, digits, and the special characters _#%^+-.,~@$.
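The following is a minimal check sketch for the mount-point nesting requirement in item 1, assuming the database uses mount points under /hana/data (a placeholder path); replace the path with the mount points actually used by the database.

  # List all mount points and look for entries nested under the database mount point.
  # If the output contains any mount point below /hana/data in addition to /hana/data
  # itself, the mount points are nested and must be reorganized.
  mount | awk '{print $3}' | grep '^/hana/data'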

Production End
  1. Check the database configuration at the production end.

    eReplication of the current version supports DR for SAP HANA databases on Suse11, Suse12 and RedHat7. To enable eReplication to perform DR for SAP HANA databases, the SAP HANA databases must meet the following requirements:

    • Installation directories of SAP HANA instances must reside on local disks or independent storage devices and cannot reside on the same devices as databases for which DR is intended.
    • Data files in the SAP HANA databases for which DR and protection are intended must be stored on LUNs provided by Huawei storage devices.
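    As a quick aid for the second requirement, the following sketch shows one way to confirm which block device backs the database data directory. /hana/data is an assumed example path; whether the underlying device is a LUN on the Huawei storage array must still be confirmed on the storage side.

      # Show the file system that holds the data directory and the device behind it.
      df -h /hana/data
      # List block devices with sizes and mount points to confirm the data directory
      # resides on an external array LUN rather than a local disk.
      lsblk -o NAME,SIZE,TYPE,MOUNTPOINT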

DR End
  1. Check the database environment at the DR end.

    NOTE:

    If the database environments at the DR end and the production end are different, adjust the DR end configuration so that both ends have the same database environment.

    Table 5-196 describes the configuration requirements.

    Table 5-196  Configuration Requirements of the Database at the DR End

    Check Item: Installation
    Configuration Requirement:
    • The operating system and its edition run by the SAP HANA databases are the same as those at the production end.
    • The versions of the SAP HANA databases are the same as those at the production end.

    Check Item: Configuration
    Configuration Requirement:
    • The cluster where the SAP HANA databases reside at the DR end has the same configuration as that at the production end. The two clusters must have the same resource configuration and the same dependencies among resources.

    Check Item: Instance
    Configuration Requirement:
    • The instance names, instance numbers, user names, and passwords of the databases are the same as those at the production end.
    • The user groups to which users of the SAP HANA database instances belong are the same as those at the production end.
    • The installation directories of SAP HANA instances at the DR end reside on local disks or independent storage devices.
    • The SAP HANA databases to be recovered have been created under the SAP HANA instances at the DR end and meet the following requirements:
      - Names of the databases are the same at the DR and production ends.
      - Storage paths (file systems) used by the databases are the same at the DR and production ends.
      - Names of the logical volumes used by the databases and of the volume groups to which the logical volumes belong are the same at the DR and production ends.

  2. Close databases on the DR end.

    After checking the database environments, close databases on the DR end before creating a recovery plan.

    For a single SAP HANA device, close the database directly (a command sketch is provided after this list).

  3. Modify the configuration file of the database at the DR database end.

    After shutting down the database at the DR end, modify the value of parameter id in the nameserver.ini configuration file of the database at the DR end to keep the value consistent with that of the database at the production end.
    NOTE:
    This value is used to identify storage devices of the database.
    1. Use PuTTY to log in to the server at the production end as user root.
    2. Run the following command to prevent PuTTY from exiting due to session timeout:

      TMOUT=0

      NOTE:

      After you run this command, the session does not time out even if no operation is performed, which poses a security risk. For security purposes, you are advised to run the exit command to exit the system after completing your operations.

    3. Run the following command to view the value of id in the nameserver.ini configuration file. SID indicates the SAP HANA System ID specified when the database was created. Replace it based on the site requirements.

      cat /usr/sap/SID/SYS/global/hdb/custom/config/nameserver.ini

    4. Use PuTTY to log in to the server at the DR end as user root.
    5. Run the following command to prevent PuTTY from exiting due to session timeout:

      TMOUT=0

      NOTE:

      After you run this command, the session does not time out even if no operation is performed, which poses a security risk. For security purposes, you are advised to run the exit command to exit the system after completing your operations.

    6. Run the following command to open the nameserver.ini configuration file. SID indicates the SAP HANA System ID specified when the database was created. Replace it based on the site requirements.

      vi /usr/sap/SID/SYS/global/hdb/custom/config/nameserver.ini

    7. Press i to enter the editing mode and find the id parameter. Change its value to the value obtained in step 3.
    8. Press Esc to return to the command mode, type :wq!, and press Enter to save the file and exit.

  4. Replace the database SSFS verification files used at the DR end with those used at the production end (a copy sketch is provided after this list).

    /hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.DAT

    /hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.KEY

    NOTE:

    SID indicates the value of SAP HANA System ID specified during the database creation.
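For item 2 above, the following is a minimal stop sketch for a single SAP HANA instance, assuming the standard instance administrator account <sid>adm (for example, s01adm for SID S01); replace the placeholder based on the site configuration.

  # Switch to the instance administrator account and stop the database.
  su - <sid>adm -c "HDB stop"
  # Optionally confirm that no processes of this instance are still running.
  su - <sid>adm -c "HDB info"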

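For item 4 above, the following is a minimal copy sketch. It assumes the production host is reachable over SSH under the placeholder name production-host and that SID is replaced with the actual SAP HANA System ID; back up the existing files at the DR end before overwriting them.

  # Back up the current SSFS files at the DR end (run on the DR host).
  cp /hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.DAT /hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.DAT.bak
  cp /hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.KEY /hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.KEY.bak
  # Copy the SSFS files from the production host to the same paths at the DR end.
  scp root@production-host:/hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.DAT /hana/shared/SID/global/hdb/security/ssfs/
  scp root@production-host:/hana/shared/SID/global/hdb/security/ssfs/SSFS_SID.KEY /hana/shared/SID/global/hdb/security/ssfs/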
Storage End
  • Check the storage environments of the production end and the DR end.
    1. Check the remote replication between the production end and the DR end. Before using remote replication to protect objects at the production end, ensure that the remote replication between the production end and the DR end is in the normal state.
  • Check the storage configurations of the DR end.
    1. Check whether all file systems used by the database to be recovered are unmounted from the DR host.
    2. On the DeviceManager, check that secondary LUNs of the remote replications used by DR databases are not mapped to any host, host group, or mapping view.
    3. For a Linux host at the DR end, check whether all volume groups of the logical volumes used by the database to be recovered have been deactivated and exported from the DR host (see the command sketch after this list).
    4. After setting up an application environment on the DR end, delete the mapping between the host and the storage LUNs or volumes that store the data files and log files of the application.

      • If the DR storage is the S5000 series or T series V1, map the initiator of the physical DR host to the logical host of the DR storage.
      • If the storage is T series V2R2 or later or the 18000 series, the storage supports automatic creation of host and storage mappings. If the connection between the storage device and the host initiator is normal, the hosts, host groups, LUN mappings, and mapping views are automatically created on the storage device.
      • If the DR storage is T series V2 or later or the 18000 series, the DR host can belong to only one host group, and that host group can belong to only one mapping view. The remote replication secondary LUN that corresponds to the storage LUN used by the protected applications can belong to only one LUN group, and the LUN group must belong to the same mapping view as the host group.
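A minimal command sketch for sub-steps 1 and 3 above, assuming the database uses mount points under /hana and a volume group named vghana (both placeholders); replace the names with the ones actually used on site.

  # Sub-step 1: list database file systems that are still mounted on the DR host;
  # the command should print nothing if everything has been unmounted.
  mount | awk '{print $3}' | grep '^/hana'
  # Unmount any file system that is still mounted, for example:
  umount /hana/data
  # Sub-step 3 (Linux LVM): deactivate and export the volume group used by the database.
  vgchange -a n vghana
  vgexport vghana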
