Configuration Planning

This section describes how to plan InfoReplicator in a one-to-one disaster recovery scenario.

Compatibility Issues

Consider compatibility before planning InfoReplicator. Compatibility covers interoperability between the source (primary) and destination (secondary) storage systems, and between clients and storage systems. This example involves no compatibility issues because:
  • Data is replicated between two OceanStor 9000 storage systems; no heterogeneous storage systems are used.
  • Both OceanStor 9000 storage systems are newly deployed and run the same version.
  • The clients run Windows Server 2003 and Mac OS X 10.7.5, both of which are compatible with OceanStor 9000 storage systems.

Network Planning

Use the existing network and make as few changes as possible. This example uses the existing 10GE network, as shown in Figure 6-20.
Figure 6-20  Network diagram for one-to-one disaster recovery
The OceanStor 9000 storage systems in data center D and data center Y communicate and transmit data over a service network. Table 6-8 plans the front-end service and management IP addresses for the two storage systems.
Table 6-8  IP address planning

Data center D:
  • Dynamic front-end IP address pool: 192.168.1.10 to 192.168.1.15, subnet mask 255.255.255.0
  • Static front-end IP addresses: 192.168.1.16 to 192.168.1.20; DNS IP address: 192.168.1.1
  • Management IP address: 10.110.100.10, subnet mask 255.255.255.0

Data center Y:
  • Dynamic front-end IP address pool: 192.168.2.10 to 192.168.2.15, subnet mask 255.255.255.0
  • Static front-end IP addresses: 192.168.2.16 to 192.168.2.20; DNS IP address: 192.168.2.1
  • Management IP address: 10.110.101.10, subnet mask 255.255.255.0

NOTE:
The preceding IP addresses planned for InfoReplicator are examples only. Deploying an OceanStor 9000 storage system requires more detailed IP address planning. Follow the instructions in the OceanStor 9000 Planning Guide to plan IP addresses.
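
As a quick sanity check on such a plan, the following Python sketch verifies that every planned front-end address falls within its site's /24 subnet and that the dynamic pool and static range do not overlap. The PLAN structure and all names are illustrative assumptions, not an OceanStor 9000 tool or API:

```python
import ipaddress

# Hypothetical encoding of the Table 6-8 plan, for validation purposes only.
PLAN = {
    "Data center D": {
        "subnet": ipaddress.ip_network("192.168.1.0/24"),  # mask 255.255.255.0
        "dynamic_pool": ("192.168.1.10", "192.168.1.15"),
        "static_range": ("192.168.1.16", "192.168.1.20"),
        "dns": "192.168.1.1",
    },
    "Data center Y": {
        "subnet": ipaddress.ip_network("192.168.2.0/24"),
        "dynamic_pool": ("192.168.2.10", "192.168.2.15"),
        "static_range": ("192.168.2.16", "192.168.2.20"),
        "dns": "192.168.2.1",
    },
}

def expand(first: str, last: str) -> list:
    """Expand an inclusive 'first to last' range into individual addresses."""
    lo, hi = int(ipaddress.ip_address(first)), int(ipaddress.ip_address(last))
    return [ipaddress.ip_address(i) for i in range(lo, hi + 1)]

for site, cfg in PLAN.items():
    dynamic, static = expand(*cfg["dynamic_pool"]), expand(*cfg["static_range"])
    # All front-end addresses and the DNS address must sit in the site's subnet.
    assert all(a in cfg["subnet"] for a in dynamic + static), site
    assert ipaddress.ip_address(cfg["dns"]) in cfg["subnet"], site
    # The dynamic pool and the static range must not overlap.
    assert not set(dynamic) & set(static), site
    print(f"{site}: {len(dynamic)} dynamic + {len(static)} static addresses OK")
```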

Service Planning

The following plans InfoReplicator services according to the customer's requirements. If other features or services are required, plan them by following the instructions in the related OceanStor 9000 documentation.

Table 6-9 describes the service planning.
Table 6-9  General planning

License requirement
  Description: InfoReplicator is a value-added feature of the OceanStor 9000 and must be purchased and enabled for the storage systems in both data centers.
  Planned value: Activate the licenses.

Directory
  Description: Create a directory for each type of service that uses the OceanStor 9000 under the system root directory in data center D and data center Y. Share the directories with application servers and pair the directories as the source and destination of the remote replication.
  Planned value: Create the following directories in data center D and data center Y:
    • Image file directory: /Bill
    • Video file directory: /Media
    • Log file directory: /AppLog

Replication zone
  Description: A replication zone is a collection of nodes that participate in remote replication. If you want to use only some nodes of the replication zone for remote replication, configure a range of nodes in the default replication zone after the initial configuration. In this example, you are advised to use all nodes to improve replication efficiency.
  Planned value:
    • Data center D: 192.168.1.16 to 192.168.1.20
    • Data center Y: 192.168.2.16 to 192.168.2.20

Replication channel
  Description: A replication channel is created between the two data centers to establish replication links in a one-to-one scenario. Do not set a bandwidth limit for the replication channel, because most of the customer's service traffic occurs in the daytime; to avoid impact on services, start data synchronization during off-peak hours.
  Planned value: Channel_01

Pair
  Description: Create a pair for each type of service and use the pair to replicate data of that service type.
  Planned value: Create three pairs:
    • Image file pair: Pair_01
    • Video file pair: Pair_02
    • Log file pair: Pair_03
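
To make the directory, channel, and pair mapping concrete, here is a minimal Python sketch of the plan above. The ReplicationPair class and its field names are hypothetical illustrations, not part of the OceanStor 9000 software:

```python
from dataclasses import dataclass

# Hypothetical model of the Table 6-9 plan, for illustration only.
@dataclass
class ReplicationPair:
    name: str
    directory: str  # same path on both ends: source in data center D, destination in Y
    channel: str    # the single one-to-one replication channel

PAIRS = [
    ReplicationPair("Pair_01", "/Bill", "Channel_01"),    # image files
    ReplicationPair("Pair_02", "/Media", "Channel_01"),   # video files
    ReplicationPair("Pair_03", "/AppLog", "Channel_01"),  # log files
]

for pair in PAIRS:
    print(f"{pair.name}: replicate {pair.directory} (D -> Y) over {pair.channel}")
```
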
Because the image, video, and log file services have similar remote replication requirements, Table 6-10 uses the image file service as an example to describe how to plan the replication policy for a replication pair. Use this table as a reference when planning replication policies for the other two service types.
Table 6-10  Planning a replication policy

Synchronization rate
  Description: Four synchronization rates are available for background tasks of different priorities: Low, Medium, High, and Highest. The higher the synchronization rate, the faster data is replicated, and the greater the impact on system performance. The default rate is Medium.
  Planned value: Medium

Recovery policy
  Description: Two recovery policies are available:
    • Automatic: Synchronization is automatically resumed after replication is restored.
    • Manual: Synchronization must be manually started after replication is restored.
  Planned value: Automatic

Synchronization method
  Description: Three synchronization methods are available:
    • Manual: Synchronization is manually started.
    • Timed wait when synchronization begins: A timed wait starts when synchronization begins, and the next synchronization starts automatically after the waiting period ends.
    • Timed wait when synchronization ends: A timed wait starts after synchronization ends, and the next synchronization starts automatically after the waiting period ends.
  Planned value: Timed wait when synchronization ends

Timed wait
  Description: The timed wait is the interval between synchronizations and ranges from 15 to 1440 minutes.
    • If Synchronization Method is set to Manual, no timed wait is required.
    • If Synchronization Method is set to Timed wait when synchronization begins or Timed wait when synchronization ends, set a timed wait.
  Planned value: 15 minutes

Secondary directory snapshot retention period
  Description: InfoReplicator allows you to set a retention period for secondary directory snapshots.
    • If a snapshot retention period is not set, only the snapshots created for the secondary directory in the latest two synchronizations are retained; earlier snapshots are deleted to release system space.
    • If a snapshot retention period is set, expired secondary directory snapshots are automatically deleted, and snapshots within the retention period are retained.
    NOTE: At least two snapshots must be retained for the secondary directory. If a snapshot retention period is set and the secondary directory has only one snapshot or none, expired secondary directory snapshots are not deleted.
  Planned value: Disable
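
The constraints in Table 6-10 can be summarized in a short validation sketch. The ReplicationPolicy class below is a hypothetical rendering of the documented rules, not OceanStor 9000 code; the last lines illustrate how "Timed wait when synchronization ends" measures the interval from the end of the previous run:

```python
from dataclasses import dataclass
from typing import Optional

VALID_RATES = {"Low", "Medium", "High", "Highest"}
TIMED_METHODS = {"Timed wait when synchronization begins",
                 "Timed wait when synchronization ends"}

# Hypothetical encoding of the Table 6-10 rules, for illustration only.
@dataclass
class ReplicationPolicy:
    sync_rate: str = "Medium"           # default rate per the guide
    recovery_policy: str = "Automatic"
    sync_method: str = "Timed wait when synchronization ends"
    timed_wait_minutes: Optional[int] = 15

    def validate(self) -> None:
        assert self.sync_rate in VALID_RATES
        assert self.recovery_policy in {"Automatic", "Manual"}
        assert self.sync_method == "Manual" or self.sync_method in TIMED_METHODS
        if self.sync_method in TIMED_METHODS:
            # Timed methods require a wait in the documented 15-1440 minute range.
            assert self.timed_wait_minutes is not None
            assert 15 <= self.timed_wait_minutes <= 1440

policy = ReplicationPolicy()
policy.validate()

# With "Timed wait when synchronization ends", the next synchronization is
# scheduled from the end of the previous run, not from its start.
previous_end_minute = 120  # previous synchronization finished at t = 120 min
next_start = previous_end_minute + policy.timed_wait_minutes
print(f"Next synchronization starts at t = {next_start} min")
```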