
FusionStorage V100R006C30 Block Storage Service Disaster Recovery Feature Guide 03

System Requirements

Before you deploy the DR service for FusionStorage Block, ensure that the system meets the following requirements.

Compatibility

For the compatibility requirements of the FusionStorage Block DR service, see Compatibility.

Hardware Requirements

Table 1-1  Hardware requirements

DR node (independent deployment)/Storage node (converged deployment)

  • DR nodes can be deployed on VMs or physical servers.
  • A DR cluster supports 3 to 64 DR nodes.
  • For details about supported operating systems (OSs), see Compatibility.
  • Requirements for x86 CPUs:
    • Converged deployment (SSD cache + HDDs): each CPU must provide at least 6 vCPUs.
    • Converged deployment (SSDs only): each CPU must provide at least 18 vCPUs.
    • Independent deployment: each CPU must provide at least 20 vCPUs.
  • Memory:
    • Independent deployment: Memory used by each server = Memory used by the OS + Memory used by the FSA + Memory used by the DR service (see the sizing sketch below)

      For the memory used by the OS, see the documents provided by the OS vendor.

      The FSA uses 8 GB of memory.

      When the DR service is independently deployed, each server requires at least 39.5 GB of memory; 64 GB is recommended.

    • Converged deployment: Memory used by each server = Memory used by the storage node + Memory used by the DR service

      When 10GE or 25GE NICs are used on the storage plane, the DR service uses 18 GB of memory.

      When 56 Gb IB or 100 Gb IB NICs are used on the storage plane, the DR service uses 22.5 GB of memory.

  • Metadata space:
    • Space used by the CCDB: at least 11 GB; 27 GB recommended.
    • Space used by ZK: at least 27 GB; 43 GB recommended.
    • Metadata can be deployed on independent disks, on physical partitions of non-OS disks, or on logical partitions of OS disks (not recommended). The metadata space must reside on SSD or SAS disks.

      If independent disks are used, ensure that the servers contain independent disks that meet the requirements. If partitions are used, ensure that the required partitions have been created before you create the control cluster.
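The memory formulas above can be turned into a quick sizing check. The following Python sketch is illustrative only: the function and constant names are invented here, not part of any FusionStorage tool, and it simply encodes the figures from this table (8 GB for the FSA, 18 GB or 22.5 GB for the DR service depending on the storage-plane NIC type, and the 39.5 GB minimum for independent deployment).

# Illustrative sizing helper based on the figures in Table 1-1; the names
# below are hypothetical and are not part of any FusionStorage tool or API.

FSA_MEMORY_GB = 8.0                                  # memory used by the FSA
DR_MEMORY_GB = {"10GE": 18.0, "25GE": 18.0,          # DR service memory in
                "56G_IB": 22.5, "100G_IB": 22.5}     # converged deployment

def independent_memory_gb(os_memory_gb: float, dr_memory_gb: float) -> float:
    """Memory used by each server = OS + FSA + DR service.

    Independent deployment requires at least 39.5 GB; 64 GB is recommended.
    """
    return max(os_memory_gb + FSA_MEMORY_GB + dr_memory_gb, 39.5)

def converged_memory_gb(storage_node_memory_gb: float, nic: str) -> float:
    """Memory used by each server = storage node + DR service."""
    return storage_node_memory_gb + DR_MEMORY_GB[nic]

# Example: converged deployment on a 25GE storage plane with a storage node
# that already uses 96 GB of memory.
print(converged_memory_gb(96.0, "25GE"))  # 114.0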

Quorum server

Quorum servers can be deployed on VMs or physical servers.
  • A quorum server supports up to 8 HyperMetro domains.
  • Reserve at least 4 GB of memory and 10 GB of storage space for a quorum server.
  • A quorum server supports up to four IP addresses.
  • A TaiShan node cannot be configured as a quorum server.
  • Supported OSs:
    • SUSE Linux Enterprise Server 11 SP1 to SP4, 64-bit
    • Red Hat Enterprise Linux 6.0 to 6.7, 64-bit
    • EulerOS 2.0 (SP3), 64-bit
NOTE:

HyperMetro requires the use of quorum servers, but remote replication does not.

Network

Network type
  • Within a site: the DR service uses the same network as the storage plane. 10GE, 25GE, 56 Gb IB, and 100 Gb IB networks are supported.
  • Between sites:
    • Between HyperMetro sites: 10GE (TCP/IP) and 25GE (TCP/IP) networks are supported.
    • Between remote replication sites: GE (TCP/IP), 10GE (TCP/IP), and 25GE (TCP/IP) networks are supported.
    • Between a site and the quorum server: GE and 10GE (TCP/IP) networks are supported.
Requirements on links
  • Requirements of HyperMetro on links:
    • Between the two sites: the storage systems at the two sites are connected using optical fibers. The maximum supported round-trip time (RTT) is 10 ms, and the typical RTT is less than or equal to 1 ms.

      Bandwidth estimated from actual services: HyperMetro bandwidth = Host write bandwidth × 1.75 (see the link-check sketch after this list)

    • Between a site and the quorum server: the maximum supported RTT is 200 ms, and the typical RTT is less than or equal to 50 ms. The minimum supported bandwidth is 2 Mbit/s, and the typical bandwidth is greater than or equal to 10 Mbit/s.
  • Requirements of remote replication on links:

    Between the two sites: the connection distance is unlimited. The minimum bidirectional connection bandwidth is 10 Mbit/s, and the average write bandwidth of replication volumes must not exceed the remote replication bandwidth.
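A planned link can be sanity-checked against these limits before deployment. The Python sketch below is hypothetical (the function names are placeholders, not a product utility); it only encodes the limits stated above: a 10 ms maximum RTT and an estimated bandwidth of host write bandwidth × 1.75 for HyperMetro, and a 200 ms maximum RTT with a 2 Mbit/s minimum bandwidth for the quorum link.

# Hypothetical pre-deployment link check based on the limits above.

def check_hypermetro_link(rtt_ms: float, link_mbps: float,
                          host_write_mbps: float) -> list[str]:
    """Validate an inter-site HyperMetro link."""
    issues = []
    if rtt_ms > 10:                      # maximum supported RTT: 10 ms
        issues.append(f"RTT {rtt_ms} ms exceeds the 10 ms maximum")
    required = host_write_mbps * 1.75    # HyperMetro bandwidth estimate
    if link_mbps < required:
        issues.append(f"bandwidth {link_mbps} Mbit/s < required {required} Mbit/s")
    return issues

def check_quorum_link(rtt_ms: float, link_mbps: float) -> list[str]:
    """Validate the site-to-quorum-server link."""
    issues = []
    if rtt_ms > 200:                     # maximum supported RTT: 200 ms
        issues.append(f"RTT {rtt_ms} ms exceeds the 200 ms maximum")
    if link_mbps < 2:                    # minimum supported bandwidth: 2 Mbit/s
        issues.append(f"bandwidth {link_mbps} Mbit/s below the 2 Mbit/s minimum")
    return issues

# Example: a 1 ms, 2000 Mbit/s inter-site link carrying 1000 Mbit/s of host writes.
print(check_hypermetro_link(rtt_ms=1, link_mbps=2000, host_write_mbps=1000))  # []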

Replication plane network configuration requirements:
  • The replication plane uses two physical network ports, each configured with its own IP address, to share load. Do not bond the ports. This gives the replication plane load balancing and link redundancy.
  • Each IP address at the local end must be able to communicate with both IP addresses at the remote end (see the reachability sketch after the note below).
  • If the two IP addresses at the local end belong to different VLANs, communication between the local and remote ends must traverse a Layer 3 network. Otherwise, the DR function may become unavailable after intermittent network disconnections.
  • If the two IP addresses of the replication plane are on the same network segment, policy-based routing must be configured.
NOTE:

You are advised to use security gateways to encrypt links on the replication plane between the two sites to ensure data transmission security between storage systems.
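The full-mesh reachability rule (each local replication IP address must reach both remote IP addresses) can be verified before the replication pair is configured. The sketch below is a generic TCP probe, not a FusionStorage command: the addresses and port are placeholders, and it assumes some service is listening on the probed port at the remote end.

import socket

# Placeholder addresses and port; replace with the actual replication-plane
# IP addresses of both sites and a port that is reachable between them.
LOCAL_IPS = ["192.168.10.11", "192.168.20.11"]
REMOTE_IPS = ["192.168.10.21", "192.168.20.21"]
PORT = 22

def reachable(local_ip: str, remote_ip: str, port: int, timeout: float = 3.0) -> bool:
    """Try a TCP connection from a specific local IP to a remote IP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.bind((local_ip, 0))            # force the source IP of the probe
        try:
            s.connect((remote_ip, port))
            return True
        except OSError:
            return False

# Each local IP must reach both remote IPs (2 x 2 = 4 paths in total).
for lip in LOCAL_IPS:
    for rip in REMOTE_IPS:
        status = "ok" if reachable(lip, rip, PORT) else "FAILED"
        print(f"{lip} -> {rip}: {status}")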
