OceanStor 2600 V3 Storage System V300R005 HyperMetro Feature Guide 06

"This document describes the implementation principles and application scenarios of theHyperMetro feature. Also, it explains how to configure and manage HyperMetro."
Network Planning

This section describes network planning for the active-active data center solution, including standard, non-recommended, and unsupported networking modes.

Standard Networking

This section describes the standard networking for the active-active data center solution, including both single-DC and cross-DC deployments.

Single-DC Deployment

Figure 2-1 illustrates the standard networking for deployment in a single DC.

Figure 2-1 Standard networking for single-DC deployment
  • Scenario

The storage systems are deployed in two equipment rooms on the same campus, generally within 10 km of each other.

  • Networking principle
    • The two equipment rooms are in different fault domains. Uninterruptible power supplies (UPSs) must be configured for the quorum server and network devices separately.
    • The hosts are deployed in a cluster.
    • Hosts are physically and logically connected to both storage systems.
    • The switches are connected in pairs between the equipment rooms.
      NOTE:

      If the storage systems are in the same equipment room, UPSs must be configured for the storage systems, quorum server, and quorum network devices separately.

  • Network planning

    HyperMetro ensures the reliability of storage arrays by using redundant links among all of its networks. For the network details, see Table 2-2.

    Table 2-2 Network planning

    Storage-to-host network

    Description: All of the hosts can be interconnected to form a cluster.

    Network type: Supports 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 10GE, and GE networks.

    Networking mode:

    • A fully interconnected network is used between hosts and storage systems, that is, each host is physically and logically connected to every controller on both storage systems (the sketch after this list enumerates the required paths).
    • A host must connect to both storage systems using the same type of network.
    • The HyperMetro replication network, storage-to-host network, and quorum network must be physically isolated and use different ports.
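
    As a planning aid, the following minimal Python sketch enumerates every host-to-controller path that the full-mesh rule above requires. The host, storage system, and controller names are hypothetical placeholders, not values from this guide.

        from itertools import product

        # Hypothetical inventory (illustrative names only): two clustered
        # hosts and the controllers of both storage systems.
        hosts = ["host01", "host02"]
        controllers = [
            ("storage_A", "controller_A"), ("storage_A", "controller_B"),
            ("storage_B", "controller_A"), ("storage_B", "controller_B"),
        ]

        # Full mesh: every host needs a physical and logical path to every
        # controller on both storage systems.
        for host, (array, ctrl) in product(hosts, controllers):
            print(f"required path: {host} -> {array}/{ctrl}")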

    HyperMetro replication network

    Description: A network between the storage systems to synchronize heartbeat information and data.

    NOTE:

    The storage system sets link priorities for transferring different types of data. Heartbeat information has a higher priority than data synchronization.

    Network type:

    • Supports 10GE, 8 Gbit/s Fibre Channel, and 16 Gbit/s Fibre Channel networks.
      NOTE:

      If this is a 10GE network, you can use L2 or L3 devices. For better synchronization performance, L2 devices are recommended.

    • Bandwidth: ≥ peak service bandwidth (total read and write bandwidth on both sides); a worked example follows this network's networking mode list.
    • The HyperMetro replication network, storage-to-host network, and quorum network must be physically isolated and use different ports.

    Networking mode:

    • Each controller on every storage system has at least two redundant physical links.
    • Controllers are connected in parallel between the storage systems: controller A of the local storage system connects to controller A of the remote storage system, and controller B connects to controller B.
    • The HyperMetro replication network does not support network address translation (NAT).
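
    As a worked example of the bandwidth rule in the network type list above, the following sketch sums hypothetical peak read and write bandwidth on both sides. The figures are placeholders, not measured values.

        # Hypothetical peak service bandwidth per site, in Gbit/s
        # (placeholder figures, not measured values).
        site_a = {"read": 4.0, "write": 2.0}
        site_b = {"read": 3.0, "write": 1.5}

        # Rule from this network's type list: replication bandwidth must be
        # at least the total peak read and write bandwidth on both sides.
        required_gbit_s = sum(site_a.values()) + sum(site_b.values())
        print(f"replication links must provide >= {required_gbit_s:.1f} Gbit/s")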

    Quorum network

    Description: If communication between the storage systems is interrupted or a storage system malfunctions, the quorum server determines which storage system is accessible.

    NOTE:

    The quorum server resides on a dedicated network that is linked to both storage systems.

    Network type:

    • Quorum links support GE and 10GE networks, but not a Fibre Channel network.
    • Quorum ports on storage systems: independent service ports can be used as quorum ports, but management and maintenance ports cannot.
    • Quorum links support IPv4 and IPv6 addresses.
    • Network quality and bandwidth requirements:
      • Latency: RTT ≤ 50 ms
      • Bandwidth: ≥ 10 Mbit/s
    • The HyperMetro replication network, storage-to-host network, and quorum network must be physically isolated and use different ports.

    Networking mode:

    • An independent quorum server must be deployed.
    • A GE or 10GE port on each controller of both storage systems is connected to a service network port on the quorum server, ensuring that the quorum server is connected to every controller of each storage system.
    • The quorum ports on the storage systems must be on different network segments to prevent an arbitration failure caused by a network segment fault (see the validation sketch at the end of this table).
    • The quorum server and storage systems can be connected by Layer 2 or Layer 3 networks, but do not support Virtual Router Redundancy Protocol (VRRP).
    • The networks between both storage systems and the quorum server, and the active and standby ports on the quorum server must be on different network segments.
    • The quorum network does not support NAT.
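
    The quorum requirements above (RTT ≤ 50 ms, bandwidth ≥ 10 Mbit/s, and quorum ports on different network segments) can be sanity-checked before deployment. The following is a minimal Python sketch under stated assumptions: the IP addresses and the quorum server's listening port are placeholders, and the RTT probe assumes a reachable TCP service on the quorum server.

        import socket
        import time
        from ipaddress import ip_interface

        # Hypothetical quorum port addresses on one storage system
        # (placeholders; substitute your planned addresses).
        quorum_ports = ["192.168.10.11/24", "192.168.20.11/24"]

        # Rule: quorum ports must be on different network segments.
        segments = {ip_interface(p).network for p in quorum_ports}
        assert len(segments) == len(quorum_ports), "quorum ports share a segment"

        def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
            """Rough RTT estimate from the time of a TCP connect."""
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=timeout):
                pass
            return (time.monotonic() - start) * 1000.0

        # Placeholder quorum server address and listening port.
        rtt = tcp_rtt_ms("192.168.30.100", 30002)
        assert rtt <= 50.0, f"quorum RTT {rtt:.1f} ms exceeds the 50 ms limit"
        print(f"segments OK, RTT {rtt:.1f} ms")
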
Cross-DC Deployment

Figure 2-2 illustrates the standard networking for cross-DC deployment.

Figure 2-2 Standard networking for cross-DC deployment
  • Scenario

    The storage systems are deployed in different data centers that are dozens of kilometers to 300 km apart.

  • Networking principle
    • The two data centers and the quorum site must be in different fault domains for fault isolation.
      NOTE:

      A fault domain is a set of devices that share a fault source such as the power system, cooling system, gateway, network, and impact from natural disasters.

    • The hosts are deployed in a cluster.
    • Hosts are physically and logically connected to both storage systems.
    • The switches are connected in pairs between the data centers.
    • The replication links between the data centers must be carried over bare fibers. If the data centers are interconnected by an IP network, DWDM devices must be deployed when they are more than 80 km apart. If they are interconnected by a Fibre Channel network, DWDM devices must be deployed when they are more than 25 km apart.
      NOTE:

      The direct transmission distance of the Fibre Channel switches depends on the optical modules. You can query the value from the label on the optical module or in the optical module documentation.

  • Network planning

    HyperMetro ensures the reliability of storage arrays by using redundant links among all of its networks. For the network details, see Table 2-3.

    Table 2-3 Network planning

    Storage-to-host network

    Description: All of the hosts can be interconnected across data centers to form a cluster.

    Network type: Supports 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 10GE, and GE networks.

    Networking mode:

    • A fully interconnected network is used between hosts and storage systems, that is, each host is physically and logically connected to every controller on both storage systems.
    • A host must connect to both storage systems using the same type of network.
    • Dual-switch networking must be used.
    • The HyperMetro replication network, storage-to-host network, and quorum network must be physically isolated and use different ports.

    HyperMetro replication network

    Description: A network between the storage systems in the two data centers to synchronize heartbeat information and data.

    NOTE:

    The storage system sets link priorities for transferring different types of data. Heartbeat information has a higher priority than data synchronization.

    Network type:

    • Supports 10GE, 8 Gbit/s Fibre Channel, and 16 Gbit/s Fibre Channel networks.
      NOTE:

      If this is a 10GE network, you can use L2 or L3 devices. For better synchronization performance, L2 devices are recommended.

    • Network quality and bandwidth requirements:
      • Bandwidth: ≥ peak service bandwidth (total read and write bandwidth on both sides)
      • Latency: RTT < 10 ms (distance < 300 km)
        NOTE:

        In practice, the required latency is determined by the application layer, and the active-active solution must meet the strictest of these requirements. For Oracle RAC, SQL Server, and DB2 applications, the RTT must be less than 1 ms (a distance of less than 100 km). For VMware vSphere applications, the RTT must be less than 10 ms. A theoretical latency estimate is sketched after this network's networking mode list.

      • No jitter or packet loss
      • BER: ≤ 10⁻¹²
    • FastWrite:

      If the HyperMetro replication network spans over 10 km, you are advised to enable this function. If the HyperMetro replication network spans over 25 km, you must enable this function.

      NOTE:

      The 8 Gbit/s Fibre Channel, 10GE, and SmartIO (in 10GE mode) interface modules support FastWrite. The 16 Gbit/s Fibre Channel and SmartIO (in 8 Gbit/s or 16 Gbit/s Fibre Channel mode) interface modules do not support FastWrite.

    • The HyperMetro replication network, storage-to-host network, and quorum network must be physically isolated and use different ports.

    Networking mode:

    • Each controller on every storage system in both data centers has at least two redundant physical links.
    • Controllers are connected in parallel between the storage systems: controller A of the local storage system connects to controller A of the remote storage system, and controller B connects to controller B.
    • The HyperMetro replication network does not support network address translation (NAT).
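
    The distance figures in the latency note above follow from light propagation in fiber: with a refractive index of about 1.47, the round-trip time is roughly 1 ms per 100 km of fiber. The following sketch applies this rule of thumb to the application thresholds quoted in the note; it ignores switch and DWDM equipment latency, so real RTT will be somewhat higher.

        # Speed of light in vacuum (km/s) and a typical single-mode fiber
        # refractive index (assumption; actual fiber varies slightly).
        C_VACUUM_KM_S = 299_792.458
        FIBER_INDEX = 1.47

        def fiber_rtt_ms(distance_km: float) -> float:
            """Theoretical fiber round-trip time, excluding equipment latency."""
            one_way_s = distance_km * FIBER_INDEX / C_VACUUM_KM_S
            return 2.0 * one_way_s * 1000.0

        # Thresholds from the note above: Oracle RAC/SQL Server/DB2 need
        # RTT < 1 ms; VMware vSphere needs RTT < 10 ms.
        for distance in (100, 300):
            print(f"{distance} km -> ~{fiber_rtt_ms(distance):.2f} ms RTT")
        # 100 km -> ~0.98 ms (close to the 1 ms database limit)
        # 300 km -> ~2.94 ms (within the 10 ms vSphere limit)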

    Network between data centers

    Description: Both data centers are interconnected to carry the same services and back up each other.

    Network type: The network uses bare fibers.

    Networking mode:

    • For Fibre Channel networks:
      • The two data centers can be directly connected using bare fibers if they are within 25 km of each other. Ensure that the storage and application layers each have at least two pairs (four fibers) of bare fibers for heartbeat interconnection in the cluster.
      • If the data centers are 25 km or more apart, use DWDM devices to build the interconnection network between the DCs.
    • For IP networks:
      • The two data centers can be directly connected using bare fibers if they are within 80 km of each other. If core switches are deployed, ensure that at least two pairs (four fibers) of bare fibers are connected to the core switches for HyperMetro mirroring at the storage layer and heartbeat interconnection at the application layer.
      • If the data centers are 80 km or more apart, use DWDM devices to interconnect them. (These thresholds are summarized in the sketch that follows.)
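
    The bare-fiber and DWDM thresholds above, together with the FastWrite guidance from the HyperMetro replication network row, reduce to a simple decision rule. The following sketch encodes only the thresholds stated in this guide; the sample distances are illustrative.

        def interconnect_plan(distance_km: float, link_type: str) -> list[str]:
            """Apply the distance thresholds from this guide to one DC pair."""
            bare_fiber_limit_km = {"fc": 25.0, "ip": 80.0}[link_type]
            plan = []
            if distance_km < bare_fiber_limit_km:
                plan.append("direct bare-fiber connection is sufficient")
            else:
                plan.append("deploy DWDM devices between the data centers")
            # FastWrite guidance from the HyperMetro replication network row:
            if distance_km > 25.0:
                plan.append("FastWrite must be enabled")
            elif distance_km > 10.0:
                plan.append("enabling FastWrite is recommended")
            return plan

        print(interconnect_plan(60.0, "ip"))   # bare fiber OK; FastWrite required
        print(interconnect_plan(120.0, "fc"))  # DWDM required; FastWrite required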

    Quorum network

    Description: If communication between the storage systems in data centers A and B is interrupted or a storage system malfunctions, the quorum server determines which storage system is accessible.

    NOTE:

    The quorum server resides on a dedicated network that is linked to both storage systems.

    Network type:

    • Quorum links support GE and 10GE networks, but not a Fibre Channel network.
    • The requirements for the quorum ports on the storage systems are the same as those for the deployment in a single data center.
    • Quorum links support IPv4 and IPv6 addresses.
    • Network quality and bandwidth requirements:
      • Latency: RTT ≤ 50 ms
      • Bandwidth: ≥ 10 Mbit/s
    • The HyperMetro replication network, storage-to-host network, and quorum network must be physically isolated and use different ports.

    Networking mode:

    • An independent quorum server must be deployed.
    • You are advised to deploy the quorum server at a dedicated site.
      NOTE:

      If there is no dedicated quorum site, it is recommended that you deploy the quorum server at the preferred site and configure UPS protection for the quorum server and quorum network devices.

      If the quorum server is deployed at the preferred site, the preferred site wins the arbitration and continues to provide services in the event of a failure on the network between the data centers. If the quorum server is deployed at the non-preferred site and the inter-DC network fails, services will be provided by the non-preferred site, and the intent of setting a preferred site cannot be met.

    • A dual-switch network is required. At least one GE or 10GE port on every controller of each storage system in both data centers must be connected to the service network ports on the quorum server.
    • It is recommended that you connect the quorum ports on each storage system to different switches and configure different network segments to prevent arbitration failure caused by a network segment fault.
    • The quorum server can be a physical or a virtual server. If a virtual server is used, you are advised to deploy VMware vSphere/FusionSphere FT or HA to achieve high availability.
    • Huawei Enterprise Cloud (HEC) can be used as a quorum server. In this case, you need to apply for a VM (including the CPU, memory, disk, and OS) with the same specifications as a quorum server. In addition, you need to apply for 2 Mbit/s of dedicated bandwidth and one elastic IP address for each storage system.
    • The quorum server and storage systems can be connected by Layer 2 or Layer 3 networks, but the quorum network between the two data centers does not support Virtual Router Redundancy Protocol (VRRP).
    • The networks between both storage systems and the quorum server, and the active and standby ports on the quorum server must be on different network segments.
    • The quorum network does not support NAT.

Non-recommended Networking

This section describes the non-recommended networking for the active-active data center solution.

NOTE:

If your network uses neither the standard topology nor any of the following topologies, contact Huawei technical support engineers for risk evaluation.

Quorum Server Deployed at Either Site

The following figure is an example of deploying the quorum server at DC A.

Figure 2-3 Networking diagram

  • Scenario

    The quorum server is deployed at DC A or DC B. No UPS is configured for the quorum server or quorum network devices.

  • Risk

    Services will be interrupted if the DC where the quorum server is deployed is down due to a power failure or an unexpected disaster.

No Quorum Server

The following uses cross-DC deployment as an example. DC A is the preferred site and DC B is the non-preferred site. Figure 2-4 illustrates the networking diagram.

NOTE:

This scenario is also applicable to deployment in the local data center.

Figure 2-4 Networking diagram
  • Scenario

    No quorum server is deployed and the storage systems use static priorities.

  • Risk

    Services will be interrupted if the preferred site fails.

Both Storage Systems Deployed in the Same Equipment Room

Figure 2-5 shows the networking diagram.

Figure 2-5 Networking diagram
  • Scenario

    Both storage systems are deployed in the same equipment room.

  • Risk

    Services will be interrupted if the equipment room is down due to a power failure or an unexpected disaster.

Unsupported Networking

This section describes the networking modes that are not supported by the active-active data center solution.

NOTICE:

Do not use the networking modes in this section when deploying the active-active data center solution.

Port Sharing

The following uses cross-DC deployment as an example. Figure 2-6 illustrates the networking diagram.

NOTE:

This scenario is also applicable to deployment in the local data center.

Figure 2-6 Networking diagram
  • Scenario

    The storage-to-host service network, HyperMetro replication network between the data centers, and quorum network share ports on the storage system.

  • Risk
    • If the service and replication networks share a port, both the service and replication links will be down if this port fails. Services may be interrupted.
    • If the service and quorum networks share a port, the quorum link will be down if this port fails.
    • If the replication and quorum networks share a port, both the replication and quorum links will be down if this port fails. Services may be interrupted. (A port-isolation check is sketched below.)
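
    A configuration review can catch the port-sharing mistakes listed above before deployment. The following minimal Python sketch checks that the three networks use pairwise disjoint port sets; the port names are hypothetical placeholders.

        # Hypothetical port assignments per network (placeholder port names).
        ports = {
            "service":     {"CTE0.A.IOM0.P0", "CTE0.B.IOM0.P0"},
            "replication": {"CTE0.A.IOM1.P0", "CTE0.B.IOM1.P0"},
            "quorum":      {"CTE0.A.IOM2.P0", "CTE0.B.IOM2.P0"},
        }

        # Rule: the service, replication, and quorum networks must be
        # physically isolated and must not share any port.
        names = list(ports)
        for i, first in enumerate(names):
            for second in names[i + 1:]:
                shared = ports[first] & ports[second]
                assert not shared, f"{first} and {second} share ports: {shared}"
        print("no shared ports; the three networks are isolated")
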
Cloned Quorum Server VM

In this networking mode, the quorum server is deployed on a VM, which is cloned from another existing quorum server VM. Figure 2-7 shows the networking diagram.

NOTICE:

When the quorum server is deployed on a VM, the VM's system and data disks must not be created on the HyperMetro LUNs.

Figure 2-7 Networking diagram
  • Scenario

    The quorum server is a cloned VM of another existing quorum server VM.

  • Risk

    The cloned VM has the same information as the source VM, which will cause quorum server conflict.
