Standard Networking
This section describes the standard networking for the active-active DC solution, including both single-DC and cross-DC deployment modes.
Single-DC Deployment
Figure 3-3 illustrates the standard networking for single-DC deployment.
- Scenario
The storage systems are deployed in two equipment rooms in the same campus.
- Networking principle
- The two equipment rooms are in different fault domains. Uninterruptible power supplies (UPSs) must be configured separately for the quorum server and network devices.
- Hosts are deployed in a cluster.
- Hosts are physically and logically connected to both storage systems.
- Each equipment room uses two switches for HyperMetro replication. The switches are connected in pairs.
If the storage systems are in the same equipment room, UPSs must be configured for the storage systems, quorum server, and quorum network devices separately.
- Network planning
HyperMetro ensures the reliability of storage systems by using redundant links among all of its networks. For details, see Table 3-1.
Table 3-1 Network planning
Host-to-storage network
All hosts can be interconnected to form a cluster.
Network type
The network can be an 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 32 Gbit/s Fibre Channel, GE, 10GE, 25GE, 40GE, or 100GE network.
Networking mode
- A full mesh network is used between hosts and storage systems (see the connectivity sketch after this list).
- For OceanStor Dorado 3000 V6, Dorado 5000 V6, and Dorado 6000 V6, each host is physically and logically connected to every controller on both storage systems.
- For OceanStor Dorado 8000 V6 and Dorado 18000 V6, each host is physically and logically connected to each quadrant on both storage systems.
- A host must connect to both storage systems using the same type of network.
- Dual-switch networking must be used.
- The HyperMetro replication network, host-to-storage network, and quorum network must be physically isolated and use different ports.
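The full-mesh requirement above lends itself to a simple completeness check. The following is a minimal sketch, not tied to any real storage CLI or API: the host names, controller identifiers, and the observed_links set (for example, gathered from your multipathing software) are all illustrative.

```python
from itertools import product

# Hypothetical inventory: two cluster hosts, four controllers per system.
hosts = ["host01", "host02"]
controllers = {
    "storageA": ["0A", "0B", "0C", "0D"],
    "storageB": ["0A", "0B", "0C", "0D"],
}

# Paths actually observed (illustrative; in practice collected from
# multipathing software on each host).
observed_links = {
    ("host01", "storageA", "0A"),
    ("host01", "storageA", "0B"),
}

# Full mesh: every host must reach every controller on both storage systems.
required = {
    (host, system, ctrl)
    for host, (system, ctrls) in product(hosts, controllers.items())
    for ctrl in ctrls
}
for host, system, ctrl in sorted(required - observed_links):
    print(f"missing path: {host} -> {system} controller {ctrl}")
```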
HyperMetro replication network
A network between the storage systems to synchronize heartbeat information and data.
NOTE: Storage systems set link priorities for transferring different types of data. Heartbeat information has a higher priority than data synchronization.
Network type
- The network can be an 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 32 Gbit/s Fibre Channel, 10GE, 25GE, 40GE, or 100GE network.
NOTE: If you use an IP network, it can be an L2 or L3 network. For better synchronization performance, you are advised to use an L2 network.
- Bandwidth: ≥ peak service bandwidth (total read and write bandwidth on both storage systems). At least 2 Gbit/s is required. (See the worked example below.)
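As a worked example of this sizing rule (the peak figures are invented for illustration):

```python
# Replication bandwidth must cover the combined peak read and write
# bandwidth of both storage systems, with a 2 Gbit/s floor.
def required_replication_bw_gbit(peaks_gbit: list[float]) -> float:
    return max(sum(peaks_gbit), 2.0)

# Site A: 1.2 Gbit/s peak write + 0.8 Gbit/s peak read
# Site B: 0.9 Gbit/s peak write + 0.6 Gbit/s peak read
print(required_replication_bw_gbit([1.2, 0.8, 0.9, 0.6]))  # 3.5 Gbit/s
print(required_replication_bw_gbit([0.5, 0.3, 0.4, 0.2]))  # 2.0 (floor applies)
```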
- FastWrite
- It is recommended that you enable FastWrite when the Fibre Channel network between the two sites spans more than 10 km. To enable FastWrite, run the change port fc fc_port_id=XXX fast_write_enable=yes command on both the local and remote storage systems. Then run the show port general port_id=XXX command. If Fast Write Enable in the command output is Yes, FastWrite has been enabled successfully. (A verification sketch follows this list.)
- On an IP replication network, FastWrite is enabled by default.
- FastWrite on the storage system and FastWrite on the switch cannot be used together. (Brocade switches call this function Fast Write; Cisco switches call it Write Acceleration.)
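When many ports are involved, the verification step can be scripted. The sketch below is an assumption-laden outline: run_cli() is a placeholder for however you reach the storage CLI (for example, an SSH session), and the "Field : Value" output layout is assumed; only the show port general command and the Fast Write Enable field come from the procedure above.

```python
def run_cli(command: str) -> str:
    """Placeholder for your transport to the storage CLI (e.g., SSH).
    Returns canned sample output here so the sketch runs standalone."""
    return "Port ID : CTE0.A.IOM0.P0\nFast Write Enable : Yes\n"

def fast_write_enabled(port_id: str) -> bool:
    """Check the Fast Write Enable field in 'show port general' output."""
    output = run_cli(f"show port general port_id={port_id}")
    for line in output.splitlines():
        if "Fast Write Enable" in line:
            # Assumes a "Field : Value" layout in the command output.
            return line.split(":", 1)[-1].strip().lower() == "yes"
    return False

# Hypothetical replication port IDs; substitute your own.
for port in ["CTE0.A.IOM0.P0", "CTE0.B.IOM0.P0"]:
    state = "enabled" if fast_write_enabled(port) else "NOT enabled"
    print(f"{port}: FastWrite {state}")
```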
Networking mode
- On OceanStor Dorado 3000, 5000, and 6000 V6, each controller on every storage system must have at least two redundant physical links. On OceanStor Dorado 8000 and 18000 V6, each quadrant must have at least two redundant physical links.
- A full-mesh network is recommended for the connections between the two storage systems (see the pairing sketch after this list).
- For OceanStor Dorado 3000 V6, Dorado 5000 V6, and Dorado 6000 V6, physical or logical links are established between every controller on each local controller enclosure and every controller on each remote controller enclosure. For example, controller A on the local controller enclosure 0 must have links to every controller on each remote controller enclosure.
- For OceanStor Dorado 8000 V6 and Dorado 18000 V6, physical or logical links are established between every quadrant on each local controller enclosure and the same quadrant on each remote controller enclosure. For example, quadrant A on the local controller enclosure 0 must have links to quadrant A on every remote controller enclosure.
- The HyperMetro replication network, host-to-storage network, and quorum network must be physically isolated and use different ports. The ports used to establish replication links between both storage systems cannot be bond ports.
- The HyperMetro replication network and quorum network must not share switches.
- The storage systems do not support network address translation (NAT). If a NAT device is deployed on the network, you must configure bidirectional NAT on the NAT device to translate both the source and destination addresses of the HyperMetro replication network.
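To make the two pairing rules concrete, here is an illustrative sketch; the enclosure and controller names are invented for the example.

```python
from itertools import product

def replication_pairs(local: list[str], remote: list[str],
                      per_quadrant: bool) -> list[tuple[str, str]]:
    """Return (local, remote) replication link pairs.

    per_quadrant=False: full mesh, every local controller to every remote
    controller (the Dorado 3000/5000/6000 V6 rule).
    per_quadrant=True: each quadrant links only to the same quadrant on
    the remote enclosures (the Dorado 8000/18000 V6 rule).
    """
    pairs = product(local, remote)
    if per_quadrant:
        # The quadrant letter is the suffix, e.g., "CTE0.A" -> "A".
        return [(l, r) for l, r in pairs if l.split(".")[-1] == r.split(".")[-1]]
    return list(pairs)

local = ["CTE0.A", "CTE0.B", "CTE0.C", "CTE0.D"]
remote = ["CTE0.A", "CTE0.B", "CTE0.C", "CTE0.D"]
print(len(replication_pairs(local, remote, per_quadrant=False)))  # 16 links
print(replication_pairs(local, remote, per_quadrant=True))        # A-A, B-B, C-C, D-D
```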
Quorum network
If communication between the storage systems is interrupted or a storage system malfunctions, the quorum server determines which storage system continues providing services (see the arbitration sketch after the networking mode list below).
NOTE: The quorum server resides on a dedicated network that is linked to both storage systems.
Network type
- Quorum links must be established on GE or 10GE networks; Fibre Channel networks are not supported.
- Quorum links support IPv4 and IPv6 addresses.
- Network quality and bandwidth requirements
- Latency: RTT ≤ 50 ms
- Bandwidth: ≥ 10 Mbit/s
Networking mode
- Independent front-end or management ports can be used as quorum ports, but maintenance ports cannot.
NOTE:
- You are advised to use independent front-end ports as quorum ports.
- If you use independent GE ports as quorum ports, you must install dedicated GE interface modules if your product model is one of the following: OceanStor Dorado 5000 V6, Dorado 6000 V6, Dorado 8000 V6, and Dorado 18000 V6.
- An independent quorum server must be deployed.
- For OceanStor Dorado 3000 V6, Dorado 5000 V6, and Dorado 6000 V6, each controller of the storage systems must have quorum links. For OceanStor Dorado 8000 V6 and Dorado 18000 V6, each quadrant must have quorum links.
- It is recommended that you configure the quorum ports on the storage systems on different network segments to prevent arbitration failures caused by network segment faults.
- The quorum server and storage systems can be connected over Layer 2 or Layer 3 networks, but the quorum network does not support Virtual Router Redundancy Protocol (VRRP).
- The networks between each storage system and the quorum server must be on different network segments, and the active and standby ports on the quorum server must also be on different network segments.
- The HyperMetro replication network, host-to-storage network, and quorum network must be physically isolated and use different ports. In addition, the HyperMetro replication network and quorum network must not share switches.
- The storage systems do not support network address translation (NAT). If a NAT device is deployed on the network, you must configure bidirectional NAT on the NAT device to translate both the source and destination addresses of the HyperMetro replication network.
NOTE: For details about the quorum network, see Reference Quorum Networks.
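To make the arbitration role concrete, the following is a highly simplified decision sketch. It is not the product's actual algorithm; it only illustrates why every controller needs a quorum link and why losing all quorum paths must stop I/O to prevent split-brain.

```python
def surviving_site(a_reaches_quorum: bool, b_reaches_quorum: bool,
                   preferred: str = "A") -> str | None:
    """Which site keeps serving I/O after the replication links fail."""
    if a_reaches_quorum and b_reaches_quorum:
        return preferred            # the quorum grants the win to one site only
    if a_reaches_quorum:
        return "A"
    if b_reaches_quorum:
        return "B"
    return None                     # no quorum path: stop I/O to avoid split-brain

print(surviving_site(True, True))    # A (preferred site wins)
print(surviving_site(False, True))   # B
print(surviving_site(False, False))  # None
```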
Cross-DC Deployment
Figure 3-4 illustrates the standard networking for cross-DC deployment.
- Scenario
The storage systems are deployed in two different DCs up to 300 km apart.
- Networking principle
- The two DCs and the quorum site must be in different fault domains.
A fault domain is a set of devices that share a possible point of failure, such as the power system, cooling system, gateway, network, and impact from natural disasters.
- Hosts are deployed in a cluster.
- Hosts are physically and logically connected to both storage systems.
- Each DC uses two switches for HyperMetro replication. The switches are connected in pairs.
- Network planning
HyperMetro ensures the reliability of storage systems by using redundant links among all of its networks. For details, see Table 3-2.
Table 3-2 Network planning
Host-to-storage network
All hosts can be interconnected across DCs to form a cluster.
Network type
The network can be an 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 32 Gbit/s Fibre Channel, GE, 10GE, 25GE, 40GE, or 100GE network.
Networking mode
- A full mesh network is used between hosts and storage systems.
- For OceanStor Dorado 3000 V6, Dorado 5000 V6, and Dorado 6000 V6, each host is physically and logically connected to every controller on both storage systems.
- For OceanStor Dorado 8000 V6 and Dorado 18000 V6, each host is physically and logically connected to each quadrant on both storage systems.
- A host must connect to both storage systems using the same type of network.
- Dual-switch networking must be used.
- The HyperMetro replication network, host-to-storage network, and quorum network must be physically isolated and use different ports.
HyperMetro replication network
A network between the storage systems in the two DCs to synchronize heartbeat information and data.
NOTE: Storage systems set link priorities for transferring different types of data. Heartbeat information has a higher priority than data synchronization.
Network type
- The network can be an 8 Gbit/s Fibre Channel, 16 Gbit/s Fibre Channel, 32 Gbit/s Fibre Channel, 10GE, 25GE, 40GE, or 100GE network.
- Network quality and bandwidth requirements
- Bandwidth: ≥ peak service bandwidth (total read and write bandwidth on both storage systems). At least 2 Gbit/s is required.
- Latency: RTT < 10 ms (distance < 300 km)
NOTE: In practice, the latency requirement is determined by the application layer, and the active-active DC solution must meet the most stringent application requirement. For Oracle RAC, SQL Server, and DB2 applications, the RTT must be less than 1 ms (with a distance of less than 100 km). For VMware vSphere applications, the RTT must be less than 10 ms. (A worked propagation example follows this list.)
- No jitter or packet loss
- BER: ≤ 10⁻¹²
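A quick sanity check of these figures: light in optical fiber travels at roughly 200,000 km/s (about two-thirds of c), so propagation alone costs about 1 ms of RTT per 100 km. That is why the 1 ms application limit pairs with a 100 km distance cap.

```python
FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in optical fiber

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, excluding device latency."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

print(propagation_rtt_ms(100))  # ~1.0 ms -> the Oracle RAC/SQL Server/DB2 limit
print(propagation_rtt_ms(300))  # ~3.0 ms, within the 10 ms vSphere limit
```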
- FastWrite
- It is recommended that you enable FastWrite when the Fibre Channel network between the two sites spans more than 10 km. To enable FastWrite, run the change port fc fc_port_id=XXX fast_write_enable=yes command on both the local and remote storage systems. Then run the show port general port_id=XXX command. If Fast Write Enable in the command output is Yes, FastWrite has been enabled successfully.
- On an IP replication network, FastWrite is enabled by default.
- FastWrite on the storage system and FastWrite on the switch cannot be used together. (Brocade switches call this function Fast Write; Cisco switches call it Write Acceleration.)
Networking mode
- On OceanStor Dorado 3000, 5000, and 6000 V6, each controller on every storage system must have at least two redundant physical links. On OceanStor Dorado 8000 and 18000 V6, each quadrant must have at least two redundant physical links.
- A full-mesh network is recommended for the connections between two storage systems.
- For OceanStor Dorado 3000 V6, Dorado 5000 V6, and Dorado 6000 V6, physical or logical links are established between every controller on each local controller enclosure and every controller on each remote controller enclosure. For example, controller A on the local controller enclosure 0 must have links to every controller on each remote controller enclosure.
- For OceanStor Dorado 8000 V6 and Dorado 18000 V6, physical or logical links are established between every quadrant on each local controller enclosure and the same quadrant on each remote controller enclosure. For example, quadrant A on the local controller enclosure 0 must have links to quadrant A on every remote controller enclosure.
- The HyperMetro replication network, host-to-storage network, and quorum network must be physically isolated and use different ports. The ports used to establish replication links between both storage systems cannot be bond ports.
- The HyperMetro replication network and quorum network must not share switches.
- The storage systems do not support network address translation (NAT). If a NAT device is deployed on the network, you must configure bidirectional NAT on the NAT device to translate both the source and destination addresses of the HyperMetro replication network.
Network between DCs
The two DCs are interconnected to carry the same services and back each other up.
Network type
The network must use switches and bare fibers.
Networking mode
For Fibre Channel networks:
- The two DCs can be connected using switches and bare fibers if they are less than 25 km apart. Ensure that the storage and application layers each have at least two pairs (four fibers) of bare fibers for heartbeat interconnection in the cluster.
- If the two DCs are 25 km or more apart, use DWDM devices to interconnect them.
NOTE: The direct transmission distance of Fibre Channel switches depends on the optical modules. You can find the value on the label of your optical module or in its documentation.
For IP networks:
- The two DCs can be connected using switches and bare fibers if they are less than 80 km apart. If core switches are used, ensure that at least two pairs (four fibers) of fibers are connected to the core switches for HyperMetro replication between the storage systems and for heartbeat interconnection at the application layer.
- If the two DCs are 80 km or more apart, use DWDM devices to interconnect them. (A helper encoding these thresholds follows this list.)
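A small helper encoding these distance thresholds (25 km for Fibre Channel, 80 km for IP). The thresholds come from this table; the function itself is only an illustration.

```python
def interconnect_type(network: str, distance_km: float) -> str:
    """Pick the inter-DC transport per the thresholds above."""
    limits = {"fc": 25, "ip": 80}
    if network not in limits:
        raise ValueError("network must be 'fc' or 'ip'")
    return "switches + bare fibers" if distance_km < limits[network] else "DWDM"

print(interconnect_type("fc", 20))  # switches + bare fibers
print(interconnect_type("fc", 25))  # DWDM (25 km or more)
print(interconnect_type("ip", 60))  # switches + bare fibers
```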
Quorum network
If communication between the storage systems is interrupted or a storage system malfunctions, the quorum server determines which storage system continues providing services.
NOTE: The quorum server resides on a dedicated network that is linked to both storage systems.
Network type
- Quorum links must be established on GE or 10GE networks; Fibre Channel networks are not supported.
- The requirements for the quorum ports on the storage systems are the same as those for the deployment in a single DC.
- Quorum links support IPv4 and IPv6 addresses.
- Network quality and bandwidth requirements
- Latency: RTT ≤ 50 ms
- Bandwidth: ≥ 10 Mbit/s
Networking mode
- An independent quorum server must be deployed.
- You are advised to deploy the quorum server at a dedicated quorum site.
NOTE:
- If there is no dedicated quorum site, you are advised to deploy the quorum server at the preferred site and configure UPS protection for the quorum server and quorum network devices.
- If the quorum server is deployed at the preferred site, the preferred site wins the arbitration and continues providing services if the network between the DCs fails. If the quorum server is deployed at the non-preferred site, the non-preferred site provides services after such a failure, defeating the purpose of setting a preferred site. (The sketch below illustrates this trade-off.)
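The trade-off in this note can be summarized in a short sketch, under the assumption that the failed inter-DC network also carries the remote site's quorum path:

```python
def winner_after_dc_link_failure(quorum_location: str) -> str:
    """quorum_location: 'dedicated', 'preferred', or 'non-preferred' site."""
    if quorum_location == "dedicated":
        # Both sites still reach the quorum server; the preferred site wins.
        return "preferred site"
    # Only the site co-located with the quorum server still reaches it.
    return f"{quorum_location} site"

for placement in ("dedicated", "preferred", "non-preferred"):
    print(f"quorum at {placement}: {winner_after_dc_link_failure(placement)} keeps serving")
```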
- For OceanStor Dorado 3000 V6, Dorado 5000 V6, and Dorado 6000 V6, each controller of the storage systems must have quorum links. For OceanStor Dorado 8000 V6 and Dorado 18000 V6, each quadrant of the storage systems must have quorum links.
NOTE: The requirements for the management ports that can connect to the quorum server are the same as those for the single-DC deployment.
- You are advised to connect the quorum ports on each storage system to different switches and configure different network segments to prevent arbitration failures caused by network segment faults.
- The quorum server can be a physical or a virtual server. If a virtual server is used, you are advised to deploy VMware vSphere/FusionSphere FT or HA to achieve high availability.
- The quorum server can be deployed on Huawei Enterprise Cloud (HEC). When HEC is used, apply for a VM with the same specifications as those required for the quorum server (including the CPU, memory, disk, and OS). In addition, apply for 2 Mbit/s of dedicated bandwidth and one elastic IP address for each storage system.
- The quorum server and storage systems can be connected by Layer 2 or Layer 3 networks, but the quorum network between the two DCs does not support Virtual Router Redundancy Protocol (VRRP).
- The networks between each storage system and the quorum server must be on different network segments, and the active and standby ports on the quorum server must also be on different network segments.
- The HyperMetro replication network, host-to-storage network, and quorum network must be physically isolated and use different ports. In addition, the HyperMetro replication network and quorum network must not share switches.
- The storage systems do not support network address translation (NAT). If a NAT device is deployed on the network, you must configure bidirectional NAT on the NAT device to translate both the source and destination addresses of the HyperMetro replication network.
NOTE: For details about the quorum network, see Reference Quorum Networks.