Bay Layout and Connection Planning
Before installing a storage system, properly plan the connections between application servers and the storage system as well as between controller enclosures and disk enclosures.
Planning of Bay Layout
Plan bay locations and cable routing among bays in advance. A proper bay layout is critical to the normal operation of the storage devices.
To facilitate cable routing, you are advised to use the standard layout (where adjacent bays are joined). If the space is insufficient, you can also use the non-standard layout. For details, see Site Planning Guide.
Table 3-2 lists the requirements for bay layout.
| Scenario | Principle |
| --- | --- |
| Plan the location of a bay | 1. The space in front of a bay is used for device maintenance and module installation. The space behind a bay is used for cable maintenance and can be slightly smaller than the space in front. Leave at least 1200 mm between two rows of cabinets and 1000 mm between a wall and the nearest cabinet. 2. Use 15 m SAS cables or 10 m RDMA cables to connect different bays. |
Connection Planning Between System Bays and Disk Bays
If a disk bay contains SAS or smart disk enclosures, use SAS or RDMA cables, respectively, to connect the disk bay to a system bay. For the device layout inside system bays and disk bays, see Planning of Device Layout in a Bay.
- The expansion cables for connecting a disk bay to a system bay are bundled in the disk bay. One end of the cables has been connected to the disk bay before delivery. At the site, you only need to connect the other end of the cables to the system bay according to the labels on the cables.
- You can access Huawei Storage Product Networking Assistant (https://support.huawei.com/onlinetoolsweb/sna/#/home) for more networking diagrams.
- Connecting the system bay and disk bays (SAS disk enclosures)
Figure 3-2, Figure 3-3, Figure 3-4, and Figure 3-5 show how to connect SAS cables when the controller enclosure in system bay 0 is configured with four pairs of 12 Gbit/s SAS expansion modules.
Figure 3-4 Connecting the system bay and disk bays (18000 V5 series, 2 U and 4 U SAS disk enclosures)
After disk enclosures are connected and powered on, do not change their positions in the storage system. Otherwise, IDs of disk enclosures may be displayed incorrectly, service performance may deteriorate, or some storage resources may be unavailable.
- Connecting the system bay and disk bays (smart disk enclosures)
Figure 3-6 and Figure 3-7 show how to connect 100 Gbit/s RDMA cables when the controller enclosure in system bay 0 is configured with eight pairs of 100 Gbit/s RDMA expansion modules.
- Connecting the system bay and disk bays (with SAS and smart NVMe disk enclosures intermixed)
The OceanStor 18000 V5 series supports the intermixing of SAS and smart NVMe disk enclosures, as shown in Figure 3-8, Figure 3-9, and Figure 3-10.
Figure 3-8 Connecting the system bay and disk bays (18000 V5 series with 2 U SAS and smart NVMe disk enclosures intermixed)
Figure 3-9 Connecting the system bay and disk bays (18000 V5 series with 4 U SAS and smart NVMe disk enclosures intermixed)
Figure 3-10 Connecting the system bay and disk bays (18000 V5 series with 2 U SAS, 4 U SAS, and smart NVMe disk enclosures intermixed)
After disk enclosures are connected and powered on, do not change their positions in the storage system. Otherwise, IDs of disk enclosures may be displayed incorrectly, service performance may deteriorate, or some storage resources may be unavailable.
(Optional) Connection Planning Between Controller Enclosures
This section describes how to interconnect the controller enclosures when the storage system has two or more controller enclosures.
Direct-Connection Network
If your storage system has only one system bay, skip this section. In a direct-connection network, the controller enclosures of the system bays are interconnected through SO 100 Gbit/s RDMA interface modules, and the interface modules must be installed in slots IOM H3/L3 and IOM H10/L10.
After the first power-on and initialization, the controller enclosures support remote power-on in later operations. For the first power-on and initialization, you must use the power button on the controller enclosures. For details on remote power-on, see "Powering On the Storage System (Remotely on the CLI, Applicable to V500R007C70 and Later Versions)" in the Administrator Guide specific to your product model.
Switched Network (Applicable to V500R007C70SPC200 and Later)
When a storage system has two or more controller enclosures, they can be connected using switches. The SO 100 Gbit/s RDMA interface modules used for multi-controller cascading must be installed in slots IOM H3/L3 and IOM H10/L10. To facilitate future maintenance, attach labels to data switches 0 and 1 to distinguish them after the networking is complete.
When switches are used for networking, only Huawei CE8850-32CQ-EI or CE8851-32CQ8DQ-P switches are supported.
- For CE8850-32CQ-EI switches, ensure that the switch software version is V200R005C10SPC800 (with the V200R005C10SPH017 patch installed) or later. For details about how to query and upgrade the switch version, see the product documentation of CE8850-32CQ-EI switches. To obtain the documentation, log in to Huawei's technical support website (https://support.huawei.com/enterprise/), enter the product model (for example, CE8850) in the search box, and click the suggested path beneath the search box. On the documentation page of the product model, search for, browse through, and download the desired documentation.
- For CE8851-32CQ8DQ-P switches, ensure that the switch software version is V300R020C10SPC200 or later. For details about how to query and upgrade the switch version, see the product documentation of CE8851-32CQ8DQ-P switches. To obtain the documentation, log in to Huawei's technical support website (https://support.huawei.com/enterprise/), enter the product model (for example, CE8851) in the search box, and click the suggested path beneath the search box. On the documentation page of the product model, search for, browse through, and download the desired documentation. (See the version-check sketch after these notes.)
- The two switches used for controller expansion must be of the same model.
- Switches for controller expansion can only be used for scale-out networking. Do not use them for front-end service networking or other purposes.
- Switches used for controller expansion cannot be stacked or cascaded.
- Do not upgrade the firmware of the switches used for controller expansion.
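The minimum switch software versions listed above can be checked before installation. The following is a minimal sketch, assuming Huawei-style version strings of the form V<digits>R<digits>C<digits> with an optional SPC<digits> suffix; the helper names are illustrative, and patch requirements (for example, V200R005C10SPH017) must be verified separately.

```python
import re

# Minimum versions named in this section (patch requirements are checked separately).
MIN_VERSIONS = {
    "CE8850-32CQ-EI": "V200R005C10SPC800",
    "CE8851-32CQ8DQ-P": "V300R020C10SPC200",
}

def parse_version(version: str):
    """Parse a Huawei-style version string such as 'V200R005C10SPC800' into a sortable tuple.

    Assumption: versions follow the V<digits>R<digits>C<digits>[SPC<digits>] pattern.
    """
    match = re.fullmatch(r"V(\d+)R(\d+)C(\d+)(?:SPC(\d+))?", version.strip())
    if not match:
        raise ValueError(f"Unrecognized version string: {version!r}")
    v, r, c, spc = match.groups()
    return (int(v), int(r), int(c), int(spc or 0))

def meets_minimum(model: str, running_version: str) -> bool:
    """Return True if the running switch software is at or above the documented minimum."""
    return parse_version(running_version) >= parse_version(MIN_VERSIONS[model])

if __name__ == "__main__":
    print(meets_minimum("CE8850-32CQ-EI", "V200R005C10SPC600"))   # False: older than SPC800
    print(meets_minimum("CE8851-32CQ8DQ-P", "V300R020C10SPC500"))  # True: later than SPC200
```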
Figure 3-13, Figure 3-14, Figure 3-15, Figure 3-16, Figure 3-17, and Figure 3-18 show how to connect multiple controller enclosures.
After the first power-on and initialization, the controller enclosures support remote power-on in later operations. For the first power-on and initialization, you must use the power button on the controller enclosures. For details on remote power-on, see "Powering On the Storage System (Remotely on the CLI, Applicable to V500R007C70 and Later Versions)" in the Administrator Guide specific to your product model.
Follow-up Procedure
After planning ports, you must formulate port correlation tables for connections between the storage system and switch ports, as listed in Table 3-3.
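A port correlation table such as Table 3-3 can be drafted as a simple spreadsheet before cabling; the same approach applies to the correlation tables for front-end connections (Tables 3-6 and 3-7) later in this section. Below is a minimal sketch that writes such a table to CSV; the port assignments and switch labels are placeholders for illustration only.

```python
import csv

# Hypothetical cabling plan: (storage controller enclosure and port, data switch, switch port).
# Replace these placeholder entries with the ports planned for your site.
CONNECTIONS = [
    ("Controller enclosure 0, IOM H3, P0", "Data switch 0", "Port 1"),
    ("Controller enclosure 0, IOM H10, P0", "Data switch 1", "Port 1"),
    ("Controller enclosure 1, IOM H3, P0", "Data switch 0", "Port 2"),
    ("Controller enclosure 1, IOM H10, P0", "Data switch 1", "Port 2"),
]

def write_port_correlation_table(path: str) -> None:
    """Write the storage-to-switch port correlation table as a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Storage System Port", "Switch", "Switch Port"])
        writer.writerows(CONNECTIONS)

if __name__ == "__main__":
    write_port_correlation_table("port_correlation.csv")
```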
Connection Planning Between Controller Enclosures and Application Servers
A controller enclosure connects to an application server through interface modules whose port rate matches the network. The interface modules on the controller enclosure are redundantly configured. Therefore, you are advised to connect the controller enclosure to application servers in redundancy mode. If your storage system has multiple controller enclosures, connect each controller enclosure to the application server.
Context
- Application servers and storage systems support various network modes. An application server is usually connected to a storage system over multiple paths for enhanced data transfer security and reliability.
- For block services, hosts require multipathing software to select and manage the paths between application servers and storage systems. For file services, hosts do not require multipathing software.
- To connect storage systems to application servers, both iSCSI and Fibre Channel networks are supported for block services while only the NAS network is supported for file services. The principles for planning iSCSI and NAS networks between storage systems and application servers are the same.
- This section uses the block service network planning as an example. For details about how to configure services, see the Basic Storage Service Configuration Guide for Block and Basic Storage Service Configuration Guide for File specific to your product model.
On a network with multiple links for block services, multipathing software is required to select and manage the paths between application servers and storage systems. This section uses UltraPath, Huawei-developed multipathing software, as an example. For details about how to install and configure UltraPath, see the OceanStor UltraPath User Guide. For the operating systems that support UltraPath, see the Release Notes of your UltraPath version. If your operating system does not support UltraPath, use the operating system's native multipathing software (see the path-count sketch after this list).
- A 4 U controller enclosure has four quadrants: A, B, C, and D, as shown in Figure 3-19.
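Where UltraPath is not supported and the operating system's native multipathing software is used (for example, Linux DM-Multipath), the number of active paths per LUN can be compared against the connection plan after cabling. The sketch below is a minimal example that calls `multipath -ll` and applies a simple parsing heuristic; the output-format assumption and the function name are illustrative only.

```python
import subprocess

def active_paths_per_lun() -> dict:
    """Count active paths per multipath device by parsing `multipath -ll` output.

    Assumption: Linux DM-Multipath is installed, each device section starts with a
    header line containing its WWID in parentheses, and each path line contains the
    word 'active' preceded by a space.
    """
    output = subprocess.run(
        ["multipath", "-ll"], capture_output=True, text=True, check=True
    ).stdout
    counts, current = {}, None
    for line in output.splitlines():
        if "(" in line and ")" in line and not line.startswith((" ", "|", "`")):
            current = line.split()[0]          # multipath device alias from the header line
            counts[current] = 0
        elif current and " active " in f" {line} ":
            counts[current] += 1               # one active path line
    return counts

if __name__ == "__main__":
    for lun, paths in active_paths_per_lun().items():
        # For a direct connection to a two-controller enclosure, two paths are expected.
        print(f"{lun}: {paths} active path(s)")
```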
Planning Rules
To improve system reliability, comply with the following rules for cable connection (a slot-sequence lookup sketch follows this list):
- Figure 3-20 shows a 4 U controller enclosure. From left to right, the slot IDs are from IOM H0 to IOM H13 in the upper half and are from IOM L0 to IOM L13 in the lower half.
- In Fibre Channel networking:
- If 12 Gbit/s SAS interface modules are configured, the recommended slot sequence for installing front-end interface modules is as follows: IOM H0/L0 > IOM H13/L13 > IOM H1/L1 > IOM H12/L12 > IOM H2/L2 > IOM H11/L11 > IOM H4/L4 > IOM H9/L9 > IOM H3/L3 > IOM H10/L10.
- If BE 100 Gbit/s RDMA interface modules are configured, the recommended slot sequence for installing front-end interface modules is as follows: IOM H0/L0 > IOM H13/L13 > IOM H1/L1 > IOM H12/L12 > IOM H2/L2 > IOM H11/L11 > IOM H4/L4 > IOM H9/L9 > IOM H5/L5 > IOM H8/L8 > IOM H3/L3 > IOM H10/L10 > IOM H7/L7.
- In Ethernet networking:
- If 12 Gbit/s SAS interface modules are configured:
- When the storage system is equipped with two controllers, the recommended slot sequence for installing front-end interface modules is as follows: IOM H0/L0 > IOM H1/L1 > IOM H2/L2 > IOM H4/L4 > IOM H3/L3.
- When the storage system is equipped with four controllers, the recommended slot sequence for installing front-end interface modules is as follows: IOM H0/L0 > IOM H13/L13 > IOM H1/L1 > IOM H12/L12 > IOM H2/L2 > IOM H11/L11 > IOM H4/L4 > IOM H9/L9 > IOM H3/L3 > IOM H10/L10.
- If BE 100 Gbit/s RDMA interface modules are configured:
- When the storage system is equipped with two controllers, the recommended slot sequence for installing front-end interface modules is as follows: IOM H0/L0 > IOM H1/L1 > IOM H2/L2 > IOM H4/L4 > IOM H5/L5 > IOM H3/L3.
- When the storage system is equipped with four controllers, the recommended slot sequence for installing front-end interface modules is as follows: IOM H0/L0 > IOM H13/L13 > IOM H1/L1 > IOM H12/L12 > IOM H2/L2 > IOM H11/L11 > IOM H4/L4 > IOM H9/L9 > IOM H5/L5 > IOM H8/L8 > IOM H3/L3 > IOM H10/L10 > IOM H7/L7.
- If both Fibre Channel and Ethernet networking are used:
- Interface modules of the same type must be inserted in sequence.
- Insert the interface modules used for Ethernet networking in ascending order of their port rates. Then insert the interface modules used for Fibre Channel networking in ascending order of their port rates.
- The front-end port connections are symmetric between slots H0 to H13 and slots L0 to L13 on the same storage system. That is, the interface modules reside in slots with the same slot ID and use the ports in the same positions.
- In the Ethernet port bonding scenario, the member bond ports are symmetric between slots H0 to H13 and slots L0 to L13 on the same storage system. That is, the interface modules reside in slots with the same slot ID and use the ports in the same positions.
- If the rate of a GE electrical port is set to 1000 Mbit/s, the GE electrical port supports only the full-duplex mode.
- If GE electrical ports on a GE electrical interface module or network ports on an application server (or a switch) are not in autonegotiation mode, you must use crossover cables to connect the controller enclosure to the application server (or switch). Otherwise, the controller enclosure may fail to communicate with the application server (or switch).
- A GE electrical port uses a CAT5 network cable or CAT6A shielded network cable.
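The recommended slot sequences above can be kept in a small lookup table so that installation plans stay consistent across sites. The sketch below simply encodes the sequences listed in this section; the function and key names are illustrative.

```python
# Recommended front-end interface module slot sequences as listed in the planning rules
# above. "12g-sas" and "100g-rdma" refer to the back-end (BE) module type; the Fibre
# Channel sequences match the four-controller Ethernet sequences, so the lists are shared.
SAS_SEQUENCE = ["H0/L0", "H13/L13", "H1/L1", "H12/L12", "H2/L2", "H11/L11",
                "H4/L4", "H9/L9", "H3/L3", "H10/L10"]
RDMA_SEQUENCE = ["H0/L0", "H13/L13", "H1/L1", "H12/L12", "H2/L2", "H11/L11",
                 "H4/L4", "H9/L9", "H5/L5", "H8/L8", "H3/L3", "H10/L10", "H7/L7"]

FC_SEQUENCES = {"12g-sas": SAS_SEQUENCE, "100g-rdma": RDMA_SEQUENCE}
ETH_SEQUENCES = {
    ("12g-sas", 2): ["H0/L0", "H1/L1", "H2/L2", "H4/L4", "H3/L3"],
    ("12g-sas", 4): SAS_SEQUENCE,
    ("100g-rdma", 2): ["H0/L0", "H1/L1", "H2/L2", "H4/L4", "H5/L5", "H3/L3"],
    ("100g-rdma", 4): RDMA_SEQUENCE,
}

def recommended_slots(networking: str, backend: str, controllers: int, modules: int):
    """Return the recommended slots for the first `modules` front-end interface modules."""
    if networking == "fc":
        sequence = FC_SEQUENCES[backend]
    else:
        sequence = ETH_SEQUENCES[(backend, controllers)]
    return sequence[:modules]

if __name__ == "__main__":
    # Example: first four front-end modules, Ethernet networking, 12 Gbit/s SAS back end,
    # four controllers.
    print(recommended_slots("eth", "12g-sas", 4, 4))  # ['H0/L0', 'H13/L13', 'H1/L1', 'H12/L12']
```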
Connection Plans
A storage system can be connected to an application server directly or through two switches.
Other connection modes between storage systems and application servers include single-switch connection, dual-switch connection in a cluster environment, and dual-switch connection in a HyperMetro cluster environment. For details, see "Typical UltraPath Applications" in the OceanStor UltraPath User Guide.
- Direct connection (single controller enclosure)
- iSCSI network
- When the controller enclosure has two controllers, the application server is directly connected to the storage system through two paths. UltraPath automatically calculates and selects the optimal path for data transmission based on the operating status of the storage system. Figure 3-21, Figure 3-22, and Figure 3-23 show the connection diagrams.
If a controller enclosure is configured with two controllers, interface modules can be installed only in slots in quadrants A and B.
- When the controller enclosure has four controllers, the application server is directly connected to the storage system through four paths. UltraPath automatically calculates and selects the optimal path for data transmission based on the operating status of the storage system. Figure 3-24, Figure 3-25, and Figure 3-26 show the connection diagrams.
- Fibre Channel network
It is recommended that you connect the application server to the storage system through four paths, no matter whether the controller enclosure has two or four controllers. UltraPath automatically calculates and selects the optimal path for data transmission based on the operating status of the storage system. Figure 3-27 and Figure 3-28 show the connection diagrams.
Alternatively, you can choose to connect the storage system and the application server through two paths. You only need to configure two interface modules in the same slots of the H and L planes, respectively.
- Dual-switch connection (single controller enclosure)
In this mode, two switches are added to the direct connections. Each controller is connected to each switch.
The switches increase the number of ports to allow more access paths. Moreover, the switches extend the transmission distance by connecting remote application servers to the storage system. The two switches are redundant with each other to prevent a single point of failure and improve the forwarding capability.
When setting up connections through switches, note the following:
- If Ethernet switches are used, plan the network segments of the switches in advance and assign ports to each network segment (see the addressing sketch after these notes).
- If Fibre Channel switches are used, plan switch zones in advance and assign ports to each zone.
- The first and last ports on a switch are typically used to connect to other switches.
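For the Ethernet case in the notes above, per-path network segments and port addresses can be drafted in advance. Below is a minimal sketch using Python's ipaddress module; the base subnet, the number of paths, and the host offsets are placeholders to adapt to the site plan.

```python
import ipaddress

def plan_iscsi_segments(base: str = "192.168.0.0/22", paths: int = 4):
    """Split a base network into one /24 segment per iSCSI path and assign example
    addresses for the storage front-end port and the application server port.

    The base network and the host offsets (.10 for storage, .20 for the server)
    are placeholders, not values mandated by the product documentation.
    """
    segments = list(ipaddress.ip_network(base).subnets(new_prefix=24))[:paths]
    plan = []
    for index, segment in enumerate(segments):
        hosts = list(segment.hosts())
        plan.append({
            "path": index,
            "segment": str(segment),
            "storage_port_ip": str(hosts[9]),    # e.g. x.x.x.10
            "server_port_ip": str(hosts[19]),    # e.g. x.x.x.20
        })
    return plan

if __name__ == "__main__":
    for entry in plan_iscsi_segments():
        print(entry)
```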
- iSCSI network
- When the controller enclosure has two controllers, connect the storage system to each switch through two paths, as shown in Figure 3-29, Figure 3-30, and Figure 3-31.
If a controller enclosure is configured with two controllers, interface modules can be installed only in slots in quadrants A and B.
- When the controller enclosure has four controllers, connect the storage system to each switch through four paths, as shown in Figure 3-32, Figure 3-33, and Figure 3-34.
- Fibre Channel network
It is recommended that you connect the storage system to each switch through four paths, no matter whether the controller enclosure has two or four controllers. Figure 3-35 and Figure 3-36 show the connection diagrams.
Alternatively, you can choose to connect the storage system and each switch through two paths. You only need to configure two interface modules in the same slots of the H and L planes, respectively.
Figure 3-36 Connecting the storage device and the application server over Fibre Channel ports through switches
Table 3-4 provides an example of zone planning for a Fibre Channel network.
Table 3-4 Zone configuration example

| Zone Name | Application Server | Storage System | Switch |
| --- | --- | --- | --- |
| Zone 1 | HBA 0, P0 | Quadrant A, P0 | Switch 1 |
| Zone 2 | HBA 0, P0 | Quadrant B, P0 | Switch 1 |
| Zone 3 | HBA 0, P0 | Quadrant C, P0 | Switch 1 |
| Zone 4 | HBA 0, P0 | Quadrant D, P0 | Switch 1 |
| Zone 5 | HBA 1, P0 | Quadrant A, P1 | Switch 2 |
| Zone 6 | HBA 1, P0 | Quadrant B, P1 | Switch 2 |
| Zone 7 | HBA 1, P0 | Quadrant C, P1 | Switch 2 |
| Zone 8 | HBA 1, P0 | Quadrant D, P1 | Switch 2 |
- Dual-switch connection (multiple controller enclosures)
When the storage system has two controller enclosures, ensure that front-end interface modules on every enclosure are all connected to the application server. The connection between each controller enclosure and the application server is the same as that when a single controller enclosure is deployed, as shown in Figure 3-37, Figure 3-38, and Figure 3-39.
On Fibre Channel networks, you can choose to connect the storage system and each switch through two paths. You only need to configure two interface modules in the same slots of the H and L planes on each controller enclosure, respectively.
Table 3-5 provides an example of zone planning for a Fibre Channel network. A sketch that generates such a zone plan follows the table.
Table 3-5 Zone configuration example

| Zone Name | Application Server | Storage System | Switch |
| --- | --- | --- | --- |
| Zone 1 | HBA 0, P0 | Controller enclosure 0, quadrant A, P0 | Switch 1 |
| Zone 2 | HBA 0, P0 | Controller enclosure 0, quadrant B, P0 | Switch 1 |
| Zone 3 | HBA 0, P0 | Controller enclosure 0, quadrant C, P0 | Switch 1 |
| Zone 4 | HBA 0, P0 | Controller enclosure 0, quadrant D, P0 | Switch 1 |
| Zone 5 | HBA 1, P0 | Controller enclosure 0, quadrant A, P1 | Switch 2 |
| Zone 6 | HBA 1, P0 | Controller enclosure 0, quadrant B, P1 | Switch 2 |
| Zone 7 | HBA 1, P0 | Controller enclosure 0, quadrant C, P1 | Switch 2 |
| Zone 8 | HBA 1, P0 | Controller enclosure 0, quadrant D, P1 | Switch 2 |
| Zone 9 | HBA 0, P0 | Controller enclosure 1, quadrant A, P0 | Switch 1 |
| Zone 10 | HBA 0, P0 | Controller enclosure 1, quadrant B, P0 | Switch 1 |
| Zone 11 | HBA 0, P0 | Controller enclosure 1, quadrant C, P0 | Switch 1 |
| Zone 12 | HBA 0, P0 | Controller enclosure 1, quadrant D, P0 | Switch 1 |
| Zone 13 | HBA 1, P0 | Controller enclosure 1, quadrant A, P1 | Switch 2 |
| Zone 14 | HBA 1, P0 | Controller enclosure 1, quadrant B, P1 | Switch 2 |
| Zone 15 | HBA 1, P0 | Controller enclosure 1, quadrant C, P1 | Switch 2 |
| Zone 16 | HBA 1, P0 | Controller enclosure 1, quadrant D, P1 | Switch 2 |
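The zone plans in Tables 3-4 and 3-5 follow a regular pattern: HBA 0 is zoned with the quadrant P0 ports through switch 1, and HBA 1 with the quadrant P1 ports through switch 2, repeated for each controller enclosure. The sketch below reproduces that pattern for any number of controller enclosures; it only generates a planning table and does not configure any switch.

```python
def zone_plan(controller_enclosures: int = 2):
    """Generate zone-plan rows following the pattern of Tables 3-4 and 3-5.

    For each controller enclosure, HBA 0/P0 is zoned with the quadrant P0 ports
    through switch 1, and HBA 1/P0 with the quadrant P1 ports through switch 2.
    """
    rows, zone_id = [], 1
    for enclosure in range(controller_enclosures):
        for hba, port, switch in ((0, "P0", 1), (1, "P1", 2)):
            for quadrant in "ABCD":
                rows.append({
                    "zone": f"Zone {zone_id}",
                    "application_server": f"HBA {hba}, P0",
                    "storage_system": f"Controller enclosure {enclosure}, quadrant {quadrant}, {port}",
                    "switch": f"Switch {switch}",
                })
                zone_id += 1
    return rows

if __name__ == "__main__":
    for row in zone_plan(2):   # reproduces the 16 zones of Table 3-5
        print(row)
```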
Follow-up Procedure
After planning ports, you must formulate port correlation tables for connections between the application server and switch ports, and for connections between the storage system and switch ports. Table 3-6 shows port correlation between the application server and switches, and Table 3-7 shows port correlation between the storage system and switches.
(Optional) Active-Active Network Planning
HyperMetro is Huawei's active-active storage feature. In a HyperMetro deployment, the two data centers (DCs) back each other up, and both DCs run services. For details about HyperMetro planning and configuration, see the HyperMetro Feature Guide for Block or HyperMetro Feature Guide for File specific to your product model. If HyperMetro is not involved, skip this section.