
FusionServer G5500 Server G560 V5 Compute Node Maintenance and Service Guide 03

Logical Structure

The RAID controller card provides eight SAS ports, two of which connect to the HDD backplane. The RAID controller card determines the RAID properties of the drives; the drive backplane provides only physical channels and does not process drive data.

The G560 V5 supports the following storage capability options:

  • An LSI SAS3008 or Avago SAS3408 RAID controller card can be configured to support two 2.5-inch local SAS/SATA/M.2 drives and six local SAS/SATA/NVMe drives, supporting RAID 0 and 1.
  • An Avago SAS3508 controller card can be configured to support two 2.5-inch local SAS/SATA/M.2 drives, six local SAS/SATA/NVMe drives, and eight external 3.5-inch SAS/SATA drives, supporting RAID 0, 1, 5, 6, 10, 50, and 60.
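The practical difference between the supported RAID levels is how much raw drive capacity survives as usable space. As a minimal sketch (assuming equal-size drives, and assuming two sub-arrays per span for RAID 50/60, which the guide does not specify):

```python
def usable_capacity(drive_count, drive_tb, level):
    """Approximate usable capacity (TB) for the RAID levels the
    SAS3508 supports. Assumes equal-size drives; RAID 50/60 are
    assumed to stripe across two sub-arrays (an illustrative choice,
    not stated in the guide)."""
    if level == 0:                       # striping, no redundancy
        return drive_count * drive_tb
    if level in (1, 10):                 # mirroring halves capacity
        return drive_count * drive_tb / 2
    if level == 5:                       # one drive's worth of parity
        return (drive_count - 1) * drive_tb
    if level == 6:                       # two drives' worth of parity
        return (drive_count - 2) * drive_tb
    if level == 50:                      # two RAID 5 spans, striped
        span = drive_count // 2
        return 2 * (span - 1) * drive_tb
    if level == 60:                      # two RAID 6 spans, striped
        span = drive_count // 2
        return 2 * (span - 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Eight external 3.5-inch drives of 4 TB each on the SAS3508:
print(usable_capacity(8, 4, 5))    # RAID 5  -> 28 TB usable
print(usable_capacity(8, 4, 60))   # RAID 60 -> 16 TB usable
```

The drive sizes here are placeholders; substitute the actual configured drives when planning capacity.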

The G560 V5 supports the following external ports:

  • Two USB 3.0 ports are provided on the front panel and one USB 3.0 port is provided on the mainboard.
  • Two SFP+ Ethernet ports are provided by the 10GE NIC (Intel X722) that is integrated into the PCH and connected to the management module through the chassis backplane.
  • One DB15 VGA port is provided on the front panel by the iBMC built-in video card. This port is used for the local maintenance of the compute node.

Logical Structure of the G560 V5 and GP608

In this logical topology, the two PCIe switches in the GP608 are cascaded, and PCIe slots 1 to 8 belong to the same processor root port, supporting direct data transmission between a maximum of eight GPU cards with minimal latency. This topology is ideal for machine learning. In this topology, I/O slot 2 is unavailable.

NOTE:

If an InfiniBand NIC is used to support a GPU cluster, I/O slot 4 is recommended.

Figure 13-1 Logical topology 1

In this logical topology, the two PCIe switches of the GP608 are connected to the two processors respectively and provide higher uplink bandwidth for PCIe slots 1 to 8. This topology is ideal for HPC and public cloud scenarios. This topology supports direct data transmission between a maximum of four GPU cards.

NOTE:

If InfiniBand NICs are used to support GPU clusters, I/O slots 2 and 4 are recommended.

Figure 13-2 Logical topology 2
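The trade-off between the two GP608 topologies above can be sketched as a small model: GPUs can exchange data directly (peer to peer) only when their slots sit under the same root port. The assignment of slots 1 to 4 and 5 to 8 to the two switches in the balanced case is an assumption for illustration; the guide does not state which slots attach to which switch.

```python
# Hypothetical model of the two GP608 topologies: cascaded switches
# (Figure 13-1) put all eight slots under one root port; one switch
# per processor (Figure 13-2) splits them into two groups of four.

def p2p_groups(topology):
    """Return groups of PCIe slots (1-8) sharing a root port."""
    if topology == "cascaded":        # Figure 13-1
        return [list(range(1, 9))]
    if topology == "balanced":        # Figure 13-2; slot split assumed
        return [[1, 2, 3, 4], [5, 6, 7, 8]]
    raise ValueError(f"unknown topology: {topology}")

def max_direct_gpus(topology):
    """Largest set of GPUs with direct P2P data transmission."""
    return max(len(group) for group in p2p_groups(topology))

print(max_direct_gpus("cascaded"))   # 8, matching topology 1
print(max_direct_gpus("balanced"))   # 4, matching topology 2
```

This mirrors the text: the cascaded topology favors GPU-to-GPU traffic (machine learning), while the balanced topology trades P2P reach for higher uplink bandwidth to the processors.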

Logical Structure of the G560 V5 and GS608

In this logical topology, each of the four PCIe switches in the GS608 has one PCIe x16 uplink port for connecting to the processors, providing larger uplink bandwidth for the eight GPUs.

Figure 13-3 Logical topology 1
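The "larger uplink bandwidth" can be put in rough numbers. Assuming PCIe Gen3 links (8 GT/s per lane with 128b/130b encoding; the guide does not state the PCIe generation), each x16 uplink carries about 15.75 GB/s per direction:

```python
# Back-of-envelope uplink bandwidth, assuming PCIe Gen3
# (8 GT/s per lane, 128b/130b line encoding).
GT_PER_LANE = 8e9        # raw transfer rate per lane, transfers/s
ENCODING = 128 / 130     # usable fraction after line encoding
LANES = 16

per_lane_GBps = GT_PER_LANE * ENCODING / 8 / 1e9  # bits -> gigabytes
x16_GBps = per_lane_GBps * LANES

print(round(x16_GBps, 2))       # ~15.75 GB/s per x16 uplink
print(round(4 * x16_GBps, 2))   # ~63.02 GB/s across four uplinks
```

With four independent x16 uplinks, the GS608 avoids the single shared root port of a cascaded design, which is where the larger aggregate uplink bandwidth for the eight GPUs comes from.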

In this logical topology, four PCIe switches in the GS608 are cascaded. That is, PCIe switch 1 and PCIe switch 2 are cascaded, and PCIe switch 3 and PCIe switch 4 are cascaded.

NOTE:

If an InfiniBand NIC is used to support a GPU cluster, you are advised to install it in I/O slot 1, 2, 3, or 4.

Figure 13-4 Logical topology 2
Updated: 2018-12-14

Document ID: EDOC1100031432
