Guide for Interworking Between HUAWEI CloudFabric Solution and Redhat OpenStack

Hybrid Overlay

  1. Underlay baseline networking

    The underlay network uses the mature spine-leaf architecture to provide scale-out capability and high access performance. Leaf and spine nodes are fully meshed, so multiple ECMP paths are available to ensure high availability of the network.

    • Spine node: In most cases, the spine layer consists of two large-capacity switches, whose Ethernet interfaces connected to leaf nodes are configured as routed interfaces to build an IP fabric.
    • Leaf node: Each leaf node is connected to every spine node. Leaf nodes form the L2/L3 boundary of the underlay network, and their Ethernet interfaces connected to spine nodes are configured as routed interfaces. Because a large number of ToR switches are deployed as leaf nodes, Zero Touch Provisioning (ZTP) is recommended to simplify deployment.
      Figure 2-4 Spine-Leaf architecture
    • Servers can be dual-homed to server leaf nodes through M-LAG, as shown in Figure 2-5.
      Figure 2-5 Servers dual-homed to server leaf nodes through M-LAG
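    The underlay described above can be sketched as a CloudEngine-style CLI fragment. This is a rough illustration only: interface names, IP addresses, and group IDs are placeholders, and exact commands vary by switch model and software version.

```
# Spine: leaf-facing Ethernet interface configured as a routed (Layer 3) interface
interface 10GE1/0/1
 undo portswitch                      # switch the port from Layer 2 to Layer 3 mode
 ip address 10.1.1.1 255.255.255.252
#
# Leaf: M-LAG for dual-homed servers (both M-LAG member switches carry a matching configuration)
dfs-group 1
 source ip-address 10.2.1.1           # placeholder address of this M-LAG member switch
#
interface Eth-Trunk0
 trunkport 10GE 1/0/48
 mode lacp-static
 peer-link 1                          # peer-link between the two M-LAG member switches
#
interface Eth-Trunk10
 mode lacp-static
 dfs-group 1 m-lag 2                  # server-facing link aggregation group
```

    With this pattern, a server's two uplinks terminate on two different leaf switches but behave as a single link aggregation group, matching the dual-homing shown in Figure 2-5.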

    Device Role       Recommended Model
    -----------       -----------------
    Server Leaf
    Service Leaf
    Border Leaf
    Fabric Gateway
  2. Introduction to the Overlay
    • Control plane principles
      1. vSwitches and hardware ToR switches act as NVE nodes.
      2. vSwitches and hardware ToR switches act as distributed routers.
      3. The spine node acts as the route reflector (RR); the NVE ToR switches and the controller act as clients. The RR and clients belong to the same AS, set up IBGP peer relationships, and use the EVPN address family.
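    The RR role described in step 3 might be configured on the spine along the following lines (a sketch in CloudEngine-style CLI; the AS number and peer loopback addresses are placeholders, and commands vary by software version):

```
# Spine acting as the BGP EVPN route reflector
bgp 65001
 peer 10.10.10.2 as-number 65001      # NVE ToR switch loopback (IBGP, same AS)
 peer 10.10.10.3 as-number 65001      # controller (IBGP, same AS)
 #
 l2vpn-family evpn                    # EVPN address family
  peer 10.10.10.2 enable
  peer 10.10.10.2 reflect-client      # reflect EVPN routes among IBGP clients
  peer 10.10.10.3 enable
  peer 10.10.10.3 reflect-client
```

    Making the spine an RR avoids a full IBGP mesh: each NVE node and the controller peer only with the spine, which reflects EVPN routes between them.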
    • Entry synchronization processing in the hybrid overlay:
      1. When a VM goes online, the Agile Controller-DCN imports the MAC address, IP address, and NVE IP address of the VM into its EVPN instance.
      2. The Agile Controller-DCN advertises the MAC/IP route of the online VM to the physical NVE nodes through BGP EVPN, enabling Layer 2 and Layer 3 traffic forwarding from the physical NVE nodes to the virtual NVE node.
      3. When a bare-metal (BM) server goes online, the physical switch imports the MAC address and NVE IP address of the BM server into its EVPN instance.
      4. BGP EVPN on the physical switch advertises the MAC route of the online BM server to the Agile Controller-DCN and the other physical switches, enabling Layer 2 and Layer 3 traffic forwarding.
      5. The Agile Controller-DCN delivers flow tables to the vSwitches.
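    On the hardware side of this exchange, a ToR switch acting as an NVE node might carry a configuration along these lines (a sketch; the VNI, bridge-domain ID, and source address are placeholders):

```
# ToR acting as a hardware NVE node
bridge-domain 10
 vxlan vni 10010                             # map the bridge domain to a VXLAN network identifier
#
interface Nve1
 source 10.10.10.2                           # local VTEP address, typically a loopback
 vni 10010 head-end peer-list protocol bgp   # learn remote VTEPs dynamically via BGP EVPN
```

    With the head-end peer list populated by BGP EVPN, the switch discovers remote VTEPs, including the virtual NVE nodes behind the controller, without static peer configuration.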
Figure 2-6 Overlay
Updated: 2019-03-25

Document ID: EDOC1100072313
