Configuration Guide - VXLAN

CloudEngine 12800 and 12800E V200R003C00

This document describes the configurations of VXLAN.
All-Active VXLAN Gateway

NOTE:

This function is supported when the underlay network is an IPv4 network, and is not supported when the underlay network is an IPv6 network.

Background

Multiple gateways are often deployed on a VXLAN network to improve reliability. When one gateway fails, traffic can be rapidly switched to another gateway. This prevents service interruptions.

VRRP can be used to improve reliability. In VRRP networking, however, only the active gateway forwards traffic and provides the gateway service; the standby gateway takes over only after the active gateway fails. This switchover mechanism leaves gateway resources underused and slows down convergence. A solution is therefore needed that guarantees reliability while allowing multiple gateways to forward traffic simultaneously, making full use of gateway resources.

Figure 5-1 Centralized all-active VXLAN gateway

Centralized all-active VXLAN gateways can be deployed to meet the preceding requirements. In typical spine-leaf networking, the same VTEP address is configured on all spine switches so that they appear as a single VTEP, and every spine switch is configured as a Layer 3 gateway. Regardless of which spine switch receives traffic, that switch can provide the gateway service and correctly forward packets to the next-hop device. Similarly, regardless of which spine switch receives external traffic, the traffic can be correctly forwarded to hosts. As shown in Figure 5-1, Spine1 and Spine2 are configured as centralized all-active VXLAN gateways so that they forward traffic simultaneously. This function improves device resource usage and convergence performance.
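
The shared-VTEP idea above can be sketched in CloudEngine CLI. This is an illustrative fragment only: the loopback number, VNI, peer address, and IP addresses are assumed values, and the exact commands may differ across software versions.

    # Identical on both Spine1 and Spine2, so that the two spine switches
    # appear to the leaf switches as a single vVTEP.
    interface loopback 0
     ip address 2.2.2.2 255.255.255.255
    interface nve 1
     source 2.2.2.2
     vni 5000 head-end peer-list 3.3.3.3
    # The Layer 3 gateway address for the tenant subnet is likewise
    # configured identically on both spine switches.
    interface vbdif 10
     ip address 10.1.1.1 255.255.255.0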

When centralized all-active VXLAN gateways are deployed, the spine switches function as the Layer 3 gateway and generate entries for all tenants whose packets are forwarded at Layer 3. However, the entry space on a spine switch is limited and may become a bottleneck as the number of VMs or servers grows.

Concepts

In Figure 5-1, the concepts relevant to centralized all-active VXLAN gateway are described as follows:
  • Spine

    Layer 3 gateway on a VXLAN network. The spine switch decapsulates VXLAN packets and forwards them at Layer 3, allowing servers or VMs on different subnets to communicate with each other, and allowing physical servers and VMs to communicate with the external network.

  • Leaf

    Layer 2 access device on a VXLAN network. The leaf switch connects to a physical server or VM, encapsulates packets from physical servers and VMs into VXLAN packets, and transmits them on the VXLAN network.

  • vVTEP

    In all-active VXLAN gateway networking, when the same VTEP address is configured on multiple gateways, the gateways form an all-active gateway group and function as a single virtual VTEP (vVTEP).

All-Active Gateway Packet Forwarding

Either IPv4 or IPv6 addresses can be configured for hosts. This means that a VXLAN overlay network can be an IPv4 or IPv6 network. Figure 5-2 shows an IPv4 overlay network.

  • When the network communication between devices is normal:
    • Leaf1 learns two gateway routes through a routing protocol and selects the optimal path according to route selection rules. When the costs of the two paths are the same, the equal-cost routes provide link backup and traffic load balancing.
    • Spine1 and Spine2 advertise Layer 3 VXLAN gateway routes to the IP core network, which advertises routes from other networks to Spine1 and Spine2.
    Figure 5-2 Links are normal

  • If the link between Spine1 and Leaf1 fails or Spine1 fails:
    • The routes learned from Spine1 are deleted on Leaf1. According to route selection rules, Leaf1 then prefers the routes from Spine2 to reach the IP core network.
      NOTE:
      If there are multiple links between Leaf1 and Spine1, the routes from Spine1 are not deleted when only one of those links fails. Leaf1 can therefore still learn two gateway routes through a routing protocol and select the optimal path according to route selection rules.
    • The network segment routes advertised by Spine1 are deleted because of the link or device fault. According to route selection rules, the routes from Spine2 are preferred for forwarding traffic from the IP core network to the server, so packets no longer reach Spine1.
    Figure 5-3 A downlink of a spine switch or the spine switch fails

  • When the link between Spine1 and the IP core network fails:
    • A Monitor Link group can be configured so that the downlink interface tracks the uplink interface status. On Spine1, the routes to the IP core network are deleted; on Leaf1, the routes learned from Spine1 are also deleted.

      Later on, traffic from Leaf1 reaches the IP core network through Spine2. Similarly, Spine2 sends traffic from the IP core network to Leaf1.

    • The Open Programmability System (OPS) mechanism can also be used to run Python scripts on devices. When Spine1 detects an uplink fault, it automatically lowers the priority of the advertised VTEP host route so that all traffic is switched to Spine2. After the uplink recovers, Spine1 automatically restores the priority of the advertised VTEP host route so that traffic is again load balanced between Spine1 and Spine2. This allows for smooth fault recovery.
    Figure 5-4 An uplink of a spine switch fails
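
The Monitor Link association described for the uplink-failure case can be sketched as follows. The group ID and interface numbers are assumptions, and the fragment is illustrative rather than a verified procedure.

    # On Spine1: bind the downlink (to Leaf1) to the uplink (to the IP
    # core) so that an uplink failure also brings down the downlink,
    # causing Leaf1 to withdraw the routes learned from Spine1.
    monitor-link group 1
     port 10ge 1/0/1 uplink
     port 10ge 1/0/2 downlink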

ARP/ND Entry Synchronization

In an all-active gateway scenario where the overlay network is an IPv4 network, multiple gateways advertise routes of the same subnet to the upper-layer routing device so that the upper-layer routing device has equal-cost routes to the specified network segment. Traffic from the upper-layer routing device is sent to a gateway through an equal-cost route. If there is no ARP entry of the destination host on the gateway, ARP packets are flooded and traffic is discarded.

To ensure correct traffic forwarding, all-active gateways must synchronize ARP entries. That is, when any host in the subnet where a gateway is deployed goes online, all gateways learn the ARP entry of the host. The device provides the following modes to implement ARP entry synchronization:
  • Controller mode: Dynamic ARP learning is disabled on the VBDIF interfaces of the all-active gateways, and the controller centrally manages ARP entries. When a host goes online, the controller obtains the host's ARP entry and delivers it to all the all-active gateways.
  • Single-node mode: The gateways learn ARP entries themselves, without depending on a controller. The working mechanism is as follows:
    1. A user specifies the IP addresses of all neighbors of a gateway in a DFS (Dynamic Fabric Service) group, so that the gateway establishes neighbor relationships with devices with specified IP addresses.

    2. After neighbor relationships are established, the gateways synchronize ARP entries from each other.

      Two methods are available for synchronizing ARP entries:

      • Real-time synchronization: After receiving an ARP request packet, a gateway synchronizes the new entry or the change to an existing entry to other gateways to ensure ARP entry consistency on the gateways.
      • Batch synchronization: A gateway that has been running for some time synchronizes all of its ARP entries in a batch to a newly added gateway or to a gateway that has recovered from a fault.
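
In single-node mode, the DFS group setup described in the steps above can be sketched roughly as follows. The group ID and addresses are assumed values, and the exact command set may vary by version.

      # On each all-active gateway: a DFS group listing the peer
      # gateways' addresses establishes the neighbor relationships
      # over which ARP (and ND) entries are synchronized.
      dfs-group 1
       source ip 10.10.10.1
       active-active-gateway
        peer 10.10.10.2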

In an all-active gateway scenario where the overlay network is an IPv6 network, multiple gateways advertise routes of the same subnet to the upper-layer routing device so that the upper-layer routing device has equal-cost routes to the specified network segment. Traffic from the upper-layer routing device is sent to a gateway through an equal-cost route. If there is no ND entry of the destination host on the gateway, ND packets are flooded and traffic is discarded.

To ensure correct traffic forwarding, all-active gateways must synchronize ND entries. That is, when any host in the subnet where a gateway is deployed goes online, all gateways learn the ND entry of the host. The working mechanism is as follows:
  1. A user specifies the IP addresses of all neighbors of a gateway in a DFS group, so that the gateway establishes neighbor relationships with devices with specified IP addresses.

  2. After neighbor relationships are established, the gateways synchronize ND entries from each other.

    Two methods are available for synchronizing ND entries:

    • Real-time synchronization: After receiving an NS packet, a gateway synchronizes the new entry or the change to an existing entry to other gateways to ensure ND entry consistency on the gateways.
    • Batch synchronization: A gateway that has been running for some time synchronizes all of its ND entries in a batch to a newly added gateway or to a gateway that has recovered from a fault.
Updated: 2019-05-05

Document ID: EDOC1100004207
