NetEngine 8000 M14, M8 and M4 V800R023C00SPC500 Configuration Guide

VXLAN Configuration

VXLAN Feature Description

VXLAN Introduction

Definition

Virtual extensible local area network (VXLAN) is a Network Virtualization over Layer 3 (NVO3) technology that uses MAC-in-UDP encapsulation.

Purpose

As a widely deployed core cloud computing technology, server virtualization greatly reduces IT and O&M costs and improves service deployment flexibility.
Figure 1-1021 Server virtualization
On the network shown in Figure 1-1021, a server is virtualized into multiple virtual machines (VMs), each of which functions as a host. A great increase in the number of hosts causes the following problems:
  • VM scale is limited by the network specification.

    On a legacy large Layer 2 network, data packets are forwarded at Layer 2 based on MAC entries. However, there is a limit on the MAC table capacity, which subsequently limits the number of VMs.

  • Network isolation capabilities are limited.

    Most networks currently use VLANs to implement network isolation. However, the deployment of VLANs on large-scale virtualized networks has the following limitations:
    • The VLAN tag field defined in IEEE 802.1Q has only 12 bits and can support only a maximum of 4096 VLANs, which cannot meet user identification requirements of large Layer 2 networks.
    • VLANs on legacy Layer 2 networks cannot adapt to dynamic network adjustment.
  • VM migration scope is limited by the network architecture.

    After a VM is started, it may need to be migrated to a new server due to resource issues on the original server, for example, when the CPU usage is too high or memory resources are inadequate. To ensure uninterrupted services during VM migration, the IP address of the VM must remain unchanged. To achieve this, the service network must be a Layer 2 network that also provides multipath redundancy and reliability.

VXLAN addresses the preceding problems on large Layer 2 networks.
  • Eliminates VM scale limitations imposed by network specifications.

    VXLAN encapsulates data packets sent from VMs into UDP packets and encapsulates IP and MAC addresses used on the physical network into the outer headers. Then the network is only aware of the encapsulated parameters and not the inner data. This greatly reduces the MAC address specification requirements of large Layer 2 networks.

  • Provides greater network isolation capabilities.

    VXLAN uses a 24-bit network segment ID, called the VXLAN network identifier (VNI), to identify users. A VNI is similar to a VLAN ID and supports a maximum of about 16M (2^24) VXLAN segments.

  • Eliminates VM migration scope limitations imposed by network architecture.

    VXLAN uses MAC-in-UDP encapsulation to extend Layer 2 networks. It encapsulates Ethernet packets into IP packets so that they can be transmitted over routed networks, without the network needing to be aware of VMs' MAC addresses. Because VXLAN imposes no limitation on the Layer 3 network architecture, Layer 3 networks are scalable and have strong automatic fault rectification and load balancing capabilities. This allows VMs to be migrated irrespective of the network architecture.

Benefits

As server virtualization is being rapidly deployed on data centers based on physical network infrastructure, VXLAN offers the following benefits:
  • A maximum of 16M VXLAN segments are supported using 24-bit VNIs, which allows a data center to accommodate multiple tenants.
  • Non-VXLAN network edge devices do not need to learn VMs' MAC addresses, which reduces the number of MAC addresses to be learned and enhances network performance.
  • MAC-in-UDP encapsulation extends Layer 2 networks, decoupling virtual networks from the physical network. Tenants can plan their own virtual networks without being limited by physical network IP addresses or broadcast domains. This greatly simplifies network management.

VXLAN Basics

VXLAN Basic Concepts

Virtual extensible local area network (VXLAN) is an NVO3 network virtualization technology that encapsulates data packets sent from virtual machines (VMs) into UDP packets and encapsulates IP and MAC addresses used on the physical network in outer headers before sending the packets over an IP network. The egress tunnel endpoint then decapsulates the packets and sends the packets to the destination VM.

Figure 1-1022 VXLAN architecture

VXLAN allows a virtual network to provide access services to a large number of tenants. In addition, tenants are able to plan their own virtual networks, not limited by the physical network IP addresses or broadcast domains. This greatly simplifies network management. Table 1-465 describes VXLAN concepts.

Table 1-465 VXLAN concepts

Concept

Description

Underlay and overlay networks

VXLAN allows virtual Layer 2 or Layer 3 networks (overlay networks) to be built over existing physical networks (underlay networks). Overlay networks use encapsulation technologies to transmit tenant packets between sites over Layer 3 forwarding paths provided by underlay networks. Tenants are aware of only overlay networks.

Network virtualization edge (NVE)

A network entity that is deployed at the network edge and implements network virtualization functions.

NOTE:

vSwitches on devices and servers can function as NVEs.

VXLAN tunnel endpoint (VTEP)

A VXLAN tunnel endpoint that encapsulates and decapsulates VXLAN packets. It is represented by an NVE.

A VTEP connects to a physical network and is assigned a physical network IP address. This IP address is irrelevant to virtual networks.

In VXLAN packets, the source IP address is the local node's VTEP address, and the destination IP address is the remote node's VTEP address. This pair of VTEP addresses corresponds to a VXLAN tunnel.

VXLAN network identifier (VNI)

A VXLAN segment identifier similar to a VLAN ID. VMs on different VXLAN segments cannot communicate directly at Layer 2.

A VNI identifies only one tenant. Even if multiple terminal users belong to the same VNI, they are considered one tenant. A VNI consists of 24 bits and supports a maximum of 16M tenants.

A VNI can be a Layer 2 or Layer 3 VNI.

  • A Layer 2 VNI is mapped to a BD for intra-segment transmission of VXLAN packets.

  • A Layer 3 VNI is bound to a VPN instance for inter-segment transmission of VXLAN packets.

Bridge domain (BD)

A Layer 2 broadcast domain through which VXLAN data packets are forwarded.

Each VNI that identifies a virtual network must be mapped to a BD so that the BD can function as a VXLAN network entity to transmit VXLAN traffic.

Virtual Bridge Domain Interface (VBDIF)

A Layer 3 logical interface created for a BD. Configuring an IP address for a VBDIF interface enables communication between VXLANs on different network segments and between VXLANs and non-VXLAN networks, and allows Layer 2 networks to access Layer 3 networks.

Gateway

A device that ensures communication between VXLANs identified by different VNIs and between VXLANs and non-VXLANs.

A VXLAN gateway can be a Layer 2 or Layer 3 gateway.
  • Layer 2 gateway: allows tenants to access VXLANs and intra-segment communication on a VXLAN.

  • Layer 3 gateway: allows inter-segment VXLAN communication and access to external networks.

Combinations of Underlay and Overlay Networks

The infrastructure network over which VXLAN tunnels are established is called the underlay network, and the service network carried over VXLAN tunnels is called the overlay network. The following combinations of underlay and overlay networks exist in VXLAN scenarios.

Category

Description

Example

IPv4 over IPv4

The overlay and underlay networks are both IPv4 networks.

In Figure 1-1023, Server IP and VTEP IP are both IPv4 addresses.

IPv6 over IPv4

The overlay network is an IPv6 network, and the underlay network is an IPv4 network.

In Figure 1-1023, Server IP is an IPv6 address, and VTEP IP is an IPv4 address.

IPv4 over IPv6

The overlay network is an IPv4 network, and the underlay network is an IPv6 network.

In Figure 1-1023, Server IP is an IPv4 address, and VTEP IP is an IPv6 address.

IPv6 over IPv6

The overlay and underlay networks are both IPv6 networks.

In Figure 1-1023, Server IP and VTEP IP are both IPv6 addresses.

Figure 1-1023 Combinations of underlay and overlay networks

VXLAN Packet Format

VXLAN is a network virtualization technique that uses MAC-in-UDP encapsulation by adding a UDP header and a VXLAN header before a raw Ethernet packet.

Figure 1-1024 shows VXLAN packet formats for different combinations of underlay and overlay networks.
Figure 1-1024 Brief VXLAN packet formats

Figure 1-1025 shows VXLAN packet format details.

Figure 1-1025 VXLAN packet format details
Table 1-466 Description of VXLAN packet formats

Field

Description

VXLAN header

  • VXLAN Flags (8 bits): The value is 00001000.
  • VNI (24 bits): VXLAN network identifier used to identify a VXLAN segment.
  • Reserved fields (24 bits and 8 bits): must be set to 0.

Outer UDP header

  • DestPort: destination UDP port number, which is 4789 for VXLAN.
  • Source Port: source port number, which is calculated by performing the hash operation on inner Ethernet frame headers.
    NOTE:

    In the case of intra-subnet communication (Layer 2 forwarding), the default hash factor used for calculating the source port number is L2VNI+MAC address. You can change the hash factor to L2VNI through configuration. If there is only a single L2VNI+MAC address or L2VNI value, traffic cannot be properly load-balanced. In this case, you can configure Layer 2 deep hash to calculate the source port number based on the source IP address, destination IP address, source port number, destination port number, and protocol type.

    In the case of inter-subnet communication (Layer 3 forwarding), the default hash factor used for calculating the source port number is L3VNI+IP address. You can change the hash factor to L3VNI through configuration.

Outer IP header

  • IP SA: source IP address, which is the IP address of the local VTEP of a VXLAN tunnel.
  • IP DA: destination IP address, which is the IP address of the remote VTEP of a VXLAN tunnel.

Outer Ethernet header

  • MAC DA: destination MAC address, which is the MAC address of the next hop toward the destination VTEP address, as found in the routing table of the VTEP where the sending VM resides.
  • MAC SA: source MAC address, which is the MAC address of the VTEP where the sending VM resides.
  • 802.1Q Tag: VLAN tag carried in packets. This field is optional.
  • Ethernet Type: Ethernet frame type.
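
The header layout above can be illustrated with a short Python sketch. This is a minimal illustration, not device code: `build_vxlan_header` and `source_port_from_inner_mac` are hypothetical names, and the CRC32-based hash merely stands in for the device's actual hash algorithm, which is not specified here.

```python
import struct
import zlib

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination UDP port

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags 00001000, 24-bit VNI, reserved bits zero."""
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # the I flag marks the VNI as valid
    # Layout: flags (1 byte) + reserved (3 bytes) + VNI (3 bytes) + reserved (1 byte);
    # shifting the VNI left by 8 places it in the upper 3 bytes of the last word.
    return struct.pack("!B3xI", flags, vni << 8)

def source_port_from_inner_mac(inner_src_mac: bytes, l2vni: int) -> int:
    """Illustrative hash of L2VNI + inner MAC onto the ephemeral source-port range."""
    h = zlib.crc32(struct.pack("!I", l2vni) + inner_src_mac)
    return 49152 + (h % 16384)  # a port in 49152-65535
```

For example, `build_vxlan_header(20)` yields 8 bytes whose first byte is `0x08` and whose bytes 4-6 encode VNI 20.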

EVPN VXLAN Fundamentals

Introduction

Ethernet virtual private network (EVPN) is a VPN technology used for Layer 2 internetworking. EVPN is similar to BGP/MPLS IP VPN. EVPN defines a new type of BGP network layer reachability information (NLRI), called the EVPN NLRI. The EVPN NLRI defines new BGP EVPN routes to implement MAC address learning and advertisement between Layer 2 networks at different sites.

VXLAN does not provide a control plane, and VTEP discovery and host information (IP and MAC addresses, VNIs, and gateway VTEP IP address) learning are implemented by traffic flooding on the data plane, resulting in high traffic volumes on DC networks. To address this problem, VXLAN uses EVPN as the control plane. EVPN allows VTEPs to exchange BGP EVPN routes to implement automatic VTEP discovery and host information advertisement, preventing unnecessary traffic flooding.

In summary, EVPN introduces several new types of BGP EVPN routes through BGP extension to advertise VTEP addresses and host information. In this way, EVPN applied to VXLAN networks enables VTEP discovery and host information learning on the control plane instead of on the data plane.

BGP EVPN Routes

EVPN NLRI defines the following BGP EVPN route types applicable to the VXLAN control plane:

Type 2 Route: MAC/IP Route

Figure 1-1026 shows the format of a MAC/IP route.

Figure 1-1026 Format of a MAC/IP route

Table 1-467 describes the meaning of each field.

Table 1-467 Fields of a MAC/IP route

Field

Description

Route Distinguisher

RD value set in an EVI

Ethernet Segment Identifier

Unique ID for defining the connection between local and remote devices

Ethernet Tag ID

VLAN ID configured on the device

MAC Address Length

Length of the host MAC address carried in the route

MAC Address

Host MAC address carried in the route

IP Address Length

Length of the host IP address carried in the route

IP Address

Host IP address carried in the route

MPLS Label1

L2VNI carried in the route

MPLS Label2

L3VNI carried in the route

MAC/IP routes function as follows on the VXLAN control plane:

  • MAC address advertisement

    To implement Layer 2 communication between intra-subnet hosts, the source and remote VTEPs must learn the MAC addresses of the hosts. The VTEPs function as BGP EVPN peers to exchange MAC/IP routes so that they can obtain the host MAC addresses. The MAC Address field identifies the MAC address of a host.

  • ARP advertisement

    A MAC/IP route can carry both the MAC and IP addresses of a host, and therefore can be used to advertise ARP entries between VTEPs. The MAC Address field identifies the MAC address of the host, whereas the IP Address field identifies the IP address of the host. This type of MAC/IP route is called the ARP route.

  • IP route advertisement

    In distributed VXLAN gateway scenarios, to implement Layer 3 communication between inter-subnet hosts, the source and remote VTEPs that function as Layer 3 gateways must learn the host IP routes. The VTEPs function as BGP EVPN peers to exchange MAC/IP routes so that they can obtain the host IP routes. The IP Address field identifies the destination address of the IP route. In addition, the MPLS Label2 field must carry the L3VNI. This type of MAC/IP route is called the integrated routing and bridging (IRB) route.

    An ARP route carries host MAC and IP addresses and an L2VNI. An IRB route carries host MAC and IP addresses, an L2VNI, and an L3VNI. Therefore, IRB routes carry ARP routes and can be used to advertise IP routes as well as ARP entries.

  • Host IPv6 route advertisement

    In a distributed gateway scenario, to implement Layer 3 communication between hosts on different subnets, the VTEPs (functioning as Layer 3 gateways) must learn host IPv6 routes from each other. To achieve this, VTEPs functioning as BGP EVPN peers exchange MAC/IP routes to advertise host IPv6 routes to each other. The IP Address field carried in the MAC/IP routes indicates the destination addresses of host IPv6 routes, and the MPLS Label2 field must carry an L3VNI. MAC/IP routes in this case are also called IRBv6 routes.

    An ND route carries host MAC and IPv6 addresses and an L2VNI. An IRBv6 route carries host MAC and IPv6 addresses, an L2VNI, and an L3VNI. Therefore, IRBv6 routes carry ND routes and can be used to advertise both host IPv6 routes and ND entries.
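
The control-plane roles above depend on which fields of a MAC/IP route are populated. The following Python sketch shows one way to express that distinction; the class and function names are illustrative and do not correspond to the device's internal data structures.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MacIpRoute:
    """Simplified EVPN Type 2 (MAC/IP) route; fields mirror Table 1-467."""
    mac_address: str
    ip_address: Optional[str] = None    # absent in a pure MAC route
    label1_l2vni: Optional[int] = None  # MPLS Label1: L2VNI
    label2_l3vni: Optional[int] = None  # MPLS Label2: L3VNI, present only in IRB routes

def classify(route: MacIpRoute) -> str:
    """Classify a Type 2 route by the fields it carries, per the roles above."""
    if route.ip_address is None:
        return "MAC route (MAC address advertisement)"
    if route.label2_l3vni is None:
        return "ARP route (MAC + IP + L2VNI)"
    return "IRB route (MAC + IP + L2VNI + L3VNI)"
```

An IRB route therefore differs from an ARP route only in additionally carrying the L3VNI in MPLS Label2.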

Type 3 Route: Inclusive Multicast Route

An inclusive multicast route comprises a prefix and a PMSI attribute. Figure 1-1027 shows the format of an inclusive multicast route.

Figure 1-1027 Format of an inclusive multicast route

Table 1-468 describes the meaning of each field.

Table 1-468 Fields of an inclusive multicast route

Field

Description

Route Distinguisher

RD value set in an EVI.

Ethernet Tag ID

VLAN ID, which is all 0s in this type of route.

IP Address Length

Length of the local VTEP's IP address carried in the route.

Originating Router's IP Address

Local VTEP's IP address carried in the route.

Flags

Flags indicating whether leaf node information is required for the tunnel.

This field is inapplicable in VXLAN scenarios.

Tunnel Type

Tunnel type carried in the route.

The value can only be 6, representing Ingress Replication in VXLAN scenarios. It is used for BUM packet forwarding.

MPLS Label

L2VNI carried in the route.

Tunnel Identifier

Tunnel identifier carried in the route.

This field is the local VTEP's IP address in VXLAN scenarios.

Inclusive multicast routes are used on the VXLAN control plane for automatic VTEP discovery and dynamic VXLAN tunnel establishment. VTEPs that function as BGP EVPN peers transmit L2VNIs and VTEP IP addresses through inclusive multicast routes. The Originating Router's IP Address field identifies the local VTEP's IP address, and the MPLS Label field identifies an L2VNI. If the remote VTEP's IP address is reachable at Layer 3, a VXLAN tunnel to the remote VTEP is established. In addition, the local end creates a VNI-based ingress replication list and adds the peer VTEP's IP address to the list for subsequent BUM packet forwarding.
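
This control-plane behavior can be sketched in Python. The model below is illustrative only: the class and method names are invented, and Layer 3 reachability is passed in as a flag rather than checked against a real routing table.

```python
from collections import defaultdict

class Vtep:
    """Toy VTEP that processes EVPN Type 3 (inclusive multicast) routes."""
    TUNNEL_TYPE_INGRESS_REPLICATION = 6  # the only tunnel type valid in VXLAN scenarios

    def __init__(self, local_ip: str):
        self.local_ip = local_ip
        # Per-VNI ingress replication list: L2VNI -> set of remote VTEP IPs
        self.ingress_replication = defaultdict(set)

    def receive_inclusive_multicast_route(self, l2vni: int, originating_ip: str,
                                          tunnel_type: int, reachable: bool) -> bool:
        """Add the peer to the per-VNI replication list; a tunnel is set up only
        if the tunnel type is ingress replication and the peer is routable at L3."""
        if tunnel_type != self.TUNNEL_TYPE_INGRESS_REPLICATION or not reachable:
            return False
        self.ingress_replication[l2vni].add(originating_ip)
        return True
```

In this sketch, each accepted route both establishes the tunnel and populates the replication list later used for BUM forwarding.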

Type 5 Route: IP Prefix Route

Figure 1-1028 shows the format of an IP prefix route.

Figure 1-1028 Format of an IP prefix route

Table 1-469 describes the meaning of each field.

Table 1-469 Fields of an IP prefix route

Field

Description

Route Distinguisher

RD value set in a VPN instance

Ethernet Segment Identifier

Unique ID for defining the connection between local and remote devices

Ethernet Tag ID

Currently, this field can only be set to 0

IP Prefix Length

Length of the IP prefix carried in the route

IP Prefix

IP prefix carried in the route

GW IP Address

Default gateway address

MPLS Label

L3VNI carried in the route

An IP prefix route can carry either a host IP address or a network segment address.

  • When carrying a host IP address, the route is used for IP route advertisement in distributed VXLAN gateway scenarios, which functions the same as an IRB route on the VXLAN control plane.

  • When carrying a network segment address, the route can be advertised to allow hosts on a VXLAN network to access the specified network segment or external network.

VXLAN Gateway Deployment

To implement Layer 3 interworking, a Layer 3 gateway must be deployed on a VXLAN. VXLAN gateways can be deployed in centralized or distributed mode.

Centralized VXLAN Gateway Mode

In this mode, Layer 3 gateways are configured on one device. On the network shown in Figure 1-1029, traffic across network segments is forwarded through Layer 3 gateways to implement centralized traffic management.

Figure 1-1029 Centralized VXLAN gateway networking
Centralized VXLAN gateway deployment has its advantages and disadvantages.
  • Advantage: Inter-segment traffic can be centrally managed, and gateway deployment and management is easy.
  • Disadvantages:
    • Forwarding paths are not optimal. Inter-segment Layer 3 traffic of data centers connected to the same Layer 2 gateway must be transmitted to the centralized Layer 3 gateway for forwarding.
    • The ARP entry specification is a bottleneck. ARP entries must be generated for tenants on the Layer 3 gateway. However, only a limited number of ARP entries are allowed by the Layer 3 gateway, impeding data center network expansion.
Distributed VXLAN Gateway Mode

Deploying distributed VXLAN gateways addresses the problems that occur in centralized VXLAN gateway networking. Distributed VXLAN gateways are deployed in spine-leaf networking. In this networking, leaf nodes, which can function as Layer 3 VXLAN gateways, are used as VTEPs to establish VXLAN tunnels. Spine nodes are unaware of the VXLAN tunnels and only forward VXLAN packets between leaf nodes. On the network shown in Figure 1-1030, Server 1 and Server 2 reside on different network segments but both connect to Leaf 1. When Server 1 and Server 2 communicate, traffic is forwarded only through Leaf 1, not through any spine node.

Figure 1-1030 Distributed VXLAN gateway networking

A spine node supports high-speed IP forwarding capabilities.

A leaf node can:
  • Function as a Layer 2 VXLAN gateway to connect to physical servers or VMs and allow tenants to access VXLANs.
  • Function as a Layer 3 VXLAN gateway to perform VXLAN encapsulation and decapsulation to allow inter-segment VXLAN communication and access to external networks.
Distributed VXLAN gateway networking has the following characteristics:
  • Flexible deployment. A leaf node can function as both a Layer 2 and a Layer 3 VXLAN gateway.
  • Improved network expansion capabilities. A leaf node only needs to learn the ARP or ND entries of servers attached to it. A centralized Layer 3 gateway in the same scenario, however, has to learn the ARP or ND entries of all servers on the network. Therefore, the ARP or ND entry specification is no longer a bottleneck on a distributed VXLAN gateway.

Functional Scenarios

Centralized VXLAN Gateway Deployment in Static Mode

In centralized VXLAN gateway deployment in static mode, the control plane is responsible for VXLAN tunnel establishment and dynamic MAC address learning; the forwarding plane is responsible for intra-subnet known unicast packet forwarding, intra-subnet BUM packet forwarding, and inter-subnet packet forwarding.

Deploying centralized VXLAN gateways in static mode involves a heavy configuration workload and is inflexible, making it inapplicable to large-scale networks. Therefore, deploying centralized VXLAN gateways using BGP EVPN is recommended.

The following description of VXLAN tunnel establishment uses an IPv4 over IPv4 network as an example. Table 1-470 shows the implementation differences between the other combinations of underlay and overlay networks and IPv4 over IPv4.
Table 1-470 Implementation differences

Combination Category

Implementation Difference

IPv6 over IPv4

  • During dynamic MAC address learning, a Layer 2 gateway learns the local host's MAC address using neighbor solicitation (NS) packets sent by the host.

  • In the inter-subnet interworking scenario, an IPv6 address must be configured for the Layer 3 gateway's VBDIF interface. During inter-subnet packet forwarding, the Layer 3 gateway needs to search its IPv6 routing table for the next-hop address of the destination IPv6 address, queries the ND table based on the next-hop address, and then obtains information such as the destination MAC address.

IPv4 over IPv6

The VTEPs at both ends of a VXLAN tunnel use IPv6 addresses, and IPv6 Layer 3 route reachability must be implemented between the VTEPs.

IPv6 over IPv6

  • The VTEPs at both ends of a VXLAN tunnel use IPv6 addresses, and IPv6 Layer 3 route reachability must be implemented between the VTEPs.

  • During dynamic MAC address learning, a Layer 2 gateway learns the local host's MAC address using NS packets sent by the host.

  • In the inter-subnet interworking scenario, an IPv6 address must be configured for the Layer 3 gateway's VBDIF interface. During inter-subnet packet forwarding, the Layer 3 gateway needs to search its IPv6 routing table for the next hop address of the destination IPv6 address, queries the ND table based on the next-hop address, and then obtains information such as the destination MAC address.

VXLAN Tunnel Establishment

A VXLAN tunnel is identified by a pair of VTEP IP addresses. A VXLAN tunnel can be statically created after you configure local and remote VNIs, VTEP IP addresses, and an ingress replication list, and the tunnel goes Up when the pair of VTEPs is reachable at Layer 3.

On the network shown in Figure 1-1031, Leaf 1 connects to Host 1 and Host 3; Leaf 2 connects to Host 2; Spine functions as a Layer 3 gateway.

  • To allow Host 3 and Host 2 to communicate, Layer 2 VNIs and an ingress replication list must be configured on Leaf 1 and Leaf 2. The peer VTEPs' IP addresses must be specified in the ingress replication list. A VXLAN tunnel can be established between Leaf 1 and Leaf 2 if their VTEPs have Layer 3 routes to each other.

  • To allow Host 1 and Host 2 to communicate, Layer 2 VNIs and an ingress replication list must be configured on Leaf 1, Leaf 2, and also Spine. The peer VTEPs' IP addresses must be specified in the ingress replication list. A VXLAN tunnel can be established between Leaf 1 and Spine and between Leaf 2 and Spine if they have Layer 3 routes to the IP addresses of the VTEPs of each other.

    Although Host 1 and Host 3 both connect to Leaf 1, they belong to different subnets and must communicate through the Layer 3 gateway (Spine). Therefore, a VXLAN tunnel is also required between Leaf 1 and Spine.

Figure 1-1031 VXLAN tunnel networking
Dynamic MAC Address Learning

VXLAN supports dynamic MAC address learning to allow communication between tenants. MAC address entries are dynamically created and do not need to be manually maintained, greatly reducing maintenance workload. The following example illustrates dynamic MAC address learning for intra-subnet communication on the network shown in Figure 1-1032.

Figure 1-1032 Dynamic MAC Address Learning
  1. Host 3 sends an ARP request for Host 2's MAC address. The ARP request carries the source MAC address being MAC3, destination MAC address being all Fs, source IP address being IP3, and destination IP address being IP2.

  2. Upon receipt of the ARP request, Leaf 1 determines that the Layer 2 sub-interface receiving the ARP request belongs to a BD that has been bound to a VNI (20), meaning that the ARP request packet must be transmitted over the VXLAN tunnel identified by VNI 20. Leaf 1 then learns the mapping between Host 3's MAC address, BDID (Layer 2 broadcast domain ID), and inbound interface (Port1 for the Layer 2 sub-interface) that has received the ARP request and generates a MAC address entry for Host 3. The MAC address entry's outbound interface is Port1.

  3. Leaf 1 then performs VXLAN encapsulation on the ARP request, with the VNI being the one bound to the BD, source IP address in the outer IP header being the VTEP's IP address of Leaf 1, destination IP address in the outer IP header being the VTEP's IP address of Leaf 2, source MAC address in the outer Ethernet header being NVE1's MAC address of Leaf 1, and destination MAC address in the outer Ethernet header being the MAC address of the next hop pointing to the destination IP address. Figure 1-1033 shows the VXLAN packet format. The VXLAN packet is then transmitted over the IP network based on the IP and MAC addresses in the outer headers and finally reaches Leaf 2.

    Figure 1-1033 VXLAN packet format
  4. After Leaf 2 receives the VXLAN packet, it decapsulates the packet and obtains the ARP request originated from Host 3. Leaf 2 then learns the mapping between Host 3's MAC address, the BDID, and Leaf 1's VTEP IP address and generates a MAC address entry for Host 3. Based on the next hop (Leaf 1's VTEP IP address), the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Leaf 1.

  5. Leaf 2 broadcasts the ARP request in the Layer 2 domain. Upon receipt of the ARP request, Host 2 finds that the destination IP address is its own IP address and saves Host 3's MAC address to the local MAC address table. Host 2 then responds with an ARP reply.

So far, Host 2 has learned Host 3's MAC address. Therefore, Host 2 responds with a unicast ARP reply. The ARP reply is transmitted to Host 3 in the same manner. After Host 2 and Host 3 learn the MAC address of each other, they will subsequently communicate with each other in unicast mode.

Dynamic MAC address learning is required only between hosts and Layer 3 gateways in inter-subnet communication scenarios. The process is the same as that for intra-subnet communication.
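
Steps 2 and 4 of the walkthrough above differ only in the outbound information that gets learned: a local Layer 2 sub-interface on the ingress leaf, and a VXLAN tunnel to the source VTEP on the egress leaf. A minimal Python sketch of such a BD-scoped MAC table follows; the names are illustrative, not the device's implementation.

```python
class MacTable:
    """Toy BD-scoped MAC table; the outbound is a local port or a VXLAN tunnel."""
    def __init__(self):
        self.entries = {}  # (bd_id, mac) -> (outbound_type, outbound_value)

    def learn_local(self, bd_id: int, mac: str, port: str):
        # Ingress leaf: learn the MAC against the receiving Layer 2 sub-interface.
        self.entries[(bd_id, mac)] = ("port", port)

    def learn_remote(self, bd_id: int, mac: str, remote_vtep_ip: str):
        # Egress leaf: learn the MAC against the VXLAN tunnel to the source VTEP.
        self.entries[(bd_id, mac)] = ("vxlan-tunnel", remote_vtep_ip)

    def lookup(self, bd_id: int, mac: str):
        return self.entries.get((bd_id, mac))
```

With both entries in place, subsequent unicast frames are forwarded directly: locally out the learned port, or encapsulated toward the learned remote VTEP.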

Intra-Subnet Known Unicast Packet Forwarding

Intra-subnet known unicast packets are forwarded only through Layer 2 VXLAN gateways and are unknown to Layer 3 VXLAN gateways. Figure 1-1034 shows the intra-subnet known unicast packet forwarding process.

Figure 1-1034 Intra-subnet known unicast packet forwarding
  1. After Leaf 1 receives Host 3's packet, it determines the Layer 2 BD of the packet based on the access interface and VLAN information and searches for the outbound interface and encapsulation information in the BD.
  2. Leaf 1's VTEP performs VXLAN encapsulation based on the encapsulation information obtained and forwards the packets through the outbound interface obtained.
  3. Upon receipt of the VXLAN packet, Leaf 2's VTEP verifies the VXLAN packet based on the UDP destination port number, source and destination IP addresses, and VNI. Leaf 2 obtains the Layer 2 BD based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
  4. Leaf 2 obtains the destination MAC address of the inner Layer 2 packet, adds VLAN tags to the packets based on the outbound interface and encapsulation information in the local MAC address table, and forwards the packets to Host 2.

Host 2 sends packets to Host 3 in the same manner.

Intra-Subnet BUM Packet Forwarding

Intra-subnet BUM packet forwarding is completed between Layer 2 VXLAN gateways in ingress replication mode. Layer 3 VXLAN gateways do not need to be aware of the process. In ingress replication mode, when a BUM packet enters a VXLAN tunnel, the ingress VTEP uses ingress replication to perform VXLAN encapsulation and send a copy of the BUM packet to every egress VTEP in the list. When the BUM packet leaves the VXLAN tunnel, the egress VTEP decapsulates the BUM packet. Figure 1-1035 shows the BUM packet forwarding process.

Figure 1-1035 Ingress replication for forwarding BUM packets
  1. After Leaf 1 receives Terminal A's packet, it determines the Layer 2 BD of the packet based on the access interface and VLAN information.
  2. Leaf 1's VTEP obtains the ingress replication list for the VNI, replicates packets based on the list, and performs VXLAN encapsulation by adding outer headers. Leaf 1 then forwards the VXLAN packet through the outbound interface.
  3. Upon receipt of the VXLAN packet, Leaf 2's VTEP and Leaf 3's VTEP verify the VXLAN packet based on the UDP destination port number, source and destination IP addresses, and VNI. Leaf 2/Leaf 3 obtains the Layer 2 BD based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
  4. Leaf 2/Leaf 3 checks the destination MAC address of the inner Layer 2 packet and finds it a BUM MAC address. Therefore, Leaf 2/Leaf 3 broadcasts the packet onto the network connected to the terminals (not the VXLAN tunnel side) in the Layer 2 broadcast domain. Specifically, Leaf 2/Leaf 3 finds the outbound interfaces and encapsulation information not related to the VXLAN tunnel, adds VLAN tags to the packet, and forwards the packet to Terminal B/Terminal C.

Terminal B/Terminal C responds to Terminal A in the same process as intra-subnet known unicast packet forwarding.
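
The replication step in the process above can be sketched as follows. This Python model is illustrative only: it returns a summary of the outer header per copy rather than building real packets, and the function name is invented.

```python
def replicate_bum(vni: int, bum_frame: bytes, peer_list: set, local_vtep: str):
    """Ingress replication: the ingress VTEP sends one VXLAN-encapsulated copy
    of the BUM frame to every egress VTEP in the per-VNI replication list."""
    copies = []
    for dst_vtep in sorted(peer_list):
        outer = {
            "src_ip": local_vtep,   # outer source: local VTEP address
            "dst_ip": dst_vtep,     # outer destination: one egress VTEP
            "udp_dst": 4789,        # VXLAN destination UDP port
            "vni": vni,             # L2VNI bound to the BD
        }
        copies.append((dst_vtep, outer, bum_frame))
    return copies
```

Each egress VTEP then decapsulates its copy and broadcasts the inner frame on the terminal side of the Layer 2 broadcast domain, as described above.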

Inter-Subnet Packet Forwarding

Inter-subnet packets must be forwarded through a Layer 3 gateway. Figure 1-1036 shows inter-subnet packet forwarding in centralized VXLAN gateway scenarios.

Figure 1-1036 Inter-subnet packet forwarding
  1. After Leaf 1 receives Host 1's packet, it determines the Layer 2 BD of the packet based on the access interface and VLAN information and searches for the outbound interface and encapsulation information in the BD.
  2. Leaf 1's VTEP performs VXLAN encapsulation based on the outbound interface and encapsulation information and forwards the packets to Spine.
  3. After Spine receives the VXLAN packet, it decapsulates the packet and finds that the destination MAC address of the inner packet is the MAC address (MAC3) of the Layer 3 gateway interface (VBDIF10), which means that the packet must be forwarded at Layer 3.
  4. Spine removes the inner Ethernet header, parses the destination IP address, and searches the routing table for a next hop address. Spine then searches the ARP table based on the next hop address to obtain the destination MAC address, VXLAN tunnel's outbound interface, and VNI.
  5. Spine performs VXLAN encapsulation on the inner packet again and forwards the VXLAN packet to Leaf 2, with the source MAC address in the inner Ethernet header being the MAC address (MAC4) of the Layer 3 gateway interface (VBDIF20).
  6. Upon receipt of the VXLAN packet, Leaf 2's VTEP verifies the VXLAN packet based on the UDP destination port number, source and destination IP addresses, and VNI. Leaf 2 then obtains the Layer 2 broadcast domain based on the VNI and removes the outer headers to obtain the inner Layer 2 packet. It then searches for the outbound interface and encapsulation information in the Layer 2 broadcast domain.
  7. Leaf 2 adds a VLAN tag to the packet based on the outbound interface and encapsulation information and forwards the packet to Host 2.

Host 2 sends packets to Host 1 in the same manner.

Establishment of a VXLAN in Centralized Gateway Mode Using BGP EVPN

During the establishment of a VXLAN in centralized gateway mode using BGP EVPN, the control plane process includes VXLAN tunnel establishment and dynamic MAC address learning.

The forwarding plane process includes intra-subnet forwarding of known unicast packets, intra-subnet forwarding of BUM packets, and inter-subnet packet forwarding.

This mode uses EVPN to automatically discover VTEPs and dynamically establish VXLAN tunnels. It provides high flexibility, is applicable to large-scale VXLAN networking scenarios, and is recommended for establishing VXLANs with centralized gateways.

The following uses an IPv4 over IPv4 network as an example. Table 1-471 shows the implementation differences between IPv4 over IPv4 networks and other combinations of underlay and overlay networks.
Table 1-471 Implementation differences

Combination Type

Implementation Difference

IPv6 over IPv4

  • During dynamic MAC address learning, the Layer 2 gateway learns the local host's MAC address through neighbor discovery. Hosts at both ends learn each other's MAC address by exchanging Neighbor Solicitation (NS)/Neighbor Advertisement (NA) packets.

  • In the inter-subnet interworking scenario, an IPv6 address must be configured for the Layer 3 gateway's VBDIF interface. During inter-subnet packet forwarding, the Layer 3 gateway needs to search its IPv6 routing table for the next hop address of the destination IPv6 address, query the ND table based on the next hop address, and then obtain information such as the destination MAC address.

IPv4 over IPv6

  • A BGP EVPN IPv6 peer relationship is established between gateways.
  • The VTEP IP addresses are IPv6 addresses.

IPv6 over IPv6

  • A BGP EVPN IPv6 peer relationship is established between gateways.
  • The VTEP IP addresses are IPv6 addresses.
  • During dynamic MAC address learning, the Layer 2 gateway learns the local host's MAC address through neighbor discovery. Hosts at both ends learn each other's MAC address by exchanging NS/NA packets.
  • In the inter-subnet interworking scenario, an IPv6 address must be configured for the Layer 3 gateway's VBDIF interface. During inter-subnet packet forwarding, the Layer 3 gateway needs to search its IPv6 routing table for the next hop address of the destination IPv6 address, query the ND table based on the next hop address, and then obtain information such as the destination MAC address.
VXLAN Tunnel Establishment

A VXLAN tunnel is identified by a pair of VTEP IP addresses. During VXLAN tunnel establishment, the local and remote VTEPs attempt to obtain IP addresses of each other. A VXLAN tunnel can be established if the IP addresses obtained are routable at Layer 3. When BGP EVPN is used to dynamically establish a VXLAN tunnel, the local and remote VTEPs first establish a BGP EVPN peer relationship and then exchange BGP EVPN routes to transmit VNIs and VTEP IP addresses.

As shown in Figure 1-1037, two hosts connect to Leaf1, one host connects to Leaf2, and a Layer 3 gateway is deployed on the spine node. A VXLAN tunnel needs to be established between Leaf1 and Leaf2 to implement communication between Host3 and Host2. To implement communication between Host1 and Host2, a VXLAN tunnel needs to be established between Leaf1 and Spine and between Spine and Leaf2. Although Host1 and Host3 both connect to Leaf1, they belong to different subnets and need to communicate through the Layer 3 gateway deployed on Spine. Therefore, a VXLAN tunnel needs to be created between Leaf1 and Spine.

A VXLAN tunnel is determined by a pair of VTEP IP addresses. If a local VTEP learns the same remote VTEP IP address from multiple routes, only one VXLAN tunnel is established to that remote VTEP; packets for different services are encapsulated with their respective VNIs before being forwarded through the shared tunnel.
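The one-tunnel-per-VTEP-pair rule can be sketched as follows. This is a minimal illustration, not device behavior; the class name, IP addresses, and VNI values are hypothetical.

```python
# Hypothetical sketch: a VTEP's tunnel table, assuming tunnels are keyed by the
# (local VTEP IP, remote VTEP IP) pair while each VNI keeps its own mapping
# onto that single shared tunnel.

class TunnelTable:
    def __init__(self, local_vtep):
        self.local_vtep = local_vtep
        self.tunnels = set()          # {(local VTEP IP, remote VTEP IP)}
        self.vni_to_remote = {}       # VNI -> set of remote VTEP IPs

    def learn(self, vni, remote_vtep):
        """Called when a BGP EVPN route delivers a (VNI, remote VTEP) pair."""
        # Only one tunnel per VTEP pair, no matter how many VNIs use it.
        self.tunnels.add((self.local_vtep, remote_vtep))
        self.vni_to_remote.setdefault(vni, set()).add(remote_vtep)


table = TunnelTable("1.1.1.1")
table.learn(10, "2.2.2.2")   # VNI 10 learned from the remote VTEP
table.learn(20, "2.2.2.2")   # VNI 20 learned from the same remote VTEP
print(len(table.tunnels))    # 1 tunnel, carrying both VNIs
```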

Figure 1-1037 VXLAN tunnel networking

The following example illustrates how to dynamically establish a VXLAN tunnel using BGP EVPN between Leaf1 and Leaf2 on the network shown in Figure 1-1038.

Figure 1-1038 Dynamic VXLAN tunnel establishment
  1. First, a BGP EVPN peer relationship is established between Leaf1 and Leaf2. Then, Layer 2 broadcast domains are created on Leaf1 and Leaf2, and VNIs are bound to the Layer 2 broadcast domains. Next, an EVPN instance is configured in each Layer 2 broadcast domain, and an RD, export VPN target (ERT), and import VPN target (IRT) are configured for the EVPN instance. After the local VTEP IP address is configured on Leaf1 and Leaf2, they generate a BGP EVPN route and send it to each other. The BGP EVPN route carries the local EVPN instance's ERT, Next_Hop attribute, and an inclusive multicast route (Type 3 route defined in BGP EVPN). Figure 1-1039 shows the format of an inclusive multicast route, which comprises a prefix and a PMSI attribute. VTEP IP addresses are stored in the Originating Router's IP Address field in the inclusive multicast route prefix, and VNIs are stored in the MPLS Label field in the PMSI attribute. The VTEP IP address is also included in the Next_Hop attribute.

    Figure 1-1039 Format of an inclusive multicast route
  2. After Leaf1 and Leaf2 receive a BGP EVPN route from each other, they match the ERT of the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. Leaf1 and Leaf2 obtain the peer VTEP IP address (from the Next_Hop attribute) and VNI carried in the route. If the peer VTEP IP address is reachable at Layer 3, they establish a VXLAN tunnel to the peer end. Moreover, the local end creates a VNI-based ingress replication table and adds the peer VTEP IP address to the table for forwarding BUM packets.

The process of dynamically establishing VXLAN tunnels between Leaf1 and Spine and between Leaf2 and Spine using BGP EVPN is similar to the preceding process.

A VPN target is a BGP extended community attribute. An EVPN instance can have both an IRT and an ERT configured. For EVPN route advertisement to succeed, the local EVPN instance's ERT must match the remote EVPN instance's IRT; otherwise, VXLAN tunnels cannot be dynamically established. If only one end accepts the BGP EVPN route, that end can establish a VXLAN tunnel to the other end but cannot exchange data packets with it, because the other end drops received packets after finding that it has no VXLAN tunnel to the sender.

For details about VPN targets, see Basic BGP/MPLS IP VPN.
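The VPN-target filter described above reduces to a set-intersection check. The following minimal sketch uses hypothetical route-target values; it is an illustration of the rule, not a device implementation.

```python
# Hypothetical sketch of the VPN-target check: a received BGP EVPN route is
# accepted only if one of its export targets (ERTs) matches an import target
# (IRT) configured on the local EVPN instance.

def accept_route(route_erts, local_irts):
    """Return True if the route passes the VPN-target filter."""
    return bool(set(route_erts) & set(local_irts))

# The local EVPN instance imports 100:1, so a route exported with 100:1 is kept.
print(accept_route(["100:1"], ["100:1", "200:1"]))   # True
# A route whose ERT matches no local IRT is discarded.
print(accept_route(["300:1"], ["100:1", "200:1"]))   # False
```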

Dynamic MAC Address Learning

VXLAN supports dynamic MAC address learning to allow communication between tenants. MAC address entries are dynamically created and do not need to be manually maintained, greatly reducing maintenance workload. The following example illustrates dynamic MAC address learning for intra-subnet communication of hosts on the network shown in Figure 1-1040.

Figure 1-1040 Dynamic MAC address learning
  1. Host3 sends dynamic ARP packets when it first communicates with Leaf1. Leaf1 learns the MAC address of Host3 and the mapping between the BDID and packet inbound interface (that is, the physical interface Port 1 corresponding to the Layer 2 sub-interface), and generates a MAC address entry about Host3 in the local MAC address table, with the outbound interface being Port 1. Leaf1 generates a BGP EVPN route based on the ARP entry of Host3 and sends it to Leaf2. The BGP EVPN route carries the local EVPN instance's ERT, Next_Hop attribute, and a Type 2 route (MAC/IP route) defined in BGP EVPN. The Next_Hop attribute carries the local VTEP's IP address. The MAC Address Length and MAC Address fields identify Host3's MAC address. The Layer 2 VNI is stored in the MPLS Label1 field. Figure 1-1041 shows the format of a MAC route or an IP route.

    Figure 1-1041 Format of a MAC/IP route
  2. After receiving the BGP EVPN route from Leaf1, Leaf2 matches the ERT carried in the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. After accepting the route, Leaf2 obtains the MAC address of Host3 and the mapping between the BDID and the VTEP IP address (Next_Hop attribute) of Leaf1, and generates the MAC address entry of Host3 in the local MAC address table. The outbound interface is obtained through recursion based on the next hop, and the final recursion result is the VXLAN tunnel destined for Leaf1.

Leaf1 learns the MAC route of Host2 in a similar process.

  • When hosts on different subnets communicate with each other, only the hosts and Layer 3 gateway need to dynamically learn MAC addresses from each other. This process is similar to the preceding process.

  • Leaf nodes can learn the MAC addresses of hosts during data forwarding, depending on their capabilities to learn MAC addresses from data packets. If VXLAN tunnels are established using BGP EVPN, leaf nodes can dynamically learn the MAC addresses of hosts through BGP EVPN routes, rather than during data forwarding.
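The resulting MAC table can be sketched as follows: locally learned entries point at a physical port, while entries learned from Type 2 (MAC/IP) routes recurse through the route's next hop to the VXLAN tunnel toward the advertising VTEP. The MAC addresses, port names, and VTEP IPs below are hypothetical.

```python
# Hypothetical sketch of a leaf node's MAC address table, combining local
# learning with entries derived from BGP EVPN Type 2 routes.

mac_table = {}

def learn_local(mac, port):
    # Entry learned from the access side; the outbound interface is a port.
    mac_table[mac] = {"type": "local", "out_if": port}

def learn_from_evpn(mac, next_hop_vtep, local_vtep):
    # Entry learned from a Type 2 route; the outbound "interface" is the
    # VXLAN tunnel recursed from the route's next hop (the remote VTEP IP).
    mac_table[mac] = {"type": "remote",
                      "out_if": ("vxlan", local_vtep, next_hop_vtep)}

learn_local("00:01:02:03:04:05", "Port1")                   # e.g. Host3 on Leaf1
learn_from_evpn("00:01:02:03:04:06", "2.2.2.2", "1.1.1.1")  # e.g. Host2 via Leaf2
print(mac_table["00:01:02:03:04:06"]["out_if"])
# ('vxlan', '1.1.1.1', '2.2.2.2')
```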

Intra-subnet Forwarding of Known Unicast Packets

Intra-subnet known unicast packets are forwarded only between Layer 2 VXLAN gateways; Layer 3 VXLAN gateways are unaware of them. Figure 1-1042 shows the forwarding process of known unicast packets.

Figure 1-1042 Intra-subnet forwarding of known unicast packets
  1. After Leaf1 receives a packet from Host3, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information, and searches for the outbound interface and encapsulation information in the broadcast domain.
  2. Leaf1's VTEP performs VXLAN encapsulation based on the obtained encapsulation information and forwards the packet through the outbound interface obtained.
  3. After the VTEP on Leaf2 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
  4. Leaf2 obtains the destination MAC address of the inner Layer 2 packet, adds a VLAN tag to the packet based on the outbound interface and encapsulation information in the local MAC address table, and forwards the packet to Host2.

Host2 sends packets to Host3 in the same process.
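The encapsulation and validity check in the steps above follow the VXLAN packet format defined in RFC 7348: an 8-byte header with an I (VNI present) flag and a 24-bit VNI, carried in a UDP datagram whose destination port is 4789. The sketch below illustrates that format; the VNI value and payload are hypothetical.

```python
import struct

# Hypothetical sketch of VXLAN encapsulation/decapsulation (RFC 7348 format).

VXLAN_UDP_PORT = 4789
VXLAN_FLAG_I = 0x08  # "VNI present" flag in the first header byte

def encapsulate(vni, inner_frame):
    # 8-byte header: flags byte + 3 reserved bytes, then 24-bit VNI + 1 reserved byte.
    header = struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)
    return header + inner_frame

def decapsulate(udp_dst_port, payload):
    """Validate the packet as the egress VTEP does, then strip the header."""
    if udp_dst_port != VXLAN_UDP_PORT:
        raise ValueError("not a VXLAN packet")
    word0, word1 = struct.unpack("!II", payload[:8])
    if not (word0 >> 24) & VXLAN_FLAG_I:
        raise ValueError("VNI flag not set")
    vni = word1 >> 8              # identifies the Layer 2 broadcast domain
    return vni, payload[8:]       # inner Layer 2 frame

pkt = encapsulate(5010, b"inner Ethernet frame")
vni, inner = decapsulate(4789, pkt)
print(vni)   # 5010
```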

Intra-subnet Forwarding of BUM Packets

Intra-subnet BUM packets are forwarded only between Layer 2 VXLAN gateways; Layer 3 VXLAN gateways are unaware of them. Intra-subnet BUM packets can be forwarded in ingress replication mode. In this mode, when a BUM packet enters a VXLAN tunnel, the access-side VTEP performs VXLAN encapsulation and then forwards the packet to all egress VTEPs in the ingress replication list. When the BUM packet leaves the VXLAN tunnel, the egress VTEP decapsulates the packet. Figure 1-1043 shows the forwarding process of BUM packets.

Figure 1-1043 Intra-subnet forwarding of BUM packets in ingress replication mode
  1. After Leaf1 receives a packet from TerminalA, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information in the packet.
  2. Leaf1's VTEP obtains the ingress replication list for the VNI, replicates the packet based on the list, and performs VXLAN encapsulation. Leaf1 then forwards the VXLAN packet through the outbound interface.
  3. After the VTEP on Leaf2 or Leaf3 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 or Leaf3 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
  4. Leaf2 or Leaf3 checks the destination MAC address of the inner Layer 2 packet and finds that it is a BUM MAC address. Leaf2 or Leaf3 therefore broadcasts the packet onto the network connected to terminals (not the VXLAN tunnel side) in the Layer 2 broadcast domain. Specifically, Leaf2 or Leaf3 finds the outbound interfaces and encapsulation information not related to the VXLAN tunnel, adds VLAN tags to the packet, and forwards the packet to TerminalB or TerminalC.

The forwarding process of a response packet from TerminalB/TerminalC to TerminalA is similar to the intra-subnet forwarding process of known unicast packets.
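Step 2 of the BUM forwarding process can be sketched as a per-VNI replication loop. The VNI value and VTEP addresses below are hypothetical; the point is that one ingress frame becomes one encapsulated copy per remote VTEP in the ingress replication list.

```python
# Hypothetical sketch of ingress replication: the ingress VTEP replicates a
# BUM frame once per remote VTEP listed for the VNI (the list is populated
# from inclusive multicast / Type 3 routes) and encapsulates each copy.

replication_list = {             # VNI -> remote VTEPs
    10: ["2.2.2.2", "3.3.3.3"]   # e.g. Leaf2 and Leaf3
}

def flood_bum(vni, frame):
    copies = []
    for remote_vtep in replication_list.get(vni, []):
        # Each copy gets its own VXLAN/UDP/IP encapsulation toward one VTEP.
        copies.append((remote_vtep, vni, frame))
    return copies

print(len(flood_bum(10, b"broadcast-frame")))   # 2 copies, one per egress VTEP
```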

Inter-subnet Packet Forwarding

Inter-subnet packets must be forwarded through a Layer 3 gateway. Figure 1-1044 shows the inter-subnet packet forwarding process in centralized VXLAN gateway scenarios.

Figure 1-1044 Inter-subnet packet forwarding
  1. After Leaf1 receives a packet from Host1, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN in the packet, and searches for the outbound interface and encapsulation information in the Layer 2 broadcast domain.
  2. The VTEP on Leaf1 performs VXLAN tunnel encapsulation based on the outbound interface and encapsulation information, and forwards the packet to Spine.
  3. Spine decapsulates the received VXLAN packet, finds that the destination MAC address in the inner packet is MAC3 of the Layer 3 gateway interface VBDIF10, and determines that the packet needs to be forwarded at Layer 3.
  4. Spine removes the Ethernet header of the inner packet and parses the destination IP address. It then searches the routing table based on the destination IP address to obtain the next hop address, and searches ARP entries based on the next hop to obtain the destination MAC address, VXLAN tunnel outbound interface, and VNI.
  5. Spine re-encapsulates the VXLAN packet and forwards it to Leaf2. The source MAC address in the Ethernet header of the inner packet is MAC4 of the Layer 3 gateway interface VBDIF20.
  6. After the VTEP on Leaf2 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. The VTEP then obtains the Layer 2 broadcast domain based on the VNI, decapsulates the packet to obtain the inner Layer 2 packet, and searches for the outbound interface and encapsulation information in the corresponding Layer 2 broadcast domain.
  7. Leaf2 adds a VLAN tag to the packet based on the outbound interface and encapsulation information, and forwards the packet to Host2.

Host2 sends packets to Host1 through a similar process.
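Steps 4 and 5 of the gateway's Layer 3 relay can be sketched as a route lookup followed by an ARP lookup and re-encapsulation. The prefixes, next-hop address, tunnel name, and VNI below are hypothetical; MAC2 and MAC4 follow the figure's labels.

```python
# Hypothetical sketch of centralized-gateway inter-subnet forwarding: after
# decapsulation, the Layer 3 gateway looks up the inner destination IP in its
# routing table, resolves the next hop via ARP, and re-encapsulates the packet
# toward the egress leaf with a new inner Ethernet header.

routing_table = {"10.1.20.0/24": "10.1.20.2"}              # prefix -> next hop
arp_table = {"10.1.20.2": ("MAC2", "leaf2-tunnel", 5020)}  # next hop -> (MAC, out_if, VNI)

def l3_forward(dst_ip):
    # Longest-prefix match is simplified to a single /24 entry here.
    next_hop = routing_table["10.1.20.0/24"]
    dst_mac, tunnel, vni = arp_table[next_hop]
    # The new inner source MAC is the gateway's VBDIF interface MAC (MAC4).
    return {"inner_dst_mac": dst_mac, "inner_src_mac": "MAC4",
            "tunnel": tunnel, "vni": vni}

print(l3_forward("10.1.20.2")["vni"])   # 5020
```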

Establishment of a VXLAN in Distributed Gateway Mode Using BGP EVPN

During the establishment of a VXLAN in distributed gateway mode using BGP EVPN, the control plane process includes VXLAN tunnel establishment and dynamic MAC address learning.

The forwarding plane process includes intra-subnet forwarding of known unicast packets, intra-subnet forwarding of BUM packets, and inter-subnet packet forwarding.

This mode supports the advertisement of host IP routes, MAC addresses, and ARP entries. For details, see EVPN VXLAN Fundamentals. This mode is recommended for establishing VXLANs with distributed gateways.

The following uses an IPv4 over IPv4 network as an example. Table 1-472 shows the implementation differences between IPv4 over IPv4 networks and other combinations of underlay and overlay networks.
Table 1-472 Implementation differences

Combination Type

Implementation Difference

IPv6 over IPv4

  • In the inter-subnet forwarding scenario where VXLAN tunnels are established using BGP EVPN, if VXLAN gateways advertise IP prefix routes to each other, they can advertise only network segment routes, and cannot advertise host routes.

  • During dynamic MAC address learning, the Layer 2 gateway learns the local host's MAC address through neighbor discovery. Hosts at both ends learn each other's MAC address by exchanging NS/NA packets.

  • During inter-subnet packet forwarding, a gateway must search the IPv6 routing table in the local L3VPN instance.

IPv4 over IPv6

  • A BGP EVPN IPv6 peer relationship is established between gateways.
  • The VTEP IP addresses are IPv6 addresses.

IPv6 over IPv6

  • A BGP EVPN IPv6 peer relationship is established between gateways.
  • The VTEP IP addresses are IPv6 addresses.
  • During dynamic MAC address learning, the Layer 2 gateway learns the local host's MAC address through neighbor discovery. Hosts at both ends learn each other's MAC address by exchanging NS/NA packets.
  • During inter-subnet packet forwarding, a gateway must search the IPv6 routing table in the local L3VPN instance.
VXLAN Tunnel Establishment

A VXLAN tunnel is identified by a pair of VTEP IP addresses. During VXLAN tunnel establishment, the local and remote VTEPs attempt to obtain IP addresses of each other. A VXLAN tunnel can be established if the IP addresses obtained are routable at Layer 3. When BGP EVPN is used to dynamically establish a VXLAN tunnel, the local and remote VTEPs first establish a BGP EVPN peer relationship and then exchange BGP EVPN routes to transmit VNIs and VTEP IP addresses.

In distributed VXLAN gateway scenarios, leaf nodes function as both Layer 2 and Layer 3 VXLAN gateways. Spine nodes are unaware of the VXLAN tunnels and only forward VXLAN packets between different leaf nodes. On the control plane, a VXLAN tunnel only needs to be set up between leaf nodes. In Figure 1-1045, a VXLAN tunnel is established between Leaf1 and Leaf2 for Host1 and Host2 or Host3 and Host2 to communicate. Because Host1 and Host3 both connect to Leaf1, they can directly communicate through Leaf1 instead of over a VXLAN tunnel.

A VXLAN tunnel is determined by a pair of VTEP IP addresses. If a local VTEP learns the same remote VTEP IP address from multiple routes, only one VXLAN tunnel is established to that remote VTEP; packets for different services are encapsulated with their respective VNIs before being forwarded through the shared tunnel.

Figure 1-1045 VXLAN tunnel networking

In distributed gateway scenarios, BGP EVPN can be used to dynamically establish VXLAN tunnels in either of the following situations: intra-subnet communication and inter-subnet communication.

Intra-subnet Communication

On the network shown in Figure 1-1046, intra-subnet communication between Host2 and Host3 requires only Layer 2 forwarding. The process for establishing a VXLAN tunnel using BGP EVPN is as follows.

Figure 1-1046 Dynamic VXLAN tunnel establishment (1)
  1. First, a BGP EVPN peer relationship is established between Leaf1 and Leaf2. Then, Layer 2 broadcast domains are created on Leaf1 and Leaf2, and VNIs are bound to the Layer 2 broadcast domains. Next, an EVPN instance is configured in each Layer 2 broadcast domain, and an RD, an ERT, and an IRT are configured for the EVPN instance. After the local VTEP IP address is configured on Leaf1 and Leaf2, they generate a BGP EVPN route and send it to each other. The BGP EVPN route carries the local EVPN instance's ERT and an inclusive multicast route (Type 3 route defined in BGP EVPN). Figure 1-1047 shows the format of an inclusive multicast route, which comprises a prefix and a PMSI attribute. VTEP IP addresses are stored in the Originating Router's IP Address field in the inclusive multicast route prefix, and VNIs are stored in the MPLS Label field in the PMSI attribute. The VTEP IP address is also included in the Next_Hop attribute.

    Figure 1-1047 Format of an inclusive multicast route
  2. After Leaf1 and Leaf2 receive a BGP EVPN route from each other, they match the ERT of the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. Leaf1 and Leaf2 obtain the peer VTEP IP address (from the Next_Hop attribute) and VNI carried in the route. If the peer VTEP IP address is reachable at Layer 3, they establish a VXLAN tunnel to the peer end. Moreover, the local end creates a VNI-based ingress replication table and adds the peer VTEP IP address to the table for forwarding BUM packets.

A VPN target is a BGP extended community attribute. An EVPN instance can have both an IRT and an ERT configured. For EVPN route advertisement to succeed, the local EVPN instance's ERT must match the remote EVPN instance's IRT; otherwise, VXLAN tunnels cannot be dynamically established. If only one end accepts the BGP EVPN route, that end can establish a VXLAN tunnel to the other end but cannot exchange data packets with it, because the other end drops received packets after finding that it has no VXLAN tunnel to the sender.

For details about VPN targets, see Basic BGP/MPLS IP VPN.

Inter-Subnet Communication

Inter-subnet communication between Host1 and Host2 requires Layer 3 forwarding. When VXLAN tunnels are established using BGP EVPN, Leaf1 and Leaf2 must advertise host IP routes, typically 32-bit host IP routes. Because different leaf nodes may connect to the same network segment on the VXLAN network, the network segment routes advertised by the leaf nodes may conflict, which can make hosts attached to some leaf nodes unreachable. Leaf nodes can advertise network segment routes in the following scenarios:

  • The network segment that a leaf node connects to is unique on a VXLAN, and a large number of specific host routes are available. In this case, the routes of the network segment to which the host IP routes belong can be advertised so that leaf nodes do not have to store all these routes.

  • When hosts on a VXLAN need to access external networks, leaf nodes can advertise routes destined for external networks onto the VXLAN to allow other leaf nodes to learn the routes.

Before establishing a VXLAN tunnel, perform the configurations listed in the following table on Leaf1 and Leaf2.

Step: Create a Layer 2 broadcast domain and associate a Layer 2 VNI with the Layer 2 broadcast domain.
Function: A broadcast domain functions as a VXLAN network entity to transmit VXLAN data packets.

Step: Establish a BGP EVPN peer relationship between Leaf1 and Leaf2.
Function: This configuration is used to exchange BGP EVPN routes.

Step: Configure an EVPN instance in a Layer 2 broadcast domain, and configure an RD, an ERT, and an IRT for the EVPN instance.
Function: This configuration is used to generate BGP EVPN routes.

Step: Configure L3VPN instances for tenants and bind the L3VPN instances to the VBDIF interfaces of the Layer 2 broadcast domain.
Function: This configuration is used to differentiate and isolate IP routing tables of different tenants.

Step: Specify a Layer 3 VNI for an L3VPN instance.
Function: This configuration allows the leaf nodes to determine the L3VPN routing table for forwarding data packets.

Step: Configure the export VPN target (eERT) and import VPN target (eIRT) for EVPN routes in the L3VPN instance.
Function: This configuration controls how the local L3VPN instance advertises and receives BGP EVPN routes.

Step: Configure the type of route to be advertised between Leaf1 and Leaf2.
Function: This configuration is used to advertise IP routes between Host1 and Host2. Two types of routes are available, IRB routes and IP prefix routes, which can be selected as needed.

  • IRB routes advertise only 32-bit host IP routes. IRB routes include ARP routes. Therefore, if only 32-bit host IP routes need to be advertised, it is recommended that IRB routes be advertised.

  • IP prefix routes can advertise both 32-bit host IP routes and network segment routes. However, before IP prefix routes can advertise 32-bit host IP routes, direct routes to the host IP addresses must be generated, which affects VM migration. If only 32-bit host IP route advertisement is needed, advertising IP prefix routes is not recommended. Advertise IP prefix routes only when network segment route advertisement is needed.

Dynamic VXLAN tunnel establishment varies depending on how host IP routes are advertised.

  • Host IP routes are advertised through IRB routes. (Figure 1-1048 shows the process.)

    Figure 1-1048 Dynamic VXLAN tunnel establishment (2)
    1. When Host1 communicates with Leaf1 for the first time, Leaf1 learns the ARP entry of Host1 after receiving dynamic ARP packets. Leaf1 then finds the L3VPN instance bound to the VBDIF interface of the Layer 2 broadcast domain where Host1 resides, and obtains the Layer 3 VNI associated with the L3VPN instance. The EVPN instance of Leaf1 then generates an IRB route based on the information obtained. Figure 1-1049 shows the IRB route. The host IP address is stored in the IP Address Length and IP Address fields; the Layer 3 VNI is stored in the MPLS Label2 field.

      Figure 1-1049 IRB route
    2. Leaf1 generates and sends a BGP EVPN route to Leaf2. The BGP EVPN route carries the local EVPN instance's ERT, extended community attribute, Next_Hop attribute, and the IRB route. The extended community attribute carries the tunnel type (VXLAN tunnel) and local VTEP MAC address; the Next_Hop attribute carries the local VTEP IP address.

    3. After Leaf2 receives the BGP EVPN route from Leaf1, Leaf2 processes the route as follows:

      • If the ERT carried in the route is the same as the IRT of the local EVPN instance, the route is accepted. After the EVPN instance obtains IRB routes, it can extract ARP routes from the IRB routes for the advertisement of host ARP entries.

      • If the ERT carried in the route is the same as the eIRT of the local L3VPN instance, the route is accepted. Then, the L3VPN instance obtains the IRB route carried in the route, extracts the host IP address and Layer 3 VNI of Host1, and saves the host IP route of Host1 to the routing table. The outbound interface is obtained through recursion based on the next hop of the route. The final recursion result is the VXLAN tunnel to Leaf1, as shown in Figure 1-1050.

        A BGP EVPN route is discarded only when the ERT carried in the route matches neither the local EVPN instance's IRT nor the local L3VPN instance's eIRT.

        Figure 1-1050 Remote host IP route information
      • If the route is accepted by the EVPN instance or L3VPN instance, Leaf2 obtains Leaf1's VTEP IP address from the Next_Hop attribute. If the VTEP IP address is routable at Layer 3, a VXLAN tunnel to Leaf1 is established.

    Leaf1 establishes a VXLAN tunnel to Leaf2 through a similar process.

  • Host IP routes are advertised through IP prefix routes, as shown in Figure 1-1051.

    Figure 1-1051 Dynamic VXLAN tunnel establishment (3)
    1. Leaf1 generates a direct route to Host1's IP address. An L3VPN instance on Leaf1 is then configured to import the direct route, so that Host1's IP route is saved to the routing table of the L3VPN instance together with the Layer 3 VNI associated with the instance. Figure 1-1052 shows the local host IP route.

      Figure 1-1052 Local host IP route information

      If network segment route advertisement is required, use a dynamic routing protocol, such as OSPF. Then configure an L3VPN instance to import the routes of the dynamic routing protocol.

    2. Leaf1 is configured to advertise IP prefix routes in the L3VPN instance. Figure 1-1053 shows the IP prefix route. The host IP address is stored in the IP Prefix Length and IP Prefix fields; the Layer 3 VNI is stored in the MPLS Label field. Leaf1 generates and sends a BGP EVPN route to Leaf2. The BGP EVPN route carries the local L3VPN instance's eERT, extended community attribute, Next_Hop attribute, and the IP prefix route. The extended community attribute carries the tunnel type (VXLAN tunnel) and local VTEP MAC address; the Next_Hop attribute carries the local VTEP IP address.

      Figure 1-1053 Format of an IP prefix route
    3. After Leaf2 receives the BGP EVPN route from Leaf1, Leaf2 processes the route as follows:

      • Matches the eERT of the route against the eIRT of the local L3VPN instance. If a match is found, the route is accepted. Then, the L3VPN instance obtains the IP prefix type route carried in the route, extracts the host IP address and Layer 3 VNI of Host1, and saves the host IP route of Host1 to the routing table. The outbound interface is obtained through recursion based on the next hop of the route. The final recursion result is the VXLAN tunnel to Leaf1, as shown in Figure 1-1054.

        Figure 1-1054 Remote host IP route information
      • If the route is accepted by the EVPN instance or L3VPN instance, Leaf2 obtains Leaf1's VTEP IP address from the Next_Hop attribute. If the VTEP IP address is routable at Layer 3, a VXLAN tunnel to Leaf1 is established.

    Leaf1 establishes a VXLAN tunnel to Leaf2 through a similar process.
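The acceptance rule in the distributed-gateway case can be sketched as a dual match: the route's ERT is checked against both the EVPN instance's IRT and the L3VPN instance's eIRT, and the route is discarded only when neither matches. The route-target values below are hypothetical.

```python
# Hypothetical sketch of distributed-gateway route acceptance: a BGP EVPN
# route carrying an IRB or IP prefix route may be accepted by the EVPN
# instance (IRT match), the L3VPN instance (eIRT match), or both.

def process_route(route_erts, evpn_irts, l3vpn_eirts):
    erts = set(route_erts)
    accepted_by = []
    if erts & set(evpn_irts):
        accepted_by.append("evpn")    # e.g. ARP route extracted from an IRB route
    if erts & set(l3vpn_eirts):
        accepted_by.append("l3vpn")   # host IP route + Layer 3 VNI installed
    return accepted_by or ["discard"]

print(process_route(["100:1"], ["100:1"], ["200:1"]))   # ['evpn']
print(process_route(["200:1"], ["100:1"], ["200:1"]))   # ['l3vpn']
print(process_route(["300:1"], ["100:1"], ["200:1"]))   # ['discard']
```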

Dynamic MAC Address Learning

VXLAN supports dynamic MAC address learning to allow communication between tenants. MAC address entries are dynamically created and do not need to be manually maintained, greatly reducing maintenance workload. In distributed VXLAN gateway scenarios, inter-subnet communication requires Layer 3 forwarding; MAC address learning is implemented using dynamic ARP packets between the local host and gateway. The following example illustrates dynamic MAC address learning for intra-subnet communication of hosts on the network shown in Figure 1-1055.

Figure 1-1055 Dynamic MAC address learning
  1. Host3 sends dynamic ARP packets when it first communicates with Leaf1. Leaf1 learns the MAC address of Host3 and the mapping between the BDID and packet inbound interface (that is, the physical interface Port 1 corresponding to the Layer 2 sub-interface), and generates a MAC address entry about Host3 in the local MAC address table, with the outbound interface being Port 1. Leaf1 generates a BGP EVPN route based on the ARP entry of Host3 and sends it to Leaf2. The BGP EVPN route carries the local EVPN instance's ERT, Next_Hop attribute, and a Type 2 route (MAC/IP route) defined in BGP EVPN. The Next_Hop attribute carries the local VTEP's IP address. The MAC Address Length and MAC Address fields identify Host3's MAC address. The Layer 2 VNI is stored in the MPLS Label1 field. Figure 1-1056 shows the format of a MAC route or an IP route.

    Figure 1-1056 Format of a MAC/IP route
  2. After receiving the BGP EVPN route from Leaf1, Leaf2 matches the ERT carried in the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. After accepting the route, Leaf2 obtains the MAC address of Host3 and the mapping between the BDID and the VTEP IP address (Next_Hop attribute) of Leaf1, and generates the MAC address entry of Host3 in the local MAC address table. The outbound interface is obtained through recursion based on the next hop, and the final recursion result is the VXLAN tunnel destined for Leaf1.

Leaf1 learns the MAC route of Host2 through a similar process.

Leaf nodes can learn the MAC addresses of hosts during data forwarding, depending on their capabilities to learn MAC addresses from data packets. If VXLAN tunnels are established using BGP EVPN, leaf nodes can dynamically learn the MAC addresses of hosts through BGP EVPN routes, rather than during data forwarding.

Intra-subnet Forwarding of Known Unicast Packets

Intra-subnet known unicast packets are forwarded only between Layer 2 VXLAN gateways; Layer 3 VXLAN gateways are unaware of them. Figure 1-1057 shows the forwarding process of known unicast packets.

Figure 1-1057 Intra-subnet forwarding of known unicast packets
  1. After Leaf1 receives a packet from Host3, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information, and searches for the outbound interface and encapsulation information in the broadcast domain.
  2. Leaf1's VTEP performs VXLAN encapsulation based on the obtained encapsulation information and forwards the packet through the outbound interface obtained.
  3. After the VTEP on Leaf2 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
  4. Leaf2 obtains the destination MAC address of the inner Layer 2 packet, adds a VLAN tag to the packet based on the outbound interface and encapsulation information in the local MAC address table, and forwards the packet to Host2.

Host2 sends packets to Host3 through a similar process.
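The VXLAN encapsulation and decapsulation performed by the VTEPs in steps 2 and 3 follow the packet format defined in RFC 7348: an 8-byte VXLAN header carrying a 24-bit VNI, transported in UDP with destination port 4789. A minimal Python sketch of the header handling (outer Ethernet/IP/UDP construction is omitted for brevity):

```python
import struct

# Sketch of RFC 7348 VXLAN header handling. Flags byte 0x08 (the I bit)
# marks the VNI field as valid; the 24-bit VNI occupies the upper bits of
# the second 4-byte word.

VXLAN_UDP_PORT = 4789  # well-known VXLAN UDP destination port

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to the original Layer 2 frame."""
    header = struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes):
    """Validate the header, then return (vni, inner_frame)."""
    if packet[0] & 0x08 == 0:
        raise ValueError("VNI not marked valid")
    vni = struct.unpack("!I", packet[4:8])[0] >> 8
    return vni, packet[8:]
```

On receipt, a VTEP uses the recovered VNI to locate the Layer 2 broadcast domain, as in step 3 above.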

Intra-subnet Forwarding of BUM Packets

Intra-subnet BUM packets are forwarded only between Layer 2 VXLAN gateways and are invisible to Layer 3 VXLAN gateways. Intra-subnet BUM packets can be forwarded in ingress replication mode. In this mode, when a BUM packet enters a VXLAN tunnel, the access-side VTEP performs VXLAN encapsulation and then forwards the packet to all egress VTEPs in the ingress replication list. When the BUM packet leaves the VXLAN tunnel, the egress VTEP decapsulates the packet. Figure 1-1058 shows the forwarding process of BUM packets.

Figure 1-1058 Intra-subnet forwarding of BUM packets in ingress replication mode
  1. After Leaf1 receives a packet from TerminalA, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information in the packet.
  2. Leaf1's VTEP obtains the ingress replication list for the VNI, replicates the packet based on the list, and performs VXLAN encapsulation. Leaf1 then forwards the VXLAN packet through the outbound interface.
  3. After the VTEP on Leaf2 or Leaf3 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 or Leaf3 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
  4. Leaf2 or Leaf3 checks the destination MAC address of the inner Layer 2 packet and finds that it is a BUM MAC address. Therefore, Leaf2 or Leaf3 broadcasts the packet onto the network connected to terminals (not the VXLAN tunnel side) in the Layer 2 broadcast domain. Specifically, Leaf2 or Leaf3 finds the outbound interfaces and encapsulation information not related to the VXLAN tunnel, adds VLAN tags to the packet, and forwards the packet to TerminalB or TerminalC.

The forwarding process of a response packet from TerminalB/TerminalC to TerminalA is similar to the intra-subnet forwarding process of known unicast packets.
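The ingress replication behavior in step 2 can be sketched as follows, assuming a hypothetical per-VNI replication list:

```python
# Sketch of ingress replication for BUM packets (hypothetical data model):
# the ingress VTEP replicates the frame once per egress VTEP in the
# replication list for the BD's VNI and unicasts a VXLAN copy to each.

def replicate_bum(frame, vni, replication_lists):
    """Return one VXLAN copy per egress VTEP in the list for this VNI."""
    copies = []
    for remote_vtep in replication_lists.get(vni, []):
        copies.append({"dst_vtep": remote_vtep, "vni": vni, "payload": frame})
    return copies
```

Each copy is then VXLAN-encapsulated and unicast to its `dst_vtep`, which decapsulates it and floods it toward its attached terminals.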

Inter-subnet Packet Forwarding

Inter-subnet packets must be forwarded through a Layer 3 gateway. Figure 1-1059 shows the inter-subnet packet forwarding process in distributed VXLAN gateway scenarios.

Figure 1-1059 Inter-subnet packet forwarding
  1. After Leaf1 receives a packet from Host1, it finds that the destination MAC address of the packet is a gateway MAC address, meaning that the packet must be forwarded at Layer 3.
  2. Leaf1 first determines the Layer 2 broadcast domain of the packet based on the inbound interface and then finds the L3VPN instance to which the VBDIF interface of the Layer 2 broadcast domain is bound. Leaf1 searches the routing table of the L3VPN instance for a matching host route based on the destination IP address of the packet and obtains the Layer 3 VNI and next hop address corresponding to the route. Figure 1-1060 shows the host route in the L3VPN routing table. If the outbound interface is a VXLAN tunnel, Leaf1 determines that VXLAN encapsulation is required and then:
    • Obtains MAC addresses based on the VXLAN tunnel's source and destination IP addresses and replaces the source and destination MAC addresses in the inner Ethernet header.
    • Encapsulates the Layer 3 VNI into the packet.
    • Encapsulates the VXLAN tunnel's destination and source IP addresses in the outer header. The source MAC address is the MAC address of the outbound interface on Leaf1, and the destination MAC address is the MAC address of the next hop.
    Figure 1-1060 Host route information in the L3VPN routing table
  3. The VXLAN packet is then transmitted over the IP network based on the IP and MAC addresses in the outer headers and finally reaches Leaf2.
  4. After Leaf2 receives the VXLAN packet, it decapsulates the packet and finds that the destination MAC address is its own MAC address. It then determines that the packet must be forwarded at Layer 3.
  5. Leaf2 finds the corresponding L3VPN instance based on the Layer 3 VNI carried in the packet. Then, Leaf2 searches the routing table of the L3VPN instance and finds that the next hop of the packet is the gateway interface address. Leaf2 then replaces the destination MAC address with the MAC address of Host2, replaces the source MAC address with the MAC address of Leaf2, and forwards the packet to Host2.

Host2 sends packets to Host1 in a similar process.
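The route lookup in step 2 can be illustrated with a minimal longest-prefix match over a hypothetical L3VPN routing table, where each entry yields the Layer 3 VNI and next-hop VTEP used for VXLAN encapsulation:

```python
import ipaddress

# Sketch of the Layer 3 gateway lookup (hypothetical data structures): the
# L3VPN routing table maps destination prefixes to a Layer 3 VNI and a
# next-hop VTEP; the longest matching prefix wins, so a /32 host route is
# preferred over a covering subnet route.

def l3vpn_lookup(routes, dst_ip):
    """routes: list of (prefix, l3_vni, next_hop_vtep) tuples.
    Returns the entry with the longest matching prefix, or None."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, vni, nh in routes:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, vni, nh)
    return None if best is None else {"l3_vni": best[1], "next_hop": best[2]}
```

The returned Layer 3 VNI is encapsulated into the VXLAN packet, and the next hop drives the outer IP/MAC encapsulation described above.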

When Huawei devices need to communicate with non-Huawei devices, ensure that the non-Huawei devices use the same forwarding mode. Otherwise, the Huawei devices may fail to communicate with non-Huawei devices.

Function Enhancements

Establishment of a Three-Segment VXLAN for Layer 3 Communication Between DCs

Background

To meet the requirements of inter-regional operations, user access, geographical redundancy, and other scenarios, an increasing number of enterprises deploy DCs across regions. Data Center Interconnect (DCI) is a solution that enables communication between VMs in different DCs. Using technologies such as VXLAN and BGP EVPN, DCI securely and reliably transmits DC packets over carrier networks. Three-segment VXLAN can be configured to enable inter-subnet communication between VMs in different DCs.

Benefits

Three-segment VXLAN enables Layer 3 communication between DCs and offers the following benefits to users:

  • Hosts in different DCs can communicate at Layer 3.
  • Different DCs do not need to run the same routing protocol for communication.
  • Different DCs do not require information orchestration for communication.
Implementation

Three-segment VXLAN establishes one VXLAN tunnel segment in each of the DCs and also establishes one VXLAN tunnel segment between the DCs. As shown in Figure 1-1061, BGP EVPN is used to create VXLAN tunnels in distributed gateway mode within both DC A and DC B so that the VMs in each DC can communicate with each other. Leaf2 and Leaf3 are the edge devices within the DCs that connect to the backbone network. BGP EVPN is used to configure a VXLAN tunnel between Leaf2 and Leaf3, so that the VXLAN packets received by one DC can be decapsulated, re-encapsulated, and sent to the peer DC. This process provides E2E transport for inter-DC VXLAN packets and ensures that VMs in different DCs can communicate with each other.

This function applies only to IPv4 over IPv4 networks.

In three-segment VXLAN scenarios, only VXLAN tunnels in distributed gateway mode can be deployed in DCs.

Figure 1-1061 Using three-segment VXLAN for DCI

Control Plane

The following describes how three-segment VXLAN tunnels are established.

The process of advertising routes on Leaf1 and Leaf4 is not described in this section. For details, see VXLAN Tunnel Establishment.

  1. Leaf4 learns the IP address of VMb2 in DC B and saves it to the routing table for the L3VPN instance. Leaf4 then sends a BGP EVPN route to Leaf3.
  2. As shown in Figure 1-1062, Leaf3 receives the BGP EVPN route and obtains the host IP route contained in it. Leaf3 then establishes a VXLAN tunnel to Leaf4 according to the process described in VXLAN Tunnel Establishment. Leaf3 sets the next hop of the route to its own VTEP address, re-encapsulates the route with the Layer 3 VNI of the L3VPN instance, and sets the source MAC address of the route to its own MAC address. Finally, Leaf3 sends the re-encapsulated BGP EVPN route to Leaf2.
    Figure 1-1062 Control plane process

  3. Leaf2 receives the BGP EVPN route and obtains the host IP route contained in it. Leaf2 then establishes a VXLAN tunnel to Leaf3 according to the process described in VXLAN Tunnel Establishment. Leaf2 sets the next hop of the route to its own VTEP address, re-encapsulates the route with the Layer 3 VNI of the L3VPN instance, and sets the source MAC address of the route to its own MAC address. Finally, Leaf2 sends the re-encapsulated BGP EVPN route to Leaf1.
  4. Leaf1 receives the BGP EVPN route and establishes a VXLAN tunnel to Leaf2 according to the process described in VXLAN Tunnel Establishment.
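The re-origination performed by Leaf2 and Leaf3 in steps 2 and 3 can be sketched as follows (hypothetical route fields; all other route attributes are carried unchanged):

```python
# Sketch of BGP EVPN route re-origination on a DC edge leaf (hypothetical
# model): the next hop is rewritten to the local VTEP address, the VNI is
# replaced with the local L3VPN instance's Layer 3 VNI, and the gateway MAC
# becomes the local MAC address, so the downstream peer recurses to a
# VXLAN tunnel terminating on this node.

def reoriginate(route, local_vtep, local_l3_vni, local_mac):
    """Return a re-originated copy of the route; the original is untouched."""
    new_route = dict(route)
    new_route["next_hop"] = local_vtep
    new_route["l3_vni"] = local_l3_vni
    new_route["gw_mac"] = local_mac
    return new_route
```

Applied twice (once per edge leaf), this yields the three independent tunnel segments: each node's tunnel terminates at the VTEP address carried in the route it received.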

Data Packet Forwarding

A general overview of the packet forwarding process on Leaf1 and Leaf4 is provided below. For detailed information, see Inter-subnet Packet Forwarding.

  1. Leaf1 receives Layer 2 packets destined for VMb2 from VMa1 and determines that the destination MAC addresses in these packets are all gateway interface MAC addresses. Leaf1 then terminates these Layer 2 packets and finds the L3VPN instance corresponding to the BDIF interface through which VMa1 accesses the broadcast domain. Leaf1 then searches the L3VPN instance routing table for the VMb2 host route, encapsulates the received packets as VXLAN packets, and sends them to Leaf2 over the VXLAN tunnel.
  2. As shown in Figure 1-1063, Leaf2 receives and parses these VXLAN packets. After finding the L3VPN instance corresponding to the Layer 3 VNI of the packets, Leaf2 searches the L3VPN instance routing table for the VMb2 host route. Leaf2 then re-encapsulates these VXLAN packets (setting the Layer 3 VNI and inner destination MAC address to the Layer 3 VNI and MAC address carried in the VMb2 host route sent by Leaf3). Finally, Leaf2 sends these packets to Leaf3.
    Figure 1-1063 Data packet forwarding

  3. As shown in Figure 1-1063, Leaf3 receives and parses these VXLAN packets. After finding the L3VPN instance corresponding to the Layer 3 VNI of the packets, Leaf3 searches the L3VPN instance routing table for the VMb2 host route. Leaf3 then re-encapsulates these VXLAN packets (setting the Layer 3 VNI and inner destination MAC address to the Layer 3 VNI and MAC address carried in the VMb2 host route sent by Leaf4). Finally, Leaf3 sends these packets to Leaf4.
  4. Leaf4 receives and parses these VXLAN packets. After finding the L3VPN instance corresponding to the Layer 3 VNI of the packets, Leaf4 searches the L3VPN instance routing table for the VMb2 host route. Using this routing information, Leaf4 forwards these packets to VMb2.
Other Functions

Local leaking of EVPN routes is needed in scenarios where different VPN instances are used for the access of different services in a DC, but an external VPN instance is used to communicate with other DCs so that VPN instance allocation information within the DC is hidden from the outside. Depending on route sources, this function can be used in the following scenarios:

Local VPN routes are advertised through EVPN after being locally leaked

As shown in Figure 1-1064, the process is as follows:
  1. The function to import VPN routes to a local VPN instance named vpn1 is configured in the BGP VPN instance IPv4 or IPv6 address family.
  2. vpn1 sends received routes to the VPNv4 or VPNv6 component, which then checks whether the ERT of vpn1 is the same as the IRT of the external VPN instance vpn2. If they are the same, the VPNv4 or VPNv6 component imports these routes to vpn2.
  3. vpn2 sends locally leaked routes to the EVPN component and advertises these routes as BGP EVPN routes to peers. In this case, vpn2 must be able to advertise locally leaked routes as BGP EVPN routes.
Figure 1-1064 Local leaking of EVPN routes (1)

Remote public network routes are advertised through EVPN after being locally leaked

As shown in Figure 1-1065, the process is as follows:
  1. The EVPN component receives public network routes from a remote peer.
  2. The EVPN component imports the received routes to vpn1.
  3. vpn1 sends received routes to the VPNv4 or VPNv6 component, which then checks whether the ERT of vpn1 is the same as the IRT of vpn2. If they are the same, the VPNv4 or VPNv6 component imports these routes to vpn2. In this case, vpn1 must be able to perform remote and local route leaking in succession.
  4. vpn2 sends locally leaked routes to the EVPN component and advertises these routes as BGP EVPN routes to peers. In this case, vpn2 must be able to advertise locally leaked routes as BGP EVPN routes.
Figure 1-1065 Local leaking of EVPN routes (2)
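The ERT/IRT check that drives local leaking in both scenarios can be sketched as follows (hypothetical data model; a real implementation evaluates each route against its own RT list):

```python
# Sketch of RT-based local leaking between VPN instances (hypothetical
# model): routes held in vpn1 are copied into the external instance vpn2
# only if vpn1's export RTs intersect vpn2's import RTs. vpn2 can then
# advertise the leaked routes as BGP EVPN routes to remote peers.

def leak_routes(vpn1_routes, vpn1_erts, vpn2_irts):
    """Return the routes leaked from vpn1 into vpn2, or [] on RT mismatch."""
    if not set(vpn1_erts) & set(vpn2_irts):
        return []  # ERT/IRT mismatch: nothing is leaked
    return [dict(r, leaked_from="vpn1") for r in vpn1_routes]
```

In the second scenario above, the same check runs after the EVPN component has first imported remote public network routes into vpn1, i.e. remote leaking followed by local leaking.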

Using Three-Segment VXLAN to Implement Layer 2 Interconnection Between DCs

Background

Figure 1-1066 shows the scenario where three-segment VXLAN is deployed to implement Layer 2 interconnection between DCs. VXLAN tunnels are configured both within DC A and DC B and between the transit leaf nodes in the two DCs. To enable communication between VM1 and VM2, Layer 2 communication must be implemented between DC A and DC B. If the VXLAN tunnels within DC A and DC B use the same VXLAN network identifier (VNI), this VNI can also be used to establish a VXLAN tunnel between Transit Leaf1 and Transit Leaf2. In practice, however, different DCs have their own VNI spaces, so the VXLAN tunnels within DC A and DC B tend to use different VNIs. In this case, to establish a VXLAN tunnel between Transit Leaf1 and Transit Leaf2, VNI conversion must be implemented.

Figure 1-1066 Deployment of three-segment VXLAN for Layer 2 interworking
Benefits

This solution offers the following benefits to users:

  • Implements Layer 2 interconnection between hosts in different DCs.

  • Decouples the VNI space of the network within a DC from that of the network between DCs, simplifying network maintenance.

  • Isolates network faults within a DC from those between DCs, facilitating fault location.

Principles

Currently, this solution is implemented in local VNI mode, which is similar to downstream label allocation: the local VNI of the peer transit leaf node functions as the outbound VNI, which the local transit leaf node uses for VXLAN encapsulation of packets sent to the peer transit leaf node.

Control Plane

This function is only supported for IPv4 over IPv4 networks.

The establishment of VXLAN tunnels between leaf nodes is the same as VXLAN tunnel establishment for intra-subnet interworking in common VXLAN scenarios, so the detailed process is not described here. On the control plane, only host MAC address learning is described.

On the network shown in Figure 1-1067, the control plane is implemented as follows:

Figure 1-1067 Control plane for VXLAN mapping in local VNI mode
  1. Server Leaf1 learns VM1's MAC address, generates a BGP EVPN route, and sends it to Transit Leaf1. The BGP EVPN route contains the following information:

    • Type 2 route: EVPN instance's RD value, VM1's MAC address, and Server Leaf1's local VNI.

    • Next hop: Server Leaf1's VTEP IP address.

    • Extended community attribute: encapsulated tunnel type (VXLAN).

    • ERT: EVPN instance's export RT value.

  2. Upon receipt, Transit Leaf1 adds the BGP EVPN route to its local EVPN instance and generates a MAC address entry for VM1 in the EVPN instance-bound BD. Based on the next hop and encapsulated tunnel type, the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Server Leaf1. The VNI in VXLAN tunnel encapsulation information is Transit Leaf1's local VNI.

  3. Transit Leaf1 re-originates the BGP EVPN route and then advertises the route to Transit Leaf2. The re-originated BGP EVPN route contains the following information:

    • Type 2 route: EVPN instance's RD value, VM1's MAC address, and Transit Leaf1's local VNI.

    • Next hop: Transit Leaf1's VTEP IP address.

    • Extended community attribute: encapsulated tunnel type (VXLAN).

    • ERT: EVPN instance's export RT value.

  4. Upon receipt, Transit Leaf2 adds the re-originated BGP EVPN route to its local EVPN instance and generates a MAC address entry for VM1 in the EVPN instance-bound BD. Based on the next hop and encapsulated tunnel type, the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Transit Leaf1. The outbound VNI in VXLAN tunnel encapsulation information is Transit Leaf1's local VNI.

  5. Transit Leaf2 re-originates the BGP EVPN route and then advertises the route to Server Leaf2. The re-originated BGP EVPN route contains the following information:

    • Type 2 route: EVPN instance's RD value, VM1's MAC address, and Transit Leaf2's local VNI.

    • Next hop: Transit Leaf2's VTEP IP address.

    • Extended community attribute: encapsulated tunnel type (VXLAN).

    • ERT: EVPN instance's export RT value.

  6. Upon receipt, Server Leaf2 adds the re-originated BGP EVPN route to its local EVPN instance and generates a MAC address entry for VM1 in the EVPN instance-bound BD. Based on the next hop and encapsulated tunnel type, the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Transit Leaf2. The VNI in VXLAN tunnel encapsulation information is Server Leaf2's local VNI.

The preceding process uses MAC address learning for VM1 as an example. MAC address learning for VM2 is similar and is not described here.
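The outbound-VNI behavior of local VNI mode can be sketched with a hypothetical `TransitLeaf` model, in which the peer's local VNI (learned from the re-originated BGP EVPN route) is used when encapsulating packets toward that peer:

```python
# Sketch of outbound-VNI selection in local VNI mode (hypothetical model).
# Like downstream label allocation, a transit leaf encapsulates packets
# sent to its peer with the VNI the peer allocated locally, which it
# learned from the peer's re-originated BGP EVPN route.

class TransitLeaf:
    def __init__(self, local_vni):
        self.local_vni = local_vni
        self.outbound_vni = {}  # peer VTEP IP -> peer's local VNI

    def learn_peer(self, peer_vtep, peer_local_vni):
        """Record the outbound VNI carried in the peer's re-originated route."""
        self.outbound_vni[peer_vtep] = peer_local_vni

    def encap_vni(self, dst_vtep):
        """Use the peer's local VNI toward a known DCI peer; otherwise use
        our own local VNI (intra-DC tunnels)."""
        return self.outbound_vni.get(dst_vtep, self.local_vni)
```

This decoupling is what allows DC A and DC B to keep independent VNI spaces while still interconnecting at Layer 2.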

Forwarding Plane

Figure 1-1068 shows how known unicast packets are forwarded. The following example process shows how VM2 sends Layer 2 packets to VM1:

Figure 1-1068 Known unicast packet forwarding with VXLAN mapping in local VNI mode
  1. After receiving a Layer 2 packet from VM2 through a BD Layer 2 sub-interface, Server Leaf2 searches the BD's MAC address table based on the destination MAC address for the VXLAN tunnel's outbound interface and obtains VXLAN tunnel encapsulation information (local VNI, destination VTEP IP address, and source VTEP IP address). Based on the obtained information, the Layer 2 packet is encapsulated through the VXLAN tunnel and then forwarded to Transit Leaf2.

  2. Upon receipt, Transit Leaf2 decapsulates the VXLAN packet, finds the target BD based on the VNI, searches the BD's MAC address table based on the destination MAC address for the VXLAN tunnel's outbound interface, and obtains the VXLAN tunnel encapsulation information (outbound VNI, destination VTEP IP address, and source VTEP IP address). Based on the obtained information, the Layer 2 packet is encapsulated through the VXLAN tunnel and then forwarded to Transit Leaf1.

  3. Upon receipt, Transit Leaf1 decapsulates the VXLAN packet. Because the packet's VNI is Transit Leaf1's local VNI, the target BD can be found based on this VNI. Transit Leaf1 also searches the BD's MAC address table based on the destination MAC address for the VXLAN tunnel's outbound interface and obtains the VXLAN tunnel encapsulation information (local VNI, destination VTEP IP address, and source VTEP IP address). Based on the obtained information, the Layer 2 packet is encapsulated through the VXLAN tunnel and then forwarded to Server Leaf1.

  4. Upon receipt, Server Leaf1 decapsulates the VXLAN packet and forwards it at Layer 2 to VM1.

In the scenario with three-segment VXLAN for Layer 2 interworking, BUM packet forwarding is the same as that in the common VXLAN scenario except that the split horizon group is used to prevent loops. The similarities are not described here.

  • After receiving BUM packets from a Server Leaf node in the same DC, a Transit Leaf node obtains the split horizon group to which the source VTEP belongs. Because all nodes in the same DC belong to the default split horizon group, BUM packets will not be replicated to other Server Leaf nodes within the DC. Because the peer Transit Leaf node belongs to a different split horizon group, BUM packets will be replicated to the peer Transit Leaf node.

  • Upon receipt, the peer Transit Leaf node obtains the split horizon group to which the source VTEP belongs. Because the Transit Leaf nodes at both ends belong to the same split horizon group, BUM packets will not be replicated to the peer Transit Leaf node. Because the Server Leaf nodes within the DC belong to a different split horizon group, BUM packets will be replicated to them.
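The split horizon check in the two cases above reduces to comparing group membership, sketched here with a hypothetical mapping from VTEP to split horizon group:

```python
# Sketch of the split horizon check for BUM replication (hypothetical
# model): a copy is sent to a candidate VTEP only if it belongs to a
# different split horizon group than the VTEP from which the packet
# arrived, which prevents loops between the two DCs.

def bum_targets(src_vtep, candidates, group_of):
    """Return the candidate VTEPs eligible to receive a replicated copy."""
    src_group = group_of[src_vtep]
    return [v for v in candidates if group_of[v] != src_group]
```

With server leaf nodes of one DC in the default group and the transit leaf pair in their own group, a packet from a server leaf is replicated only toward the peer DC, and the peer transit leaf replicates it only toward its own server leaf nodes.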

VXLAN Active-Active Reliability

Basic Concepts

The network in Figure 1-1069 shows a scenario where an enterprise site (CPE) connects to a data center. The VPN GWs (PE1 and PE2) and CPE are connected through VXLAN tunnels to exchange the L2/L3 services between the CPE and data center. The data center gateway (CE1) is dual-homed to PE1 and PE2 to access the VXLAN network for enhanced network access reliability. If one PE fails, services can be rapidly switched to the other PE, minimizing service loss.

PE1 and PE2 on the network use the same virtual address as an NVE interface address (Anycast VTEP address) at the network side. In this way, the CPE is aware of only one remote NVE interface. After the CPE establishes a VXLAN tunnel with this virtual address, the packets from the CPE can reach CE1 through either PE1 or PE2. However, when a single-homed CE, such as CE2 or CE3, exists on the network, the packets from the CPE to the single-homed CE may need to detour to the other PE after reaching one PE. To achieve PE1-PE2 reachability, a bypass VXLAN tunnel needs to be established between PE1 and PE2. To establish this tunnel, an EVPN peer relationship is established between PE1 and PE2, and different addresses, namely, bypass VTEP addresses, are configured for PE1 and PE2.

Figure 1-1069 Basic networking of the VXLAN active-active scenario
Control Plane
  • PE1 and PE2 exchange Inclusive Multicast routes (Type 3) whose source IP address is their shared anycast VTEP address. Each route carries a bypass VXLAN extended community attribute, which contains the bypass VTEP address of PE1 or PE2.

  • After receiving the Inclusive Multicast route from each other, PE1 and PE2 consider that they form an anycast relationship based on the following details: The source IP address (anycast VTEP address) of the route is identical to PE1's and PE2's local virtual addresses, and the route carries a bypass VXLAN extended community attribute. PE1 and PE2 then establish a bypass VXLAN tunnel between them.

  • PE1 and PE2 learn the MAC addresses of the CEs through the upstream packets from the AC side and advertise the MAC/IP routes (Type 2) to each other. The routes carry the ESIs of the access links of the CEs, information about the VLANs that the CEs access, and the bypass VXLAN extended community attribute.

  • PE1 and PE2 learn the MAC address of the CPE through downstream packets from the network side. After determining that the next-hop address of the MAC route recurses to a static VXLAN tunnel, PE1 and PE2 advertise the route to each other through a MAC/IP route, without changing the next-hop address.
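The anycast detection in the second bullet can be sketched as follows (hypothetical route fields): a bypass tunnel peer is derived from a received Inclusive Multicast (Type 3) route only when its source IP matches the local anycast VTEP address and the bypass VXLAN extended community is present:

```python
# Sketch of anycast-peer detection on a PE (hypothetical route fields).
# When a Type 3 route's source IP equals the local anycast VTEP address
# and it carries a bypass VXLAN extended community, the sender is the
# anycast peer, and a bypass VXLAN tunnel is set up to the bypass VTEP
# address carried in that community.

def bypass_tunnel_peer(route, local_anycast_vtep):
    """Return the peer's bypass VTEP address, or None if no anycast match."""
    if route["src_ip"] == local_anycast_vtep and "bypass_vtep" in route:
        return route["bypass_vtep"]  # establish the bypass VXLAN tunnel here
    return None
```

A route from an ordinary remote VTEP fails both conditions, so no bypass tunnel is created toward it.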

Data Packets Processing
  • Layer 2 unicast packet forwarding

    • Uplink

      As shown in Figure 1-1070, after receiving Layer 2 unicast packets destined for the CPE from CE1, CE2, and CE3, PE1 and PE2 search their local MAC address tables to obtain outbound interfaces, perform VXLAN encapsulation on the packets, and forward them to the CPE.

      Figure 1-1070 Uplink unicast packet forwarding
    • Downlink

      As shown in Figure 1-1071:

      After receiving a Layer 2 unicast packet sent by the CPE to CE1, PE1 performs VXLAN decapsulation on the packet, searches the local MAC address table for the destination MAC address, obtains the outbound interface, and forwards the packet to CE1.

      After receiving a Layer 2 unicast packet sent by the CPE to CE2, PE1 performs VXLAN decapsulation on the packet, searches the local MAC address table for the destination MAC address, obtains the outbound interface, and forwards the packet to CE2.

      After receiving a Layer 2 unicast packet sent by the CPE to CE3, PE1 performs VXLAN decapsulation on the packet, searches the local MAC address table for the destination MAC address, and forwards it to PE2 over the bypass VXLAN tunnel. After the packet reaches PE2, PE2 searches its local MAC address table for the destination MAC address, obtains the outbound interface, and forwards the packet to CE3.

      The process for PE2 to forward packets from the CPE is the same as that for PE1 to forward packets from the CPE.

      Figure 1-1071 Downlink unicast packet forwarding
  • BUM packet forwarding

    • As shown in Figure 1-1072, if the destination address of a BUM packet from the CPE is the Anycast VTEP address of PE1 and PE2, the BUM packet may be forwarded to either PE1 or PE2. If the BUM packet reaches PE2 first, PE2 sends a copy of the packet to CE3 and CE1. In addition, PE2 sends a copy of the packet to PE1 through the bypass VXLAN tunnel between PE1 and PE2. After the copy of the packet reaches PE1, PE1 sends it to CE2, not to the CPE or CE1. In this way, CE1 receives only one copy of the packet.

      Figure 1-1072 BUM packets from the CPE
    • As shown in Figure 1-1073, after a BUM packet from CE2 reaches PE1, PE1 sends a copy of the packet to CE1 and the CPE. In addition, PE1 sends a copy of the packet to PE2 through the bypass VXLAN tunnel between PE1 and PE2. After the copy of the packet reaches PE2, PE2 sends it to CE3, not to the CPE or CE1.

      Figure 1-1073 BUM packets from CE2
    • As shown in Figure 1-1074, after a BUM packet from CE1 reaches PE1, PE1 sends a copy of the packet to CE2 and the CPE. In addition, PE1 sends a copy of the packet to PE2 through the bypass VXLAN tunnel between PE1 and PE2. After the copy of the packet reaches PE2, PE2 sends it to CE3, not to the CPE or CE1.

      Figure 1-1074 BUM packets from CE1
  • Layer 3 packets transmitted on the same subnet

    • Uplink

      As shown in Figure 1-1070, after receiving Layer 3 unicast packets destined for the CPE from CE1, CE2, and CE3, PE1 and PE2 search for the destination address and directly forward them to the CPE because they are on the same network segment.

    • Downlink

      As shown in Figure 1-1071:

      After the Layer 3 unicast packet sent from the CPE to CE1 reaches PE1, PE1 searches for the destination address and directly sends it to CE1 because they are on the same network segment.

      After the Layer 3 unicast packet sent from the CPE to CE2 reaches PE1, PE1 searches for the destination address and directly sends it to CE2 because they are on the same network segment.

      After the Layer 3 unicast packet sent from the CPE to CE3 reaches PE1, PE1 searches for the destination address and sends the packet to PE2 over the bypass VXLAN tunnel; PE2 then sends it to CE3, because they are on the same network segment.

      The process for PE2 to forward packets from the CPE is the same as that for PE1 to forward packets from the CPE.

  • Layer 3 packets transmitted across subnets

    • Uplink

      As shown in Figure 1-1070:

      Because the CPE is on a different network segment from PE1 and PE2, the destination MAC address of a Layer 3 unicast packet sent from CE1, CE2, or CE3 to the CPE is the MAC address of the BDIF interface on the Layer 3 gateway of PE1 or PE2. After receiving the packet, PE1 or PE2 removes the Layer 2 tag from the packet, searches for a matching Layer 3 routing entry, and obtains the outbound interface that is the BDIF interface connecting the CPE to the Layer 3 gateway. The BDIF interface searches the ARP table, obtains the destination MAC address, encapsulates the packet into a VXLAN packet, and sends it to the CPE through the VXLAN tunnel.

      After receiving the Layer 3 packet from PE1 or PE2, the CPE removes the Layer 2 tag from the packet because the destination MAC address is the MAC address of the BDIF interface on the CPE. Then the CPE searches the Layer 3 routing table to obtain a next-hop address to forward the packet.

    • Downlink

      As shown in Figure 1-1071:

      Before sending a Layer 3 unicast packet to CE1 across subnets, the CPE searches its Layer 3 routing table and obtains the outbound interface that is the BDIF interface on the Layer 3 gateway connecting to PE1. The BDIF interface searches the ARP table to obtain the destination MAC address, encapsulates the packet into a VXLAN packet, and forwards it to PE1 over the VXLAN tunnel.

      After receiving the packet from the CPE, PE1 removes the Layer 2 tag from the packet because the destination address of the packet is the MAC address of PE1's BDIF interface. Then PE1 searches the Layer 3 routing table and obtains the outbound interface that is the BDIF interface connecting PE1 to its attached CE. The BDIF interface searches its ARP table and obtains the destination address, performs Layer-2 encapsulation for the packet, and sends it to CE1.

      The process for PE2 to forward packets from the CPE is the same as that for PE1 to forward packets from the CPE.
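Across the unicast cases above, the downlink decision on a PE reduces to whether the destination CE is locally attached; a minimal sketch with hypothetical inputs:

```python
# Sketch of the downlink forwarding decision on an active-active PE
# (hypothetical model): a locally attached CE (dual-homed, or single-homed
# to this PE) is reached over the local AC; a CE single-homed to the peer
# PE is reached over the bypass VXLAN tunnel.

def downlink_next_hop(dst_ce, local_acs, bypass_peer):
    """local_acs: map of locally attached CE -> AC interface."""
    if dst_ce in local_acs:
        return ("ac", local_acs[dst_ce])
    return ("bypass-vxlan", bypass_peer)
```

This is why packets from the CPE to a single-homed CE such as CE3 may detour through the other PE, while packets to the dual-homed CE1 are always delivered directly.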

NFVI Distributed Gateway (Asymmetric Mode)

Huawei's network functions virtualization infrastructure (NFVI) telco cloud solution incorporates Data Center Interconnect (DCI) and data center network (DCN) solutions. A large volume of UE traffic enters the DCN and accesses the vUGW and vMSE on the DCN. After being processed by the vUGW and vMSE, the UE traffic (IPv4 or IPv6) is forwarded over the DCN to destination devices on the Internet. Likewise, return traffic sent from the destination devices to UEs also undergoes this process. To meet the preceding requirements and ensure that the UE traffic is load-balanced within the DCN, you need to deploy the NFVI distributed gateway function on DCN devices.

The vUGW is a unified packet gateway developed based on Huawei's CloudEdge solution. It can be used for 3rd Generation Partnership Project (3GPP) access in general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), and Long Term Evolution (LTE) modes. The vUGW can function as a gateway GPRS support node (GGSN), serving gateway (S-GW), or packet data network gateway (P-GW) to meet carriers' various networking requirements in different phases and operational scenarios.

The vMSE is developed based on Huawei's multi-service engine (MSE). The carrier's network has multiple functional boxes deployed, such as the firewall box, video acceleration box, header enrichment box, and URL filtering box. All functions are added through patch installation. As time goes by, the network becomes increasingly slow, complicating service rollout and maintenance. To solve this problem, the vMSE integrates the functions of these boxes and manages these functions in a unified manner, providing value-added services for the data services initiated by users.

Networking Overview

Figure 1-1075 and Figure 1-1076 show NFVI distributed gateway networking. The DC gateways are the DCN's border gateways, which exchange Internet routes with the external network through PEs. L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions (VNFs). VNF1 and VNF2 can be deployed as virtualized NEs to implement the vUGW and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through the interface processing unit (IPU).

This networking can be considered a combination of the distributed gateway function and VXLAN active-active/quad-active gateway function.
  • The VXLAN active-active/quad-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. All DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.

  • The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between them.

In the NFVI distributed gateway scenario, the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M can function as either a DC gateway or an L2GW/L3GW. However, if the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M is used as an L2GW/L3GW, east-west traffic cannot be balanced.

Each L2GW/L3GW in Figure 1-1075 represents two devices on the live network. Anycast VXLAN active-active is configured on these devices so that they function as one, improving network reliability.

The method of deploying the VXLAN quad-active gateway function on DC gateways is similar to that of deploying the VXLAN active-active gateway function on DC gateways. This section uses the VXLAN active-active gateway function as an example.

Figure 1-1075 NFVI distributed gateway networking (active-active DC gateways)
Function Deployment
On the network shown in Figure 1-1075, the number of bridge domains (BDs) must be planned according to the number of network segments to which the IPUs belong. For example, if five IP addresses planned for five IPUs are allocated to four network segments, you need to plan four different BDs. You also need to configure all BDs and VBDIF interfaces on each of the DC gateways and L2GWs/L3GWs, and bind all VBDIF interfaces to the same L3VPN instance. In addition, ensure that:
  • A VPN BGP peer relationship is set up between each VNF and DC gateway, so that the VNF can advertise UE routes to the DC gateway.

  • Static VPN routes are configured on L2GW/L3GW1 and L2GW/L3GW2 for them to access VNFs. The routes' destination IP addresses are the VNFs' IP addresses, and the next hop addresses are the IP addresses of the IPUs.

  • A BGP EVPN peer relationship is established between each DC gateway and L2GW/L3GW. An L2GW/L3GW can flood static routes to the VNFs to other devices through BGP EVPN peer relationships. A DC gateway can advertise local loopback routes and default routes to the L2GWs/L3GWs through the BGP EVPN peer relationships.

  • Traffic exchanged between a UE and the Internet through a VNF is called north-south traffic, whereas traffic exchanged between VNF1 and VNF2 is called east-west traffic. Load balancing is configured on DC gateways and L2GWs/L3GWs to balance both north-south and east-west traffic.
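
The BD planning rule described at the start of this section (one BD per network segment to which the IPU interfaces belong) can be sketched as follows. The addresses and BD IDs are hypothetical examples, not values from this guide.

```python
from ipaddress import ip_interface

def plan_bds(ipu_addresses):
    """Return one BD per distinct network segment (subnet) that the
    IPU interfaces belong to."""
    subnets = {ip_interface(a).network for a in ipu_addresses}
    # Assign BD IDs deterministically; real IDs are an operator's choice.
    return {net: bd_id for bd_id, net in enumerate(sorted(subnets, key=str), start=10)}

# Five IPU addresses spread over four subnets -> four BDs, matching the
# example in the text above.
ipus = ["172.16.1.1/24", "172.16.1.2/24", "172.16.2.1/24",
        "172.16.3.1/24", "172.16.4.1/24"]
bds = plan_bds(ipus)
print(len(bds))  # 4
```

Each resulting BD would then get a VBDIF interface bound to the same L3VPN instance, as described above.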

Generation of Forwarding Entries
In the NFVI distributed gateway networking, all traffic is forwarded at Layer 2 from DC gateways to VNFs after entering the DCN, regardless of whether it is from UEs to the Internet or vice versa. However, after traffic leaves the DCN, it is forwarded at Layer 3 from VNFs to DC gateways. This prevents traffic loops between DC gateways and L2GWs/L3GWs. On the network shown in Figure 1-1076, IPUs connect to multiple L2GWs/L3GWs. If Layer 3 forwarding is used between DC gateways and VNFs, some traffic forwarded by an L2GW/L3GW to the VNF will be forwarded to another L2GW/L3GW due to load balancing. For example, L2GW/L3GW2 forwards some of the traffic to L2GW/L3GW1 and vice versa. As a result, a traffic loop occurs. If Layer 2 forwarding is used, the L2GW/L3GW does not forward the Layer 2 traffic received from another L2GW/L3GW back, preventing traffic loops.
Figure 1-1076 Traffic loop
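
The loop-prevention behavior shown in Figure 1-1076 amounts to a split-horizon check on Layer 2 traffic. The function below is an illustrative model of that rule, not device code.

```python
def l2_forward(received_via_vxlan, next_hop_is_vxlan):
    """Split-horizon model of an L2GW/L3GW: a Layer 2 frame received
    from a VXLAN tunnel must not be forwarded back into a VXLAN tunnel
    toward another L2GW/L3GW, which prevents the loop in Figure 1-1076."""
    if received_via_vxlan and next_hop_is_vxlan:
        return "drop"
    return "forward"

# Frame from L2GW/L3GW2 whose MAC entry again points at a VXLAN tunnel: dropped.
print(l2_forward(True, True))    # drop
# Frame from a VXLAN tunnel whose MAC entry points at a local IPU link: forwarded.
print(l2_forward(True, False))   # forward
```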
Forwarding entries are generated on each DC gateway and L2GW/L3GW through the following process:
  1. BDs are deployed on each L2GW/L3GW and bound to links connecting to the IPU interfaces on the associated network segments. Then, VBDIF interfaces are configured as the gateways of these IPU interfaces. The number of BDs is the same as that of network segments to which the IPU interfaces belong. A static VPN route is configured on each L2GW/L3GW, so that the L2GW/L3GW can generate a route forwarding entry with the destination address being the VNF address, next hop being the IPU address, and outbound interface being the associated VBDIF interface.

    Figure 1-1077 Static route forwarding entry on an L2GW/L3GW
  2. An L2GW/L3GW learns IPU MAC address and ARP information through the data plane, and then advertises the information as an EVPN route to DC gateways. The information is then used to generate an ARP entry and MAC forwarding entry for Layer 2 forwarding.

    • The destination MAC addresses in MAC forwarding entries on the L2GW/L3GW are the MAC addresses of the IPUs. For IPUs directly connecting to an L2GW/L3GW (for example, in Figure 1-1075, IPU1, IPU2, and IPU3 directly connect to L2GW/L3GW1), the interfaces connecting to these IPUs are used as outbound interfaces in the MAC forwarding entries on the L2GW/L3GW. For IPUs connecting to the other L2GW/L3GW (for example, IPU4 and IPU5 connect to L2GW/L3GW2 in Figure 1-1075), the MAC forwarding entries use the VTEP address of the other L2GW/L3GW (L2GW/L3GW2) as the next hop and carry the L2 VNI used for Layer 2 forwarding.

    • In MAC forwarding entries on a DC gateway, the destination MAC address is the IPU MAC address, and the next hop is the L2GW/L3GW VTEP address. These MAC forwarding entries also store the L2 VNI information of the corresponding BDs.

    To forward incoming traffic only at Layer 2, you are advised to configure devices to advertise only ARP (ND) routes to each other. In this way, the DC gateway and L2GW/L3GW do not generate IP prefix routes based on IP addresses. If the devices are configured to advertise IRB (IRBv6) routes to each other, enable the IRB asymmetric mode on devices that receive routes.

    Figure 1-1078 MAC forwarding entries on the DC gateway and L2GW/L3GW
  3. After static VPN routes are configured on the L2GW/L3GW, they are imported into the BGP EVPN routing table and then sent as IP prefix routes to the DC gateway through the BGP EVPN peer relationship.

    There are multiple links and static routes between the L2GW/L3GW and VNF. To implement load balancing, you need to enable the Add-Path function when configuring static routes to be imported to the BGP EVPN routing table.

  4. By default, the next hop address of an IP prefix route received by the DC gateway is the IP address of the L2GW/L3GW, and the route recurses to a VXLAN tunnel. In this case, incoming traffic is forwarded at Layer 3. To forward incoming traffic at Layer 2, a routing policy must be configured on the L2GW/L3GW to add the Gateway IP attribute to the static routes destined for the DC gateway. Gateway IP addresses are the IP addresses of IPU interfaces. After receiving an IP prefix route carrying the Gateway IP attribute, the DC gateway does not recurse the route to a VXLAN tunnel. Instead, it performs IP recursion. Finally, the destination address of a route forwarding entry on the DC gateway is the IP address of the VNF, the next hop is the IP address of an IPU interface, and the outbound interface is the VBDIF interface corresponding to the network segment on which the IPU resides. If traffic needs to be sent to the VNF, the forwarding entry can be used to find the corresponding VBDIF interface, which then can be used to find the corresponding ARP entry and MAC entry for Layer 2 forwarding.

    Figure 1-1079 Forwarding entries on the DC gateway and L2GW/L3GW
  5. To establish a VPN BGP peer relationship with the VNF, the DC gateway needs to advertise its loopback address to the L2GW/L3GW. In addition, because the DC gateways share an anycast VTEP address toward the L2GW/L3GW, protocol packets sent from VNF1 to DCGW1's loopback address may arrive at DCGW2. Therefore, each DC gateway also needs to advertise its loopback address to the other DC gateway. Finally, each L2GW/L3GW has a forwarding entry for the VPN route to the loopback addresses of the DC gateways, and each DC gateway has a forwarding entry for the VPN route to the loopback address of the other DC gateway. After the VNF and DC gateways establish BGP peer relationships, the VNF can send UE routes to the DC gateways, with the VNF IP address as the next hop.

    Figure 1-1080 Forwarding entries on the DC gateway and L2GW/L3GW
  6. The DCN does not need to be aware of external routes. Therefore, a route-policy must be configured on the DC gateway, so that the DC gateway advertises only default routes and loopback routes to the L2GW/L3GW.

    Figure 1-1081 Forwarding entries on the DC gateway and L2GW/L3GW
  7. As the border gateway of the DCN, the DC gateway can exchange Internet routes with external PEs, such as routes to server IP addresses on the Internet.

    Figure 1-1082 Forwarding entries on the DC gateway and L2GW/L3GW
  8. To implement load balancing during traffic transmission, load balancing and Add-Path can be configured on the DC gateway and L2GW/L3GW. This balances both north-south and east-west traffic.

    • North-south traffic balancing: Take DCGW1 in Figure 1-1075 as an example. DCGW1 can receive EVPN routes to VNF2 from L2GW/L3GW1 and L2GW/L3GW2. By default, after load balancing is configured, DCGW1 sends half of traffic destined for VNF2 to L2GW/L3GW1 and half of traffic destined for VNF2 to L2GW/L3GW2. However, L2GW/L3GW1 has only one link to VNF2, while L2GW/L3GW2 has two links to VNF2. As a result, the traffic is not evenly balanced. To address this issue, the Add-Path function must be configured on the L2GW/L3GWs. After Add-Path is configured, L2GW/L3GW2 advertises two routes with the same destination address to DCGW1 to implement load balancing.

    • East-west traffic balancing: Take L2GW/L3GW1 in Figure 1-1075 as an example. Because Add-Path is configured on L2GW/L3GW2, L2GW/L3GW1 receives two EVPN routes from L2GW/L3GW2. In addition, L2GW/L3GW1 has a static route with the next hop being IPU3. The destination address of these three routes is the IP address of VNF2. To implement load balancing, load balancing among static and EVPN routes must be configured.
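
Step 4 above (IP recursion triggered by the Gateway IP attribute) can be sketched as follows. The route contents, addresses, and interface names are hypothetical; this is an illustrative model, not the device implementation.

```python
from ipaddress import ip_address, ip_network

def install_vnf_route(ip_prefix_route, vbdif_by_subnet):
    """Model of how a DC gateway installs an EVPN IP prefix route.
    Without a gateway IP, the route recurses to a VXLAN tunnel
    (Layer 3 forwarding). With one, the gateway performs IP recursion:
    the gateway IP (an IPU address) selects the VBDIF interface of the
    network segment on which the IPU resides."""
    gw_ip = ip_prefix_route.get("gateway_ip")
    if gw_ip is None:
        return {"dest": ip_prefix_route["dest"], "recurse": "vxlan-tunnel"}
    for subnet, vbdif in vbdif_by_subnet.items():
        if ip_address(gw_ip) in ip_network(subnet):
            return {"dest": ip_prefix_route["dest"],
                    "next_hop": gw_ip, "out_if": vbdif}
    raise LookupError("no VBDIF interface for gateway IP")

# Hypothetical route to a VNF address, carrying an IPU address as gateway IP.
entry = install_vnf_route({"dest": "10.2.2.2/32", "gateway_ip": "172.16.1.1"},
                          {"172.16.1.0/24": "Vbdif10"})
print(entry["out_if"])  # Vbdif10
```

Traffic matching the installed entry is then resolved through the VBDIF interface's ARP and MAC entries for Layer 2 forwarding, as described in step 4.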

Traffic Forwarding Process
Figure 1-1083 shows the process of forwarding north-south traffic (from a UE to the Internet).
  1. Upon receipt of UE traffic, the base station encapsulates these packets and redirects them to a GPRS tunneling protocol (GTP) tunnel whose destination address is the VNF IP address. The encapsulated packets reach the DC gateway through IP forwarding.

  2. Upon receipt, the DC gateway searches its virtual routing and forwarding (VRF) table and finds a matching forwarding entry whose next hop is an IPU IP address and outbound interface is a VBDIF interface. Therefore, the received packets match the network segment on which the VBDIF interface resides. The DC gateway searches for the desired ARP entry on the network segment, finds a matching MAC forwarding entry based on the ARP entry, and recurses the route to a VXLAN tunnel based on the MAC forwarding entry. Then, the packets are forwarded to the L2GW/L3GW over a VXLAN tunnel.

  3. Upon receipt, the L2GW/L3GW finds the target BD based on the L2 VNI, searches for a matching MAC forwarding entry in the BD, and then forwards the packets to the VNF based on the MAC forwarding entry.

  4. After the packets reach the VNF, the VNF removes their GTP tunnel header, searches the routing table based on their destination IP addresses, and forwards them to the L2GW/L3GW through the VNF's default gateway.

  5. After the packets reach the L2GW/L3GW, the L2GW/L3GW searches its VRF table for a matching forwarding entry. Based on the default route advertised by the DC gateway to the L2GW/L3GW, the packets are encapsulated with the L3 VNI and then forwarded to the DC gateway through the VXLAN tunnel.

  6. Upon receipt, the DC gateway searches the corresponding VRF table for a matching forwarding entry based on the L3 VNI and forwards these packets to the Internet.

Figure 1-1083 Process of forwarding north-south traffic from a UE to the Internet
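
Several steps above mention encapsulating packets with an L2 VNI or L3 VNI. A minimal sketch of the 8-byte VXLAN header that carries the VNI (per RFC 7348) follows; the VNI value is an arbitrary example.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348): flags byte 0x08
    (VNI valid), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!B3xI", 0x08, vni << 8)

def vni_of(header):
    """Extract the VNI back out of a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

hdr = vxlan_header(5010)      # e.g. a hypothetical L2 VNI
print(len(hdr), vni_of(hdr))  # 8 5010
```

The full MAC-in-UDP encapsulation places this header between the outer UDP header and the original Ethernet frame.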
Figure 1-1084 shows the process of forwarding north-south traffic from the Internet to a UE through the VNF.
  1. A device on the Internet sends response traffic to a UE. The destination address of the response traffic is the destination address of the UE route. The route is advertised by the VNF to the DC gateway through the VPN BGP peer relationship, and the DC gateway in turn advertises the route to the Internet. Therefore, the response traffic must first be forwarded to the VNF.

  2. Upon receipt, the DC gateway searches the routing table for a forwarding entry that matches the UE route. The route is advertised over the VPN BGP peer relationship between the DC gateway and VNF and recurses to one or more VBDIF interfaces. Traffic is load-balanced among these VBDIF interfaces. A matching MAC forwarding entry is found based on the ARP information on these VBDIF interfaces. Based on the MAC forwarding entry, the response packets are encapsulated with the L2 VNI and then forwarded to the L2GW/L3GW over a VXLAN tunnel.

  3. Upon receipt, the L2GW/L3GW finds the target BD based on the L2 VNI, searches for a matching MAC forwarding entry in the BD, obtains the outbound interface information from the MAC forwarding entry, and forwards these packets to the VNF.

  4. Upon receipt, the VNF processes the packets and finds the base station corresponding to the destination address of the UE. The VNF then encapsulates tunnel information into the packets (with the base station as the destination) and forwards them to the L2GW/L3GW through the default gateway.

  5. Upon receipt, the L2GW/L3GW searches its VRF table for the default route advertised by the DC gateway to the L2GW/L3GW. Then, the L2GW/L3GW encapsulates these packets with the L3 VNI and forwards them to the DC gateway over a VXLAN tunnel.

  6. Upon receipt, the DC gateway searches its VRF table for the default (or specific) route based on the L3 VNI and forwards these packets to the destination base station. The base station then decapsulates these packets and sends them to the target UE.

Figure 1-1084 Process of forwarding north-south traffic from the Internet to a UE
During this process, the VNF may send the received packets to another VNF for value-added service processing, based on the packet information. In this case, east-west traffic is generated. Figure 1-1085 shows the process of forwarding east-west traffic (from VNF1 to VNF2), which differs from the north-south traffic forwarding process in packet processing after packets reach VNF1:
  1. VNF1 sends a received packet to VNF2 for processing. To do so, VNF1 re-encapsulates the packet by using VNF2's address as the destination address and sends the packet to the L2GW/L3GW over the default route.

  2. Upon receipt, the L2GW/L3GW searches its VRF table and finds that multiple load-balancing forwarding entries exist. Some entries use the IPU as the outbound interface, and some entries use the L2GW/L3GW as the next hop.

  3. If the path to the other L2GW/L3GW (L2GW/L3GW2) is selected preferentially, the packet is encapsulated with the L2 VNI and forwarded to L2GW/L3GW2 over a VXLAN tunnel. L2GW/L3GW2 finds the target BD based on the L2 VNI and the destination MAC address, and forwards the packet to VNF2.

  4. Upon receipt, VNF2 processes the packet and forwards it to the Internet server. The subsequent forwarding process is the same as the process for forwarding north-south traffic.

Figure 1-1085 Process of forwarding east-west traffic (from VNF1 to VNF2)

NFVI Distributed Gateway (Symmetric Mode)

Huawei's network functions virtualization infrastructure (NFVI) telco cloud solution incorporates Data Center Interconnect (DCI) and data center network (DCN) solutions. A large volume of UE traffic enters the DCN and accesses the vUGW and vMSE on the DCN. After being processed by the vUGW and vMSE, the UE traffic (IPv4 or IPv6) is forwarded over the DCN to destination devices on the Internet. Likewise, return traffic sent from the destination devices to UEs also undergoes this process. To meet the preceding requirements and ensure that the UE traffic is load-balanced within the DCN, you need to deploy the NFVI distributed gateway function on DCN devices.

The vUGW is a unified packet gateway developed based on Huawei's CloudEdge solution. It can be used for 3rd Generation Partnership Project (3GPP) access in general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), and Long Term Evolution (LTE) modes. The vUGW can function as a gateway GPRS support node (GGSN), serving gateway (S-GW), or packet data network gateway (P-GW) to meet carriers' various networking requirements in different phases and operational scenarios.

The vMSE is developed based on Huawei's multi-service engine (MSE). The carrier's network has multiple functional boxes deployed, such as the firewall box, video acceleration box, header enrichment box, and URL filtering box. All functions are added through patch installation. As time goes by, the network becomes increasingly unwieldy, complicating service rollout and maintenance. To solve this problem, the vMSE integrates the functions of these boxes and manages these functions in a unified manner, providing value-added services for the data services initiated by users.

Networking

Figure 1-1086 and Figure 1-1087 show NFVI distributed gateway networking. The DC gateways are the DCN's border gateways, which exchange Internet routes with the external network through PEs. L2GW/L3GW1 and L2GW/L3GW2 connect to virtualized network functions (VNFs). VNF1 and VNF2 can be deployed as virtualized NEs to respectively provide vUGW and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through interface processing units (IPUs).

This networking combines the distributed gateway function and the VXLAN active-active gateway function:
  • The VXLAN active-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. Both DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.

  • The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between L2GW/L3GW1 and L2GW/L3GW2.

In the NFVI distributed gateway scenario, the NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can function as either a DC gateway or an L2GW/L3GW. However, if one of these devices is used as an L2GW/L3GW, east-west traffic cannot be balanced.

Each L2GW/L3GW in Figure 1-1086 represents two devices on the live network. Anycast VXLAN active-active is configured on these devices so that they function as one, improving network reliability.

Figure 1-1086 NFVI distributed gateway networking (with active-active DC gateways)
Function Deployment
On the network shown in Figure 1-1086, the number of bridge domains (BDs) must be planned according to the number of subnets to which the IPUs belong. For example, if five IP addresses planned for five IPUs are allocated to four subnets, you need to plan four different BDs. You need to configure all BDs and VBDIF interfaces only on L2GWs/L3GWs and bind all VBDIF interfaces to the same L3VPN instance. In addition, deploy the following functions on the network:
  • Establish VPN BGP peer relationships between VNFs and DC gateways, so that VNFs can advertise UE routes to DC gateways.

  • Configure VPN static routes on L2GW/L3GW1 and L2GW/L3GW2, or configure L2GWs/L3GWs to establish VPN IGP neighbor relationships with VNFs to obtain VNF routes with next hop addresses being IPU addresses.

  • Establish BGP EVPN peer relationships between any two of the DC gateways and L2GWs/L3GWs. L2GWs/L3GWs can then advertise VNF routes to DC gateways and other L2GWs/L3GWs through BGP EVPN peer relationships. DC gateways can advertise the local loopback route and default route as well as obtained UE routes to L2GWs/L3GWs through BGP EVPN peer relationships.

  • Traffic forwarded between the UE and Internet through VNFs is called north-south traffic, and traffic forwarded between VNF1 and VNF2 is called east-west traffic. To balance both types of traffic, you need to configure load balancing on DC gateways and L2GWs/L3GWs.

Generation of Forwarding Entries
Table 1-473 lists the differences between the asymmetric and symmetric modes in terms of forwarding entry generation.
Table 1-473 Differences between the asymmetric and symmetric modes in terms of forwarding entry generation

Asymmetric Mode

Symmetric Mode

All traffic is forwarded at Layer 2 from DC gateways to VNFs after entering the DCN, regardless of whether it is from UEs to the Internet or vice versa. However, after traffic leaves the DCN, it is forwarded at Layer 3 from VNFs to DC gateways. This prevents traffic loops between DC gateways and L2GWs/L3GWs. On the network shown in Figure 1-1087, IPUs connect to multiple L2GWs/L3GWs. If Layer 3 forwarding is used between DC gateways and VNFs, some traffic forwarded by an L2GW/L3GW to the VNF will be forwarded to another L2GW/L3GW due to load balancing. For example, L2GW/L3GW2 forwards some of the traffic to L2GW/L3GW1 and vice versa. As a result, a traffic loop occurs. If Layer 2 forwarding is used, the L2GW/L3GW does not forward the Layer 2 traffic received from another L2GW/L3GW back, preventing traffic loops.

After traffic enters the DCN, the traffic is forwarded from DC gateways to the VNF at Layer 3. The traffic from the VNF to DC gateways and then out of the DCN is also forwarded at Layer 3. On the network shown in Figure 1-1087, IPUs connect to multiple L2GWs/L3GWs. Layer 3 forwarding is used between DC gateways and VNFs, and some traffic forwarded by an L2GW/L3GW to the VNF will be forwarded over a VXLAN tunnel to another L2GW/L3GW due to load balancing. After receiving VXLAN traffic, an L2GW/L3GW searches for matching routes. If these routes work in hybrid load-balancing mode, the L2GW/L3GW preferentially selects the access-side outbound interface to forward the traffic, preventing loops.

Figure 1-1087 Traffic loop
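
The hybrid load-balancing rule described for the symmetric mode above can be sketched as a path-selection check. The function below is an illustrative model with hypothetical path descriptions, not device code.

```python
def select_paths(paths):
    """Model of hybrid load-balancing loop prevention: when the
    load-balancing paths for a route mix access-side outbound interfaces
    with VXLAN next hops, an L2GW/L3GW preferentially uses only the
    access-side paths for traffic received over a VXLAN tunnel, so the
    traffic is never bounced back to the peer L2GW/L3GW."""
    access = [p for p in paths if p["type"] == "access"]
    return access if access else paths

paths = [
    {"type": "access", "out_if": "Vbdif10"},       # toward an IPU
    {"type": "vxlan", "next_hop": "L2GW/L3GW2"},   # toward the peer gateway
]
print(select_paths(paths))  # only the access-side path remains
```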
In symmetric mode, forwarding entries are created on each DC gateway and L2GW/L3GW as follows:
  1. BDs are deployed on each L2GW/L3GW and bound to links connecting to the IPU interfaces on the associated network segments. Then, VBDIF interfaces are configured as the gateways of these IPU interfaces. The number of BDs is the same as that of network segments to which the IPU interfaces belong. A VPN static route is configured on each L2GW/L3GW or a VPN IGP neighbor relationship is established between each L2GW/L3GW and the VNF, so that the L2GW/L3GW can generate a route forwarding entry with the destination address being the VNF address, next hop being the IPU address, and outbound interface being the associated VBDIF interface.

    Figure 1-1088 Route forwarding entry for traffic from an L2GW/L3GW to the VNF
  2. After VPN static or IGP routes are configured on the L2GW/L3GW, they are imported into the BGP EVPN routing table and then sent as IP prefix routes to the DC gateway through the BGP EVPN peer relationship.

    There are multiple links and routes between the L2GW/L3GW and VNF. To implement load balancing, you need to enable the Add-Path function when configuring routes to be imported into the BGP EVPN routing table.

  3. The next hop address of an IP prefix route received by the DC gateway is the IP address of the L2GW/L3GW, and the route recurses to a VXLAN tunnel. In this case, incoming traffic is forwarded at Layer 3.

    Figure 1-1089 Forwarding entries on the DC gateway and L2GW/L3GW
  4. To establish a VPN BGP peer relationship with the VNF, the DC gateway needs to advertise its loopback address to the L2GW/L3GW. In addition, because the DC gateways share an anycast VTEP address toward the L2GW/L3GW, protocol packets sent from VNF1 to DCGW1's loopback address may arrive at DCGW2. Therefore, each DC gateway also needs to advertise its loopback address to the other DC gateway. Finally, each L2GW/L3GW has a forwarding entry for the VPN route to the loopback addresses of the DC gateways, and each DC gateway has a forwarding entry for the VPN route to the loopback address of the other DC gateway. After the VNF and DC gateways establish BGP peer relationships, the VNF can send UE routes to the DC gateways, with the VNF IP address as the next hop.

    Figure 1-1090 Forwarding entries on the DC gateway and L2GW/L3GW
  5. In symmetric mode, the L2GW/L3GW needs to learn UE routes. Therefore, a route-policy needs to be configured on the DC gateway so that it advertises UE routes to the L2GW/L3GW after setting the original next hops of these routes as the gateway address. Except for UE routes, the DCN does not need to be aware of external routes. Therefore, another route-policy needs to be configured on the DC gateway to ensure that it advertises only loopback routes and default routes to the L2GW/L3GW.

    Figure 1-1091 Forwarding entries on the DC gateway and L2GW/L3GW
  6. As the border gateway of the DCN, the DC gateway can exchange Internet routes with external PEs, such as routes to server IP addresses on the Internet.

    Figure 1-1092 Forwarding entries on the DC gateway and L2GW/L3GW
  7. To implement load balancing during traffic transmission, load balancing and Add-Path can be configured on the DC gateway and L2GW/L3GW. This balances both north-south and east-west traffic.

    • North-south traffic balancing: Take DCGW1 in Figure 1-1086 as an example. DCGW1 can receive EVPN routes to VNF2 from L2GW/L3GW1 and L2GW/L3GW2. By default, after load balancing is configured, DCGW1 sends half of traffic destined for VNF2 to L2GW/L3GW1 and half of traffic destined for VNF2 to L2GW/L3GW2. However, L2GW/L3GW1 has only one link to VNF2, while L2GW/L3GW2 has two links to VNF2. As a result, the traffic is not evenly balanced. To address this issue, the Add-Path function must be configured on the L2GW/L3GWs. After Add-Path is configured, L2GW/L3GW2 advertises two routes with the same destination address to DCGW1 to implement load balancing.

    • East-west traffic balancing: Take L2GW/L3GW1 in Figure 1-1086 as an example. Because Add-Path is configured on L2GW/L3GW2, L2GW/L3GW1 receives two EVPN routes from L2GW/L3GW2. In addition, L2GW/L3GW1 has a static route or IGP route with the next hop being IPU3. The destination address of these three routes is the IP address of VNF2. To implement load balancing, hybrid load balancing among EVPN routes and routes of other routing protocols needs to be deployed.
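
The north-south balancing example above can be sketched numerically: the receiving DC gateway gives each advertised route an equal share of the traffic, so Add-Path lets L2GW/L3GW2's two links attract two shares. This is an illustrative model only; the gateway names are placeholders.

```python
def traffic_shares(advertised_routes):
    """Each route advertised for the same destination receives an equal
    share of the traffic on the receiving DC gateway."""
    share = 1.0 / len(advertised_routes)
    totals = {}
    for gw in advertised_routes:
        totals[gw] = totals.get(gw, 0.0) + share
    return totals

# Without Add-Path, each L2GW/L3GW advertises one best route: a 50/50 split,
# even though L2GW/L3GW2 has two links to VNF2.
print(traffic_shares(["L2GW/L3GW1", "L2GW/L3GW2"]))
# With Add-Path, L2GW/L3GW2 advertises two routes, so shares match link counts.
print(traffic_shares(["L2GW/L3GW1", "L2GW/L3GW2", "L2GW/L3GW2"]))
```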

Traffic Forwarding Process
Figure 1-1093 shows the process of forwarding north-south traffic (from a UE to the Internet).
  1. Upon receipt of UE traffic, the base station encapsulates these packets and redirects them to a GPRS tunneling protocol (GTP) tunnel whose destination address is the VNF IP address. The encapsulated packets reach the DC gateway through IP forwarding.

  2. After receiving these packets, the DC gateway searches the VRF table and finds that the next hop of the forwarding entry corresponding to the VNF address is an IPU address and the outbound interface is a VXLAN tunnel. The DC gateway then performs VXLAN encapsulation and forwards the packets to the L2GW/L3GW at Layer 3.

  3. Upon receipt of these packets, the L2GW/L3GW finds the corresponding VPN instance based on the L3 VNI, searches for a matching route in the VPN instance's routing table based on the VNF address, and forwards the packets to the VNF.

  4. After the packets reach the VNF, the VNF removes their GTP tunnel header, searches the routing table based on their destination IP addresses, and forwards them to the L2GW/L3GW through the VNF's default gateway.

  5. After the packets reach the L2GW/L3GW, the L2GW/L3GW searches its VRF table for a matching forwarding entry. Based on the default route advertised by the DC gateway to the L2GW/L3GW, the packets are encapsulated with the L3 VNI and then forwarded to the DC gateway through the VXLAN tunnel.

  6. Upon receipt, the DC gateway searches the corresponding VRF table for a matching forwarding entry based on the L3 VNI and forwards these packets to the Internet.

Figure 1-1093 Process of forwarding north-south traffic from a UE to the Internet
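
In steps 3 and 6 above, the L3 VNI selects a VPN instance whose routing table is then searched by longest prefix match. A small sketch with a hypothetical VNI, VRF name, and routes:

```python
from ipaddress import ip_address, ip_network

def l3_forward(vni_to_vrf, vrf_routes, l3_vni, dest):
    """The L3 VNI in the VXLAN header selects the VPN instance (VRF);
    the inner destination address is then matched against that VRF's
    routes, longest prefix first."""
    vrf = vni_to_vrf[l3_vni]
    best = None
    for prefix, next_hop in vrf_routes[vrf].items():
        net = ip_network(prefix)
        if ip_address(dest) in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

routes = {"vpn1": {"0.0.0.0/0": "default-to-DC-gateway",
                   "10.2.2.2/32": "link-to-IPU4"}}
print(l3_forward({5000: "vpn1"}, routes, 5000, "10.2.2.2"))  # link-to-IPU4
```

A destination that matches only the default route (for example, an Internet address) would instead be sent back toward the DC gateway.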
Figure 1-1094 shows the process of forwarding north-south traffic from the Internet to a UE through the VNF.
  1. A device on the Internet sends response traffic to a UE. The destination address of the response traffic is the destination address of the UE route. The route is advertised by the VNF to the DC gateway through the VPN BGP peer relationship, and the DC gateway in turn advertises the route to the Internet. Therefore, the response traffic must first be forwarded to the VNF first.

  2. After the response traffic reaches the DC gateway, the DC gateway searches the routing table for forwarding entries corresponding to UE routes. These routes are learned by the DC gateway from the VNF over the VPN BGP peer relationship and finally recurse to VXLAN tunnels. The response packets are then encapsulated into VXLAN packets and forwarded to the L2GW/L3GW at Layer 3.

  3. After these packets reach the L2GW/L3GW, the L2GW/L3GW finds the corresponding VPN instance based on the L3 VNI, searches for a route corresponding to the UE address in the VPN instance's routing table, and forwards these packets to the VNF.

  4. Upon receipt, the VNF processes the packets and finds the base station corresponding to the destination address of the UE. The VNF then encapsulates tunnel information into the packets (with the base station as the destination) and forwards them to the L2GW/L3GW through the default gateway.

  5. Upon receipt, the L2GW/L3GW searches its VRF table for the default route advertised by the DC gateway to the L2GW/L3GW. Then, the L2GW/L3GW encapsulates these packets with the L3 VNI and forwards them to the DC gateway over a VXLAN tunnel.

  6. Upon receipt, the DC gateway searches its VRF table for the default (or specific) route based on the L3 VNI and forwards these packets to the destination base station. The base station then decapsulates these packets and sends them to the target UE.

Figure 1-1094 Process of forwarding north-south traffic from the Internet to a UE
During this process, the VNF may send the received packets to another VNF for value-added service processing, based on the packet information. In this case, east-west traffic is generated. Figure 1-1095 shows the process of forwarding east-west traffic (from VNF1 to VNF2), which differs from the north-south traffic forwarding process in packet processing after packets reach VNF1:
  1. VNF1 sends a received packet to VNF2 for processing. To do so, VNF1 re-encapsulates the packet by using VNF2's address as the destination address and sends the packet to L2GW/L3GW1 over the default route.

  2. Upon receipt, L2GW/L3GW1 searches its VRF table and finds that multiple load-balancing routes exist. Some routes use the IPU as the outbound interface, and some routes use L2GW/L3GW2 as the next hop.

  3. If these routes work in hybrid load-balancing mode, L2GW/L3GW1 preferentially selects only the routes whose outbound interfaces face the IPUs and steers packets to VNF2 to prevent loops. If these routes do not work in hybrid load-balancing mode, L2GW/L3GW1 forwards packets in load-balancing mode. Packets are encapsulated into VXLAN packets before they are sent to L2GW/L3GW2 at Layer 2. After these packets reach L2GW/L3GW2, L2GW/L3GW2 finds the corresponding BD based on the L2 VNI, then finds the destination MAC address, and finally forwards these packets to VNF2.

  4. Upon receipt, VNF2 processes the packet and forwards it to the Internet server. The subsequent forwarding process is the same as the process for forwarding north-south traffic.

Figure 1-1095 Process of forwarding east-west traffic (from VNF1 to VNF2)

Application Scenarios for VXLAN

Application for Communication Between Terminal Users on a VXLAN

Service Description

Currently, data centers are expanding on a large scale for enterprises and carriers, with increasing deployment of virtualization and cloud computing. In addition, to accommodate more services while reducing maintenance costs, data centers are employing large Layer 2 and virtualization technologies.

As server virtualization is implemented in the physical network infrastructure for data centers, VXLAN, an NVO3 technology, has adapted to the trend by providing virtualization solutions for data centers.

Networking Description

On the network shown in Figure 1-1096, an enterprise has VMs deployed in different data centers. Different network segments run different services. The VMs running the same service or different services in different data centers need to communicate with each other. For example, VMs of the financial department residing on the same network segment need to communicate, and VMs of the financial and engineering departments residing on different network segments also need to communicate.

Figure 1-1096 Communication between terminal users on a VXLAN
Feature Deployment
As shown in Figure 1-1096:
  • Deploy Device 1 and Device 2 as Layer 2 VXLAN gateways and establish a VXLAN tunnel between Device 1 and Device 2 to allow communication between terminal users on the same network segment.
  • Deploy Device 3 as a Layer 3 VXLAN gateway and establish a VXLAN tunnel between Device 1 and Device 3 and between Device 2 and Device 3 to allow communication between terminal users on different network segments.

Configure VXLAN on devices to trigger VXLAN tunnel establishment and dynamic learning of ARP and MAC address entries. Terminal users on the same or different network segments can then communicate through the Layer 2 and Layer 3 VXLAN gateways based on ARP and routing entries.

Application for Communication Between Terminal Users on a VXLAN and Legacy Network

Service Description

Currently, data centers are expanding on a large scale for enterprises and carriers, with increasing deployment of virtualization and cloud computing. In addition, to accommodate more services while reducing maintenance costs, data centers are employing large Layer 2 and virtualization technologies.

As server virtualization is implemented in the physical network infrastructure for data centers, VXLAN, an NVO3 technology, has adapted to the trend by providing virtualization solutions for data centers, allowing intra-VXLAN communication and communication between VXLANs and legacy networks.

Networking Description

On the network shown in Figure 1-1097, an enterprise has VMs deployed for the finance and engineering departments and a legacy network for the human resource department. The finance and engineering departments need to communicate with the human resource department.

Figure 1-1097 Communication between terminal users on a VXLAN and legacy network
Feature Deployment

As shown in Figure 1-1097:

Deploy Device 2 as a Layer 2 VXLAN gateway and Device 3 as a Layer 3 VXLAN gateway. The VXLAN gateways are the edge devices of the VXLAN that connect to legacy networks and are responsible for VXLAN encapsulation and decapsulation. Establish a VXLAN tunnel between Device 2 and Device 3 for VXLAN packet transmission.

When the human resource department sends a packet to VM1 of the financial department, the process is as follows:
  1. Device 1 receives the packet and sends it to Device 3 over the IP network.
  2. Upon receipt, Device 3 parses the destination IP address, and searches the routing table for a next hop address. Then, Device 3 searches the ARP or ND table based on the next hop address to determine the destination MAC address, VXLAN tunnel's outbound interface, and VNI.
  3. Device 3 encapsulates the VXLAN tunnel's outbound interface and VNI into the packet and sends the VXLAN packet to Device 2.
  4. Upon receipt, Device 2 decapsulates the VXLAN packet, finds the outbound interface based on the destination MAC address, and forwards the packet to VM1.

Application in VM Migration Scenarios

Service Description

Enterprises configure server virtualization on DCNs to consolidate IT resources, improve resource use efficiency, and reduce network costs. With the wide deployment of server virtualization, an increasing number of VMs are running on physical servers, and many applications are running in virtual environments, posing great challenges to virtual networks.

Network Description

On the network shown in Figure 1-1098, an enterprise has two servers in the DC: engineering and finance departments on Server1 and the marketing department on Server2.

The computing space on Server1 is insufficient, but Server2 is not fully used. The network administrator wants to migrate the engineering department to Server2 without affecting services.

This scenario applies to IPv4 over IPv4, IPv6 over IPv4, IPv4 over IPv6, and IPv6 over IPv6 networks. Figure 1-1098 shows an IPv4 over IPv4 network.

Figure 1-1098 Department distribution
Feature Deployment

To ensure uninterrupted services during the migration of the engineering department, the IP and MAC addresses of the engineering department must remain unchanged. This requires that the two servers belong to the same Layer 2 network. If conventional migration methods are used, the administrator may have to purchase additional physical devices to distribute traffic and reconfigure VLANs. These methods may also result in network loops and additional system and management costs.

VXLAN can be used to migrate the engineering department to Server2. VXLAN is a network virtualization technology that uses MAC-in-UDP encapsulation. This technology can establish a large Layer 2 network connecting all terminals with reachable IP routes, as long as the physical network supports IP forwarding.

The engineering department is migrated to Server2 through the VXLAN tunnel, and online users are unaware of the migration. After the engineering department is migrated from Server1 to Server2, terminals send gratuitous ARP or RARP packets so that all gateways update the MAC addresses and ARP entries of the original VMs to those of the migrated VMs.

Terminology for VXLAN

Terms

Term

Description

NVO3

Network Virtualization over L3. A network virtualization technology implemented at Layer 3 for traffic isolation and IP independence between multi-tenants of data centers so independent Layer 2 subnets can be provided for tenants. In addition, NVO3 supports VM deployment and migration on Layer 2 subnets of tenants.

VXLAN

Virtual extensible local area network. An NVO3 network virtualization technology that encapsulates data packets sent from VMs into UDP packets and encapsulates IP and MAC addresses used on the physical network in the outer headers before sending the packets over an IP network. The egress tunnel endpoint then decapsulates the packets and sends the packets to the destination VM.

Acronyms and Abbreviations

Acronym and Abbreviation

Full Name

BD

bridge domain

BUM

broadcast, unknown unicast, and multicast

VNI

VXLAN network identifier

VTEP

VXLAN tunnel endpoint

VXLAN Configuration

This section describes how to configure VXLAN on devices, without any controller.

Overview of VXLAN

VXLAN allows a virtual network to provide access services to a large number of tenants. In addition, tenants are able to plan their own virtual networks, not limited by the physical network IP addresses or broadcast domains. This greatly simplifies network management.

Background

Server virtualization is widely used in cloud computing scenarios and greatly reduces IT and O&M costs in addition to improving service deployment flexibility. It allows a physical server to be virtualized into multiple virtual machines (VMs), each of which functions as a host. However, a great increase in the number of hosts causes the following problems:
  • VM scale is limited by network specifications.

    On a large Layer 2 network, data packets are forwarded at Layer 2 based on MAC entries. However, the MAC table capacity is limited, which subsequently limits the number of VMs.

  • Network isolation capabilities are limited.

    Most networks currently use VLANs to implement network isolation. However, the deployment of VLANs on large-scale virtualized networks has the following limitations:
    • The VLAN tag field defined in IEEE 802.1Q has only 12 bits and can support only a maximum of 4094 VLANs, which cannot meet user identification requirements of large Layer 2 networks.
    • VLANs on legacy Layer 2 networks cannot adapt to dynamic network adjustment.
  • VM migration scope is limited by the network architecture.

    A running VM may need to be migrated to a new server due to resource issues on the original server (for example, migration may be required if the CPU usage is too high, or memory resources are inadequate). To ensure service continuity during VM migration, the IP address of the VM must remain unchanged. Therefore, the service network must be a Layer 2 network and provide multipathing redundancy backup and reliability.

VXLAN addresses the preceding problems on large Layer 2 networks.

  • Eliminates VM scale limitations imposed by network specifications.

    VXLAN encapsulates data packets sent from VMs into UDP packets and encapsulates IP and MAC addresses used on the physical network into the outer headers. As a result, the network is aware of only the encapsulated parameters and not the inner data. This implementation greatly reduces the MAC address specification requirements of large Layer 2 networks.

  • Provides greater network isolation capabilities.

    VXLAN uses a 24-bit network segment ID, called a VXLAN network identifier (VNI), to identify users. This VNI is similar to a VLAN ID, but supports a maximum of 16M VXLAN segments.

  • Eliminates VM migration scope limitations imposed by network architecture.

    VXLAN uses MAC-in-UDP encapsulation to extend Layer 2 networks. It encapsulates Ethernet packets into IP packets so that they can be transmitted over IP routes, without the transport network having to be aware of VMs' MAC addresses. Because there is no limitation on Layer 3 network architecture, Layer 3 networks are scalable and have strong automatic fault rectification and load balancing capabilities. This allows for VM migration irrespective of the network architecture.

Related Concepts

Figure 1-1099 VXLAN architecture

Table 1-474 describes VXLAN concepts.

Table 1-474 VXLAN concepts

Concept

Description

Underlay and overlay networks

VXLAN allows virtual Layer 2 or Layer 3 networks (overlay networks) to be built over existing physical networks (underlay networks). Overlay networks use encapsulation technologies to transmit tenant packets between sites over Layer 3 forwarding paths provided by underlay networks. Tenants are aware of only overlay networks.

Network virtualization edge (NVE)

A network entity that is deployed at the network edge and implements network virtualization functions.

NOTE:

vSwitches on devices and servers can function as NVEs.

VXLAN tunnel endpoint (VTEP)

A VXLAN tunnel endpoint that encapsulates and decapsulates VXLAN packets. It is represented by an NVE.

A VTEP connects to a physical network and is assigned a physical network IP address. This IP address is irrelevant to virtual networks.

In VXLAN packets, the source IP address is the local node's VTEP address, and the destination IP address is the remote node's VTEP address. This pair of VTEP addresses corresponds to a VXLAN tunnel.

VXLAN network identifier (VNI)

A VXLAN segment identifier similar to a VLAN ID. VMs on different VXLAN segments cannot communicate directly at Layer 2.

A VNI identifies only one tenant. Even if multiple terminal users belong to the same VNI, they are considered one tenant. A VNI consists of 24 bits and supports a maximum of 16M tenants.

A VNI can be a Layer 2 or Layer 3 VNI.

  • A Layer 2 VNI is mapped to a BD for intra-segment transmission of VXLAN packets.

  • A Layer 3 VNI is bound to a VPN instance for inter-segment transmission of VXLAN packets.

Bridge domain (BD)

A Layer 2 broadcast domain through which VXLAN data packets are forwarded.

On a VXLAN network, a VNI can be mapped to a BD so that the BD can function as a VXLAN network entity to forward data packets.

VBDIF interface

A Layer 3 logical interface created for a BD. Configuring IP addresses for VBDIF interfaces allows communication between VXLANs on different network segments and between VXLANs and non-VXLANs and implements Layer 2 network access to a Layer 3 network.

Gateway

A device that ensures communication between VXLANs identified by different VNIs and between VXLANs and non-VXLANs (similar to a VLAN).

A VXLAN gateway can be a Layer 2 or Layer 3 gateway.
  • Layer 2 gateway: allows tenants to access VXLANs and intra-segment communication on a VXLAN.

  • Layer 3 gateway: allows inter-segment VXLAN communication and access to external networks.

NVE Deployment Mode

On VXLANs, VTEPs are represented by NVEs, and therefore VXLAN tunnels can be established after NVEs are deployed. The following deployment modes are available, depending on where NVEs are deployed.

  • Hardware mode: On the network shown in Figure 1-1100, all NVEs are deployed on NVE-capable devices, which perform VXLAN encapsulation and decapsulation.

    Figure 1-1100 Hardware mode
  • Software mode: On the network shown in Figure 1-1101, all NVEs are deployed on vSwitches, which perform VXLAN encapsulation and decapsulation.

    Figure 1-1101 Software mode
  • Hybrid mode: On the network shown in Figure 1-1102, some NVEs are deployed on vSwitches, and others on NVE-capable devices. Both vSwitches and NVE-capable devices may perform VXLAN encapsulation and decapsulation.

    Figure 1-1102 Hybrid mode

This document describes how to configure VXLAN when NVEs are deployed on NVE-capable devices. If software mode is used, devices only need to transparently transmit VXLAN packets.

Configuration Precautions for VXLAN

Feature Requirements

Table 1-475 Feature requirements

Feature Requirements

Series

Models

A VXLAN tunnel does not support MTU configuration, and packets cannot be fragmented before entering the VXLAN tunnel. Although packets entering a VXLAN tunnel can be fragmented based on the MTU of the outbound interface, the outbound VXLAN tunnel node can reassemble only a few packets. Therefore, you need to properly plan the MTU of the network-side interface to prevent packets from being fragmented after entering the VXLAN tunnel.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

Restrictions of EVPN control plane of VXLAN networks are as follows:

1. BDs, VNIs, and EVPNs support only 1:1 binding.

2. A BD must be bound to a VNI before being bound to an EVPN.

3. VNI peer statistics collection and VNI statistics collection use the same statistical resource and cannot be configured together. Traffic statistics by VNI+peer support only the split-horizon-mode of VNIs, and do not support common VNIs.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

Restrictions for VXLAN dual-active access are as follows:

1. Currently, only Eth-Trunk interfaces are supported for active-active reliability.

2. Active-active reliability does not support shutdown of sub-interfaces. (Upstream Eth-Trunk traffic is not switched, so traffic is interrupted; downstream local bias pruning is based on the main interface and is not switched either.)

3. The shutdown bd scenario is not supported.

4. The configurations of active-active interfaces must be the same.

5. Dynamic ESIs are not supported.

6. After MAC FRR is enabled, MAC addresses are deleted because MAC addresses need to be learned in FRR format.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

When a VXLAN tunnel is bound to a VNI, the VNI is bound to a BD, and a VBDIF interface is created to function as a Layer 3 gateway, the VXLAN tunnel does not support the multicast function on the VBDIF interface.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

VNI-based HQoS supports only level-3 scheduling (GQ, SQ, and FQ), and does not support DP and VI level scheduling.

Configuring interface-based HQoS is recommended.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

Only two VXLANv4 fragments can be reassembled on the same board. Inter-board reassembly is not supported.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

VXLAN tunnels with the same VNI do not support both IPv4 and IPv6.

If IPv4 and IPv6 VXLAN tunnels coexist between two devices:

1. Packets are preferentially transmitted over the IPv4 VXLAN tunnel.

2. Packet loss or excess packets may occur during tunnel switching.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

IPv6 VXLAN tunnels do not support packet redundancy avoidance during BUM traffic switchback.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

After a BD accesses a VSI or VXLAN, Layer 2 sub-interfaces cannot be bound to the BD.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

VXLAN does not support DHCP snooping.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

After a BD accesses a VSI or VXLAN, a VBDIF interface cannot be created.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

If an EVPN instance has been bound to a BD, the binding relationship between the EVPN instance and VNI cannot be modified or deleted.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

If the number of VXLAN tunnels exceeds the upper limit, new tunnels cannot be created, and an alarm is generated to notify the user of the tunnel creation failure cause.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

After a distributed gateway is configured, when pinging a host address from the gateway, you need to specify a non-gateway IP address of the local device as the source IP address of the ICMP Echo Request packets.

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

A VNI can be bound to only one service instance (BD/VRF/EVPL).

NetEngine 8000 M

NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8

Configuring IPv6 VXLAN in Centralized Gateway Mode for Static Tunnel Establishment

IPv6 VXLAN can be deployed in centralized gateway mode so that all inter-subnet traffic is forwarded through Layer 3 gateways, thereby implementing centralized traffic management.

Usage Scenario

To allow intra- and inter-subnet communication between a tenant's VMs located in different geographic locations on an IPv6 network, properly deploy Layer 2 and Layer 3 gateways on the network and establish IPv6 VXLAN tunnels.

On the network shown in Figure 1-1103, Server2 and Server3 belong to the same network segment and access the IPv6 VXLAN through Device1 and Device2, respectively. Server1 and Server2 belong to different network segments and both access the IPv6 VXLAN through Device1.
  • To allow VM1 on Server2 and VM1 on Server3 to communicate, deploy Layer 2 gateways on Device1 and Device2 and establish an IPv6 VXLAN tunnel between Device1 and Device2. This ensures that the VMs on the same network segment can communicate.
  • To allow VM1 on Server1 and VM1 on Server3 to communicate, deploy a Layer 3 gateway on Device3 and establish one IPv6 VXLAN tunnel between Device1 and Device3 and another one between Device2 and Device3. This ensures that the VMs on different network segments can communicate.

The VMs and Layer 3 VXLAN gateway can be allocated either IPv4 or IPv6 addresses. This means that either an IPv4 or IPv6 overlay network can be used with IPv6 VXLAN. Figure 1-1103 shows an IPv4 overlay network.

Figure 1-1103 Configuring IPv6 VXLAN in centralized gateway mode

Layer 3 gateways must be deployed on the IPv6 VXLAN if VMs must communicate with VMs on other network segments or with external networks. Layer 3 gateways do not need to be deployed for VMs communicating on the same network segment.

The following table lists the difference in centralized gateway configuration between IPv4 and IPv6 overlay networks.

Configuration Task

IPv4 Overlay Network

IPv6 Overlay Network

Configure a Layer 3 gateway on an IPv6 VXLAN.

Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway.

Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.

Pre-configuration Tasks

Before configuring IPv6 VXLAN in centralized gateway mode for static tunnel establishment, complete the following tasks:
  • Configure an IPv6 routing protocol to achieve Layer 3 connectivity on the IPv6 network.

Configuring a VXLAN Service Access Point

On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. A Layer 2 sub-interface can transmit data packets through a BD after being associated with it.

Context

As described in Table 1-476, Layer 2 sub-interfaces can have different encapsulation types configured to transmit various types of data packets.
Table 1-476 Traffic encapsulation types

Traffic Encapsulation Type

Description

dot1q

This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
  • The VLAN ID encapsulated by a Layer 2 sub-interface cannot be the same as that permitted by the Layer 2 main interface of the sub-interface.
  • The VLAN IDs encapsulated by a Layer 2 sub-interface and a Layer 3 sub-interface cannot be the same.

untag

This type of sub-interface accepts only packets that do not carry VLAN tags. When setting the encapsulation type to untag for a Layer 2 sub-interface, note the following:
  • The physical interface where the involved sub-interface resides must have only default configurations.
  • Only Layer 2 physical interfaces and Eth-Trunk interfaces can have untag Layer 2 sub-interfaces created.
  • Only one untag Layer 2 sub-interface can be created on a main interface.

default

This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
  • The main interface where the involved sub-interface resides cannot be added to any VLAN.
  • Only Layer 2 physical interfaces and Eth-Trunk interfaces can have default Layer 2 sub-interfaces created.
  • If a default Layer 2 sub-interface is created on a main interface, the main interface cannot have other types of Layer 2 sub-interfaces configured.

qinq

This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.

A service access point needs to be configured on a Layer 2 gateway.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bridge-domain bd-id

    A BD is created, and the BD view is displayed.

  3. (Optional) Run description description

    A BD description is configured.

  4. Run quit

    Return to the system view.

  5. Run interface interface-type interface-number.subnum mode l2

    A Layer 2 sub-interface is created, and the sub-interface view is displayed.

    Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.

  6. Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }

    A traffic encapsulation type is configured for the Layer 2 sub-interface.

  7. Run rewrite pop { single | double }

    The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.

    If the received packets each carry a single VLAN tag, specify single.

    If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.

  8. Run bridge-domain bd-id

    The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.

    If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.

  9. Run commit

    The configuration is committed.
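The steps above can be condensed into the following CLI sketch. All values (BD ID, interface number, VLAN ID) are illustrative examples, not taken from this guide:

```
system-view
bridge-domain 20                            # Step 2: create the BD
 description vxlan-access                   # Step 3 (optional): BD description
 quit
interface GigabitEthernet 0/1/1.1 mode l2   # Step 5: create a Layer 2 sub-interface
 encapsulation dot1q vid 100                # Step 6: accept frames tagged with VLAN 100
 rewrite pop single                         # Step 7: strip the single VLAN tag
 bridge-domain 20                           # Step 8: add the sub-interface to BD 20
 commit                                     # Step 9
```

Because the encapsulation type is dot1q with a single tag, rewrite pop single is used; a qinq sub-interface would use rewrite pop double instead.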

Configuring an IPv6 VXLAN Tunnel

VXLAN is a tunneling technology that uses MAC-in-UDP encapsulation to extend large Layer 2 networks. If an underlay network is an IPv6 network, you can configure an IPv6 VXLAN tunnel for a virtual network to access a large number of tenants.

Context

After you configure local and remote VNIs and VTEP IPv6 addresses, an IPv6 VXLAN tunnel is statically created. This configuration is simple, and no protocol configurations are involved. To ensure the proper forwarding of IPv6 VXLAN packets, IPv6 VXLAN tunnels must be configured on VXLAN gateways.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bridge-domain bd-id

    The BD view is displayed.

    bd-id specified in this command must be the same as bd-id specified in Step 2 in the service access point configuration.

  3. Run vxlan vni vni-id

    A VNI is created and associated with the BD.

    To interconnect a VXLAN with a VPLS network, run the vxlan vni vni-id split-horizon-mode command on the edge device belonging to both networks to create a VNI, associate the VNI with a BD, and implement split horizon forwarding.

  4. Run quit

    Return to the system view.

  5. Run interface nve nve-number

    An NVE interface is created, and the NVE interface view is displayed.

  6. Run source ipv6-address

    Configure an IPv6 address for the source VTEP.

    Either a physical or loopback interface's address can be specified as a source VTEP's IPv6 address. Using the loopback interface's address is recommended.

  7. Run vni vni-id head-end peer-list { ipv6-address } &<1-10>

    Configure an IPv6 ingress replication list.

    With this function, the ingress of an IPv6 VXLAN tunnel replicates and sends a copy of any received BUM packets to each VTEP in the ingress replication list (a collection of remote VTEP IPv6 addresses).

    Currently, BUM packets can be forwarded only through ingress replication. This means that non-Huawei devices must have ingress replication configured to establish IPv6 VXLAN tunnels with Huawei devices. If ingress replication is not configured, the tunnels fail to be established.

  8. Run commit

    The configuration is committed.
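Putting the steps together, a minimal sketch of a static IPv6 VXLAN tunnel endpoint might look as follows. The BD ID, VNI, and IPv6 addresses are example values; the remote end would mirror this configuration with the addresses reversed:

```
system-view
bridge-domain 20
 vxlan vni 5010                             # Step 3: associate VNI 5010 with BD 20
 quit
interface nve 1                             # Step 5: create the NVE interface
 source 2001:db8::1                         # Step 6: local VTEP IPv6 address (loopback recommended)
 vni 5010 head-end peer-list 2001:db8::2    # Step 7: ingress replication list of remote VTEPs
 commit                                     # Step 8
```

The tunnel comes up once both ends have matching VNIs and each lists the other's VTEP address in its ingress replication list.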

(Optional) Configuring a Layer 3 Gateway on an IPv6 VXLAN

To allow users on different network segments to communicate, deploy a Layer 3 gateway and specify the IP address of its VBDIF interface as the default gateway address of the users.

Context

On an IPv6 VXLAN, a BD can be mapped to a VNI (identifying a tenant) in 1:1 mode to transmit VXLAN data packets. You can create a VBDIF interface (logical Layer 3 interface) for each BD to implement communication between VXLAN segments, between VXLAN segments and non-VXLAN segments, and between Layer 2 and Layer 3 networks. After you configure an IP address for a VBDIF interface, the interface functions as the gateway for tenants in the BD to forward packets at Layer 3 based on the IP address.

A VBDIF interface needs to be configured on the Layer 3 gateway of an IPv6 VXLAN for communication between different network segments only; it is not needed for communication on the same network segment.

The DHCP relay function can be configured on a VBDIF interface so that hosts can request IP addresses from an external DHCP server.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface vbdif bd-id

    A VBDIF interface is created, and the VBDIF interface view is displayed.

  3. Configure an IP address for the VBDIF interface to implement Layer 3 communication.
    • For an IPv4 overlay network, run ip address ip-address { mask | mask-length } [ sub ]

      An IPv4 address is configured for the VBDIF interface.

    • For an IPv6 overlay network, perform the following operations:
      1. Run ipv6 enable

        The IPv6 function is enabled for the interface.

      2. Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64

        A global unicast address is configured for the interface.

  4. (Optional) Run mac-address mac-address

    A MAC address is configured for the VBDIF interface.

  5. (Optional) Run bandwidth bandwidth

    Bandwidth is configured for the VBDIF interface.

  6. Run commit

    The configuration is committed.

Follow-up Procedure

Configure a static route to the IP address of the VBDIF interface, or configure a dynamic routing protocol to advertise this IP address, so that Layer 3 connectivity can be achieved on the overlay network.
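As a sketch, a VBDIF gateway for BD 20 could be configured as follows (addresses and BD ID are example values; the IPv6 variant is shown in comments):

```
system-view
interface vbdif 20                          # Step 2: VBDIF interface for BD 20
 ip address 10.1.1.1 255.255.255.0          # Step 3: IPv4 overlay gateway address
 # For an IPv6 overlay, instead run:
 #  ipv6 enable
 #  ipv6 address 2001:db8:10::1 64
 commit
```

Tenants in BD 20 would then use 10.1.1.1 (or the IPv6 address) as their default gateway.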

(Optional) Configuring a Static MAC Address Entry

Using static MAC address entries to forward user packets helps reduce BUM traffic on the network and prevent bogus attacks.

Context

When the source NVE on a VXLAN tunnel receives BUM packets, the NVE sends these packets along paths specified in static MAC address entries if there are such entries. This helps reduce BUM traffic on the network and prevent unauthorized data access from bogus users.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mac-address static mac-address bridge-domain bd-id source-ipv6 sourceIpv6 peer-ipv6 peerIPv6 vni vni-id

    A static MAC address entry is configured.

  3. Run commit

    The configuration is committed.
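A single command configures the entry. In this sketch, the MAC address, BD ID, VTEP addresses, and VNI are example values that must match the tunnel configured between the two VTEPs:

```
system-view
# Forward frames destined for 00e0-fc12-3456 in BD 20 over the tunnel
# from local VTEP 2001:db8::1 to remote VTEP 2001:db8::2 with VNI 5010.
mac-address static 00e0-fc12-3456 bridge-domain 20 source-ipv6 2001:db8::1 peer-ipv6 2001:db8::2 vni 5010
commit
```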

(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface

MAC address learning limiting helps improve VXLAN network security.

Context

Configure the maximum number of MAC addresses that a device can learn to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum, no more addresses can be learned. However, you can also configure the device to discard packets after learning the maximum allowed number of MAC addresses, improving network security.

Disable MAC address learning for a BD if a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in the BD, reducing the number of MAC address entries. You can also disable MAC address learning on Layer 2 gateways after the VXLAN network topology becomes stable and MAC address learning is complete.

MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.

Procedure

  • Limit MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *

      A MAC address learning limit rule is configured.

    4. (Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold

      The threshold percentages for MAC address limit alarm generation and clearing are configured.

    5. Run commit

      The configuration is committed.

  • Disable MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-address learning disable

      MAC address learning is disabled.

    4. Run commit

      The configuration is committed.
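As a sketch of the first procedure, the following limits BD 20 to 1000 learned MAC addresses, discards packets once the limit is reached, and sets alarm thresholds. The BD ID, limit, and threshold percentages are hypothetical:

```
<HUAWEI> system-view
[~HUAWEI] bridge-domain 20
[*HUAWEI-bd20] mac-limit maximum 1000 action discard
[*HUAWEI-bd20] mac-limit up-threshold 90 down-threshold 70
[*HUAWEI-bd20] commit
```

To instead disable learning entirely in the BD (the second procedure), the mac-address learning disable command would replace the two mac-limit commands.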

Verifying the Configuration

After configuring IPv6 VXLAN in centralized gateway mode for static tunnel establishment, check IPv6 VXLAN tunnel, VNI, and VBDIF interface information.

Prerequisites

IPv6 VXLAN in centralized gateway mode has been configured for static tunnel establishment.

Procedure

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display vxlan peer [ vni vni-id ] command to check IPv6 ingress replication lists of a VNI or all VNIs.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.

Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN

IPv6 VXLAN can be deployed in centralized gateway mode so that all inter-subnet traffic is forwarded through Layer 3 gateways, thereby implementing centralized traffic management.

Usage Scenario

To allow intra- and inter-subnet communication between a tenant's VMs located in different geographical locations on an IPv6 network, properly deploy Layer 2 and Layer 3 gateways on the network and establish IPv6 VXLAN tunnels.

On the network shown in Figure 1-1104, Server2 and Server3 belong to the same network segment and access the IPv6 VXLAN through Device1 and Device2, respectively. Server1 and Server2 belong to different network segments and both access the IPv6 VXLAN through Device1.
  • To allow VM1 on Server2 and VM1 on Server3 to communicate, deploy Layer 2 gateways on Device1 and Device2 and establish an IPv6 VXLAN tunnel between Device1 and Device2. This ensures that the VMs on the same network segment can communicate.
  • To allow VM1 on Server1 and VM1 on Server3 to communicate, deploy a Layer 3 gateway on Device3 and establish one IPv6 VXLAN tunnel between Device1 and Device3 and another one between Device2 and Device3. This ensures that the VMs on different network segments can communicate.

The VMs and Layer 3 VXLAN gateway can be allocated either IPv4 or IPv6 addresses. This means that either an IPv4 or IPv6 overlay network can be used with IPv6 VXLAN. Figure 1-1104 shows an IPv4 overlay network.

Figure 1-1104 Configuring IPv6 VXLAN in centralized gateway mode

Layer 3 gateways must be deployed on the IPv6 VXLAN if VMs must communicate with VMs on other network segments or with external networks. Layer 3 gateways do not need to be deployed for VMs communicating on the same network segment.

The centralized gateway configuration differs between IPv4 and IPv6 overlay networks as follows.

Configuration task: Configure a Layer 3 gateway on an IPv6 VXLAN.
  • IPv4 overlay network: Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway.
  • IPv6 overlay network: Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.

Pre-configuration Tasks

Before configuring VXLAN in centralized gateway mode using BGP EVPN, complete the following task:
  • Configure an IPv6 routing protocol to achieve Layer 3 connectivity on the IPv6 network.

Configuring a VXLAN Service Access Point

On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. A Layer 2 sub-interface can transmit data packets through a BD after being associated with it.

Context

As described in Table 1-477, Layer 2 sub-interfaces can have different encapsulation types configured to transmit various types of data packets.
Table 1-477 Traffic encapsulation types

  • dot1q: This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
    • The VLAN ID encapsulated by a Layer 2 sub-interface cannot be the same as that permitted by the Layer 2 main interface of the sub-interface.
    • The VLAN IDs encapsulated by a Layer 2 sub-interface and a Layer 3 sub-interface cannot be the same.

  • untag: This type of sub-interface accepts only packets that do not carry VLAN tags. When setting the encapsulation type to untag for a Layer 2 sub-interface, note the following:
    • The physical interface where the involved sub-interface resides must have only default configurations.
    • Only Layer 2 physical interfaces and Eth-Trunk interfaces can have untag Layer 2 sub-interfaces created.
    • Only one untag Layer 2 sub-interface can be created on a main interface.

  • default: This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
    • The main interface where the involved sub-interface resides cannot be added to any VLAN.
    • Only Layer 2 physical interfaces and Eth-Trunk interfaces can have default Layer 2 sub-interfaces created.
    • If a default Layer 2 sub-interface is created on a main interface, the main interface cannot have other types of Layer 2 sub-interfaces configured.

  • qinq: This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.

A service access point needs to be configured on a Layer 2 gateway.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bridge-domain bd-id

    A BD is created, and the BD view is displayed.

  3. (Optional) Run description description

    A BD description is configured.

  4. Run quit

    Return to the system view.

  5. Run interface interface-type interface-number.subnum mode l2

    A Layer 2 sub-interface is created, and the sub-interface view is displayed.

    Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.

  6. Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }

    A traffic encapsulation type is configured for the Layer 2 sub-interface.

  7. Run rewrite pop { single | double }

    The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.

    If the received packets each carry a single VLAN tag, specify single.

    If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.

  8. Run bridge-domain bd-id

    The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.

    If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.

  9. Run commit

    The configuration is committed.
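As a sketch, the following configures a dot1q service access point that pops the VLAN tag and adds the sub-interface to a BD. The interface name, BD ID, and VLAN ID are hypothetical:

```
<HUAWEI> system-view
[~HUAWEI] bridge-domain 10
[*HUAWEI-bd10] quit
[*HUAWEI] interface GigabitEthernet0/1/0.1 mode l2
[*HUAWEI-GigabitEthernet0/1/0.1] encapsulation dot1q vid 100
[*HUAWEI-GigabitEthernet0/1/0.1] rewrite pop single
[*HUAWEI-GigabitEthernet0/1/0.1] bridge-domain 10
[*HUAWEI-GigabitEthernet0/1/0.1] commit
```

With this configuration, packets tagged with VLAN 100 arriving on GigabitEthernet0/1/0 enter BD 10 untagged.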

Configuring a VXLAN Tunnel

To allow VXLAN tunnel establishment using EVPN, establish a BGP EVPN peer relationship, configure an EVPN instance, and configure ingress replication.

Context

In centralized VXLAN gateway scenarios, perform the following steps on the Layer 2 and Layer 3 VXLAN gateways to use EVPN for establishing VXLAN tunnels:
  1. Configure a BGP EVPN peer relationship. Configure VXLAN gateways to establish BGP EVPN peer relationships so that they can exchange EVPN routes. If an RR has been deployed, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR.

  2. (Optional) Configure an RR. The deployment of RRs reduces the number of BGP EVPN peer relationships to be established, simplifying configuration. A live-network device can be used as an RR, or a standalone RR can be deployed. Layer 3 VXLAN gateways are generally used as RRs, and Layer 2 VXLAN gateways as RR clients.

  3. Configure an EVPN instance. EVPN instances are used to receive and advertise EVPN routes.

  4. Configure ingress replication. After ingress replication is configured for a VNI, the system uses BGP EVPN to construct a list of remote VTEPs. After a VXLAN gateway receives BUM packets, it sends a copy of the BUM packets to every VXLAN gateway in the list.

BUM packet forwarding is implemented only using ingress replication. To establish a VXLAN tunnel between a Huawei device and a non-Huawei device, ensure that the non-Huawei device also has ingress replication configured. Otherwise, communication fails.

Procedure

  1. Configure a BGP EVPN peer relationship.
    1. Run bgp as-number

      BGP is enabled, and the BGP view is displayed.

    2. (Optional) Run router-id ipv4-address

      A BGP router ID is set.

    3. Run peer ipv4-address as-number as-number

      The peer device is configured as a BGP peer.

    4. (Optional) Run peer ipv4-address connect-interface interface-type interface-number [ ipv4-source-address ]

      A source interface and a source address are specified to set up a TCP connection with the BGP peer.

      When loopback interfaces are used to establish a BGP connection, it is recommended that you run the peer connect-interface command on both ends to ensure connectivity. If this command is run on only one end, the BGP connection may fail to be established.

    5. (Optional) Run peer ipv4-address ebgp-max-hop [ hop-count ]

      The maximum number of hops is set for an EBGP EVPN connection.

      In most cases, a directly connected physical link must be available between EBGP EVPN peers. To establish an EBGP EVPN peer relationship between indirectly connected devices, run the peer ebgp-max-hop command, which sets the maximum number of hops allowed for the EBGP EVPN TCP connection.

      If the IP address of a loopback interface is used to establish an EBGP EVPN peer relationship, run the peer ebgp-max-hop command with hop-count set to 2 or greater. Otherwise, the peer relationship cannot be established.

    6. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    7. Run peer { group-name | ipv4-address } enable

      The device is enabled to exchange EVPN routes with a specified peer or peer group.

    8. Run peer { group-name | ipv4-address } advertise encap-type vxlan

      The device is enabled to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer or peer group.

    9. (Optional) Run peer { group-name | ipv4-address } route-policy route-policy-name { import | export }

      A routing policy is specified for routes received from or to be advertised to a BGP EVPN peer or peer group.

      After the routing policy is applied, the routes received from or to be advertised to a specified BGP EVPN peer or peer group will be filtered, ensuring that only desired routes are imported or advertised. This configuration helps manage routes and reduce required routing entries and system resources.

    10. (Optional) Run peer { group-name | ipv4-address } mac-limit number [ percentage ] [ alert-only | idle-forever | idle-timeout times ]

      The maximum number of MAC advertisement routes that can be received from each peer is configured.

      An EVPN instance may import a large number of invalid MAC advertisement routes from peers, and these routes may occupy a large proportion of the total MAC advertisement routes. If the number of received MAC advertisement routes exceeds the specified maximum, the system displays an alarm, instructing you to check the validity of the MAC advertisement routes received in the EVPN instance.

    11. (Optional) Perform the following operations to enable the function to advertise the routes carrying the large-community attribute to BGP EVPN peers:

      The large-community attribute includes a 2-byte or 4-byte AS number and two 4-byte LocalData fields, allowing the administrator to flexibly use route-policies. Before enabling the function to advertise the routes carrying the large-community attribute to BGP EVPN peers, configure the route-policy related to the large-community attribute and use the route-policy to set the large-community attribute.

      1. Run peer { ipv4-address | group-name } route-policy route-policy-name export

        The outbound route-policy of the BGP EVPN peer is configured.

      2. Run peer { ipv4-address | group-name } advertise-large-community

        The device is enabled to advertise the routes carrying the large-community attribute to BGP EVPN peers or peer groups.

        If the routes carrying the large-community attribute do not need to be advertised to a specific BGP EVPN peer in the peer group, run the peer ipv4-address advertise-large-community disable command.

    12. (Optional) Run peer ipv4-address graceful-restart static-timer restart-time

      The maximum duration from the time the local device finds that the peer device is restarted to the time a BGP EVPN session is re-established is set.

      BGP GR prevents traffic interruptions caused by re-establishment of a BGP peer relationship. You can run either the graceful-restart timer restart time or peer graceful-restart static-timer command to set this maximum wait time.

      • To set the maximum wait time for re-establishing all BGP peer relationships, run the graceful-restart timer restart command in the BGP view. The maximum wait time can be set to 3600s at most.

      • To set the maximum wait time for re-establishing a specified BGP-EVPN peer relationship, run the peer graceful-restart static-timer command in the BGP EVPN view. The maximum wait time can be set to a value greater than 3600s.

      If both the graceful-restart timer restart time and peer graceful-restart static-timer commands are run, the latter configuration takes effect.

      This step can be performed only after GR has been enabled using the graceful-restart command in the BGP view.

    13. (Optional) Run the peer peerIpv4Addr path-attribute-treat attribute-id { id [ to id2 ] } &<1-255> { discard | withdraw | treat-as-unknown } command to configure a special mode for processing BGP EVPN path attributes. Alternatively, run the peer peerIpv4Addr treat-with-error attribute-id id accept-zero-value command to configure a mode for processing incorrect BGP EVPN path attributes.

      A BGP EVPN Update message contains various path attributes. If a local device receives Update messages containing malformed path attributes, the involved BGP EVPN sessions may flap. To enhance reliability, you can perform this step to configure a processing mode for specified BGP EVPN path attributes or incorrect path attributes.

      The path-attribute-treat parameter specifies a path attribute processing mode, which can be any of the following ones:
      • Discarding specified attributes

      • Withdrawing the routes with specified attributes

      • Processing specified attributes as unknown attributes

      The treat-with-error parameter specifies a processing mode for incorrect path attributes. The mode can be as follows:

      • Accepting the path attributes with a value of 0.

    14. Run quit

      Exit from the BGP-EVPN address family view.

    15. Run quit

      Exit from the BGP view.

  2. (Optional) Configure a Layer 3 VXLAN gateway as an RR. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR, reducing the number of BGP EVPN peer relationships to be established and simplifying configuration.
    1. Run bgp as-number

      The BGP view is displayed.

    2. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    3. Run peer { ipv4-address | group-name } enable

      The device is enabled to exchange EVPN routes with a specified peer or peer group.

    4. (Optional) Run peer { ipv4-address | group-name } next-hop-invariable

      The device is prevented from changing the next hop address of a route when advertising the route to an EBGP peer.

    5. Run peer { ipv4-address | group-name } reflect-client

      The device is configured as an RR and an RR client is specified.

    6. Run undo policy vpn-target

      The function to filter received EVPN routes based on VPN targets is disabled. If you do not perform this step, the RR will fail to receive and reflect the routes sent by clients.

    7. Run quit

      Exit from the BGP-EVPN address family view.

    8. Run quit

      Exit from the BGP view.

  3. Configure an EVPN instance.
    1. Run evpn vpn-instance vpn-instance-name bd-mode

      A BD EVPN instance is created, and the EVPN instance view is displayed.

    2. Run route-distinguisher route-distinguisher

      An RD is configured for the EVPN instance.

    3. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the EVPN instance. The export VPN target of the local end must be the same as the import VPN target of the remote end, and the import VPN target of the local end must be the same as the export VPN target of the remote end.

    4. (Optional) Run import route-policy policy-name

      The current EVPN instance is associated with an import routing policy.

      To control route import more precisely, perform this step to associate the EVPN instance with an import routing policy and set attributes for eligible routes.

    5. (Optional) Run export route-policy policy-name

      The current EVPN instance is associated with an export routing policy.

      To control route export more precisely, perform this step to associate the EVPN instance with an export routing policy and set attributes for eligible routes.

    6. (Optional) Run tnl-policy policy-name

      The EVPN instance is associated with a tunnel policy.

      This configuration allows data packets between PEs to be forwarded through a TE tunnel.

    7. (Optional) Run mac limit number { simply-alert | mac-unchanged }

      The maximum number of MAC addresses allowed by an EVPN instance is configured.

      After a device learns a large number of MAC addresses, system performance may deteriorate when the device is busy processing services. This is because MAC addresses consume system resources. To improve system security and reliability, run the mac limit command to configure the maximum number of MAC addresses allowed by an EVPN instance. If the number of MAC addresses learned by an EVPN instance exceeds the maximum number, the system displays an alarm message, instructing you to check the validity of MAC addresses in the EVPN instance.

    8. (Optional) Run mac-route no-advertise

      The device is disabled from sending local MAC routes with the current VNI to the EVPN peer.

      In Layer 3 VXLAN gateway scenarios where Layer 2 traffic forwarding is not involved, perform this step to disable local MAC routes from being advertised to the EVPN peer. This configuration prevents the EVPN peer from receiving MAC routes, thereby conserving device resources.

    9. (Optional) Run local mac-only-route no-generate

      The device is disabled from generating an EVPN MAC route when the local MAC address exists in both a MAC address entry and an ARP/ND entry.

      If a MAC address entry and an ARP/ND entry on the device both contain the local MAC address, the device generates both an EVPN MAC/IP route and an EVPN MAC route by default. To optimize memory utilization, perform this step so that the device generates only the EVPN MAC/IP route. To ensure normal Layer 2 traffic forwarding, also run the mac-ip route generate-mac command on the peer device to enable the function to generate MAC address entries based on MAC/IP routes.

    10. (Optional) Run mac-ip route generate-mac

      The function to generate MAC address entries based on MAC/IP routes is enabled.

      If the peer device is configured not to advertise MAC routes (using the mac-route no-advertise command) or not to generate MAC routes (using the local mac-only-route no-generate command), the local device cannot generate MAC address entries by default. To ensure normal Layer 2 traffic forwarding, perform this step on the local device to enable the function to generate MAC entries based on MAC/IP routes.

    11. Run quit

      Exit from the EVPN instance view.

    12. Run bridge-domain bd-id

      The BD view is displayed.

    13. Run vxlan vni vni-id split-horizon-mode

      A VNI is created and associated with the BD, and split horizon is applied to the BD.

    14. Run evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ]

      A specified EVPN instance is bound to the BD. By specifying different bd-tag values, you can bind multiple BDs with different VLANs to the same EVPN instance and isolate services in the BDs.

    15. Run quit

      Return to the system view.

  4. Configure an ingress replication list.
    1. Run interface nve nve-number

      An NVE interface is created, and the NVE interface view is displayed.

    2. Run source ip-address

      An IP address is configured for the source VTEP.

    3. Run vni vni-id head-end peer-list protocol bgp

      An ingress replication list is configured.

      After the ingress of a VXLAN tunnel receives broadcast, unknown unicast, and multicast (BUM) packets, it replicates these packets and sends a copy to each VTEP in the ingress replication list. The ingress replication list is a collection of remote VTEP IP addresses to which the ingress of a VXLAN tunnel should send replicated BUM packets.

    4. Run quit

      Return to the system view.

  5. Run commit

    The configuration is committed.
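The preceding steps can be combined into a minimal sketch on a Layer 2 gateway. All values are hypothetical: AS 100, a BGP EVPN peer (or RR) at 10.2.2.2, EVPN instance evrf1, BD 10, and VNI 5010; the optional RR and routing-policy steps are omitted:

```
<HUAWEI> system-view
[~HUAWEI] bgp 100
[*HUAWEI-bgp] router-id 10.1.1.1
[*HUAWEI-bgp] peer 10.2.2.2 as-number 100
[*HUAWEI-bgp] peer 10.2.2.2 connect-interface LoopBack0
[*HUAWEI-bgp] l2vpn-family evpn
[*HUAWEI-bgp-af-evpn] peer 10.2.2.2 enable
[*HUAWEI-bgp-af-evpn] peer 10.2.2.2 advertise encap-type vxlan
[*HUAWEI-bgp-af-evpn] quit
[*HUAWEI-bgp] quit
[*HUAWEI] evpn vpn-instance evrf1 bd-mode
[*HUAWEI-evpn-instance-evrf1] route-distinguisher 100:1
[*HUAWEI-evpn-instance-evrf1] vpn-target 1:1 both
[*HUAWEI-evpn-instance-evrf1] quit
[*HUAWEI] bridge-domain 10
[*HUAWEI-bd10] vxlan vni 5010 split-horizon-mode
[*HUAWEI-bd10] evpn binding vpn-instance evrf1
[*HUAWEI-bd10] quit
[*HUAWEI] interface nve 1
[*HUAWEI-Nve1] source 10.1.1.1
[*HUAWEI-Nve1] vni 5010 head-end peer-list protocol bgp
[*HUAWEI-Nve1] quit
[*HUAWEI] commit
```

A mirror configuration on the peer gateway (with its own router ID and VTEP source address, and matching VPN targets) lets BGP EVPN build the ingress replication list and establish the VXLAN tunnel dynamically.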

Configuring a Layer 3 VXLAN Gateway

To allow users on different network segments to communicate, a Layer 3 VXLAN gateway must be deployed, and the default gateway address of the users must be the IP address of the VBDIF interface of the Layer 3 gateway.

Context

A tenant is identified by a VNI. VNIs can be mapped to BDs in 1:1 mode so that a BD can function as a VXLAN network entity to transmit VXLAN data packets. A VBDIF interface is a Layer 3 logical interface created for a BD. After an IP address is configured for a VBDIF interface of a BD, the VBDIF interface can function as the gateway for tenants in the BD for Layer 3 forwarding. VBDIF interfaces allow Layer 3 communication between VXLANs on different network segments and between VXLANs and non-VXLANs, and implement Layer 2 network access to a Layer 3 network.

VBDIF interfaces are configured on Layer 3 VXLAN gateways for inter-segment communication, and are not needed in the case of intra-segment communication.

The DHCP relay function can be configured on the VBDIF interface so that hosts can request IP addresses from the external DHCP server.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface vbdif bd-id

    A VBDIF interface is created, and the VBDIF interface view is displayed.

  3. Configure an IP address for the VBDIF interface to implement Layer 3 interworking.
    • On IPv4 overlay networks, run ip address ip-address { mask | mask-length } [ sub ].

      An IPv4 address is configured for the VBDIF interface.

    • On IPv6 overlay networks, perform the following operations:
      1. Run ipv6 enable

        IPv6 is enabled for the VBDIF interface.

      2. Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length }

        Or, ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64

        A global unicast address is configured for the VBDIF interface.

  4. (Optional) Run mac-address mac-address

    A MAC address is configured for the VBDIF interface.

  5. (Optional) Run bandwidth bandwidth

    The bandwidth is configured for the VBDIF interface.

  6. Run commit

    The configuration is committed.
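For an IPv4 overlay network, the gateway configuration reduces to a short sketch (the BD ID and IP address are hypothetical):

```
<HUAWEI> system-view
[~HUAWEI] interface vbdif 10
[*HUAWEI-Vbdif10] ip address 192.168.1.1 255.255.255.0
[*HUAWEI-Vbdif10] commit
```

Hosts in BD 10 then use 192.168.1.1 as their default gateway. For an IPv6 overlay network, the ipv6 enable and ipv6 address commands would replace the ip address command.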

(Optional) Configuring a Static MAC Address Entry

Using static MAC address entries to forward user packets helps reduce BUM traffic on the network and prevent bogus attacks.

Context

When the source NVE on a VXLAN tunnel receives BUM packets, the NVE sends these packets along paths specified in static MAC address entries if there are such entries. This helps reduce BUM traffic on the network and prevent unauthorized data access from bogus users.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mac-address static mac-address bridge-domain bd-id source-ipv6 sourceIpv6 peer-ipv6 peerIPv6 vni vni-id

    A static MAC address entry is configured.

  3. Run commit

    The configuration is committed.

(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface

MAC address learning limiting helps improve VXLAN network security.

Context

Configure the maximum number of MAC addresses that a device can learn to limit the number of access users and defend against attacks on MAC address tables. After the device learns the specified maximum number of MAC addresses, it stops learning new addresses. You can also configure the device to discard packets with unknown source MAC addresses after the limit is reached, further improving network security.

Disable MAC address learning for a BD if a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in the BD, reducing the number of MAC address entries. You can also disable MAC address learning on Layer 2 gateways after the VXLAN network topology becomes stable and MAC address learning is complete.

MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.

Procedure

  • Limit MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *

      A MAC address learning limit rule is configured.

    4. (Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold

      The threshold percentages for MAC address limit alarm generation and clearing are configured.

    5. Run commit

      The configuration is committed.

  • Disable MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-address learning disable

      MAC address learning is disabled.

    4. Run commit

      The configuration is committed.

Verifying the Configuration of VXLAN in Centralized Gateway Mode Using BGP EVPN

After configuring VXLAN in centralized gateway mode for dynamic tunnel establishment, check VXLAN tunnel, VNI, and VBDIF interface information.

Prerequisites

VXLAN in centralized gateway mode has been configured for dynamic tunnel establishment.

Procedure

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to check BGP EVPN peer information.
  • Run the display vxlan peer [ vni vni-id ] command to check ingress replication lists of a VNI or all VNIs.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check VNI information.
  • Run the display interface vbdif [ bd-id ] command to check VBDIF interface information and statistics.
  • Run the display mac-limit bridge-domain bd-id command to check MAC address limiting configurations of a BD.
  • Run the display bgp evpn all routing-table command to check EVPN route information.

Configuring IPv6 VXLAN in Centralized Gateway Mode Using BGP EVPN

IPv6 VXLAN can be deployed in centralized gateway mode so that all inter-subnet traffic is forwarded through Layer 3 gateways, thereby implementing centralized traffic management.

Usage Scenario

To allow intra- and inter-subnet communication between a tenant's VMs located in different geographical locations on an IPv6 network, properly deploy Layer 2 and Layer 3 gateways on the network and establish IPv6 VXLAN tunnels.

On the network shown in Figure 1-1105, Server2 and Server3 belong to the same network segment and access the IPv6 VXLAN through Device1 and Device2, respectively. Server1 and Server2 belong to different network segments and both access the IPv6 VXLAN through Device1.
  • To allow VM1 on Server2 and VM1 on Server3 to communicate, deploy Layer 2 gateways on Device1 and Device2 and establish an IPv6 VXLAN tunnel between Device1 and Device2. This ensures that the VMs on the same network segment can communicate.
  • To allow VM1 on Server1 and VM1 on Server3 to communicate, deploy a Layer 3 gateway on Device3 and establish one IPv6 VXLAN tunnel between Device1 and Device3 and another one between Device2 and Device3. This ensures that the VMs on different network segments can communicate.

The VMs and Layer 3 VXLAN gateway can be allocated either IPv4 or IPv6 addresses. This means that either an IPv4 or IPv6 overlay network can be used with IPv6 VXLAN. Figure 1-1105 shows an IPv4 overlay network.

Figure 1-1105 Configuring IPv6 VXLAN in centralized gateway mode

Layer 3 gateways must be deployed on the IPv6 VXLAN if VMs must communicate with VMs on other network segments or with external networks. Layer 3 gateways do not need to be deployed for VMs communicating on the same network segment.

The centralized gateway configuration differs between IPv4 and IPv6 overlay networks as follows.

Configuration task: Configure a Layer 3 gateway on an IPv6 VXLAN.
  • IPv4 overlay network: Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway.
  • IPv6 overlay network: Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.

Pre-configuration Tasks

Before configuring IPv6 VXLAN in centralized gateway mode using BGP EVPN, complete the following task:
  • Configure an IPv6 routing protocol to achieve Layer 3 connectivity on the IPv6 network.

Configuring a VXLAN Service Access Point

On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. After a Layer 2 sub-interface is associated with a bridge domain (BD), which is used as a broadcast domain on the IPv6 VXLAN, the sub-interface can transmit data packets through this BD.

Context

As described in Table 1-478, Layer 2 sub-interfaces can have different encapsulation types configured to transmit various types of data packets.
Table 1-478 Traffic encapsulation types

  • dot1q: This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
    • The VLAN ID encapsulated by a Layer 2 sub-interface cannot be the same as that permitted by the Layer 2 interface where the sub-interface resides.
    • The VLAN IDs encapsulated by a Layer 2 sub-interface and a Layer 3 sub-interface cannot be the same.

  • untag: This type of sub-interface accepts only packets that do not carry any VLAN tag. The untag traffic encapsulation type has the following restrictions:
    • The physical interface where the involved sub-interface resides must have only default configurations.
    • Only Layer 2 physical interfaces and Layer 2 Eth-Trunk interfaces can have untag Layer 2 sub-interfaces created.
    • Only one untag Layer 2 sub-interface can be created on a main interface.

  • default: This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
    • The main interface where the involved sub-interface resides cannot be added to any VLAN.
    • Only Layer 2 physical interfaces and Layer 2 Eth-Trunk interfaces can have default Layer 2 sub-interfaces created.
    • If a default Layer 2 sub-interface is created on a main interface, the main interface cannot have other types of Layer 2 sub-interfaces configured.

  • qinq: This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.

A service access point needs to be configured on a Layer 2 gateway.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bridge-domain bd-id

    A BD is created, and the BD view is displayed.

  3. (Optional) Run description description

    A BD description is configured.

  4. Run quit

    Return to the system view.

  5. Run interface interface-type interface-number.subnum mode l2

    A Layer 2 sub-interface is created, and the sub-interface view is displayed.

    Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.

  6. Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }

    A traffic encapsulation type is configured for the Layer 2 sub-interface.

  7. Run rewrite pop { single | double }

    The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.

    If the received packets each carry a single VLAN tag, specify single.

    If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.

  8. Run bridge-domain bd-id

    The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.

    If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.

  9. Run commit

    The configuration is committed.
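The preceding steps can be sketched as follows, assuming GigabitEthernet0/1/0 as the access interface, VLAN 100 as the access VLAN, and BD 10 as the broadcast domain (all hypothetical values):

```
#
system-view
 bridge-domain 10                           # create the BD used as the broadcast domain
  quit
 interface GigabitEthernet0/1/0.1 mode l2   # create a Layer 2 sub-interface
  encapsulation dot1q vid 100               # accept only packets tagged with VLAN 100
  rewrite pop single                        # remove the single VLAN tag from received packets
  bridge-domain 10                          # add the sub-interface to BD 10
  quit
 commit
```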

Configuring an IPv6 VXLAN Tunnel

Using BGP EVPN to establish an IPv6 VXLAN tunnel between VTEPs involves a series of operations. These include establishing a BGP EVPN peer relationship, configuring an EVPN instance, and configuring ingress replication.

Context

To enable BGP EVPN to establish IPv6 VXLAN tunnels in centralized gateway scenarios, complete the following tasks on the involved gateways:
  1. Configure a BGP EVPN peer relationship. After the gateways on an IPv6 VXLAN establish such a relationship, they can exchange EVPN routes. If an RR is deployed on the network, each gateway only needs to establish a BGP EVPN peer relationship with the RR.

  2. (Optional) Configure an RR. The deployment of RRs simplifies configurations because fewer BGP EVPN peer relationships need to be established. An existing device can be configured to also function as an RR, or a new device can be deployed for this specific purpose. Layer 3 gateways on an IPv6 VXLAN are generally used as RRs, and Layer 2 gateways used as RR clients.

  3. Configure an EVPN instance. EVPN instances are used to receive and advertise EVPN routes.

  4. Configure ingress replication. After ingress replication is configured on a VXLAN gateway, the gateway uses BGP EVPN to construct a list of remote VTEP peers that share the same VNI with itself. After the gateway receives BUM packets, it sends a copy of the BUM packets to each gateway in the list.

Currently, BUM packets can be forwarded only through ingress replication. This means that non-Huawei devices must have ingress replication configured to establish IPv6 VXLAN tunnels with Huawei devices. If ingress replication is not configured, the tunnels fail to be established.

Procedure

  1. Configure a BGP EVPN peer relationship.
    1. Run bgp as-number

      BGP is enabled, and the BGP view is displayed.

    2. (Optional) Run router-id ipv4-address

      A BGP router ID is set.

    3. Run peer ipv6-address as-number as-number

      An IPv6 BGP peer is specified.

    4. (Optional) Run peer ipv6-address connect-interface interface-type interface-number [ ipv6-source-address ]

      A source interface and a source IPv6 address used to set up a TCP connection with the BGP peer are specified.

      If loopback interfaces are used to establish a BGP connection, you are advised to run the peer connect-interface command on both ends of the link to ensure connectivity. If this command is run on only one end, the BGP connection may fail to be established.

    5. (Optional) Run peer ipv6-address ebgp-max-hop [ hop-count ]

      The maximum number of hops allowed for establishing an EBGP EVPN peer relationship is configured.

      The default value of hop-count is 255. In most cases, a directly connected physical link must be available between EBGP EVPN peers. If you want to establish EBGP EVPN peer relationships between indirectly connected devices, run the peer ebgp-max-hop command. This command allows the devices to establish a TCP connection across multiple hops.

      When loopback interfaces are used to establish an EBGP EVPN peer relationship, the peer ebgp-max-hop command must be run, with hop-count being at least 2. Otherwise, the peer relationship fails to be established.

    6. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    7. Run peer { group-name | ipv6-address } enable

      The device is enabled to exchange EVPN routes with a specified peer or peer group.

    8. Run peer { group-name | ipv6-address } advertise encap-type vxlan

      The device is enabled to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer or peer group.

    9. (Optional) Run peer { group-name | ipv6-address } route-policy route-policy-name { import | export }

      A route-policy is specified for routes to be received from or advertised to the BGP EVPN peer or peer group.

      The route-policy helps the device import or advertise only desired routes. It facilitates route management and reduces the routing table size and system resource consumption.

    10. (Optional) Run peer { group-name | ipv6-address } mac-limit number [ percentage ] [ alert-only | idle-forever | idle-timeout times ]

      The maximum number of MAC advertisement routes that can be imported from the peer or peer group is configured.

      If an EVPN instance imports many inapplicable MAC advertisement routes from a peer or peer group and they account for a large proportion of the total number of MAC advertisement routes, you are advised to run the peer { group-name | ipv6-address } mac-limit number [ percentage ] [ alert-only | idle-forever | idle-timeout times ] command. This command limits the maximum number of MAC advertisement routes that can be imported. If the imported MAC advertisement routes exceed the specified maximum number, the device displays an alarm, prompting you to check the validity of the MAC advertisement routes imported to the EVPN instance.

    11. (Optional) Run peer peerIpv6Addr graceful-restart static-timer restart-time

      The maximum hold-off time for re-establishing BGP peer relationships, namely, the maximum duration from the time the local device finds that the peer device restarts to the time the local device re-establishes a BGP peer relationship with the peer device, is configured.

      Graceful restart (GR) prevents traffic interruption caused by the re-establishment of BGP peer relationships. To set the maximum hold-off time, run either of the following commands:
      • To set the maximum hold-off time for re-establishing all BGP peer relationships, run the graceful-restart timer restart command in the BGP view. The maximum hold-off time that can be set using this command is 3600s.

      • To set the maximum hold-off time for re-establishing a BGP EVPN peer relationship, run the peer graceful-restart static-timer command in the BGP view. Use this command if you want to set a hold-off time longer than 3600s.

      If both the graceful-restart timer restart time and peer graceful-restart static-timer commands are run, the peer graceful-restart static-timer command configuration takes precedence.

      This step can be performed only after GR has been enabled using the graceful-restart command in the BGP view.

    12. (Optional) Run the peer peerIpv6Addr path-attribute-treat attribute-id { id [ to id2 ] } &<1-255> { discard | withdraw | treat-as-unknown } command to configure a special mode for processing BGP EVPN path attributes. Alternatively, run the peer peerIpv6Addr treat-with-error attribute-id id accept-zero-value command to configure a mode for processing incorrect BGP EVPN path attributes.

      A BGP EVPN Update message contains various path attributes. If a local device receives Update messages containing malformed path attributes, the involved BGP EVPN sessions may flap. To enhance reliability, you can perform this step to configure a processing mode for specified BGP EVPN path attributes or incorrect path attributes.

      The path-attribute-treat parameter specifies a path attribute processing mode, which can be any of the following ones:
      • Discarding specified attributes

      • Withdrawing the routes with specified attributes

      • Processing specified attributes as unknown attributes

      The treat-with-error parameter specifies a processing mode for incorrect path attributes. The mode can be as follows:

      • Accepting the path attributes with a value of 0.
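Sub-steps 1.1 through 1.8 can be sketched as follows, assuming AS 100, local router ID 1.1.1.1, and the peer loopback address 2001:DB8::2 (all hypothetical values); the optional sub-steps are omitted:

```
#
bgp 100
 router-id 1.1.1.1
 peer 2001:DB8::2 as-number 100                    # specify the IPv6 BGP peer
 peer 2001:DB8::2 connect-interface LoopBack0      # use the loopback for the TCP connection
 l2vpn-family evpn
  peer 2001:DB8::2 enable                          # exchange EVPN routes with the peer
  peer 2001:DB8::2 advertise encap-type vxlan      # advertise routes with the VXLAN attribute
 commit
```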

  2. (Optional) Configure the Layer 3 gateway as an RR on the IPv6 VXLAN. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. This simplifies configurations because fewer BGP EVPN peer relationships need to be established.
    1. Run peer { ipv6-address | group-name } reflect-client

      The device is configured as an RR, and RR clients are specified.

    2. (Optional) Run peer { group-name | ipv6-address } next-hop-invariable

      The device is enabled to keep the next hop address of a route unchanged when advertising the route to an EBGP EVPN peer.

    3. Run undo policy vpn-target

      The device is disabled from filtering received EVPN routes based on VPN targets. If you do not perform this step, the RR will fail to receive and reflect the routes sent by clients.

    4. Run quit

      Exit the BGP-EVPN address family view.

    5. Run quit

      Exit the BGP view.
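Step 2 can be sketched as follows on the Layer 3 gateway, assuming the Layer 2 gateways 2001:DB8::2 and 2001:DB8::3 as RR clients (hypothetical values):

```
#
bgp 100
 l2vpn-family evpn
  peer 2001:DB8::2 reflect-client     # specify the first RR client
  peer 2001:DB8::3 reflect-client     # specify the second RR client
  undo policy vpn-target              # stop filtering EVPN routes based on VPN targets
  quit
 quit
```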

  3. Configure an EVPN instance.
    1. Run evpn vpn-instance vpn-instance-name bd-mode

      A BD EVPN instance is created, and the EVPN instance view is displayed.

    2. Run route-distinguisher route-distinguisher

      An RD is configured for the EVPN instance.

    3. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the EVPN instance. The import and export VPN targets of the local end must be the same as the export and import VPN targets of the remote end, respectively.

    4. (Optional) Run import route-policy policy-name

      The EVPN instance is associated with an import route-policy.

      Perform this step to associate the EVPN instance with an import route-policy and set attributes for eligible routes. This enables you to control routes to be imported into the EVPN instance more precisely.

    5. (Optional) Run export route-policy policy-name

      The EVPN instance is associated with an export route-policy.

      Perform this step to associate the EVPN instance with an export route-policy and set attributes for eligible routes. This enables you to control routes to be advertised more precisely.

    6. (Optional) Run mac limit number { simply-alert | mac-unchanged }

      The maximum number of MAC addresses allowed in the EVPN instance is set.

      A device consumes more system resources as it learns more MAC addresses and may therefore fail to process services when busy. To limit the maximum number of MAC addresses allowed in an EVPN instance and thereby improve device security and reliability, run the mac limit command. After this configuration, if the number of MAC addresses exceeds the preset value, an alarm is triggered, prompting you to check the validity of existing MAC addresses.

    7. (Optional) Run mac-route no-advertise

      The device is disabled from sending local MAC routes with the current VNI to the EVPN peer.

      In Layer 3 VXLAN gateway scenarios where Layer 2 traffic forwarding is not involved, perform this step to disable local MAC routes from being advertised to the EVPN peer. This configuration prevents the EVPN peer from receiving MAC routes, thereby conserving device resources.

    8. (Optional) Run local mac-only-route no-generate

      The device is disabled from generating an EVPN MAC route when the local MAC address exists in both a MAC address entry and an ARP/ND entry.

      If a MAC address entry and an ARP/ND entry on the device both contain the local MAC address, the device generates both an EVPN MAC/IP route and an EVPN MAC route by default. To optimize memory utilization, perform this step so that the device generates only the EVPN MAC/IP route. To ensure normal Layer 2 traffic forwarding, also run the mac-ip route generate-mac command on the peer device to enable the function to generate MAC address entries based on MAC/IP routes.

    9. (Optional) Run mac-ip route generate-mac

      The function to generate MAC address entries based on MAC/IP routes is enabled.

      If the peer device is configured not to advertise MAC routes (using the mac-route no-advertise command) or not to generate MAC routes (using the local mac-only-route no-generate command), the local device cannot generate MAC address entries by default. To ensure normal Layer 2 traffic forwarding, perform this step on the local device to enable the function to generate MAC entries based on MAC/IP routes.

    10. Run quit

      Exit the EVPN instance view.

    11. Run bridge-domain bd-id

      The BD view is displayed.

    12. Run vxlan vni vni-id split-horizon-mode

      A VNI is created and associated with the BD, and split horizon is specified for packet forwarding.

    13. Run evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ]

      A specified EVPN instance is bound to the BD. By specifying different bd-tag values, you can bind multiple BDs to the same EVPN instance. In this way, VLAN services of different BDs can access the same EVPN instance while being isolated.

    14. Run quit

      Return to the system view.
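Step 3 can be sketched as follows, assuming EVPN instance evrf1, RD 100:1, VPN target 1:1, BD 10, and VNI 5010 (all hypothetical values):

```
#
evpn vpn-instance evrf1 bd-mode       # create a BD EVPN instance
 route-distinguisher 100:1
 vpn-target 1:1 both                  # same value used as import and export target here
 quit
bridge-domain 10
 vxlan vni 5010 split-horizon-mode    # associate VNI 5010 with BD 10
 evpn binding vpn-instance evrf1      # bind the EVPN instance to the BD
 quit
```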

  4. Configure ingress replication.
    1. Run interface nve nve-number

      An NVE interface is created, and the NVE interface view is displayed.

    2. Run source ipv6-address

      An IPv6 address is configured for the source VTEP.

    3. Run vni vni-id head-end peer-list protocol bgp

      Ingress replication is configured.

      With this function, the ingress of an IPv6 VXLAN tunnel replicates and sends a copy of any received BUM packets to each VTEP in the ingress replication list (a collection of remote VTEP IPv6 addresses).

    4. Run quit

      Return to the system view.
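Step 4 can be sketched as follows, assuming NVE interface 1 and the local VTEP address 2001:DB8::1 (hypothetical values); the ingress replication list itself is built automatically through BGP EVPN:

```
#
interface Nve1
 source 2001:DB8::1                       # IPv6 address of the source VTEP
 vni 5010 head-end peer-list protocol bgp # build the ingress replication list via BGP
 quit
```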

  5. (Optional) Enable the device to add its router ID (a private extended community attribute) to locally generated EVPN routes.
    1. Run evpn

      The global EVPN configuration view is displayed.

    2. Run router-id-extend private enable

      The device is enabled to add its router ID to locally generated EVPN routes.

      If both IPv4 and IPv6 VXLAN tunnels need to be established between two devices, one device's IP address is repeatedly added to the ingress replication list of the other device while the IPv4 and IPv6 VXLAN tunnels are being established. As a result, each device forwards two copies of BUM traffic to its peer, leading to duplicate traffic. To address this problem, run the router-id-extend private enable command on each device. This enables the device to add its router ID to EVPN routes. After receiving the EVPN routes, the peer device checks whether they carry the same router ID. If they do, the EVPN routes are from the same device. In this case, the peer device adds only the IP address of the device whose EVPN routes carry the IPv4 VXLAN tunnel identifier to the ingress replication list, thereby preventing duplicate traffic.

  6. Run commit

    The configuration is committed.

(Optional) Configuring a Layer 3 Gateway on an IPv6 VXLAN

To allow users on different network segments to communicate, deploy a Layer 3 gateway and specify the IP address of its VBDIF interface as the default gateway address of the users.

Context

On an IPv6 VXLAN, a BD can be mapped to a VNI (identifying a tenant) in 1:1 mode to transmit VXLAN data packets. VBDIF interfaces, which are Layer 3 logical interfaces created for a BD, can be used to implement communication between VXLANs on different network segments or between VXLANs and non-VXLANs, or they can be used for Layer 2 network access to a Layer 3 network. When configured with an IP address, the VBDIF interface of a BD functions as a tenant's gateway within the BD to transmit Layer 3 packets.

A VBDIF interface needs to be configured on the Layer 3 gateway of an IPv6 VXLAN for communication between different network segments only; it is not needed for communication on the same network segment.

The DHCP relay function can be configured on a VBDIF interface so that hosts can request IP addresses from an external DHCP server.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface vbdif bd-id

    A VBDIF interface is created, and the VBDIF interface view is displayed.

  3. Configure an IP address for the VBDIF interface to implement Layer 3 interworking.
    • For an IPv4 overlay network, run the ip address ip-address { mask | mask-length } [ sub ] command to configure an IPv4 address for the VBDIF interface.
    • For an IPv6 overlay network, perform the following operations:
      1. Run the ipv6 enable command to enable the IPv6 function for the interface.

      2. Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64 command to configure a global unicast address for the interface.

  4. (Optional) Run mac-address mac-address

    A MAC address is configured for the VBDIF interface.

  5. (Optional) Run bandwidth bandwidth

    Bandwidth is configured for the VBDIF interface.

  6. Run commit

    The configuration is committed.
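For an IPv6 overlay network, the preceding steps can be sketched as follows, assuming BD 10 and the gateway address 2001:DB8:10::1/64 (hypothetical values):

```
#
system-view
 interface Vbdif10                   # VBDIF interface for BD 10
  ipv6 enable                        # enable IPv6 on the interface
  ipv6 address 2001:DB8:10::1 64    # tenant default gateway address
 commit
```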

Follow-up Procedure

Configure a static route to the IP address of the VBDIF interface, or configure a dynamic routing protocol to advertise this IP address, so that Layer 3 connectivity can be achieved on the overlay network.
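One way to meet this follow-up requirement is a static route configured on a remote device, assuming the VBDIF network segment 2001:DB8:10::/64 and the next-hop address 2001:DB8:FFFF::2 (both hypothetical values):

```
#
system-view
 ipv6 route-static 2001:DB8:10:: 64 2001:DB8:FFFF::2   # reach the VBDIF segment via the next hop
 commit
```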

(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface

MAC address learning limiting helps improve VXLAN network security.

Context

The maximum number of MAC addresses that a device can learn can be configured to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum number of MAC addresses allowed, no more addresses can be learned. The device can also be configured to discard packets after learning the maximum allowed number of MAC addresses, improving network security.

If a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in a BD, MAC address learning for the BD can be disabled to conserve MAC address table space. After the network topology of a VXLAN becomes stable and MAC address learning is complete, MAC address learning can also be disabled.

MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.

Procedure

  • Configure MAC address learning limiting.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-limit { action { discard | forward } | maximum max [ rate interval ] }*

      A MAC address learning limit rule is configured.

    4. (Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold

      The threshold percentages for MAC address limit alarm generation and clearing are configured.

    5. Run commit

      The configuration is committed.

  • Disable MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-address learning disable

      MAC address learning is disabled.

    4. Run commit

      The configuration is committed.
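The first option above can be sketched as follows, assuming BD 10, a limit of 1000 MAC addresses, and discarding of packets once the limit is reached (hypothetical values):

```
#
system-view
 bridge-domain 10
  mac-limit maximum 1000 action discard   # learn at most 1000 MAC addresses, then discard
  quit
 commit
```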

Verifying the Configuration

After configuring IPv6 VXLAN in centralized gateway mode using BGP EVPN, verify information about the IPv6 VXLAN tunnels, VNI status, and VBDIF interfaces.

Prerequisites

IPv6 VXLAN in centralized gateway mode has been configured using BGP EVPN.

Procedure

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display evpn vpn-instance command to check EVPN instance information.
  • Run the display bgp evpn peer [ [ ipv6-address ] verbose ] command to check information about BGP EVPN peers.
  • Run the display vxlan peer [ vni vni-id ] command to check the ingress replication lists of all VNIs or a specified one.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.

Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN

Distributed VXLAN gateways can be configured to address problems that occur in centralized gateway networking. Such problems include sub-optimal forwarding paths and bottlenecks on Layer 3 gateways in terms of ARP or ND entry specifications.

Usage Scenario

In legacy networking, a centralized Layer 3 gateway is deployed on a spine node. On the network shown in Figure 1-1106, packets across different networks must be forwarded through a centralized Layer 3 gateway, resulting in the following problems:
  • Forwarding paths are not optimal. All Layer 3 traffic must be transmitted to the centralized Layer 3 gateway for forwarding.
  • The ARP or ND entry specification is a bottleneck. ARP or ND entries for tenants must be generated on the Layer 3 gateway, but only a limited number of ARP or ND entries are allowed by the Layer 3 gateway, impeding data center network expansion.
Figure 1-1106 Centralized VXLAN gateway networking

To address these problems, distributed VXLAN gateways can be configured. On the network shown in Figure 1-1107, Server1 and Server2 on different subnets both connect to Leaf1. When Server1 and Server2 communicate, traffic is forwarded only through Leaf1, not through any spine node.

Figure 1-1107 Distributed VXLAN gateway networking
Distributed VXLAN gateways have the following characteristics:
  • Flexible deployment. A leaf node can function as both Layer 2 and Layer 3 VXLAN gateways.

  • Improved network expansion capabilities. Unlike a centralized Layer 3 gateway, which has to learn the ARP or ND entries of all servers on a network, a distributed gateway needs to learn the ARP or ND entries of only the servers attached to it. This addresses the problem of the ARP or ND entry specifications being a bottleneck for packet forwarding.

Either IPv4 or IPv6 addresses can be configured for the VMs and Layer 3 VXLAN gateway. This means that a VXLAN overlay network can be an IPv4 or IPv6 network. Figure 1-1107 shows an IPv4 overlay network.

If only VMs on the same subnet need to communicate with each other, Layer 3 VXLAN gateways do not need to be deployed. If VMs on different subnets need to communicate with each other or VMs on the same subnet need to communicate with external networks, Layer 3 VXLAN gateways must be deployed.

The following table lists the differences in distributed gateway configuration between IPv4 and IPv6 overlay networks.

Configuration Task

IPv4 Overlay Network

IPv6 Overlay Network

Configure a VPN instance for route leaking with an EVPN instance.

Enable the IPv4 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv4 address family view.

Enable the IPv6 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv6 address family view.

Configure an IPv6 Layer 3 VXLAN gateway.

Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway.

Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.

Configure a gateway on an IPv6 VXLAN to advertise a specific type of route.

  • For IP prefix routes, perform the configuration in the BGP-VPN instance IPv4 address family view.

  • For IRB routes, run the arp collect host enable command.

  • For IP prefix routes, run the arp vlink-direct-route advertise command in the IPv4 address family view of the VPN instance to which the involved VBDIF interface is bound.
  • For IP prefix routes, perform the configuration in the BGP-VPN instance IPv6 address family view.

  • For IRBv6 routes, run the ipv6 nd collect host enable command.

  • For IP prefix routes, run the nd vlink-direct-route advertise command in the IPv6 address family view of the VPN instance to which the involved VBDIF interface is bound.

Pre-configuration Tasks

Before configuring VXLAN in distributed gateway mode using BGP EVPN, complete the following task:

  • Configure IP connectivity on the network.

Configuring a VXLAN Service Access Point

On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. A Layer 2 sub-interface can transmit data packets through a BD after being associated with it.

Context

As described in Table 1-479, Layer 2 sub-interfaces can have different encapsulation types configured to transmit various types of data packets.
Table 1-479 Traffic encapsulation types

Traffic Encapsulation Type

Description

dot1q

This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
  • The VLAN ID encapsulated by a Layer 2 sub-interface cannot be the same as that permitted by the Layer 2 main interface of the sub-interface.
  • The VLAN IDs encapsulated by a Layer 2 sub-interface and a Layer 3 sub-interface cannot be the same.

untag

This type of sub-interface accepts only packets that do not carry VLAN tags. When setting the encapsulation type to untag for a Layer 2 sub-interface, note the following:
  • The physical interface where the involved sub-interface resides must have only default configurations.
  • Only Layer 2 physical interfaces and Eth-Trunk interfaces can have untag Layer 2 sub-interfaces created.
  • Only one untag Layer 2 sub-interface can be created on a main interface.

default

This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
  • The main interface where the involved sub-interface resides cannot be added to any VLAN.
  • Only Layer 2 physical interfaces and Eth-Trunk interfaces can have default Layer 2 sub-interfaces created.
  • If a default Layer 2 sub-interface is created on a main interface, the main interface cannot have other types of Layer 2 sub-interfaces configured.

qinq

This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.

A service access point needs to be configured on a Layer 2 gateway.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bridge-domain bd-id

    A BD is created, and the BD view is displayed.

  3. (Optional) Run description description

    A BD description is configured.

  4. Run quit

    Return to the system view.

  5. Run interface interface-type interface-number.subnum mode l2

    A Layer 2 sub-interface is created, and the sub-interface view is displayed.

    Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.

  6. Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }

    A traffic encapsulation type is configured for the Layer 2 sub-interface.

  7. Run rewrite pop { single | double }

    The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.

    If the received packets each carry a single VLAN tag, specify single.

    If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.

  8. Run bridge-domain bd-id

    The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.

    If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.

  9. Run commit

    The configuration is committed.

Configuring a VXLAN Tunnel

To allow VXLAN tunnel establishment using EVPN, configure an EVPN instance, establish a BGP EVPN peer relationship, and configure ingress replication.

Context

VXLAN packets are transmitted through VXLAN tunnels. In distributed VXLAN gateway scenarios, perform the following steps on a VXLAN gateway to use EVPN for establishing VXLAN tunnels:
  1. Configure a BGP EVPN peer relationship. Configure VXLAN gateways to establish BGP EVPN peer relationships so that they can exchange EVPN routes. If an RR has been deployed, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR.

  2. (Optional) Configure an RR. If you configure an RR, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. The deployment of RRs reduces the number of BGP EVPN peer relationships to be established, simplifying configuration. An existing device can be configured to also function as an RR, or a new device can be deployed for this specific purpose. Spine nodes are generally used as RRs, and leaf nodes used as RR clients.

  3. Configure an EVPN instance. EVPN instances are used to receive and advertise EVPN routes.

  4. Configure ingress replication. After ingress replication is configured for a VNI, the system uses BGP EVPN to construct a list of remote VTEPs. After a VXLAN gateway receives BUM packets, it sends a copy of the BUM packets to every VXLAN gateway in the list.

BUM packet forwarding is implemented only using ingress replication. To establish a VXLAN tunnel between a Huawei device and a non-Huawei device, ensure that the non-Huawei device also has ingress replication configured. Otherwise, communication fails.

Procedure

  1. Configure a BGP EVPN peer relationship. If an RR has been deployed, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. If the spine node and gateway reside in different ASs, the gateway must establish an EBGP EVPN peer relationship with the spine node.
    1. Run bgp as-number

      BGP is enabled, and the BGP view is displayed.

    2. (Optional) Run router-id ipv4-address

      A BGP router ID is set.

    3. Run peer ipv4-address as-number as-number

      The peer device is configured as a BGP peer.

    4. (Optional) Run peer ipv4-address connect-interface interface-type interface-number [ ipv4-source-address ]

      A source interface and a source address are specified to set up a TCP connection with the BGP peer.

      When loopback interfaces are used to establish a BGP connection, running the peer connect-interface command on both ends is recommended to ensure the connectivity. If this command is run on only one end, the BGP connection may fail to be established.

    5. (Optional) Run peer ipv4-address ebgp-max-hop [ hop-count ]

      The maximum number of hops is set for an EBGP EVPN connection.

      In most cases, a directly connected physical link must be available between EBGP EVPN peers. To establish an EBGP EVPN peer relationship between indirectly connected devices, run the peer ebgp-max-hop command, which also sets the maximum number of hops allowed for the EBGP EVPN connection.

      If the IP address of a loopback interface is used to establish an EBGP EVPN peer relationship, run the peer ebgp-max-hop command with hop-count set to 2 or greater. Otherwise, the peer relationship fails to be established.

    6. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    7. Run peer { ipv4-address | group-name } enable

      The device is enabled to exchange EVPN routes with a specified peer or peer group.

    8. Run peer { ipv4-address | group-name } advertise encap-type vxlan

      The device is enabled to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer.

    9. (Optional) Run peer { group-name | ipv4-address } route-policy route-policy-name { import | export }

      A routing policy is specified for routes received from or to be advertised to a BGP EVPN peer or peer group.

      After the routing policy is applied, the routes received from or to be advertised to a specified BGP EVPN peer or peer group will be filtered, ensuring that only desired routes are imported or advertised. This configuration helps manage routes and reduce required routing entries and system resources.

    10. (Optional) Run peer { ipv4-address | group-name } next-hop-invariable

      The device is prevented from changing the next hop address of a route when advertising the route to an EBGP EVPN peer. If the spine node and gateway have established an EBGP EVPN peer relationship, run the peer next-hop-invariable command to ensure that the next hops of routes received by the gateway point to other gateways.

    11. (Optional) Run peer { group-name | ipv4-address } mac-limit number [ percentage ] [ alert-only | idle-forever | idle-timeout times ]

      The maximum number of MAC advertisement routes that can be received from each peer is configured.

      An EVPN instance may import a large number of invalid MAC advertisement routes from peers, and these routes can occupy a large proportion of the total MAC advertisement routes. If the number of received MAC advertisement routes exceeds the specified maximum, the system displays an alarm, prompting users to check the validity of the MAC advertisement routes received in the EVPN instance.

    12. (Optional) Perform the following operations to enable the function to advertise the routes carrying the large-community attribute to BGP EVPN peers:

      The large-community attribute includes a 2-byte or 4-byte AS number and two 4-byte LocalData fields, allowing the administrator to flexibly use route-policies. Before enabling the function to advertise the routes carrying the large-community attribute to BGP EVPN peers, configure the route-policy related to the large-community attribute and use the route-policy to set the large-community attribute.

      1. Run peer { ipv4-address | group-name } route-policy route-policy-name export

        The outbound route-policy of the BGP EVPN peer is configured.

      2. Run peer { ipv4-address | group-name } advertise-large-community

        The device is enabled to advertise the routes carrying the large-community attribute to BGP EVPN peers or peer groups.

        If routes carrying the large-community attribute do not need to be advertised to a specific BGP EVPN peer in the peer group, run the peer ipv4-address advertise-large-community disable command.

    13. (Optional) Run peer ipv4-address graceful-restart static-timer restart-time

      The maximum wait time is set, counted from when the local device detects that the peer device has restarted to when the BGP EVPN session is re-established.

      BGP GR prevents traffic interruptions caused by re-establishment of a BGP peer relationship. You can run either the graceful-restart timer restart time or peer graceful-restart static-timer command to set this maximum wait time.

      • To set the maximum wait time for re-establishing all BGP peer relationships, run the graceful-restart timer restart command in the BGP view. The maximum wait time can be set to 3600s at most.

      • To set the maximum wait time for re-establishing a specified BGP EVPN peer relationship, run the peer graceful-restart static-timer command in the BGP-EVPN address family view. The maximum wait time can be set to a value greater than 3600s.

      If both the graceful-restart timer restart time and peer graceful-restart static-timer commands are run, the latter configuration takes effect.

      This step can be performed only after GR has been enabled using the graceful-restart command in the BGP view.

    14. (Optional) Run the peer peerIpv4Addr path-attribute-treat attribute-id { id [ to id2 ] } &<1-255> { discard | withdraw | treat-as-unknown } command to configure a special mode for processing BGP EVPN path attributes. Alternatively, run the peer peerIpv4Addr treat-with-error attribute-id id accept-zero-value command to configure a mode for processing incorrect BGP EVPN path attributes.

      A BGP EVPN Update message contains various path attributes. If a local device receives Update messages containing malformed path attributes, the involved BGP EVPN sessions may flap. To enhance reliability, you can perform this step to configure a processing mode for specified BGP EVPN path attributes or incorrect path attributes.

      The path-attribute-treat parameter specifies a path attribute processing mode, which can be any of the following ones:
      • Discarding specified attributes

      • Withdrawing the routes with specified attributes

      • Processing specified attributes as unknown attributes

      The treat-with-error parameter specifies a processing mode for incorrect path attributes. The mode can be as follows:

      • Accepting the path attributes with a value of 0.

    15. Run quit

      Exit from the BGP-EVPN address family view.

    16. Run quit

      Exit from the BGP view.

  2. (Optional) Configure an RR. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR, reducing the number of BGP EVPN peer relationships to be established and simplifying configuration.
    1. Run bgp as-number

      The BGP view is displayed.

    2. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    3. Run peer { ipv4-address | group-name } enable

      The device is enabled to exchange EVPN routes with a specified peer or peer group.

    4. (Optional) Run peer { ipv4-address | group-name } next-hop-invariable

      The device is prevented from changing the next hop address of a route when advertising the route to an EBGP EVPN peer.

    5. Run peer { ipv4-address | group-name } reflect-client

      The device is configured as an RR and an RR client is specified.

    6. Run undo policy vpn-target

      The function to filter received EVPN routes based on VPN targets is disabled. If you do not perform this step, the RR will fail to receive and reflect the routes sent by clients.

    7. Run quit

      Exit from the BGP-EVPN address family view.

    8. Run quit

      Exit from the BGP view.

  3. Configure an EVPN instance.
    1. Run evpn vpn-instance vpn-instance-name bd-mode

      A BD EVPN instance is created, and the EVPN instance view is displayed.

    2. Run route-distinguisher route-distinguisher

      An RD is configured for the EVPN instance.

    3. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the EVPN instance. The export VPN target of the local end must be the same as the import VPN target of the remote end, and the import VPN target of the local end must be the same as the export VPN target of the remote end.

    4. (Optional) Run import route-policy policy-name

      The current EVPN instance is associated with an import routing policy.

      To control route import more precisely, perform this step to associate the EVPN instance with an import routing policy and set attributes for eligible routes.

    5. (Optional) Run export route-policy policy-name

      The current EVPN instance is associated with an export routing policy.

      To control route export more precisely, perform this step to associate the EVPN instance with an export routing policy and set attributes for eligible routes.

    6. (Optional) Run tnl-policy policy-name

      The EVPN instance is associated with a tunnel policy.

      This configuration enables PEs to use TE tunnels to transmit data packets.

    7. (Optional) Run mac limit number { simply-alert | mac-unchanged }

      The maximum number of MAC addresses allowed by an EVPN instance is configured.

      After a device learns a large number of MAC addresses, system performance may deteriorate when the device is busy processing services. This is because MAC addresses consume system resources. To improve system security and reliability, run the mac limit command to configure the maximum number of MAC addresses allowed by an EVPN instance. If the number of MAC addresses learned by an EVPN instance exceeds the maximum number, the system displays an alarm message, instructing you to check the validity of MAC addresses in the EVPN instance.

    8. (Optional) Run mac-route no-advertise

      The device is disabled from sending local MAC routes with the current VNI to the EVPN peer.

      In Layer 3 VXLAN gateway scenarios where Layer 2 traffic forwarding is not involved, perform this step to disable local MAC routes from being advertised to the EVPN peer. This configuration prevents the EVPN peer from receiving MAC routes, thereby conserving device resources.

    9. (Optional) Run local mac-only-route no-generate

      The device is disabled from generating an EVPN MAC route when the local MAC address exists in both a MAC address entry and an ARP/ND entry.

      If a MAC address entry and an ARP/ND entry on the device both contain the local MAC address, the device generates both an EVPN MAC/IP route and an EVPN MAC route by default. To optimize memory utilization, perform this step so that the device generates only the EVPN MAC/IP route. To ensure normal Layer 2 traffic forwarding, also run the mac-ip route generate-mac command on the peer device to enable the function to generate MAC address entries based on MAC/IP routes.

    10. (Optional) Run mac-ip route generate-mac

      The function to generate MAC address entries based on MAC/IP routes is enabled.

      If the peer device is configured not to advertise MAC routes (using the mac-route no-advertise command) or not to generate MAC routes (using the local mac-only-route no-generate command), the local device cannot generate MAC address entries by default. To ensure normal Layer 2 traffic forwarding, perform this step on the local device to enable the function to generate MAC entries based on MAC/IP routes.

    11. Run quit

      Exit from the EVPN instance view.

    12. Run bridge-domain bd-id

      The BD view is displayed.

      By default, no BD is created.

    13. Run vxlan vni vni-id split-horizon-mode

      A VNI is created and associated with the BD, and split horizon is applied to the BD.

    14. Run evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ]

      A specified EVPN instance is bound to the BD. By specifying different bd-tag values, you can bind multiple BDs with different VLANs to the same EVPN instance and isolate services in the BDs.

    15. Run quit

      Return to the system view.

  4. Configure an ingress replication list.
    1. Run interface nve nve-number

      An NVE interface is created, and the NVE interface view is displayed.

    2. Run source ip-address

      An IP address is configured for the source VTEP.

    3. Run vni vni-id head-end peer-list protocol bgp

      An ingress replication list is configured.

      After the ingress of a VXLAN tunnel receives BUM packets, it replicates these packets and sends a copy to each VTEP in the ingress replication list. The ingress replication list is a collection of remote VTEP IP addresses to which the ingress of a VXLAN tunnel should send replicated BUM packets.

    4. Run quit

      Exit the NVE interface view.

  5. (Optional) Configure MAC addresses for NVE interfaces.

    In distributed VXLAN gateway (EVPN BGP) scenarios, if you want to use active-active VXLAN gateways to load-balance traffic, configure the same VTEP MAC address on the two VXLAN gateways. Otherwise, the two gateways cannot forward traffic properly on the VXLAN network.

    1. Run interface nve nve-number

      The NVE interface view is displayed.

    2. Run mac-address mac-address

      A MAC address is configured for the NVE interface.

    3. Run quit

      Exit from the NVE interface view.

  6. Run commit

    The configuration is committed.
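
The preceding procedure can be condensed into the following configuration sketch for a single VXLAN gateway (the optional RR and NVE MAC address configurations are omitted). All addresses, AS numbers, names, and IDs in the sketch (10.1.1.1 as the local VTEP address, 10.2.2.2 as the RR, AS 100, EVPN instance evrf1, BD 10, and VNI 5010) are example values only and must be adapted to the actual network plan.

```
bgp 100
 router-id 10.1.1.1
 peer 10.2.2.2 as-number 100
 peer 10.2.2.2 connect-interface LoopBack0
 #
 l2vpn-family evpn
  peer 10.2.2.2 enable
  peer 10.2.2.2 advertise encap-type vxlan
#
evpn vpn-instance evrf1 bd-mode
 route-distinguisher 100:1
 vpn-target 1:1
#
bridge-domain 10
 vxlan vni 5010 split-horizon-mode
 evpn binding vpn-instance evrf1
#
interface Nve1
 source 10.1.1.1
 vni 5010 head-end peer-list protocol bgp
#
commit
```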

(Optional) Configuring a VPN Instance for Route Leaking with an EVPN Instance

To enable communication between VMs on different subnets, configure a VPN instance for route leaking with an EVPN instance. This configuration enables Layer 3 connectivity. To isolate multiple tenants, you can use different VPN instances.

Procedure

  • For an IPv4 overlay network, perform the following operations:
    1. Run system-view

      The system view is displayed.

    2. Run ip vpn-instance vpn-instance-name

      A VPN instance is created, and the VPN instance view is displayed.

    3. Run vxlan vni vni-id

      A VNI is created and associated with the VPN instance.

    4. Run ipv4-family

      The VPN instance IPv4 address family is enabled, and its view is displayed.

    5. Run route-distinguisher route-distinguisher

      An RD is configured for the VPN instance IPv4 address family.

    6. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the VPN instance IPv4 address family.

      If the current node needs to exchange L3VPN routes with other nodes in the same VPN instance, perform this step to configure VPN targets for the VPN instance.

    7. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

      The VPN target extended community attribute for EVPN routes is configured for the VPN instance IPv4 address family.

      When the local device advertises an EVPN IP prefix route to a peer, the route carries all the VPN targets in the export VPN target list configured for EVPN routes in this step. When the local device advertises an EVPN IRB route to a peer, the route carries all the VPN targets in the export VPN target list of the EVPN instance in BD mode.

      The IRB route or IP prefix route received by the local device can be added to the routing table of the local VPN instance IPv4 address family only when the VPN targets carried in the IRB route or IP prefix route overlap those in the import VPN target list configured for EVPN routes in this step.

    8. (Optional) Run import route-policy policy-name evpn

      The VPN instance IPv4 address family is associated with an import route-policy that is used to filter EVPN routes imported to the VPN instance IPv4 address family.

      To control import of EVPN routes to the VPN instance IPv4 address family more precisely, perform this step to associate the VPN instance IPv4 address family with an import route-policy and set attributes for eligible EVPN routes.

    9. (Optional) Run export route-policy policy-name evpn

      The VPN instance IPv4 address family is associated with an export route-policy that is used to filter EVPN routes advertised by the VPN instance IPv4 address family.

      To control EVPN route advertisement by the VPN instance IPv4 address family more precisely, perform this step to associate the VPN instance IPv4 address family with an export route-policy and set attributes for eligible EVPN routes.

    10. Run quit

      Exit the VPN instance IPv4 address family view.

    11. Run quit

      Exit the VPN instance view.

    12. Run commit

      The configuration is committed.

  • For an IPv6 overlay network, perform the following operations:
    1. Run system-view

      The system view is displayed.

    2. Run ip vpn-instance vpn-instance-name

      A VPN instance is created, and its view is displayed.

    3. Run vxlan vni vni-id

      A VNI is created and associated with the VPN instance.

    4. Run ipv6-family

      The VPN instance IPv6 address family is enabled and its view is displayed.

    5. Run route-distinguisher route-distinguisher

      An RD is configured for the VPN instance IPv6 address family.

    6. (Optional) Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the VPN instance IPv6 address family.

      If the current node needs to exchange L3VPN routes with other nodes in the same VPN instance, perform this step to configure VPN targets for the VPN instance.

    7. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

      The VPN target extended community attribute for EVPN routes is configured for the VPN instance IPv6 address family.

      When the local device advertises an EVPN IPv6 prefix route to a peer, the route carries all the VPN targets in the export VPN target list configured for EVPN routes in this step. When the local device advertises an EVPN IRBv6 route to a peer, the route carries all the VPN targets in the export VPN target list of the EVPN instance in BD mode.

      The IRBv6 route or IPv6 prefix route received by the local device can be added to the routing table of the local VPN instance IPv6 address family only when the VPN targets carried in the IRBv6 route or IPv6 prefix route overlap those in the import VPN target list configured for EVPN routes in this step.

    8. (Optional) Run import route-policy policy-name evpn

      The VPN instance IPv6 address family is associated with an import route-policy that is used to filter EVPN routes imported to the VPN instance IPv6 address family.

      To control import of EVPN routes to the VPN instance IPv6 address family more precisely, perform this step to associate the VPN instance IPv6 address family with an import route-policy and set attributes for eligible EVPN routes.

    9. (Optional) Run export route-policy policy-name evpn

      The VPN instance IPv6 address family is associated with an export route-policy that is used to filter EVPN routes advertised by the VPN instance IPv6 address family.

      To control EVPN route advertisement by the VPN instance IPv6 address family more precisely, perform this step to associate the VPN instance IPv6 address family with an export route-policy and set attributes for eligible EVPN routes.

    10. Run quit

      Exit the VPN instance IPv6 address family view.

    11. Run quit

      Exit the VPN instance view.

    12. Run commit

      The configuration is committed.
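
For reference, the IPv4 overlay case above can be sketched as follows. The VPN instance name vrf1, the L3VNI 5500, the RD, and the VPN targets are example values; for an IPv6 overlay network, use the ipv6-family view instead of ipv4-family.

```
ip vpn-instance vrf1
 vxlan vni 5500
 ipv4-family
  route-distinguisher 100:2
  vpn-target 2:2
  vpn-target 1:1 evpn
#
commit
```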

(Optional) Configuring a Layer 3 Gateway on the VXLAN

To enable communication between VMs on different subnets, configure Layer 3 gateways on the VXLAN, enable the distributed gateway function, and configure host route advertisement.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface vbdif bd-id

    A VBDIF interface is created, and the VBDIF interface view is displayed.

  3. Run ip binding vpn-instance vpn-instance-name

    The VBDIF interface is bound to a VPN instance.

  4. Configure an IP address for the VBDIF interface to implement Layer 3 communication.
    • For an IPv4 overlay network, run the ip address ip-address { mask | mask-length } [ sub ] command to configure an IPv4 address for the VBDIF interface.
    • For an IPv6 overlay network, perform the following operations:
      1. Run the ipv6 enable command to enable IPv6 for the VBDIF interface.

      2. Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64 command to configure a global unicast address for the VBDIF interface.

  5. (Optional) Run bandwidth bandwidth

    Bandwidth is configured for the VBDIF interface.

  6. (Optional) Run mac-address mac-address

    A MAC address is configured for the VBDIF interface.

    By default, the MAC address of a VBDIF interface is the system MAC address. On a network where distributed or active-active Layer 3 gateways need to be simulated into one, you need to run the mac-address command to configure the same MAC address for the VBDIF interfaces of these Layer 3 gateways.

    If VMs on the same subnet connect to different Layer 3 gateways on a VXLAN, the VBDIF interfaces of the Layer 3 gateways must have the same IP address and same MAC address configured. In this way, the configurations of the Layer 3 gateways do not need to be changed when the VMs' locations are changed, reducing the maintenance workload.

  7. Run vxlan anycast-gateway enable

    The distributed gateway function is enabled.

    After the distributed gateway function is enabled on a Layer 3 gateway, this gateway discards network-side ARP or NS messages and learns those only from the user side.

  8. Perform one of the following steps to configure host route advertisement.

    Table 1-480 Configuring host route advertisement

    • IPv4 overlay network:
      • To advertise IRB routes between gateways, run the arp collect host enable command in the VBDIF interface view.
      • To advertise IP prefix routes between gateways, run the arp vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv4 address family view of the VPN instance to which the VBDIF interface is bound.
    • IPv6 overlay network:
      • To advertise IRB routes between gateways, run the ipv6 nd collect host enable command in the VBDIF interface view.
      • To advertise IP prefix routes between gateways, run the nd vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv6 address family view of the VPN instance to which the VBDIF interface is bound.

  9. Run commit

    The configuration is committed.
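
A minimal sketch of the preceding steps on an IPv4 overlay network follows, with IRB routes chosen for host route advertisement. The interface number Vbdif10, VPN instance vrf1, gateway address, and anycast gateway MAC address are example values; on every distributed gateway serving the same subnet, the same IP address and MAC address would be configured.

```
interface Vbdif10
 ip binding vpn-instance vrf1
 ip address 192.168.10.1 255.255.255.0
 mac-address 0000-5e00-0110
 vxlan anycast-gateway enable
 arp collect host enable
#
commit
```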

(Optional) Configuring VXLAN Gateways to Advertise Specific Types of Routes

To enable communication between VMs on different subnets, configure VXLAN gateways to exchange IRB or IP prefix routes. This configuration enables the gateways to learn the IP routes of the related hosts or the subnets where the hosts reside.

Context

By default, VXLAN gateways can exchange MAC routes, but must be configured to exchange IRB or IP prefix routes if VMs need to communicate across subnets. If an RR is deployed on the network, IRB or IP prefix routes must be exchanged only between the VXLAN gateways and RR.

Host routes can be advertised through IRB routes (recommended), IP prefix routes, or both. In contrast, subnet routes of hosts can be advertised through IP prefix routes only.

Procedure

  • Configure IRB route advertisement.
    1. Run system-view

      The system view is displayed.

    2. Run bgp as-number

      The BGP view is displayed.

    3. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    4. Run one of the following commands based on the overlay network type to configure IRB route advertisement:

      • For an IPv4 overlay network, run the peer { ipv4-address | group-name } advertise irb command.
      • For an IPv6 overlay network, run the peer { ipv6-address | group-name } advertise irbv6 command.

    5. Run commit

      The configuration is committed.

  • Configure IP prefix route advertisement.
    1. Run system-view

      The system view is displayed.

    2. Run bgp as-number

      The BGP view is displayed.

    3. Run either of the following commands based on the overlay network type:

      • For an IPv4 overlay network, run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv4 address family view.
      • For an IPv6 overlay network, run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv6 address family view.

    4. Run either of the following commands based on the overlay network type:

      • For an IPv4 overlay network, run the import-route { direct | isis process-id | ospf process-id | rip process-id | static } [ med med | route-policy route-policy-name ] * command to import routes of other protocols to the BGP-VPN instance IPv4 address family.
      • For an IPv6 overlay network, run the import-route { direct | isis process-id | ospfv3 process-id | ripng process-id | static } [ med med | route-policy route-policy-name ] * command to import routes of other protocols to the BGP-VPN instance IPv6 address family.

      To advertise host IP routes, configure import of direct routes. To advertise the route to the subnet where hosts reside, configure a dynamic routing protocol (such as OSPF or OSPFv3) and then run either of the preceding commands based on the overlay network type to import routes of that dynamic routing protocol.

    5. Run advertise l2vpn evpn

      IP prefix route advertisement is configured.

      IP prefix routes are used to advertise host IP routes as well as the route to the subnet where the hosts reside. If many specific host routes exist, a VXLAN gateway can instead be configured to advertise a single IP prefix route carrying the routing information of the subnet where the hosts reside. To do so, import the subnet route into the target BGP-VPN instance address family so that it can be advertised, which reduces the number of routes to be saved on the involved VXLAN gateways.

      • A VXLAN gateway can advertise subnet routes only if the attached subnets are unique across the entire network.

      • After IP prefix route advertisement is configured, run the arp vlink-direct-route advertise or nd vlink-direct-route advertise command to advertise host routes. After this configuration, VM migration is restricted.

    6. Run commit

      The configuration is committed.
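
The two advertisement options above can be sketched together as follows for an IPv4 overlay network. The peer address 10.2.2.2, AS 100, and VPN instance vrf1 are example values; in practice, IRB route advertisement (recommended for host routes), IP prefix route advertisement, or both may be configured.

```
bgp 100
 l2vpn-family evpn
  peer 10.2.2.2 advertise irb
 #
 ipv4-family vpn-instance vrf1
  import-route direct
  advertise l2vpn evpn
#
commit
```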

(Optional) Configuring a Static MAC Address Entry

Using static MAC address entries to forward user packets helps reduce BUM traffic on the network and prevent bogus attacks.

Context

When the source NVE on a VXLAN tunnel receives BUM packets, the NVE sends these packets along paths specified in static MAC address entries if there are such entries. This helps reduce BUM traffic on the network and prevent unauthorized data access from bogus users.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run mac-address static mac-address bridge-domain bd-id source-ipv6 sourceIpv6 peer-ipv6 peerIPv6 vni vni-id

    A static MAC address entry is configured.

  3. Run commit

    The configuration is committed.
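
A sketch of the preceding step follows. The MAC address, BD ID, VNI, and IPv6 VTEP addresses are example values only.

```
mac-address static 00e0-fc12-3456 bridge-domain 10 source-ipv6 2001:db8::1 peer-ipv6 2001:db8::2 vni 5010
commit
```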

(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface

MAC address learning limiting helps improve VXLAN network security.

Context

Configure the maximum number of MAC addresses that a device can learn to limit the number of access users and defend against attacks on MAC address tables. After the device learns the maximum allowed number of MAC addresses, no more addresses can be learned. You can also configure the device to discard packets once this maximum is reached, improving network security.

Disable MAC address learning for a BD if a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in the BD, reducing the number of MAC address entries. You can also disable MAC address learning on Layer 2 gateways after the VXLAN network topology becomes stable and MAC address learning is complete.

MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.

Procedure

  • Limit MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *

      A MAC address learning limit rule is configured.

    4. (Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold

      The threshold percentages for MAC address limit alarm generation and clearing are configured.

    5. Run commit

      The configuration is committed.

  • Disable MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-address learning disable

      MAC address learning is disabled.

    4. Run commit

      The configuration is committed.
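
The MAC address limiting option above can be sketched as follows. The BD ID, maximum value, and alarm thresholds are example values chosen for illustration.

```
bridge-domain 10
 mac-limit maximum 1000 action discard
 mac-limit up-threshold 90 down-threshold 70
#
commit
```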

Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN

After configuring VXLAN in distributed gateway mode using BGP EVPN, verify the configuration. You should find that VXLAN tunnels have been dynamically established and are in the Up state.

Prerequisites

VXLAN in distributed gateway mode has been configured using BGP EVPN.

Procedure

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to check BGP EVPN peer information.
  • Run the display vxlan peer [ vni vni-id ] command to check ingress replication lists of a VNI or all VNIs.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check VNI information.
  • Run the display interface vbdif [ bd-id ] command to check VBDIF interface information and statistics.
  • Run the display mac-limit bridge-domain bd-id command to check MAC address limiting configurations of a BD.
  • Run the display bgp evpn all routing-table command to check EVPN route information.

Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN

Distributed IPv6 VXLAN gateways can be configured to address problems that occur in centralized gateway networking. Such problems include sub-optimal forwarding paths and bottlenecks on Layer 3 gateways in terms of ARP or ND entry specifications.

Usage Scenario

On the network shown in Figure 1-1108, Server1 and Server2 on different subnets both connect to Leaf1. When Server1 and Server2 communicate, traffic is forwarded only through Leaf1, not through any spine node.

Distributed IPv6 VXLAN gateways have the following characteristics:
  • Flexible deployment. A leaf node can function as both a Layer 2 and a Layer 3 IPv6 VXLAN gateway.

  • Improved network expansion capabilities. Unlike a centralized Layer 3 gateway, which has to learn the ARP or ND entries of all servers on a network, a distributed gateway needs to learn the ARP or ND entries of only the servers attached to it. This addresses the problem of the ARP or ND entry specifications being a bottleneck for packet forwarding.

Figure 1-1108 Distributed gateways for an IPv6 VXLAN

Either IPv4 or IPv6 addresses can be configured for the VMs and Layer 3 VXLAN gateway. This means that a VXLAN overlay network can be an IPv4 or IPv6 network. Figure 1-1108 shows an IPv4 overlay network.

If only VMs on the same subnet need to communicate with each other, Layer 3 IPv6 VXLAN gateways do not need to be deployed. If VMs on different subnets need to communicate with each other or VMs on the same subnet need to communicate with external networks, Layer 3 IPv6 VXLAN gateways must be deployed.

The following lists the differences in distributed gateway configuration between IPv4 and IPv6 overlay networks.

  • Configure a VPN instance for route leaking with an EVPN instance.
    • IPv4 overlay network: Enable the IPv4 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv4 address family view.
    • IPv6 overlay network: Enable the IPv6 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv6 address family view.

  • Configure a Layer 3 gateway on an IPv6 VXLAN.
    • IPv4 overlay network: Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway.
    • IPv6 overlay network: Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.

  • Configure IPv6 VXLAN gateways to exchange specific types of routes.
    • IPv4 overlay network:
      • For IP prefix routes, perform the configuration in the BGP-VPN instance IPv4 address family view.
      • For IRB routes, run the arp collect host enable command.
      • For IP prefix routes, run the arp vlink-direct-route advertise command in the IPv4 address family view of the VPN instance to which the involved VBDIF interface is bound.
    • IPv6 overlay network:
      • For IP prefix routes, perform the configuration in the BGP-VPN instance IPv6 address family view.
      • For IRBv6 routes, run the ipv6 nd collect host enable command.
      • For IP prefix routes, run the nd vlink-direct-route advertise command in the IPv6 address family view of the VPN instance to which the involved VBDIF interface is bound.

Pre-configuration Tasks

Before configuring IPv6 VXLAN in distributed gateway mode using BGP EVPN, complete the following task:

  • Configure IPv6 connectivity on the network.

Configuring a VXLAN Service Access Point

On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. After a Layer 2 sub-interface is associated with a BD, which is used as a broadcast domain on the IPv6 VXLAN, the sub-interface can transmit data packets through this BD.

Context

As described in Table 1-481, Layer 2 sub-interfaces can have different encapsulation types configured to transmit various types of data packets.
Table 1-481 Traffic encapsulation types

  • dot1q: This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
    • The VLAN ID encapsulated by a Layer 2 sub-interface cannot be the same as that permitted by the Layer 2 interface where the sub-interface resides.
    • The VLAN IDs encapsulated by a Layer 2 sub-interface and a Layer 3 sub-interface cannot be the same.

  • untag: This type of sub-interface accepts only packets that do not carry any VLAN tag. The untag traffic encapsulation type has the following restrictions:
    • The physical interface where the involved sub-interface resides must have only default configurations.
    • Only Layer 2 physical interfaces and Layer 2 Eth-Trunk interfaces can have untag Layer 2 sub-interfaces created.
    • Only one untag Layer 2 sub-interface can be created on a main interface.

  • default: This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
    • The main interface where the involved sub-interface resides cannot be added to any VLAN.
    • Only Layer 2 physical interfaces and Layer 2 Eth-Trunk interfaces can have default Layer 2 sub-interfaces created.
    • If a default Layer 2 sub-interface is created on a main interface, the main interface cannot have other types of Layer 2 sub-interfaces configured.

  • qinq: This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the outermost two VLAN tags.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run bridge-domain bd-id

    A BD is created, and the BD view is displayed.

  3. (Optional) Run description description

    A BD description is configured.

  4. Run quit

    Return to the system view.

  5. Run interface interface-type interface-number.subnum mode l2

    A Layer 2 sub-interface is created, and the sub-interface view is displayed.

    Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.

  6. Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }

    A traffic encapsulation type is configured for the Layer 2 sub-interface.

  7. Run rewrite pop { single | double }

    The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.

    If the received packets each carry a single VLAN tag, specify single.

    If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.

  8. Run bridge-domain bd-id

    The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.

    If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.

  9. Run commit

    The configuration is committed.
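The steps above can be consolidated into a minimal configuration sketch. The interface name (GigabitEthernet0/1/0), BD ID (10), and VLAN ID (10) are hypothetical placeholders; adapt them to your network.

```
# Create a BD and attach a dot1q Layer 2 sub-interface to it.
system-view
 bridge-domain 10
  description BD for tenant A              # optional BD description
  quit
 interface GigabitEthernet0/1/0.1 mode l2  # create a Layer 2 sub-interface
  encapsulation dot1q vid 10               # accept only packets tagged with VLAN 10
  rewrite pop single                       # strip the single VLAN tag on ingress
  bridge-domain 10                         # add the sub-interface to BD 10
  commit
```

For untag or default access, replace the encapsulation command accordingly and observe the restrictions in Table 1-481.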

Configuring an IPv6 VXLAN Tunnel

Using BGP EVPN to establish an IPv6 VXLAN tunnel between VTEPs involves a series of operations. These include establishing a BGP EVPN peer relationship, configuring an EVPN instance, and configuring ingress replication.

Context

To enable BGP EVPN to establish IPv6 VXLAN tunnels in distributed gateway scenarios, complete the following tasks on the involved gateways:
  1. Configure a BGP EVPN peer relationship. After the gateways on an IPv6 VXLAN establish such a relationship, they can exchange EVPN routes. If an RR is deployed on the network, each gateway only needs to establish a BGP EVPN peer relationship with the RR.

  2. (Optional) Configure an RR. The deployment of RRs simplifies configurations because fewer BGP EVPN peer relationships need to be established. An existing device can be configured to also function as an RR, or a new device can be deployed for this specific purpose. Spine nodes on an IPv6 VXLAN are generally used as RRs, and leaf nodes used as RR clients.

  3. Configure an EVPN instance. EVPN instances can be used to manage EVPN routes received from and advertised to BGP EVPN peers.

  4. Configure ingress replication. After ingress replication is configured on an IPv6 VXLAN gateway, the gateway uses BGP EVPN to construct a list of remote VTEP peers that share the same VNI with itself. After the gateway receives BUM packets, it sends a copy of the BUM packets to each gateway in the list.

Currently, BUM packets can be forwarded only through ingress replication. This means that non-Huawei devices must have ingress replication configured to establish IPv6 VXLAN tunnels with Huawei devices. If ingress replication is not configured, the tunnels fail to be established.

Procedure

  1. Configure a BGP EVPN peer relationship.
    1. Run bgp as-number

      BGP is enabled, and the BGP view is displayed.

    2. (Optional) Run router-id router-id

      A BGP router ID is set.

    3. Run peer ipv6-address as-number as-number

      An IPv6 BGP peer is specified.

    4. (Optional) Run peer ipv6-address connect-interface interface-type interface-number [ ipv6-source-address ]

      A source interface and a source IPv6 address used to set up a TCP connection with the BGP peer are specified.

      If loopback interfaces are used to establish a BGP connection, you are advised to run the peer connect-interface command on both ends of the link to ensure connectivity. If this command is run on only one end, the BGP connection may fail to be established.

    5. (Optional) Run peer ipv6-address ebgp-max-hop [ hop-count ]

      The maximum number of hops allowed for establishing an EBGP EVPN peer relationship is configured.

      The default value of hop-count is 255. In most cases, a directly connected physical link must be available between EBGP EVPN peers. If you want to establish EBGP EVPN peer relationships between indirectly connected devices, run the peer ebgp-max-hop command. This command allows the devices to establish a TCP connection across multiple hops.

      When loopback interfaces are used to establish an EBGP EVPN peer relationship, the peer ebgp-max-hop command must be run, with hop-count being at least 2. Otherwise, the peer relationship fails to be established.

    6. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    7. Run peer { group-name | ipv6-address } enable

      The device is enabled to exchange EVPN routes with a specified peer or peer group.

    8. Run peer { group-name | ipv6-address } advertise encap-type vxlan

      The device is enabled to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer or peer group.

    9. (Optional) Run peer { group-name | ipv6-address } route-policy route-policy-name { import | export }

      A route-policy is specified for routes to be received from or advertised to the BGP EVPN peer or peer group.

      The route-policy helps the device import or advertise only desired routes. It facilitates route management and reduces the routing table size and system resource consumption.

    10. (Optional) Run peer { group-name | ipv6-address } mac-limit number [ percentage ] [ alert-only | idle-forever | idle-timeout times ]

      The maximum number of MAC advertisement routes that can be imported from the peer or peer group is configured.

      If an EVPN instance imports many inapplicable MAC advertisement routes from a peer or peer group and they account for a large proportion of the total number of MAC advertisement routes, you are advised to run the peer { group-name | ipv6-address } mac-limit number [ percentage ] [ alert-only | idle-forever | idle-timeout times ] command. This command limits the maximum number of MAC advertisement routes that can be imported. If the imported MAC advertisement routes exceed the specified maximum number, the device displays an alarm, prompting you to check the validity of the MAC advertisement routes imported to the EVPN instance.

    11. (Optional) Run the peer peerIpv6Addr path-attribute-treat attribute-id { id [ to id2 ] } &<1-255> { discard | withdraw | treat-as-unknown } command to configure a special mode for processing BGP EVPN path attributes. Alternatively, run the peer peerIpv6Addr treat-with-error attribute-id id accept-zero-value command to configure a mode for processing incorrect BGP EVPN path attributes.

      A BGP EVPN Update message contains various path attributes. If a local device receives Update messages containing malformed path attributes, the involved BGP EVPN sessions may flap. To enhance reliability, you can perform this step to configure a processing mode for specified BGP EVPN path attributes or incorrect path attributes.

      The path-attribute-treat parameter specifies a path attribute processing mode, which can be any of the following ones:
      • Discarding specified attributes

      • Withdrawing the routes with specified attributes

      • Processing specified attributes as unknown attributes

      The treat-with-error parameter specifies a processing mode for incorrect path attributes. The mode can be as follows:

      • Accepting the path attributes with a value of 0.

  2. (Optional) Configure the spine node as an RR. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. This simplifies configurations because fewer BGP EVPN peer relationships need to be established.
    1. Run peer { ipv6-address | group-name } reflect-client

      The device is configured as an RR, and RR clients are specified.

    2. (Optional) Run peer { group-name | ipv6-address } next-hop-invariable

      The device is enabled to keep the next hop address of a route unchanged when advertising the route to an EBGP EVPN peer.

    3. Run undo policy vpn-target

      The device is disabled from filtering received EVPN routes based on VPN targets. If you do not perform this step, the RR will fail to receive and reflect the routes sent by clients.

    4. Run quit

      Exit the BGP-EVPN address family view.

    5. Run quit

      Exit the BGP view.

  3. Configure an EVPN instance.
    1. Run evpn vpn-instance vpn-instance-name bd-mode

      A BD EVPN instance is created, and the EVPN instance view is displayed.

    2. Run route-distinguisher route-distinguisher

      An RD is configured for the EVPN instance.

    3. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the EVPN instance.

      The import and export VPN targets of the local end must be the same as the export and import VPN targets of the remote end, respectively.

    4. (Optional) Run import route-policy policy-name

      The EVPN instance is associated with an import route-policy.

      Perform this step to associate the EVPN instance with an import route-policy and set attributes for eligible routes. This enables you to control routes to be imported into the EVPN instance more precisely.

    5. (Optional) Run export route-policy policy-name

      The EVPN instance is associated with an export route-policy.

      Perform this step to associate the EVPN instance with an export route-policy and set attributes for eligible routes. This enables you to control routes to be advertised more precisely.

    6. (Optional) Run mac limit number { simply-alert | mac-unchanged }

      The maximum number of MAC addresses allowed in the EVPN instance is set.

      A device consumes more system resources as it learns more MAC addresses, meaning that the device may fail to operate when busy processing services. To limit the maximum number of MAC addresses allowed in an EVPN instance and thereby improve device security and reliability, run the mac limit command. After this configuration, if the number of MAC addresses exceeds the preset value, an alarm is triggered to prompt you to check the validity of existing MAC addresses.

    7. (Optional) Run mac-route no-advertise

      The device is disabled from sending local MAC routes with the current VNI to the EVPN peer.

      In Layer 3 VXLAN gateway scenarios where Layer 2 traffic forwarding is not involved, perform this step to disable local MAC routes from being advertised to the EVPN peer. This configuration prevents the EVPN peer from receiving MAC routes, thereby conserving device resources.

    8. (Optional) Run local mac-only-route no-generate

      The device is disabled from generating an EVPN MAC route when the local MAC address exists in both a MAC address entry and an ARP/ND entry.

      If the local MAC address exists in both a MAC address entry and an ARP/ND entry on the device, the device generates both an EVPN MAC/IP route and an EVPN MAC route by default. To optimize memory utilization, perform this step so that the device generates only the EVPN MAC/IP route. To ensure normal Layer 2 traffic forwarding, also run the mac-ip route generate-mac command on the peer device to enable the function to generate MAC address entries based on MAC/IP routes.

    9. (Optional) Run mac-ip route generate-mac

      The function to generate MAC address entries based on MAC/IP routes is enabled.

      If the peer device is configured not to advertise MAC routes (using the mac-route no-advertise command) or not to generate MAC routes (using the local mac-only-route no-generate command), the local device cannot generate MAC address entries by default. To ensure normal Layer 2 traffic forwarding, perform this step on the local device to enable the function to generate MAC entries based on MAC/IP routes.

    10. Run quit

      Exit the EVPN instance view.

    11. Run bridge-domain bd-id

      The BD view is displayed.

    12. Run vxlan vni vni-id split-horizon-mode

      A VNI is created and associated with the BD, and split horizon is specified for packet forwarding.

    13. Run evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ]

      The specified EVPN instance is bound to the BD.

      By specifying different bd-tag values, you can bind multiple BDs to the same EVPN instance. In this way, VLAN services of different BDs can access the same EVPN instance while being isolated.

    14. Run quit

      Return to the system view.

  4. Configure ingress replication.
    1. Run interface nve nve-number

      An NVE interface is created, and the NVE interface view is displayed.

    2. Run source ipv6-address

      An IPv6 address is configured for the source VTEP.

    3. Run vni vni-id head-end peer-list protocol bgp

      Ingress replication is configured.

      With this function, the ingress of an IPv6 VXLAN tunnel replicates and sends a copy of any received BUM packets to each VTEP in the ingress replication list (a collection of remote VTEP IPv6 addresses).

    4. Run quit

      Return to the system view.

  5. (Optional) Configure a MAC address for the NVE interface.

    To use active-active VXLAN gateways in distributed VXLAN gateway (EVPN BGP) scenarios, configure the same VTEP MAC address on the two gateways.

    1. Run interface nve nve-number

      The desired NVE interface view is displayed.

    2. Run mac-address mac-address

      A MAC address is configured for the NVE interface.

    3. Run quit

      Exit the NVE interface view.

  6. (Optional) Enable the device to add its router ID (a private extended community attribute) to locally generated EVPN routes.
    1. Run evpn

      The global EVPN configuration view is displayed.

    2. Run router-id-extend private enable

      The device is enabled to add its router ID to locally generated EVPN routes.

      If both IPv4 and IPv6 VXLAN tunnels need to be established between two devices, one device's IP address is repeatedly added to the ingress replication list of the other device while the IPv4 and IPv6 VXLAN tunnels are being established. As a result, each device forwards two copies of BUM traffic to its peer, leading to duplicate traffic. To address this problem, run the router-id-extend private enable command on each device. This enables the device to add its router ID to EVPN routes. After receiving the EVPN routes, the peer device checks whether they carry the same router ID. If they do, the EVPN routes are from the same device. In this case, the peer device adds only the IP address of the device whose EVPN routes carry the IPv4 VXLAN tunnel identifier to the ingress replication list, thereby preventing duplicate traffic.

  7. Run commit

    The configuration is committed.
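The tasks above can be consolidated into a minimal sketch for one distributed gateway. All values are hypothetical placeholders: AS 100, BGP EVPN peer 2001:DB8::2 (for example, an RR on a spine node), source VTEP address 2001:DB8::1, EVPN instance evrf1, BD 10, and VNI 5010.

```
# 1. BGP EVPN peer relationship
bgp 100
 peer 2001:DB8::2 as-number 100
 peer 2001:DB8::2 connect-interface LoopBack0
 l2vpn-family evpn
  peer 2001:DB8::2 enable                      # exchange EVPN routes with the peer
  peer 2001:DB8::2 advertise encap-type vxlan  # advertise VXLAN-encapsulated routes
  quit
 quit
# 2. EVPN instance bound to the BD
evpn vpn-instance evrf1 bd-mode
 route-distinguisher 100:1
 vpn-target 1:1 both
 quit
bridge-domain 10
 vxlan vni 5010 split-horizon-mode             # associate the VNI with the BD
 evpn binding vpn-instance evrf1
 quit
# 3. Ingress replication on the NVE interface
interface Nve1
 source 2001:DB8::1                            # IPv6 source VTEP address
 vni 5010 head-end peer-list protocol bgp
 quit
commit
```

On the RR, additionally run peer { ipv6-address | group-name } reflect-client and undo policy vpn-target in the BGP-EVPN address family view.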

(Optional) Configuring a VPN Instance for Route Leaking with an EVPN Instance

To enable communication between VMs on different subnets, configure a VPN instance for route leaking with an EVPN instance. This configuration enables Layer 3 connectivity. To isolate multiple tenants, you can use different VPN instances.

Procedure

  • For an IPv4 overlay network, perform the following operations:
    1. Run system-view

      The system view is displayed.

    2. Run ip vpn-instance vpn-instance-name

      A VPN instance is created, and the VPN instance view is displayed.

    3. Run vxlan vni vni-id

      A VNI is created and associated with the VPN instance.

    4. Run ipv4-family

      The VPN instance IPv4 address family is enabled, and its view is displayed.

    5. Run route-distinguisher route-distinguisher

      An RD is configured for the VPN instance IPv4 address family.

    6. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the VPN instance IPv4 address family.

      If the current node needs to exchange L3VPN routes with other nodes in the same VPN instance, perform this step to configure VPN targets for the VPN instance.

    7. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

      The VPN target extended community attribute for EVPN routes is configured for the VPN instance IPv4 address family.

      When the local device advertises an EVPN IP prefix route to a peer, the route carries all the VPN targets in the export VPN target list configured for EVPN routes in this step. When the local device advertises an EVPN IRB route to a peer, the route carries all the VPN targets in the export VPN target list of the EVPN instance in BD mode.

      The IRB route or IP prefix route received by the local device can be added to the routing table of the local VPN instance IPv4 address family only when the VPN targets carried in the IRB route or IP prefix route overlap those in the import VPN target list configured for EVPN routes in this step.

    8. (Optional) Run import route-policy policy-name evpn

      The VPN instance IPv4 address family is associated with an import route-policy that is used to filter EVPN routes imported to the VPN instance IPv4 address family.

      To control EVPN route import to the VPN instance IPv4 address family more precisely, perform this step to associate the VPN instance IPv4 address family with an import route-policy and set attributes for eligible routes.

    9. (Optional) Run the export route-policy policy-name evpn command to apply an export route-policy to the VPN instance IPv4 address family, so that this address family can filter EVPN routes to be advertised. By setting attributes for eligible EVPN routes in the route-policy, you can more precisely control the EVPN routes to be advertised.

    10. Run quit

      Exit the VPN instance IPv4 address family view.

    11. Run quit

      Exit the VPN instance view.

    12. Run commit

      The configuration is committed.

  • For an IPv6 overlay network, perform the following operations:
    1. Run system-view

      The system view is displayed.

    2. Run ip vpn-instance vpn-instance-name

      A VPN instance is created, and the VPN instance view is displayed.

    3. Run vxlan vni vni-id

      A VNI is created and associated with the VPN instance.

    4. Run ipv6-family

      The VPN instance IPv6 address family is enabled, and its view is displayed.

    5. Run route-distinguisher route-distinguisher

      An RD is configured for the VPN instance IPv6 address family.

    6. (Optional) Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

      VPN targets are configured for the VPN instance IPv6 address family.

      If the current node needs to exchange L3VPN routes with other nodes in the same VPN instance, perform this step to configure VPN targets for the VPN instance.

    7. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

      The VPN target extended community attribute for EVPN routes is configured for the VPN instance IPv6 address family.

      When the local device advertises an EVPN IPv6 prefix route to a peer, the route carries all the VPN targets in the export VPN target list configured for EVPN routes in this step. When the local device advertises an EVPN IRBv6 route to a peer, the route carries all the VPN targets in the export VPN target list of the EVPN instance in BD mode.

      The IRBv6 route or IPv6 prefix route received by the local device can be added to the routing table of the local VPN instance IPv6 address family only when the VPN targets carried in the IRBv6 route or IPv6 prefix route overlap those in the import VPN target list configured for EVPN routes in this step.

    8. (Optional) Run import route-policy policy-name evpn

      The VPN instance IPv6 address family is associated with an import route-policy that is used to filter EVPN routes imported to the VPN instance IPv6 address family.

      To control import of EVPN routes to the VPN instance IPv6 address family more precisely, perform this step to associate the VPN instance IPv6 address family with an import route-policy and set attributes for eligible EVPN routes.

    9. (Optional) Run the export route-policy policy-name evpn command to apply an export route-policy to the VPN instance IPv6 address family, so that this address family can filter EVPN routes to be advertised. By setting attributes for eligible EVPN routes in the route-policy, you can more precisely control the EVPN routes to be advertised.

    10. Run quit

      Exit the VPN instance IPv6 address family view.

    11. Run quit

      Exit the VPN instance view.

    12. Run commit

      The configuration is committed.
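For an IPv6 overlay network, the steps above can be sketched as follows. The instance name (vpn1), L3 VNI (6000), RD (200:1), and VPN target (11:11) are hypothetical placeholders.

```
# VPN instance for route leaking with an EVPN instance (IPv6 overlay)
ip vpn-instance vpn1
 vxlan vni 6000                       # L3 VNI associated with the VPN instance
 ipv6-family
  route-distinguisher 200:1
  vpn-target 11:11 both               # optional, for L3VPN route exchange
  vpn-target 11:11 both evpn          # VPN targets carried in/matched against EVPN routes
  quit
 quit
commit
```

For an IPv4 overlay network, the sketch is the same except that the configuration is performed in the ipv4-family view.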

(Optional) Configuring a Layer 3 Gateway on the IPv6 VXLAN

To enable communication between VMs on different subnets, configure Layer 3 gateways on the IPv6 VXLAN, enable the distributed gateway function, and configure host route advertisement.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface vbdif bd-id

    A VBDIF interface is created, and the VBDIF interface view is displayed.

  3. Run ip binding vpn-instance vpn-instance-name

    The VBDIF interface is bound to a VPN instance.

  4. Configure an IP address for the VBDIF interface to implement Layer 3 interworking.
    • For an IPv4 overlay network, run the ip address ip-address { mask | mask-length } [ sub ] command to configure an IPv4 address for the VBDIF interface.
    • For an IPv6 overlay network, perform the following operations:
      1. Run the ipv6 enable command to enable the IPv6 function for the interface.

      2. Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64 command to configure a global unicast address for the interface.

  5. (Optional) Run mac-address mac-address

    A MAC address is configured for the VBDIF interface.

    By default, the MAC address of a VBDIF interface is the system MAC address. On a network where distributed or active-active gateways need to be simulated into one, you need to run the mac-address command to configure the same MAC address for the VBDIF interfaces of these Layer 3 gateways.

    If VMs on the same subnet connect to different Layer 3 gateways on an IPv6 VXLAN, the VBDIF interfaces of the Layer 3 gateways must have the same IP address and MAC address configured. In this way, the configurations of the Layer 3 gateways do not need to be changed when the VMs' locations are changed. This reduces the maintenance workload.

  6. (Optional) Run bandwidth bandwidth

    Bandwidth is configured for the VBDIF interface.

  7. Run vxlan anycast-gateway enable

    The distributed gateway function is enabled.

    After the distributed gateway function is enabled on a Layer 3 gateway, the gateway discards ARP and NS messages received from the network side and learns ARP or ND entries only from the user side.

  8. Perform one of the following steps to configure host route advertisement.

    Table 1-482 Configuring host route advertisement

    • IPv4 overlay network:
      • To advertise IRB routes between gateways, run the arp collect host enable command in the VBDIF interface view.
      • To advertise IP prefix routes between gateways, run the arp vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv4 address family view of the VPN instance to which the VBDIF interface is bound.
    • IPv6 overlay network:
      • To advertise IRB routes between gateways, run the ipv6 nd collect host enable command in the VBDIF interface view.
      • To advertise IP prefix routes between gateways, run the nd vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv6 address family view of the VPN instance to which the VBDIF interface is bound.

  9. Run commit

    The configuration is committed.
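For an IPv6 overlay network, the steps above can be sketched as follows. The BD ID (10), VPN instance name (vpn1), gateway address (2001:DB8:10::1/64), and MAC address are hypothetical placeholders; the same address and MAC address would be configured on every distributed gateway serving this subnet.

```
# Anycast Layer 3 gateway on VBDIF 10 (IPv6 overlay)
interface Vbdif10
 ip binding vpn-instance vpn1
 ipv6 enable
 ipv6 address 2001:DB8:10::1/64
 mac-address 00e0-fc12-3456          # same MAC on all gateways of this subnet
 vxlan anycast-gateway enable        # enable the distributed gateway function
 ipv6 nd collect host enable         # advertise host routes as IRBv6 routes
 quit
commit
```

For an IPv4 overlay network, configure an IPv4 address instead and run arp collect host enable.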

(Optional) Configuring IPv6 VXLAN Gateways to Advertise Specific Types of Routes

To enable communication between VMs on different subnets, configure IPv6 VXLAN gateways to exchange IRB or IP prefix routes. This configuration enables the gateways to learn the IP routes of the related hosts or the subnets where the hosts reside.

Context

By default, IPv6 VXLAN gateways can exchange MAC routes. However, they must be configured to exchange IRB or IP prefix routes if VMs need to communicate across subnets. If an RR is deployed on the network, IRB or IP prefix routes need to be exchanged only between the VXLAN gateways and the RR.

Host routes can be advertised through IRB routes (recommended), IP prefix routes, or both. In contrast, subnet routes of hosts can be advertised only through IP prefix routes.

Procedure

  • Configure IRB route advertisement.
    1. Run system-view

      The system view is displayed.

    2. Run bgp as-number

      The BGP view is displayed.

    3. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    4. Run either of the following commands based on the overlay network type to configure IRB route advertisement:

      • For an IPv4 overlay network, run the peer { ipv6-address | group-name } advertise irb command.
      • For an IPv6 overlay network, run the peer { ipv6-address | group-name } advertise irbv6 command.

    5. Run commit

      The configuration is committed.

  • Configure IP prefix route advertisement.
    1. Run system-view

      The system view is displayed.

    2. Run bgp as-number

      The BGP view is displayed.

    3. Run either of the following commands based on the overlay network type:

      • For an IPv4 overlay network, run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv4 address family view.
      • For an IPv6 overlay network, run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv6 address family view.

    4. Run either of the following commands based on the overlay network type:

      • For an IPv4 overlay network, run the import-route { direct | isis process-id | ospf process-id | rip process-id | static } [ med med | route-policy route-policy-name ] * command to import routes of other protocols to the BGP-VPN instance IPv4 address family.
      • For an IPv6 overlay network, run the import-route { direct | isis process-id | ospfv3 process-id | ripng process-id | static } [ med med | route-policy route-policy-name ] * command to import routes of other protocols to the BGP-VPN instance IPv6 address family.

      To advertise host IP routes, configure import of direct routes. To advertise the route of a subnet where hosts reside, configure a dynamic routing protocol (such as OSPF or OSPFv3) and then run one of the preceding commands based on the overlay network type to import the route into the dynamic routing protocol.

    5. Run advertise l2vpn evpn

      IP prefix route advertisement is configured.

      IP prefix routes are used to advertise host IP routes as well as the route of the subnet where the hosts reside. If many specific host routes exist, a VXLAN gateway can be configured to advertise an IP prefix route, which carries the routing information of the subnet where the hosts reside. After route advertisement, import the route to the target BGP-VPN instance address family. This reduces the number of routes to be saved on the involved VXLAN gateway.

      • A VXLAN gateway can advertise subnet routes only if the subnets attached to the gateway are unique on the entire network.

      • After IP prefix route advertisement is configured, run the arp vlink-direct-route advertise or nd vlink-direct-route advertise command to advertise host routes. After this configuration, VM migration is restricted.

    6. Run commit

      The configuration is committed.
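The steps above can be consolidated into the following sketch for an IPv4 overlay network. The AS number (100), VPN instance name (vpn1), and the choice of importing direct routes are illustrative assumptions:

```
system-view
bgp 100
 ipv4-family vpn-instance vpn1   # enter the BGP-VPN instance IPv4 address family view
  import-route direct            # import direct routes to advertise host IP routes
  advertise l2vpn evpn           # advertise IP prefix routes through BGP EVPN
  commit
```

For an IPv6 overlay network, the ipv6-family vpn-instance view and the corresponding import-route variants would be used instead.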

(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface

MAC address learning limiting helps improve VXLAN network security.

Context

The maximum number of MAC addresses that a device can learn can be configured to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum number of MAC addresses allowed, no more addresses can be learned. The device can also be configured to discard packets after learning the maximum allowed number of MAC addresses, improving network security.

If a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in a BD, MAC address learning for the BD can be disabled to conserve MAC address table space. After the network topology of a VXLAN becomes stable and MAC address learning is complete, MAC address learning can also be disabled.

MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.

Procedure

  • Configure MAC address learning limiting.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *

      A MAC address learning limit rule is configured.

    4. (Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold

      The threshold percentages for MAC address limit alarm generation and clearing are configured.

    5. Run commit

      The configuration is committed.
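A minimal sketch of the MAC address learning limit configuration follows. The BD ID (20), maximum of 1000 MAC addresses, and alarm thresholds are illustrative assumptions:

```
system-view
bridge-domain 20
 mac-limit maximum 1000 action discard         # discard packets once 1000 MAC addresses are learned
 mac-limit up-threshold 90 down-threshold 70   # raise an alarm at 90% of the limit, clear it at 70%
 commit
```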

  • Disable MAC address learning.

    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      The BD view is displayed.

    3. Run mac-address learning disable

      MAC address learning is disabled.

    4. Run commit

      The configuration is committed.
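Similarly, disabling MAC address learning in a BD requires only one command in the BD view (BD ID 20 is an illustrative assumption):

```
system-view
bridge-domain 20
 mac-address learning disable   # stop learning MAC addresses in this BD
 commit
```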

Verifying the Configuration

After configuring IPv6 VXLAN in distributed gateway mode using BGP EVPN, verify information about the IPv6 VXLAN tunnels, VNI status, and VBDIF interfaces.

Prerequisites

IPv6 VXLAN in distributed gateway mode has been configured using BGP EVPN.

Procedure

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display evpn vpn-instance command to check EVPN instance information.
  • Run the display bgp evpn peer [ [ ipv6-address ] verbose ] command to check information about BGP EVPN peers.
  • Run the display vxlan peer [ vni vni-id ] command to check the ingress replication lists of all VNIs or a specified one.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.

Configuring Three-Segment VXLAN to Implement DCI

Three-segment VXLAN can be configured to enable communication between VMs in different DCs.

Usage Scenario

To meet the requirements of geographical redundancy, inter-regional operations, and user access, an increasing number of enterprises are deploying data centers (DCs) across multiple regions. Data Center Interconnect (DCI) is a solution that enables intercommunication between the VMs of multiple DCs. Using technologies such as VXLAN and BGP EVPN, DCI securely and reliably transmits DC packets over carrier networks. Three-segment VXLAN can be configured to enable communications between VMs in different DCs.

Pre-configuration Tasks

Before configuring three-segment VXLAN to implement DCI, complete the following tasks:

  • Configure an IGP in DCs.

Configuring Three-Segment VXLAN to Implement Layer 3 Interworking

Three-segment VXLAN can be configured to enable communication between VMs on different subnets in DCs that belong to different ASs.

Context

As shown in Figure 1-1109, BGP EVPN must be configured to create VXLAN tunnels between distributed gateways in each DC and to create VXLAN tunnels between leaf nodes so that the inter-subnet VMs in DC A and DC B can communicate with each other.

When DC A and DC B belong to the same BGP AS, Leaf2 or Leaf3 does not forward EVPN routes received from an IBGP EVPN peer to other IBGP EVPN peers. Therefore, it is necessary to configure Leaf2 and Leaf3 as route reflectors (RRs).

Figure 1-1109 Configuring the three-segment VXLAN tunnels

Procedure

  1. Configure BGP EVPN within DC A and DC B to establish VXLAN tunnels. For details, see Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN.
  2. Configure BGP EVPN on Leaf2 and Leaf3 to establish a VXLAN tunnel between them. For details, see Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN.
  3. (Optional) Configure Leaf2 and Leaf3 as RRs. For details, see Configuring a BGP Route Reflector.
  4. Configure Leaf2 and Leaf3 to advertise routes that are re-originated by the EVPN address family to BGP EVPN peers.
    1. Run the bgp as-number command to enter the BGP view.
    2. Run the l2vpn-family evpn command to enter the BGP-EVPN address family view.
    3. Run the peer { ipv4-address | group-name } import reoriginate command to enable the function to re-originate routes received from BGP EVPN peers.
    4. Run the peer { ipv4-address | group-name } advertise route-reoriginated evpn { mac-ip | ip | mac-ipv6 | ipv6 } command to enable the function to advertise re-originated EVPN routes to BGP EVPN peers.

      After route re-origination is enabled, Leaf2 or Leaf3 changes the next hop of a received EVPN route to itself, replaces the router MAC address in the gateway MAC address attribute with its own router MAC address, and replaces the Layer 3 VNI with the VPN instance Layer 3 VNI.

    5. Run the quit command to return to the BGP view.
  5. (Optional) Configure local EVPN route leaking on Leaf2 and Leaf3. If different VPN instances are used for different services within a data center, and an external VPN instance is used for communication with other data centers to hide the internal VPN instance allocation, perform the following steps on each edge leaf node:
    1. Run the ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv4/IPv6 address family view.

      Here, vpn-instance-name specifies the name of the source VPN instance for local route leaking, which corresponds to the name of the VPN instance used to provide access for different services in the local data center.

    2. Run the local-cross export evpn-rt-match command to allow the locally imported routes and routes received from VPN peers to be leaked to other local VPN instances.
    3. Run the local-cross allow-remote-cross-route command to allow the routes imported from the remote EVPN instance to be leaked to other local VPN instances.
    4. Run the quit command to return to the BGP view.
    5. Run the ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv4 or IPv6 address family view.

      Here, vpn-instance-name specifies the name of the destination VPN instance for local route leaking, which corresponds to the name of the VPN instance used for communication with the external network.

    6. Run the advertise l2vpn evpn include-local-cross-route command to enable the VPN instance to advertise routes leaked from other local VPN instances as EVPN IP prefix routes.

      By default, locally leaked routes in a VPN instance are not advertised to peers through BGP EVPN. After this step is performed, the external VPN instance can advertise routes leaked from other local service VPN instances to peers through EVPN IP prefix routes. In this way, the external VPN instance can communicate with other data centers.

      The EVPN ERT of the source VPN instance must be in the EVPN IRT list of the destination VPN instance, so that local route leaking can be correctly implemented.

  6. Run the commit command to commit the configuration.
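Steps 4 and 5 on an edge leaf node (for example, Leaf2) can be sketched as follows for an IPv4 overlay. The AS number, peer address (10.3.3.3 for the remote edge leaf), and VPN instance names (vpnA for local service access, vpnExt for external communication) are illustrative assumptions:

```
bgp 100
 l2vpn-family evpn
  peer 10.3.3.3 import reoriginate                     # re-originate routes received from the remote DC
  peer 10.3.3.3 advertise route-reoriginated evpn ip   # advertise re-originated IP routes to the remote DC
  quit
 ipv4-family vpn-instance vpnA                         # source VPN instance for local route leaking
  local-cross export evpn-rt-match
  local-cross allow-remote-cross-route
  quit
 ipv4-family vpn-instance vpnExt                       # destination VPN instance for external communication
  advertise l2vpn evpn include-local-cross-route
  quit
commit
```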

Configuring Three-Segment VXLAN to Implement Layer 2 Interworking

Three-segment VXLAN tunnels can be configured to enable communication between VMs that belong to the same subnet but different DCs.

Context

On the network shown in Figure 1-1110, VXLAN tunnels are configured both within DC A and DC B and between the transit leaf nodes of the two DCs. To enable communication between VM 1 and VM 2, Layer 2 interworking must be implemented between DC A and DC B. If the VXLAN tunnels within DC A and DC B use the same VXLAN network identifier (VNI), this VNI can also be used to establish a VXLAN tunnel between Transit Leaf 1 and Transit Leaf 2. In practice, however, different DCs have their own VNI spaces, and the VXLAN tunnels within DC A and DC B therefore most likely use different VNIs. To configure a VXLAN tunnel between Transit Leaf 1 and Transit Leaf 2 in such cases, a VNI conversion must be performed.

For example, in Figure 1-1110, the VXLAN tunnel in DC A uses the VNI 10, and that in DC B uses the VNI 20. Transit Leaf 2's VNI (20) must be configured as the outbound VNI on Transit Leaf 1, and Transit Leaf 1's VNI (10) as the outbound VNI on Transit Leaf 2. After the configuration is complete, Layer 2 packets can be forwarded properly. Take DC A sending packets to DC B as an example. After receiving VXLAN packets within DC A, Transit Leaf 1 decapsulates the packets and then uses the outbound VNI 20 to re-encapsulate the packets before sending them to Transit Leaf 2. Upon receipt, Transit Leaf 2 forwards them as normal VXLAN packets.

Figure 1-1110 Configuring three-segment VXLAN to implement Layer 2 interworking
  • This scenario implements Layer 2 communication between VMs in different DCs, thereby avoiding the need to configure a Layer 3 gateway.

  • If DC A and DC B belong to the same AS, configure an RR on the edge device. If DC A and DC B do not belong to the same AS, establish an EBGP EVPN peer relationship between edge devices.

Procedure

  1. Configure BGP EVPN within DC A and DC B to establish VXLAN tunnels. For details, see Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN. There is no need to configure a Layer 3 VXLAN gateway.
  2. Configure BGP EVPN on Transit Leaf 1 and Transit Leaf 2 to establish a VXLAN tunnel between them. For details, see Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN. There is no need to configure a Layer 3 VXLAN gateway.
  3. Configure Transit Leaf 1 and Transit Leaf 2 to advertise routes that are re-originated by the EVPN address family to BGP EVPN peers.

    1. Run bgp as-number

      The BGP view is displayed.

    2. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    3. Run peer { group-name | ipv4-address } split-group split-group-name

      A split horizon group (SHG) to which BGP EVPN peers (or peer groups) belong is configured.

      In Layer 2 interworking scenarios, an SHG must be configured to prevent forwarded BUM traffic from causing a loop. Specify the same SHG name on both Transit Leaf 1 and Transit Leaf 2 for the peer relationship between them, so that the devices within DC A and DC B belong to the default SHG while Transit Leaf 1 and Transit Leaf 2 belong to the specified SHG. In this manner, when a transit leaf node receives BUM traffic, it does not forward the traffic to a device in the same SHG, thereby preventing loops.

    4. Run peer { ipv4-address | group-name } import reoriginate

      The function to re-originate routes received from BGP EVPN peers is enabled.

      On the transit leaf nodes, enable this function for BGP EVPN peers both within each DC and between the DCs (that is, between the transit leaf nodes).

    5. Run peer { ipv4-address | group-name } advertise route-reoriginated evpn mac

      The function to advertise re-originated EVPN routes to BGP EVPN peers is enabled.

      In Layer 2 interworking scenarios, configure the function to advertise only re-originated MAC routes to BGP EVPN peers. On the transit leaf nodes, enable this function for BGP EVPN peers both within each DC and between the DCs (that is, between the transit leaf nodes).

    6. Run commit

      The configuration is committed.
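On Transit Leaf 1, the procedure above can be sketched as follows. The AS number, SHG name (dci), and peer addresses (10.1.1.1 for an intra-DC peer, 10.2.2.2 for Transit Leaf 2) are illustrative assumptions; Transit Leaf 2 would be configured symmetrically:

```
bgp 100
 l2vpn-family evpn
  peer 10.2.2.2 split-group dci                         # place the inter-DC peer in a named SHG
  peer 10.2.2.2 import reoriginate
  peer 10.2.2.2 advertise route-reoriginated evpn mac   # re-originated MAC routes across DCs
  peer 10.1.1.1 import reoriginate
  peer 10.1.1.1 advertise route-reoriginated evpn mac   # and to peers within the DC
 commit
```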

Verifying the Configuration of Using Three-Segment VXLAN to Implement DCI

After configuring three-segment VXLAN to implement DCI, verify the configuration, such as EVPN instances and VXLAN tunnel information.

Prerequisites

Configurations of using three-segment VXLAN to implement DCI have been completed.

Procedure

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to check BGP EVPN peer information.
  • Run the display vxlan peer [ vni vni-id ] command to check ingress replication lists of a VNI or all VNIs.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check VNI information.
  • Run the display interface vbdif [ bd-id ] command to check VBDIF interface information and statistics.
  • Run the display mac-limit bridge-domain bd-id command to check MAC address limiting configurations of a BD.
  • Run the display bgp evpn all routing-table command to check EVPN route information.

Configuring the Static VXLAN Active-Active Scenario

In the scenario where a data center is interconnected with an enterprise site, a CE can be dual-homed to a VXLAN network. This enhances VXLAN access reliability and improves user service stability by enabling rapid convergence if a fault occurs.

Context

On the network shown in Figure 1-1111, CE1 is dual-homed to PE1 and PE2. PE1 and PE2 use a virtual address as an NVE interface address at the network side, namely, an Anycast VTEP address. In this way, the CPE is aware of only one remote NVE interface. A VTEP address is configured on the CPE to establish a VXLAN tunnel with the Anycast VTEP address so that PE1, PE2, and the CPE can communicate.

The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.

Before an IPv6 network is used to transmit traffic between a CPE and PE, an IPv4 over IPv6 tunnel must be configured between them. To enable a VXLAN tunnel to recurse routes to the IPv4 over IPv6 tunnel, static routes must be configured on the CPE and PE, and the outbound interface of the route destined for the VXLAN tunnel's destination IP address must be set to the IPv4 over IPv6 tunnel interface.

Figure 1-1111 Networking diagram for configuring the static VXLAN active-active scenario

Procedure

  1. Configure AC-side service access.
    1. Configure an Eth-Trunk interface on CE1 to dual-home CE1 to PE1 and PE2.
    2. Configure service access points. For configuration details, see Configuring a VXLAN Service Access Point.
    3. Configure the same Ethernet Segment Identifier (ESI) for the links connecting CE1 to PE1 and PE2.

      1. Run the interface eth-trunk command to enter the Eth-Trunk interface view.
      2. Run the esi esi command to configure an ESI.
      3. Run the commit command to commit the configuration.

  2. Configure static VXLAN tunnels between the CPE and PEs. For configuration details, see Configuring a VXLAN Tunnel.
  3. Configure a bypass VXLAN tunnel between PE1 and PE2.
    1. Configure a BGP EVPN peer relationship.

      1. Run the bgp as-number command to enable BGP and enter the BGP view.
      2. Run the peer ipv4-address as-number as-number command to configure the peer device as a BGP peer.
      3. Run the l2vpn-family evpn command to enter the BGP-EVPN address family view.
      4. Run the peer { ipv4-address | group-name } enable command to enable the device to exchange EVPN routes with a specified peer or peer group.
      5. Run the peer { ipv4-address | group-name } advertise encap-type vxlan command to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer.
      6. Run the quit command to exit from the BGP-EVPN address family view.
      7. Run the quit command to exit from the BGP view.
      8. Run the commit command to commit the configuration.

    2. Configure a VPN instance or EVPN instance.

      • Layer 2 communication (Configure an EVPN instance.)

        1. Run the evpn vpn-instance vpn-instance-name bd-mode command to create a BD EVPN instance and enter the EVPN instance view.
        2. Run the route-distinguisher route-distinguisher command to configure an RD for the EVPN instance.
        3. Run the vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] command to configure VPN targets for the EVPN instance.

          The export VPN target of the local end must be the same as the import VPN target of the remote end, and the import VPN target of the local end must be the same as the export VPN target of the remote end.

        4. Run the quit command to exit from the EVPN instance view.
        5. Run the bridge-domain bd-id command to enter the BD view.
        6. Run the vxlan vni vni-id split-horizon-mode command to create a VNI, associate the VNI with the BD, and apply split horizon to the BD.
        7. Run the evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ] command to bind a specified EVPN instance to the BD. By specifying different bd-tag values, you can bind multiple BDs carrying different VLANs to the same EVPN instance while keeping their services isolated.
        8. Run the quit command to exit from the BD view.
        9. Run the commit command to commit the configuration.
      • Layer 3 communication (Configure a VPN instance.)

        1. Run the ip vpn-instance vpn-instance-name command to create a VPN instance and enter the VPN instance view.
        2. Run the ipv4-family [ unicast ] command to enable the IPv4 address family for a VPN instance.
        3. Run the route-distinguisher route-distinguisher command to configure an RD for the VPN instance.
        4. Run the vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] [ evpn ] command to configure VPN targets for the VPN instance.

          The export VPN target of the local end must be the same as the import VPN target of the remote end, and the import VPN target of the local end must be the same as the export VPN target of the remote end.

        5. Run the quit command to exit from the VPN instance ipv4-family view.
        6. Run the quit command to exit from the VPN instance view.
        7. Run the bridge-domain bd-id command to enter the BD view.
        8. Run the vxlan vni vni-id split-horizon-mode command to create a VNI, associate the VNI with the BD, and apply split horizon to the BD.
        9. Run the quit command to exit from the BD view.
        10. Run the commit command to commit the configuration.

    3. Enable the inter-chassis VXLAN function on PE1 and PE2.

      1. Run the evpn command to enter the EVPN view.
      2. Run the bypass-vxlan enable command to enable the inter-chassis VXLAN function.
      3. Run the quit command to exit from the EVPN view.
      4. Run the commit command to commit the configuration.

    4. Configure an ingress replication list.

      1. Run the interface nve nve-number command to enter the NVE interface view.
      2. Run the source ip-address command to configure an IP address for the source VTEP.
      3. Run the vni vni-id head-end peer-list protocol bgp command to configure an ingress replication list.
      4. Run the bypass source ip-address command to configure a source VTEP address for the bypass VXLAN tunnel.
      5. Run the mac-address mac-address command to configure a VTEP MAC address.
      6. Run the quit command to exit from the NVE interface view.
      7. Run the commit command to commit the configuration.

  4. Configure FRR on the PEs.

    • Layer 2 communication

      1. Run the evpn command to enter the EVPN view.
      2. Run the vlan-extend private enable command to enable the routes to be advertised to a peer to carry the newly added VLAN extended community attribute.
      3. Run the vlan-extend redirect enable command to enable the function of redirecting the received routes that carry the VLAN private extended community attribute.
      4. Run the local-remote frr enable command to enable FRR for MAC routes between the local and remote ends.
      5. Run the quit command to exit from the EVPN view.
      6. Run the commit command to commit the configuration.
    • Layer 3 communication

      1. Run the bgp as-number command to enter the BGP view.
      2. Run the ipv4-family vpn-instance vpn-instance-name command to enable the BGP-VPN instance IPv4 address family and enter its view.
      3. Run the auto-frr command to enable BGP auto FRR.
      4. Run the peer { ipv4-address | group-name } enable command to enable the function of exchanging EVPN routes with a specified peer or peer group. The IP address is a CE address.
      5. Run the advertise l2vpn evpn command to enable the VPN instance to advertise EVPN IP prefix routes.
      6. Run the quit command to exit from the BGP-VPN instance IPv4 address family view.
      7. Run the quit command to exit from the BGP view.
      8. Run the commit command to commit the configuration.

  5. (Optional) Configure a UDP port on the PEs to prevent replicated packets from being received.
    1. Run the evpn enhancement port port-id command to configure a UDP port.

      The same UDP port number must be set for the PEs in the active state.

    2. Run the commit command to commit the configuration.
  6. (Optional) Configure a VXLAN over IPsec tunnel between the CPE and PE to enhance the security for packets traversing an insecure network.

    For configuration details, see the section Example for Configuring VXLAN over IPsec.
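The key active-active pieces of the procedure above, as configured on PE1, can be sketched as follows. All addresses, interface and VNI IDs, and the ESI value are illustrative assumptions; PE2 would use the same anycast source address and the same ESI, but its own bypass source address:

```
interface Eth-Trunk 10
 esi 0011.1111.1111.1111.1111   # same ESI as on the CE1-facing link of PE2
 quit
evpn
 bypass-vxlan enable            # enable the inter-chassis (bypass) VXLAN function
 quit
interface Nve 1
 source 10.10.10.10             # anycast VTEP address shared by PE1 and PE2
 vni 100 head-end peer-list protocol bgp
 bypass source 10.1.1.1         # PE1's own VTEP address for the bypass tunnel
 mac-address 00e0-fc12-3456     # VTEP MAC address
 quit
commit
```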

Checking the Configuration

After configuring the VXLAN active-active scenario, check information about the VXLAN tunnel, VNI status, and VBDIF interfaces. For details, see the section Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN.

Configuring the Dynamic VXLAN Active-Active Scenario

In the scenario where a data center is interconnected with an enterprise site, a CE can be dual-homed to a VXLAN network. This enhances VXLAN access reliability and improves user service stability by enabling rapid convergence if a fault occurs.

Context

On the network shown in Figure 1-1112, CE1 is dual-homed to PE1 and PE2. PE1 and PE2 use a virtual address as an NVE interface address at the network side, namely, an Anycast VTEP address. In this way, the CPE is aware of only one remote VTEP IP. A VTEP address is configured on the CPE to establish a dynamic VXLAN tunnel with the Anycast VTEP address so that PE1, PE2, and the CPE can communicate.

The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.

Figure 1-1112 Networking diagram for configuring the dynamic VXLAN active-active scenario

Procedure

  1. Configure AC-side service access.
    1. Configure an Eth-Trunk interface on CE1 to dual-home CE1 to PE1 and PE2.
    2. Configure service access points. For configuration details, see Configuring a VXLAN Service Access Point.
    3. Configure the same Ethernet Segment Identifier (ESI) for the links connecting CE1 to PE1 and PE2.

      1. Run the interface eth-trunk command to enter the Eth-Trunk interface view.
      2. Run the esi esi command to configure an ESI.
      3. Run the commit command to commit the configuration.

  2. Configure static VXLAN tunnels between the CPE and PEs. For configuration details, see the section Configuring a VXLAN Tunnel.
  3. Configure a bypass VXLAN tunnel between PE1 and PE2.
    1. Configure a BGP EVPN peer relationship.

      1. Run the bgp as-number command to enable BGP and enter the BGP view.
      2. Run the peer ipv4-address as-number as-number command to configure the peer device as a BGP peer.
      3. Run the l2vpn-family evpn command to enter the BGP-EVPN address family view.
      4. Run the peer { ipv4-address | group-name } enable command to enable the device to exchange EVPN routes with a specified peer or peer group.
      5. Run the peer { ipv4-address | group-name } advertise encap-type vxlan command to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer.
      6. Run the quit command to exit the BGP-EVPN address family view.
      7. Run the quit command to exit the BGP view.
      8. Run the commit command to commit the configuration.

    2. Configure an EVPN instance.

      1. Run the evpn vpn-instance vpn-instance-name bd-mode command to create a BD EVPN instance and enter the EVPN instance view.
      2. Run the route-distinguisher route-distinguisher command to configure an RD for the EVPN instance.
      3. Run the vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ]

        VPN targets are configured for the EVPN instance. The export VPN target of the local end must be the same as the import VPN target of the remote end, and the import VPN target of the local end must be the same as the export VPN target of the remote end.

      4. Run the quit command to exit the EVPN instance view.
      5. Run the bridge-domain bd-id command to enter the BD view.
      6. Run the vxlan vni vni-id split-horizon-mode command to create a VNI, associate the VNI with the BD, and specify split horizon for packet forwarding.
      7. Run the evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ] command to bind a specified EVPN instance to the BD. By specifying different bd-tag values, you can bind multiple BDs carrying different VLANs to the same EVPN instance while keeping their services isolated.
      8. Run the quit command to exit the BD view.
      9. Run the commit command to commit the configuration.

    3. Enable the inter-chassis VXLAN function on PE1 and PE2.

      1. Run the evpn command to enter the EVPN view.
      2. Run the bypass-vxlan enable command to enable the inter-chassis VXLAN function.
      3. Run the quit command to exit the EVPN view.
      4. Run the commit command to commit the configuration.

    4. Configure an ingress replication list.

      1. Run the interface nve nve-number command to enter the NVE interface view.
      2. Run the source ip-address command to configure an IP address for the source VTEP.
      3. Run the vni vni-id head-end peer-list protocol bgp command to enable ingress replication.
      4. Run the bypass source ip-address command to configure a source VTEP IP address for the bypass VXLAN tunnel.
      5. Run the mac-address mac-address command to configure a VTEP MAC address.
      6. Run the quit command to exit the NVE interface view.
      7. Run the commit command to commit the configuration.

  4. Configure FRR on the PEs.

    • Layer 2 communication

      1. Run the evpn command to enter the EVPN view.
      2. Run the vlan-extend private enable command to enable routes advertised to a peer to carry the VLAN private extended community attribute.
      3. Run the vlan-extend redirect enable command to enable the function of redirecting received routes that carry the VLAN private extended community attribute.
      4. Run the local-remote frr enable command to enable FRR for MAC routes between the local and remote ends.
      5. Run the quit command to exit the EVPN view.
      6. Run the commit command to commit the configuration.
    • Layer 3 communication

      1. Run the bgp as-number command to enter the BGP view.
      2. Run the ipv4-family vpn-instance vpn-instance-name command to enable the BGP-VPN instance IPv4 address family and enter its view.
      3. Run the auto-frr command to enable BGP auto FRR.
      4. Run the peer { ipv4-address | group-name } as-number as-number command to specify a peer IP address and the number of the AS where the peer resides.
      5. Run the advertise l2vpn evpn command to enable the VPN instance to advertise EVPN IP prefix routes.
      6. Run the quit command to exit from the BGP-VPN instance IPv4 address family view.
      7. Run the quit command to exit the BGP view.
      8. Run the commit command to commit the configuration.

  5. (Optional) Configure a UDP port on the PEs to prevent replicated packets from being received.
    1. Run the evpn enhancement port port-id command to configure a UDP port.

      The same UDP port number must be set for the PEs in the active state.

    2. Run the commit command to commit the configuration.
  6. (Optional) Configure a VXLAN over IPsec tunnel between the CPE and PE to enhance the security for packets traversing an insecure network.

    For configuration details, see the section Example for Configuring VXLAN over IPsec.
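For the Layer 2 FRR portion of step 4, the configuration on each PE reduces to three commands in the EVPN view:

```
evpn
 vlan-extend private enable    # advertised routes carry the VLAN private extended community attribute
 vlan-extend redirect enable   # redirect received routes that carry this attribute
 local-remote frr enable       # enable FRR for MAC routes between the local and remote ends
 quit
commit
```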

Checking the Configuration

After configuring the VXLAN active-active scenario, check information about the VXLAN tunnel, VNI status, and VBDIF interfaces. For details, see the section Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN.

Configuring the Dynamic IPv6 VXLAN Active-Active Scenario

In scenarios where an IPv6-based data center is interconnected with an enterprise site, a CE can be dual-homed to an IPv6 VXLAN to implement rapid convergence if a fault occurs, thereby enhancing access reliability and improving service stability.

Context

On the network shown in Figure 1-1113, CE1 is dual-homed to PE1 and PE2. Both PEs use the same virtual address as an NVE interface address (namely, an Anycast VTEP address) at the network side. In this way, the CPE is aware of only one remote VTEP address. To allow the CPE to communicate with PE1 and PE2, a VTEP address must be configured on the CPE to establish an IPv6 VXLAN tunnel with the Anycast VTEP address.

The packets from the CPE can reach CE1 through either PE1 or PE2. However, when a single-homed CE (CE2 and CE3 in this example) exists on the network, the packets from the CPE to the single-homed CE may need to detour to the other PE after reaching one PE. To ensure PE1-PE2 reachability, a bypass VXLAN tunnel must be established between PE1 and PE2.

Figure 1-1113 Configuring the dynamic IPv6 VXLAN active-active scenario

Procedure

  1. Configure AC-side service access.
    1. Configure an Eth-Trunk interface on CE1 to dual-home CE1 to PE1 and PE2.
    2. Configure service access to the VXLAN network. For configuration details, see Configuring a VXLAN Service Access Point.
    3. Configure the same Ethernet Segment Identifier (ESI) for the links connecting PE1 and PE2 to CE1.

      1. Run the interface eth-trunk command to enter the Eth-Trunk interface view.
      2. Run the esi esi command to configure an ESI.
      3. Run the commit command to commit the configuration.

  2. Configure an IPv6 VXLAN tunnel between the CPE and each PE using BGP EVPN. For details, see Configuring an IPv6 VXLAN Tunnel.
  3. Configure a bypass VXLAN tunnel between PE1 and PE2.
    1. Configure a BGP EVPN peer relationship.

      1. Run the bgp as-number command to enable BGP and enter the BGP view.
      2. Run the peer ipv6-address as-number as-number command to specify an IPv6 BGP peer.
      3. Run the l2vpn-family evpn command to enter the BGP-EVPN address family view.
      4. Run the peer { group-name | ipv6-address } enable command to enable the device to exchange EVPN routes with a specified peer or peer group.
      5. Run the peer { group-name | ipv6-address } advertise encap-type vxlan command to enable the device to advertise EVPN routes that carry the VXLAN encapsulation attribute to the peer or peer group.
      6. Run the quit command to exit the BGP-EVPN address family view.
      7. Run the quit command to exit the BGP view.
      8. Run the commit command to commit the configuration.

    2. Configure an EVPN instance.

      1. Run the evpn vpn-instance vpn-instance-name bd-mode command to create a BD EVPN instance and enter the EVPN instance view.
      2. Run the route-distinguisher route-distinguisher command to configure an RD for the EVPN instance.
      3. Run the vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] command to configure VPN targets for the EVPN instance. The import and export VPN targets of the local end must be the same as the export and import VPN targets of the remote end, respectively.
      4. Run the quit command to exit the EVPN instance view.
      5. Run the bridge-domain bd-id command to enter the BD view.
      6. Run the vxlan vni vni-id split-horizon-mode command to create a VNI, associate the VNI with the BD, and specify split horizon for packet forwarding.
      7. Run the evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ] command to bind the BD to a specified EVPN instance. By specifying different bd-tag values, you can bind multiple BDs to the same EVPN instance. In this way, VLAN services of different BDs can access the same EVPN instance while being isolated.
      8. Run the quit command to exit the BD view.
      9. Run the commit command to commit the configuration.

    3. Enable the inter-chassis VXLAN function on PE1 and PE2.

      1. Run the evpn command to enter the EVPN view.
      2. Run the bypass-vxlan enable command to enable the inter-chassis VXLAN function.
      3. Run the quit command to exit the EVPN view.
      4. Run the commit command to commit the configuration.

    4. Configure ingress replication.

      1. Run the interface nve nve-number command to enter the NVE interface view.
      2. Run the source ipv6-address command to configure an IPv6 address for the source VTEP.
      3. Run the vni vni-id head-end peer-list protocol bgp command to configure ingress replication.
      4. Run the bypass source ipv6-address command to configure an IPv6 address for the source VTEP of the bypass VXLAN tunnel.
      5. Run the mac-address mac-address command to configure a MAC address for the VTEP.
      6. Run the quit command to exit the NVE interface view.
      7. Run the commit command to commit the configuration.

  4. Configure FRR on each PE.

    • For Layer 2 communication:

      1. Run the evpn command to enter the EVPN view.
      2. Run the vlan-extend private enable command to enable the routes to be sent to a peer to carry the VLAN private extended community attribute.
      3. Run the vlan-extend redirect enable command to enable the function of redirecting the received routes that carry the VLAN private extended community attribute.
      4. Run the local-remote frr enable command to enable FRR for MAC routes between the local and remote ends.
      5. Run the quit command to exit the EVPN view.
      6. Run the commit command to commit the configuration.
    • For Layer 3 communication:

      1. Run the bgp as-number command to enter the BGP view.
      2. Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv6 address family view.
      3. Run the auto-frr command to enable BGP auto FRR.
      4. Run the peer { ipv6-address | group-name } as-number as-number command to specify a peer IP address and the number of the AS where the peer resides.
      5. Run the advertise l2vpn evpn command to enable a VPN instance to advertise IP routes to an EVPN instance.
      6. Run the quit command to exit the BGP-VPN instance IPv6 address family view.
      7. Run the quit command to exit the BGP view.
      8. Run the commit command to commit the configuration.
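
Putting the key PE-side steps above together, the following excerpt sketches a minimal configuration on PE1. This is a hedged sketch only: the AS number (100), the IPv6 addresses, ESI, BD 10, VNI 5010, and the EVPN instance name evrf1 are hypothetical placeholders, and AC-side service access and FRR are omitted.

```
# PE1 (all values are examples only)
interface Eth-Trunk 10
 esi 0011.1111.1111.1111.1111          # same ESI on PE1 and PE2 for the links to CE1
#
evpn vpn-instance evrf1 bd-mode
 route-distinguisher 100:1
 vpn-target 1:1                        # must match the remote end's VPN targets
#
bridge-domain 10
 vxlan vni 5010 split-horizon-mode
 evpn binding vpn-instance evrf1
#
evpn
 bypass-vxlan enable                   # inter-chassis (bypass) VXLAN between PE1 and PE2
#
interface Nve1
 source 2001:db8::100                  # anycast VTEP address shared by PE1 and PE2
 bypass source 2001:db8::1             # PE1's own VTEP address for the bypass tunnel
 mac-address 00e0-fc00-0001
 vni 5010 head-end peer-list protocol bgp
#
bgp 100
 peer 2001:db8::2 as-number 100
 l2vpn-family evpn
  peer 2001:db8::2 enable
  peer 2001:db8::2 advertise encap-type vxlan
```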

Verifying the Configuration

After configuring a dynamic IPv6 VXLAN active-active scenario, verify the configuration.

  • Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
  • Run the display interface nve [ nve-number | main ] command to check NVE interface information.
  • Run the display evpn vpn-instance command to check EVPN instance information.
  • Run the display bgp evpn peer [ [ ipv6-address ] verbose ] command to check information about BGP EVPN peers.
  • Run the display vxlan peer [ vni vni-id ] command to check the ingress replication lists of all VNIs or a specified one.
  • Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
  • Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.

Configuring VXLAN Accessing BRAS

When telco cloud gateways use VXLAN for user access, you need to configure VXLAN accessing BRAS on the device responsible for user access.

Context

On the network shown in Figure 1-1114, telco cloud gateways (DCGW1 and DCGW2) connect to the aggregation device CPE through VXLAN tunnels. VXLAN is also deployed between the CPE and physical UP (pUP). To enable users to access the external network, configure VXLAN accessing BRAS on the pUP.

Figure 1-1114 VXLAN accessing BRAS

Creating an L2VE Interface

Configure an L2VE interface on the pUP to terminate VXLAN services and bind the interface to a VE group.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface virtual-ethernet ve-number or interface global-ve ve-number

    A VE or global VE interface is created, and its view is displayed.

  3. Run ve-group ve-group-id l2-terminate

    The VE or global VE interface is configured as an L2VE interface that terminates VXLAN services, and is bound to a VE group.

    • A VE group can contain only one L2VE interface and one L3VE interface. The two VE interfaces in a VE group must reside on the same board.

    • The two global VE interfaces in a VE group can reside on different boards.

  4. Run commit

    The configuration is committed.

Creating an L3VE Interface

Configure an L3VE interface used for BRAS access on the pUP and bind the L3VE interface to a VE group.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface virtual-ethernet ve-number or interface global-ve ve-number

    A VE or global VE interface is created, and its view is displayed.

  3. Run ve-group ve-group-id l3-access

    The VE or global VE interface is configured as an L3VE interface used for BRAS access and bound to a VE group.

    • A VE group can contain only one L2VE interface and one L3VE interface. The two VE interfaces in a VE group must reside on the same board.

    • The two global VE interfaces in a VE group can reside on different boards.

  4. Run commit

    The configuration is committed.
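
Taken together, the two procedures above pair one L2VE interface and one L3VE interface in the same VE group. A minimal sketch follows; the interface numbers and VE group ID are examples only.

```
interface virtual-ethernet 0/1/0
 ve-group 1 l2-terminate       # L2VE side: terminates VXLAN services
#
interface virtual-ethernet 0/1/1
 ve-group 1 l3-access          # L3VE side: used for BRAS access
#                              # both VE interfaces must reside on the same board
commit
```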

Associating the L2VE Interface with a BD

Associate the L2VE interface with a BD on the pUP, so that VXLAN services can be terminated on the L2VE interface.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run interface virtual-ethernet ve-number.subinterface-number or interface global-ve ve-number.subinterface-number

    The VE or global VE Layer 2 sub-interface view is displayed.

  3. Run encapsulation { dot1q [ vid low-pe-vid [ to high-pe-vid ] ] | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } ] }

    A packet encapsulation type is configured, so that a specific type of interface can transmit data packets of the specified encapsulation type.

  4. Run rewrite pop { single | double }

    The function to remove VLAN tags from received packets is enabled.

    For single-tagged packets received by the Layer 2 sub-interface, specify single to enable the sub-interface to remove the VLAN tag from each packet.

    If the packet encapsulation type is set to QinQ in the previous step, specify double to enable the sub-interface to remove double VLAN tags from each double-tagged packet received.

  5. Run bridge-domain bd-id

    The Layer 2 sub-interface is associated with a BD, so that the sub-interface can forward packets through the BD.

    The BD must have been associated with a VNI in the VXLAN configuration.

  6. Run commit

    The configuration is committed.
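
The steps above can be sketched as a single sub-interface excerpt. The sub-interface number, VLAN ID, and BD ID are hypothetical; BD 20 is assumed to be already associated with a VXLAN VNI.

```
interface virtual-ethernet 0/1/0.100
 encapsulation dot1q vid 100   # accept single-tagged packets with VLAN 100
 rewrite pop single            # strip the single VLAN tag from received packets
 bridge-domain 20              # forward packets through BD 20
#
commit
```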

Configuring the L3VE Interface as a BAS Interface

Configure the L3VE interface on the pUP as a BAS interface for BRAS access.

Context

A BAS interface on a pUP can be configured in either of the following modes:
  • The BAS interface is directly configured on the pUP. This mode applies to scenarios where CU separation is not deployed.
  • The BAS interface configurations are delivered by a CP to the pUP. This mode applies to CU separation scenarios.

Procedure

  • Directly perform configurations on the pUP. For configuration details, see Configuring PPPoE Access.
  • Use a CP to deliver configurations to the pUP. In this case, the UP plane configurations need to be completed on the CP and delivered to the pUP through a southbound channel. For details, see VNE 9000 (vBRAS-CP) Product Documentation > CU Separation Configuration > User Access Configuration > PPPoE Access Configuration.

Verifying the Configuration

After configuring VXLAN accessing BRAS, you can view the bindings between the VE interfaces and the VE group, as well as VXLAN tunnel information, on the pUP.

Prerequisites

VXLAN accessing BRAS has been configured.

Procedure

  1. Run the display virtual-ethernet ve-group [ ve-group-id | slot slot-id ] command to check binding between the VE interfaces and VE group.
  2. Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.

Configuring NFVI Distributed Gateways (Asymmetric Mode)

In the Network Function Virtualization Infrastructure (NFVI) telco cloud solution, the NFVI distributed gateway function enables mobile phone traffic to pass through the data center network (DCN) and to be processed by the virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE). In addition, traffic can be balanced during internal transmission over the DCN.

Usage Scenario

Huawei's NFVI telco cloud solution incorporates Data Center Interconnect (DCI) and DCN solutions. A large volume of mobile phone traffic enters the DCN and accesses its vUGW and vMSE. After being processed by the vUGW and vMSE, the mobile phone traffic is forwarded over the Internet through the DCN to the destination devices. Similarly, response traffic sent over the Internet from the destination devices to the mobile phones undergoes the same process. To enable this forwarding and ensure that the traffic is balanced within the DCN, you need to deploy the NFVI distributed gateway function on the DCN.

Figure 1-1115 or Figure 1-1116 shows the network of NFVI distributed gateways. DC gateways are the boundary gateways of the DCN and can be used to exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 are connected to virtualized network function (VNF) devices. VNF1 and VNF2 can be deployed as virtualized NEs to implement the vUGW and vMSE functions; they are connected to L2GW/L3GW1 and L2GW/L3GW2 through interface processing units (IPUs).

This networking can be considered a combination of the distributed gateway function and VXLAN active-active/quad-active gateway function.
  • The VXLAN active-active/quad-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. All DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.

  • The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between L2GW/L3GW1 and L2GW/L3GW2.

The deployment method of the VXLAN quad-active gateway function is similar to that of the VXLAN active-active gateway function. If you want to deploy the VXLAN quad-active gateway function on DC gateways, see Configuring the Dynamic VXLAN Active-Active Scenario or Configuring the Dynamic IPv6 VXLAN Active-Active Scenario.

On the NFVI distributed gateway network, the number of bridge domains (BDs) must be planned according to the number of network segments that the IPUs belong to. For example, if five IPU interfaces correspond to four network segments, four different BDs must be planned. In asymmetric mode, you also need to configure all BDs and VBDIF interfaces on each of the DC gateways and L2GW/L3GWs, and bind all VBDIF interfaces to the same L3VPN instance. In addition, the following functions have to be deployed on the network:
  • A VPN BGP peer relationship is set up between a VNF and DCGW so that the VNF can advertise user equipment (UE) routes to the DCGW.

  • Static VPN routes are configured on L2GW/L3GW1 and L2GW/L3GW2 to connect to the VNFs. The routes' destination IP addresses are the VNFs' IP addresses, and the next hops are the IP addresses of the IPUs.

  • A BGP EVPN peer relationship is established (full-mesh) between any two of the DCGWs and L2GW/L3GWs. An L2GW/L3GW can flood static routes to the VNFs to other devices through BGP EVPN peer relationships. A DCGW can advertise local loopback routes and default routes to the L2GW/L3GWs through the BGP EVPN peer relationships.

  • Traffic between a mobile phone and the Internet that is forwarded through a VNF is called north-south traffic, whereas the traffic between VNF1 and VNF2 is called east-west traffic. To balance both of these, you need to configure load balancing on the DCGWs and L2GW/L3GWs.

The NFVI distributed gateway function is supported for both IPv4 and IPv6 services. If a configuration step is not differentiated in terms of IPv4 and IPv6, this step applies to both IPv4 and IPv6 services.

When the NFVI distributed gateway function is used, the NetEngine 8100 M, NetEngine 8000E M, or NetEngine 8000 M functions as either a DCGW or an L2GW/L3GW. However, if the NetEngine 8100 M, NetEngine 8000E M, or NetEngine 8000 M is used as an L2GW/L3GW, east-west traffic cannot be balanced.

Figure 1-1115 NFVI distributed gateway network (DC gateway active-active)
Figure 1-1116 NFVI distributed gateway network (DC gateway quad-active)

Pre-configuration Tasks

Before configuring NFVI distributed gateways (asymmetric mode), complete the following tasks:

Configuring an L3VPN Instance on a DCGW

You can configure an L3VPN instance to store and manage received mobile phone routes and VPN routes reachable to VNFs.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run ip vpn-instance vpn-instance-name

    A VPN instance is created, and the VPN instance view is displayed.

  3. Run vxlan vni vni-id

    A VNI is created and associated with the VPN instance.

  4. Enter the VPN instance IPv4/IPv6 address family view.

    • Run ipv4-family

      The VPN instance IPv4 address family view is displayed.

    • Run ipv6-family

      The VPN instance IPv6 address family view is displayed.

  5. Configure an RD for the VPN instance.

    • Run route-distinguisher route-distinguisher

      An RD is configured for the VPN instance IPv4 address family.

    • Run route-distinguisher route-distinguisher

      An RD is configured for the VPN instance IPv6 address family.

  6. Configure VPN targets for the VPN instance.

    • Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

      VPN targets used to import routes into and from the remote device's L3VPN instance are configured for the VPN instance IPv4 address family.

    • Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

      VPN targets used to import routes into and from the remote device's L3VPN instance are configured for the VPN instance IPv6 address family.

    When the local device advertises EVPN routes to the remote device, the EVPN routes carry the export VPN target configured using this command. When the local device receives an EVPN route from the remote end, the route can be imported into the routing table of the VPN instance IPv4/IPv6 address family only if the VPN target carried in the EVPN route is included in the import VPN target list of the VPN instance IPv4/IPv6 address family.

  7. Run quit

    Exit from the VPN instance IPv4/IPv6 address family view.

  8. Run quit

    Exit from the VPN instance view.

  9. Run interface vbdif bd-id

    A VBDIF interface is created, and the VBDIF interface view is displayed.

    The number of VBDIF interfaces to be created is the same as the number of planned BDs.

  10. Run ip binding vpn-instance vpn-instance-name

    The VBDIF interface is bound to the VPN instance.

  11. (Optional) Run ipv6 enable

    IPv6 is enabled on the interface. This step is mandatory if an IPv6 address is planned for the VBDIF interface.

  12. Configure an IPv4/IPv6 address for the VBDIF interface.

    • Run ip address ip-address { mask | mask-length }

      An IPv4 address is configured for the interface.

    • Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length }

      An IPv6 address is configured for the interface.

  13. Run vxlan anycast-gateway enable

    The distributed gateway function is enabled.

  14. Configure a DCGW to generate ARP (ND) entries for Layer 2 forwarding based on ARP/ND information in EVPN routes.

    • Run arp generate-rd-table enable

      The DCGW is enabled to generate ARP entries used for Layer 2 forwarding based on ARP information.

    • Run ipv6 nd generate-rd-table enable

      The DCGW is enabled to generate ND entries used for Layer 2 forwarding based on ND information.

  15. Run commit

    The configuration is committed.
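
The DCGW-side steps above can be condensed into the following sketch. All values (VPN instance name vpn1, L3 VNI 5100, RD/VPN targets, BD 10, and addresses) are hypothetical placeholders; repeat the VBDIF block once per planned BD.

```
ip vpn-instance vpn1
 vxlan vni 5100                         # VNI associated with the VPN instance
 ipv4-family
  route-distinguisher 100:2
  vpn-target 2:2 both evpn             # must mirror the remote end's VPN targets
#
interface Vbdif10                       # one VBDIF interface per planned BD
 ip binding vpn-instance vpn1
 ip address 10.1.1.1 255.255.255.0
 vxlan anycast-gateway enable           # distributed gateway function
 arp generate-rd-table enable           # generate ARP entries for Layer 2 forwarding
#
commit
```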

Configuring Route Advertisement on a DC-GW

After route advertisement is configured on a DC-GW, the DC-GW can construct its own forwarding entries based on received EVPN or BGP routes.

Procedure

  1. Configure EVPN on the DC-GW to advertise default static routes and loopback routes in a VPN instance.
    1. Run system-view

      The system view is displayed.

    2. Run interface Loopback interface-number

      The loopback interface view is displayed.

    3. Run ip binding vpn-instance vpn-instance-name

      An L3VPN instance is bound to the loopback interface.

    4. (Optional) Run ipv6 enable

      IPv6 is enabled on the interface. This step is mandatory if the interface requires an IPv6 address.

    5. Configure an IPv4/IPv6 address for the interface.

      • Run ip address ip-address { mask | mask-length }

        An IPv4 address is configured for the interface.

      • Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length }

        An IPv6 address is configured for the interface.

    6. Run quit

      Exit from the loopback interface view.

    7. Configure a default static route for the VPN instance.

      • Run ip route-static vpn-instance vpn-instance-name 0.0.0.0 { 0.0.0.0 | 0 } { nexthop-address | interface-type interface-number [ nexthop-address ] } [ tag tag ]

        A default IPv4 static route is created for the VPN instance.

      • Run ipv6 route-static vpn-instance vpn-instance-name :: 0 { nexthop-ipv6-address | interface-type interface-number [ nexthop-ipv6-address ] } [ tag tag ]

        A default IPv6 static route is created for the VPN instance.

    8. Create a route policy to filter default static routes and loopback routes in the VPN instance. For configuration details, see Configuring a Route-Policy.
    9. Run ip vpn-instance vpn-instance-name

      The VPN instance view is displayed.

    10. Enter the VPN instance IPv4/IPv6 address family view.

      • Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      • Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

    11. Run export route-policy policy-name evpn

      The VPN instance is associated with an export route policy, which filters routes that the VPN instance will advertise to an EVPN instance. This ensures that the VPN instance advertises only its default static routes and loopback routes to the EVPN instance.

    12. Run quit

      Exit from the VPN instance IPv4/IPv6 address family view.

    13. Run quit

      Exit from the VPN instance view.

    14. Create a route-policy to filter the mobile phone routes received by the DC-GW from the L2GW/L3GW and prohibit the advertisement of such mobile phone routes. For details about how to create a route-policy, see Configuring a Route-Policy.
    15. Run bgp { as-number-plain | as-number-dot }

      The BGP view is displayed.

    16. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    17. Run peer { group-name | ipv4-address | ipv6-address } route-policy route-policy-name export

      The route-policy is used to prohibit DC-GWs from advertising mobile phone routes to each other.

    18. Run quit

      Exit from the BGP-EVPN address family view.

    19. Enter the BGP-VPN instance IPv4/IPv6 address family view.

      • Run ipv4-family vpn-instance vpn-instance-name

        The BGP-VPN instance IPv4 address family view is displayed.

      • Run ipv6-family vpn-instance vpn-instance-name

        The BGP VPN instance IPv6 address family view is displayed.

    20. Run import-route direct [ med med | route-policy route-policy-name ] *

      The device is enabled to import the loopback routes of the VPN instance into the BGP-VPN instance IPv4/IPv6 address family.

    21. Run network { 0.0.0.0 0 | :: 0 }

      The device is enabled to import the default static routes of the VPN instance into the BGP-VPN instance IPv4/IPv6 address family.

    22. Run the advertise l2vpn evpn command to enable the VPN instance to advertise EVPN IP prefix routes.
    23. Run quit

      Exit from the BGP-VPN instance IPv4/IPv6 address family view.

    24. Run quit

      Exit from the BGP view.

  2. Configure the DC-GW to establish a VPN BGP peer relationship with a VNF.
    1. Run route-policy route-policy-name deny node node

      A route policy that denies all routes is created.

    2. Run quit

      Exit from the route-policy view.

    3. Run bgp { as-number-plain | as-number-dot }

      The BGP view is displayed.

    4. Enter the BGP-VPN instance IPv4/IPv6 address family view.

      • Run ipv4-family vpn-instance vpn-instance-name

        The BGP-VPN instance IPv4 address family view is displayed.

      • Run ipv6-family vpn-instance vpn-instance-name

        The BGP VPN instance IPv6 address family view is displayed.

    5. Run peer { ipv4-address | ipv6-address | group-name } as-number as-number

      A VPN BGP peer relationship is established.

    6. Run peer { ipv4-address | ipv6-address | group-name } connect-interface interface-type interface-number [ ipv4-source-address ]

      A source interface and a source IP address are specified to set up a TCP connection between the BGP peers.

    7. Run peer { ipv4-address | ipv6-address | group-name } route-policy route-policy-name export

      The route policy is applied so that the DC-GW does not advertise VPN BGP routes to the VNF to prevent route loops.

    8. Run quit

      Exit from the BGP-VPN instance IPv4/IPv6 address family view.

  3. (Optional) Configure the asymmetric mode for IRB routes. If an L2GW/L3GW is configured to advertise IRB (IRBv6) routes to the DC-GW, you need to configure the IRB asymmetric function on the DC-GW.
    1. Enter the BGP-VPN instance IPv4/IPv6 address family view.

      • Run ipv4-family vpn-instance vpn-instance-name

        The BGP-VPN instance IPv4 address family view is displayed.

      • Run ipv6-family vpn-instance vpn-instance-name

        The BGP VPN instance IPv6 address family view is displayed.

    2. Run irb asymmetric

      The asymmetric mode is enabled for IRB routes.

  4. Run commit

    The configuration is committed.
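
The DC-GW route advertisement procedure above can be sketched as follows for the IPv4 case. This is a hedged sketch under heavy assumptions: every name and address (vpn1, LoopBack1, AS 100, the VNF peer 10.3.3.3, the peer DC-GW 10.8.8.8) is a hypothetical placeholder, the route-policy match conditions are omitted, and the IPv6 counterparts follow the same pattern.

```
route-policy DENY-ALL deny node 10            # denies all routes (applied toward the VNF)
#
route-policy DCGW-EXPORT permit node 10       # match conditions omitted: permit only the
#                                             # default static route and loopback routes
route-policy NO-UE-ROUTES deny node 10        # match conditions omitted: filter UE routes
#
interface LoopBack1
 ip binding vpn-instance vpn1
 ip address 10.9.9.9 255.255.255.255
#
ip route-static vpn-instance vpn1 0.0.0.0 0 10.2.2.2
#
ip vpn-instance vpn1
 ipv4-family
  export route-policy DCGW-EXPORT evpn        # advertise only defaults/loopbacks to EVPN
#
bgp 100
 l2vpn-family evpn
  peer 10.8.8.8 route-policy NO-UE-ROUTES export   # do not re-advertise UE routes to the peer DC-GW
 ipv4-family vpn-instance vpn1
  import-route direct
  network 0.0.0.0 0
  advertise l2vpn evpn
  peer 10.3.3.3 as-number 65001               # VPN BGP peer relationship with the VNF
  peer 10.3.3.3 connect-interface LoopBack1
  peer 10.3.3.3 route-policy DENY-ALL export  # prevent route loops via the VNF
```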

Configuring Route Advertisement on an L2GW/L3GW

After route advertisement is configured on an L2GW/L3GW, the L2GW/L3GW can construct its own forwarding entries based on received EVPN or BGP routes.

Procedure

  1. Configure an L2GW/L3GW to generate ARP/ND entries for Layer 2 forwarding based on ARP(ND) information in EVPN routes.
    1. Run system-view

      The system view is displayed.

    2. Run interface vbdif bd-id

      A VBDIF interface is created, and the VBDIF interface view is displayed.

    3. Configure the L2GW/L3GW to generate ARP (ND) entries for Layer 2 forwarding based on ARP/ND information in EVPN routes.

      • Run arp generate-rd-table enable

        The L2GW/L3GW is enabled to generate ARP entries used for Layer 2 forwarding based on ARP information.

      • Run ipv6 nd generate-rd-table enable

        The L2GW/L3GW is enabled to generate ND entries used for Layer 2 forwarding based on ND information.

    4. Run quit

      Exit from the VBDIF interface view.

  2. Configure an L3VPN instance on the L2GW/L3GW to advertise static VPN routes reachable to a VNF to EVPN.
    1. Create a route policy to filter the static VPN routes reachable to the VNF from the L3VPN instance. For details on how to configure a route policy, see Configuring a Route-Policy. When you specify an apply clause, run the apply gateway-ip { origin-nexthop | ipv4-address } or apply ipv6 gateway-ip { origin-nexthop | ipv6-address } command to set the original next hop of a VPN static route to the gateway IP address.
    2. Run ip vpn-instance vpn-instance-name

      The view of a VPN instance is displayed.

    3. Enter the VPN instance IPv4/IPv6 address family view.

      • Run ipv4-family

        The VPN instance IPv4 address family view is displayed.

      • Run ipv6-family

        The VPN instance IPv6 address family view is displayed.

    4. Run export route-policy policy-name evpn

      The L3VPN instance is associated with an export route policy, which filters routes that the L3VPN instance will advertise to an EVPN instance. This ensures that the L3VPN instance advertises only static VPN routes reachable to the VNF to EVPN.

    5. Run quit

      Exit from the VPN instance IPv4/IPv6 address family view.

    6. Run quit

      Exit from the VPN instance view.

    7. Run bgp { as-number-plain | as-number-dot }

      The BGP view is displayed.

    8. Enter the BGP-VPN instance IPv4/IPv6 address family view.

      • Run ipv4-family vpn-instance vpn-instance-name

        The BGP-VPN instance IPv4 address family view is displayed.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    9. Run import-route static [ med med | route-policy route-policy-name ] *

      The device is enabled to import static routes into the routing table of the BGP-VPN instance IPv4/IPv6 address family.

    10. Run advertise l2vpn evpn [ import-route-multipath ]

      The VPN instance is enabled to advertise IP routes to the EVPN instance. If load balancing is required, specifying the import-route-multipath parameter is recommended, so that the VPN instance can advertise all routes with the same destination address to the EVPN instance.

    11. (Optional) Run irb asymmetric

      The asymmetric mode is enabled for IRB routes. If L2GW/L3GWs are configured to advertise ARP (ND) routes to each other, skip this step. If L2GW/L3GWs are configured to advertise IRB(IRBv6) routes to each other, perform this step so that the L2GW/L3GWs do not generate IP prefix routes. This helps prevent route loops.

    12. Run quit

      Exit from the BGP-VPN instance IPv4/IPv6 address family view.

  3. Configure the L2GW/L3GW to advertise IRB(IRBv6) or ARP(ND) routes to a DC gateway.
    1. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    2. Run peer { ipv4-address | group-name | ipv6-address } advertise { irb | arp | irbv6 | nd }

      The device is enabled to advertise IRB (IRBv6) or ARP (ND) routes. If ARP (ND) routes need to be advertised, the L2GW/L3GW sends only routes carrying MAC and ARP information to the DC gateway. If IRB(IRBv6) routes need to be advertised, the L2GW/L3GW sends the MAC addresses, ARP (ND) information, and L3 VNI to the DC gateway. In this case, however, the irb asymmetric command must be enabled on the DC gateway so that the DC gateway does not generate IP prefix routes based on the IP address and L3 VNI. This prevents route loops on the network.

    3. Run quit

      Exit from the BGP-EVPN address family view .

    4. Run quit

      Exit from the BGP view.

  4. Run commit

    The configuration is committed.
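
An L2GW/L3GW-side sketch of the procedure above follows, for the IPv4 case. All values are hypothetical placeholders (vpn1, AS 100, VNF address 10.3.3.3 reached via an IPU at 10.1.1.10, DC gateway peer 10.10.10.10), and the route-policy match conditions are omitted.

```
route-policy VNF-EXPORT permit node 10        # match conditions omitted: permit only the
 apply gateway-ip origin-nexthop              # static VPN routes toward the VNF
#
ip route-static vpn-instance vpn1 10.3.3.3 32 10.1.1.10   # static route to the VNF via the IPU
#
ip vpn-instance vpn1
 ipv4-family
  export route-policy VNF-EXPORT evpn
#
bgp 100
 ipv4-family vpn-instance vpn1
  import-route static
  advertise l2vpn evpn import-route-multipath # advertise all equal-cost routes for load balancing
 l2vpn-family evpn
  peer 10.10.10.10 advertise irb              # advertise IRB routes to the DC gateway
```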

Configuring Load Balancing

You must configure load balancing to balance traffic over a DCN.

Procedure

  • Configure DCGWs.
    1. Run bgp { as-number-plain | as-number-dot }

      The BGP view is displayed.

    2. Enter the BGP-VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enable the BGP-VPN instance IPv4 address family and enter its view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv6 address family view.

    3. Run maximum load-balancing [ ebgp | ibgp ] number [ ecmp-nexthop-changed ]

      The maximum number of equal-cost routes for load balancing is set.

    4. Run quit

      Exit from the BGP-VPN instance IPv4/IPv6 address family view.

    5. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    6. Run peer { ipv4-address | group-name | ipv6-address } capability-advertise add-path { send | receive | both }

      Sending Add-Path routes to or receiving Add-Path routes from a specified peer is enabled on this device.

    7. Run peer { ipv4-address | group-name | ipv6-address } advertise add-path path-number path-number

      The number of routes that the device can send to a specified peer is configured.

    8. Run quit

      Exit from the BGP-EVPN address family view.

    9. Run quit

      Exit from the BGP view.

    10. Run commit

      The configuration is committed.

  • Configure L2GW/L3GWs.
    1. Run bgp { as-number-plain | as-number-dot }

      The BGP view is displayed.

    2. Enter the BGP-VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enable the BGP-VPN instance IPv4 address family and enter its view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    3. Run maximum load-balancing [ ebgp | ibgp ] number [ ecmp-nexthop-changed ]

      The maximum number of equal-cost routes for load balancing is set.

    4. Run quit

      Exit from the BGP-VPN instance IPv4/IPv6 address family view.

    5. Run l2vpn-family evpn

      The BGP-EVPN address family view is displayed.

    6. Run bestroute add-path path-number path-number

      BGP Add-Path is enabled, and the number of routes that the device can select is configured.

    7. Run peer { ipv4-address | group-name | ipv6-address } capability-advertise add-path { send | receive | both }

      Sending Add-Path routes to or receiving Add-Path routes from a specified peer is enabled on this device.

    8. Run peer { ipv4-address | group-name | ipv6-address } advertise add-path path-number path-number

      The number of routes that the device can send to a specified peer is configured.

    9. Run quit

      Exit from the BGP-EVPN address family view.

    10. Run quit

      Exit from the BGP view.

    11. Run commit

      The configuration is committed.
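
    Combined, the L2GW/L3GW steps above map to a CLI sequence like the following. This is a minimal sketch only; the AS number (100), VPN instance name (vpn1), peer address (10.1.1.1), and path numbers are hypothetical values, not part of the original procedure.

    [~L2GW] bgp 100
    [*L2GW-bgp] ipv4-family vpn-instance vpn1
    [*L2GW-bgp-vpn1] maximum load-balancing 16
    [*L2GW-bgp-vpn1] quit
    [*L2GW-bgp] l2vpn-family evpn
    [*L2GW-bgp-af-evpn] bestroute add-path path-number 2
    [*L2GW-bgp-af-evpn] peer 10.1.1.1 capability-advertise add-path both
    [*L2GW-bgp-af-evpn] peer 10.1.1.1 advertise add-path path-number 2
    [*L2GW-bgp-af-evpn] quit
    [*L2GW-bgp] quit
    [*L2GW] commit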

Verifying the NFVI Distributed Gateway Configuration

After configuring the NFVI distributed gateway function, verify the configuration.

Prerequisites

The NFVI distributed gateway configurations have been completed.

Procedure

  1. Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer command on each DCGW to check whether the VPN BGP peer relationships between the DCGW and VNFs are in the Established state.
  2. Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on each DCGW to check whether the DCGW has received mobile phone routes from the VNF and whether the next hop of these routes is the VNF IP address.
  3. Run the display ip routing-table vpn-instance vpn-instance-name or display ipv6 routing-table vpn-instance vpn-instance-name command on each DCGW to check the DCGW's VPN routing table. Verify that the command output shows the mobile phone routes and that their outbound interfaces are VBDIF interfaces.

Configuring NFVI Distributed Gateways (Symmetric Mode)

In the Network Function Virtualization Infrastructure (NFVI) telco cloud solution, the NFVI distributed gateway function enables mobile phone traffic to traverse the DCN in load-balancing mode and to be processed by the virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE) on the DCN.

Usage Scenario

Huawei's NFVI telco cloud solution incorporates DCI and DCN solutions. A large volume of UE traffic enters the DCN and accesses the vUGW and vMSE on the DCN. After being processed by the vUGW and vMSE, the UE traffic is forwarded to destination devices on the Internet. Similarly, response traffic sent over the Internet from the destination devices to UEs also undergoes this process. To meet the preceding requirements and ensure that the UE traffic is load-balanced within the DCN, you need to deploy the NFVI distributed gateway function on DCN devices.

Figure 1-1117 shows the networking diagram of NFVI distributed gateways. DC gateways are the DCN's border gateways and can be used to exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 connect to virtualized network functions (VNFs). VNF1 and VNF2 can be deployed as virtualized NEs to respectively provide vUGW and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through the interface processing unit (IPU).

This networking combines the distributed gateway function and the VXLAN active-active gateway function:

  • The VXLAN active-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. Both DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
  • The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between L2GW/L3GW1 and L2GW/L3GW2.

Figure 1-1117 NFVI distributed gateway networking

On the NFVI distributed gateway network, the number of bridge domains (BDs) must be planned according to the number of network segments that the IPUs belong to. For example, if five IPU interfaces correspond to four network segments, four different BDs must be planned. In symmetric mode, you need to configure all BDs and VBDIF interfaces only on L2GWs/L3GWs and bind all VBDIF interfaces to the same L3VPN instance. In symmetric mode, you also need to perform the following configurations for NFVI distributed gateways:

  • Establish VPN BGP peer relationships between VNFs and DC gateways, so that VNFs can advertise UE routes to DC gateways.
  • Configure VPN static routes on L2GW/L3GW1 and L2GW/L3GW2, or configure L2GWs/L3GWs to establish VPN IGP neighbor relationships with VNFs to obtain VNF routes with next hop addresses being IPU addresses.
  • Establish BGP EVPN peer relationships between any two of the DC gateways and L2GWs/L3GWs. L2GWs/L3GWs can then advertise VNF routes to DC gateways and other L2GWs/L3GWs through BGP EVPN peer relationships. DC gateways can advertise the local loopback route and default route as well as obtained UE routes to L2GWs/L3GWs through BGP EVPN peer relationships.
  • Traffic forwarded between the UE and Internet through VNFs is called north-south traffic, and traffic forwarded between VNF1 and VNF2 is called east-west traffic. To balance both types of traffic, you need to configure load balancing on DC gateways and L2GWs/L3GWs.

In the NFVI distributed gateway scenario, the NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can function as either a DC gateway or an L2GW/L3GW. However, if the NetEngine 8100 M, NetEngine 8000E M, or NetEngine 8000 M is used as an L2GW/L3GW, east-west traffic cannot be balanced.

Prerequisites

Before configuring NFVI distributed gateways (symmetric mode), complete the following tasks:

Configuring an L3VPN Instance on a DC Gateway

An L3VPN instance can be configured on a DC gateway to store and manage received UE routes and VPN routes destined for VNFs.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run ip vpn-instance vpn-instance-name

    A VPN instance is created, and the VPN instance view is displayed.

  3. Enter the VPN instance IPv4/IPv6 address family view.

    • Run the ipv4-family command to enter the VPN instance IPv4 address family view.

    • Run the ipv6-family command to enter the VPN instance IPv6 address family view.

  4. Run route-distinguisher route-distinguisher

    An RD is configured for the VPN instance IPv4/IPv6 address family.

  5. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn

    VPN targets for route exchange with L3VPN instances on remote devices are configured for the VPN instance IPv4/IPv6 address family.

    When the local device advertises EVPN routes to a remote device, the EVPN routes carry the export VPN targets configured using this command. The local device allows the EVPN routes received from remote devices to be imported into the local VPN instance IPv4/IPv6 address family routing table only when the VPN targets carried in these routes match the import VPN targets configured using this command.

  6. Run commit

    The configuration is committed.
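
    The procedure above can be sketched as the following CLI sequence. The VPN instance name (vpn1), RD (100:1), and VPN target (1:1) are hypothetical example values:

    <DCGW> system-view
    [~DCGW] ip vpn-instance vpn1
    [*DCGW-vpn-instance-vpn1] ipv4-family
    [*DCGW-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
    [*DCGW-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both evpn
    [*DCGW-vpn-instance-vpn1-af-ipv4] quit
    [*DCGW-vpn-instance-vpn1] quit
    [*DCGW] commit

    The vpn-target values must match those configured on the peer L2GWs/L3GWs so that EVPN routes can be imported on both sides.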

Configuring Route Advertisement on a DC Gateway

After route advertisement is configured on a DC gateway, other devices can obtain routes to the DC gateway, and the DC gateway can generate its own forwarding entries based on the received EVPN or BGP routes.

Procedure

  1. Configure the DC gateway to establish a VPN BGP peer relationship with a VNF.
    1. Run the route-policy route-policy-name deny node node command to create a route-policy that denies all routes.
    2. Run the quit command to exit the route-policy view.
    3. Run the bgp { as-number-plain | as-number-dot } command to enter the BGP view.
    4. Enter the BGP VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv4 address family view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    5. Run the peer { ipv4-address | ipv6-address | group-name } as-number command to establish a VPN BGP peer relationship.
    6. Run the peer { ipv4-address | ipv6-address | group-name } connect-interface interface-type interface-number [ ipv4-source-address ] command to specify a source interface and source address for TCP connection setup with the BGP peer.
    7. Run the peer { ipv4-address | ipv6-address | group-name } route-policy route-policy-name export command to apply a route-policy to prevent the DC gateway from advertising VPN BGP routes to VNFs. This helps prevent routing loops.
    8. Run the quit command to exit the BGP VPN instance IPv4/IPv6 address family view.
    9. Run the quit command to exit the BGP view.
  2. Configure the DC gateway to advertise VPN routes through EVPN.
    1. Run the system-view command to display the system view.
    2. Run the interface Loopback interface-number command to enter the loopback interface view.
    3. Run the ip binding vpn-instance vpn-instance-name command to bind the loopback interface to the L3VPN instance.
    4. (Optional) Run the ipv6 enable command to enable IPv6 on the interface. This step is mandatory if the interface requires an IPv6 address.
    5. Configure an IPv4/IPv6 address for the interface.

      • Run the ip address ip-address { mask | mask-length } command to configure an IPv4 address for the interface.

      • Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } command to configure an IPv6 address for the interface.

    6. Run the quit command to exit the loopback interface view.
    7. Configure default VPN static routes.

      • Run the ip route-static vpn-instance vpn-instance-name 0.0.0.0 { 0.0.0.0 | 0 } { nexthop-address | interface-type interface-number [ nexthop-address ] } [ tag tag ] command to create a default VPN IPv4 static route.

      • Run the ipv6 route-static vpn-instance vpn-instance-name :: 0 { nexthop-ipv6-address | interface-type interface-number [ nexthop-ipv6-address ] } [ tag tag ] command to create a default VPN IPv6 static route.

    8. Create a route-policy that can provide the following functions:

      • Filters the default VPN static routes and VPN loopback routes in the L3VPN instance.
      • Filters VPN UE routes received by the DC gateway (through VPN BGP neighbor relationships with VNFs) and applies the apply gateway-ip origin-nexthop or apply ipv6 gateway-ip origin-nexthop configuration to these routes to set the original next hop address of these routes to the gateway address.

      For details, see Routing Policy Configuration.

    9. Run the ip vpn-instance vpn-instance-name command to enter the VPN instance view.
    10. Enter the VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family command to enter the VPN instance IPv4 address family view.

      • Run the ipv6-family command to enter the VPN instance IPv6 address family view.

    11. Run the export route-policy policy-name evpn command to associate the L3VPN instance with an export route-policy that is used to filter routes to be advertised by the current L3VPN instance to the EVPN instance. Ensure that the L3VPN instance advertises only default VPN static routes, VPN loopback routes, and VPN UE routes to the EVPN instance, and the L3VPN instance changes the original next hop of VPN UE routes to the gateway address before advertising these routes.
    12. Run the quit command to exit the VPN instance IPv4/IPv6 address family view.
    13. Run the quit command to exit the VPN instance view.
    14. Create a route-policy on the DC gateway to prevent the gateway from advertising UE routes. For details, see Routing Policy Configuration.
    15. Run the bgp { as-number-plain | as-number-dot } command to enter the BGP view.
    16. Run the l2vpn-family evpn command to enter the BGP EVPN address family view.
    17. Run the peer { group-name | ipv4-address | ipv6-address } route-policy route-policy-name export command to apply a route-policy to each DC gateway, so that DC gateways do not advertise UE routes to each other.
    18. Run the quit command to exit the BGP EVPN address family view.
    19. Enter the BGP VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv4 address family view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    20. Run the import-route direct [ med med | route-policy route-policy-name ] * command to import VPN loopback routes to the BGP VPN instance IPv4/IPv6 address family.
    21. Run the network { 0.0.0.0 0 | :: 0 } command to import a default VPN static route to the BGP VPN instance IPv4/IPv6 address family.
    22. Run the advertise l2vpn evpn command to enable the VPN instance to advertise EVPN IP prefix routes.
    23. Run the quit command to return to the BGP view.
    24. Run the quit command to exit the BGP view.
  3. Run the commit command to commit the configuration.
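
    A condensed sketch of the key DC gateway steps above follows. All concrete values (AS numbers 100 and 65001, VPN instance vpn1, VNF peer 10.1.1.2, loopback address 9.9.9.9, and the route-policy name RP_DENY_ALL) are hypothetical, and the route-policies that filter and rewrite UE routes are omitted for brevity:

    [~DCGW] route-policy RP_DENY_ALL deny node 10
    [*DCGW-route-policy] quit
    [*DCGW] interface LoopBack1
    [*DCGW-LoopBack1] ip binding vpn-instance vpn1
    [*DCGW-LoopBack1] ip address 9.9.9.9 32
    [*DCGW-LoopBack1] quit
    [*DCGW] ip route-static vpn-instance vpn1 0.0.0.0 0 10.1.1.2
    [*DCGW] bgp 100
    [*DCGW-bgp] ipv4-family vpn-instance vpn1
    [*DCGW-bgp-vpn1] peer 10.1.1.2 as-number 65001
    [*DCGW-bgp-vpn1] peer 10.1.1.2 route-policy RP_DENY_ALL export
    [*DCGW-bgp-vpn1] import-route direct
    [*DCGW-bgp-vpn1] network 0.0.0.0 0
    [*DCGW-bgp-vpn1] advertise l2vpn evpn
    [*DCGW-bgp-vpn1] quit
    [*DCGW-bgp] quit
    [*DCGW] commit

    Note that the loopback interface is bound to the VPN instance before its IP address is configured, because binding an interface to a VPN instance clears any existing IP address on it.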

Configuring Route Advertisement on L2GWs/L3GWs

After route advertisement is configured on L2GWs/L3GWs, other devices can obtain routes to L2GWs/L3GWs and L2GWs/L3GWs can generate their own forwarding entries based on the received EVPN or BGP routes.

Procedure

  1. Use either of the following methods to configure a VPN route destined for a VNF on an L2GW/L3GW:

    • Configure a VPN static route whose next hop is an IPU address.

    • Configure the L2GW/L3GW to establish a VPN IGP neighbor relationship with the VNF to obtain VNF routes.

  2. Configure an L3VPN instance on each L2GW/L3GW to advertise VPN routes destined for VNFs to the EVPN instance.
    1. Create a route-policy to filter VPN routes destined for VNFs in the L3VPN instance.

      For details about how to create a route-policy, see Routing Policy Configuration. When configuring the apply clause, you need to run the apply gateway-ip { origin-nexthop | ipv4-address } or apply ipv6 gateway-ip { origin-nexthop | ipv6-address } command to set the original next hop addresses of VPN routes to the gateway address.

    2. Run the ip vpn-instance vpn-instance-name command to enter the VPN instance view.
    3. Enter the VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family command to enter the VPN instance IPv4 address family view.

      • Run the ipv6-family command to enter the VPN instance IPv6 address family view.

    4. Run the export route-policy policy-name evpn command to associate the L3VPN instance with an export route-policy that is used to filter the routes to be advertised by the L3VPN instance to the EVPN instance, so that the L3VPN instance advertises only VPN routes destined for VNFs to the EVPN instance.
    5. Run the quit command to return to the VPN instance view.
    6. Run the quit command to return to the system view.
    7. Run the bgp { as-number-plain | as-number-dot } command to enter the BGP view.
    8. Enter the BGP VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv4 address family view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    9. Run the import-route { static | { ospf | isis } process-id } [ med med | route-policy route-policy-name ] * command to import routes destined for VNFs into the routing table for the BGP VPN instance IPv4 address family, or run the import-route { static | { ospfv3 | isis } process-id } [ med med | route-policy route-policy-name ] * command to import routes destined for VNFs into the routing table for the BGP VPN instance IPv6 address family.
    10. Run the advertise l2vpn evpn [ import-route-multipath ] command to enable the VPN instance to advertise EVPN IP prefix routes. If load balancing is required, you are advised to specify the import-route-multipath keyword, so that a VPN instance can advertise all routes with the same destination address as EVPN IP prefix routes.
    11. Run the quit command to return to the BGP view.
  3. Configure L2GWs/L3GWs to advertise IRB or IRBv6 routes to DC gateways.
    1. Run the l2vpn-family evpn command to enter the BGP EVPN address family view.
    2. Run the peer { ipv4-address | group-name | ipv6-address } advertise { irb | irbv6 } command to configure the device to advertise IRB or IRBv6 routes to DC gateways.
    3. Run the quit command to return to the BGP view.
    4. Run the quit command to return to the system view.
  4. Run the commit command to commit the configuration.
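
    The L2GW/L3GW steps above can be sketched as follows, assuming a static VPN route to the VNF has already been configured. The AS number (100), VPN instance name (vpn1), DC gateway peer (10.3.3.3), and route-policy name (RP_VNF) are hypothetical values:

    [~L2GW] route-policy RP_VNF permit node 10
    [*L2GW-route-policy] apply gateway-ip origin-nexthop
    [*L2GW-route-policy] quit
    [*L2GW] ip vpn-instance vpn1
    [*L2GW-vpn-instance-vpn1] ipv4-family
    [*L2GW-vpn-instance-vpn1-af-ipv4] export route-policy RP_VNF evpn
    [*L2GW-vpn-instance-vpn1-af-ipv4] quit
    [*L2GW-vpn-instance-vpn1] quit
    [*L2GW] bgp 100
    [*L2GW-bgp] ipv4-family vpn-instance vpn1
    [*L2GW-bgp-vpn1] import-route static
    [*L2GW-bgp-vpn1] advertise l2vpn evpn import-route-multipath
    [*L2GW-bgp-vpn1] quit
    [*L2GW-bgp] l2vpn-family evpn
    [*L2GW-bgp-af-evpn] peer 10.3.3.3 advertise irb
    [*L2GW-bgp-af-evpn] quit
    [*L2GW-bgp] quit
    [*L2GW] commit

    A production route-policy would typically also include an if-match clause to select only the VPN routes destined for VNFs.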

Configuring Load Balancing

To balance traffic on a DCN, configure load balancing.

Procedure

  • Configure north-south load balancing on each DC gateway.
    1. Run the bgp { as-number-plain | as-number-dot } command to enter the BGP view.
    2. Enter the BGP VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv4 address family view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    3. Run the maximum load-balancing [ ebgp | ibgp ] number [ ecmp-nexthop-changed ] command to configure the maximum number of equal-cost routes for load balancing.
    4. Run the quit command to return to the BGP view.
    5. Run the l2vpn-family evpn command to enter the BGP EVPN address family view.
    6. Run the peer { ipv4-address | group-name | ipv6-address } capability-advertise add-path { send | receive | both } command to enable the device to receive Add-Path routes from or advertise Add-Path routes to the peer L2GW/L3GW.
    7. Run the peer { ipv4-address | group-name | ipv6-address } advertise add-path path-number path-number command to configure the number of preferred routes to be advertised to the peer L2GW/L3GW.
    8. Run the quit command to return to the BGP view.
    9. Run the quit command to return to the system view.
    10. Run the commit command to commit the configuration.
  • Configure north-south load balancing on each L2GW/L3GW.
    1. Run the bgp { as-number-plain | as-number-dot } command to enter the BGP view.
    2. Enter the BGP VPN instance IPv4/IPv6 address family view.

      • Run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv4 address family view.

      • Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP VPN instance IPv6 address family view.

    3. Run the maximum load-balancing [ ebgp | ibgp ] number [ ecmp-nexthop-changed ] command to configure the maximum number of equal-cost routes for load balancing.
    4. Run the quit command to return to the BGP view.
    5. Run the l2vpn-family evpn command to enter the BGP EVPN address family view.
    6. Run the bestroute add-path path-number path-number command to enable BGP Add-Path, and specify the number of preferred routes.
    7. Run the peer { ipv4-address | group-name | ipv6-address } capability-advertise add-path { send | receive | both } command to enable the device to receive Add-Path routes from or advertise Add-Path routes to the peer DC gateway.
    8. Run the peer { ipv4-address | group-name | ipv6-address } advertise add-path path-number path-number command to configure the number of preferred routes to be advertised to the peer DC gateway.
    9. Run the quit command to return to the BGP view.
    10. Run the quit command to return to the system view.
    11. Run the commit command to commit the configuration.
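
    As a minimal sketch, the DC gateway side of the load-balancing configuration above might look as follows. The AS number (100), VPN instance name (vpn1), L2GW/L3GW peer address (10.2.2.2), and path numbers are hypothetical values:

    [~DCGW] bgp 100
    [*DCGW-bgp] ipv4-family vpn-instance vpn1
    [*DCGW-bgp-vpn1] maximum load-balancing 16
    [*DCGW-bgp-vpn1] quit
    [*DCGW-bgp] l2vpn-family evpn
    [*DCGW-bgp-af-evpn] peer 10.2.2.2 capability-advertise add-path both
    [*DCGW-bgp-af-evpn] peer 10.2.2.2 advertise add-path path-number 2
    [*DCGW-bgp-af-evpn] quit
    [*DCGW-bgp] quit
    [*DCGW] commit

    The L2GW/L3GW side differs only in that it additionally runs bestroute add-path in the BGP EVPN address family view to select multiple preferred routes.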

Verifying the Configuration

After configuring NFVI distributed gateways, verify the configuration.

Prerequisites

All configurations of NFVI distributed gateways (symmetric mode) are complete.

Procedure

  1. Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer command on each DC gateway to check whether the VPN peer relationships between DC gateways and VNFs are in the Established state.
  2. Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on each DC gateway to check whether the DC gateway has received UE routes from VNFs and whether the next hop addresses of these routes are VNF addresses.
  3. Run the display ip routing-table vpn-instance vpn-instance-name or display ipv6 routing-table vpn-instance vpn-instance-name command on each DC gateway to check whether UE routes exist in the VPN routing table and whether the outbound interfaces of these routes are VXLAN or VXLAN6 interfaces.

Maintaining VXLAN

This section describes how to clear VXLAN statistics and monitor the VXLAN running status.

Configuring the VXLAN Alarm Function

To learn the VXLAN operating status in time, configure the VXLAN alarm function so that the NMS will be notified of the VXLAN status changes. This facilitates O&M.

Procedure

  1. Run system-view

    The system view is displayed.

  2. Run snmp-agent trap enable feature-name nvo3 [ trap-name { hwnvo3vxlantnldown | hwnvo3vxlantnlup } ]

    The VXLAN alarm function is enabled.

  3. Run commit

    The configuration is committed.
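
    For example, to enable all VXLAN alarms of the NVO3 module (the trap-name keyword is optional and restricts the configuration to a single alarm):

    <HUAWEI> system-view
    [~HUAWEI] snmp-agent trap enable feature-name nvo3
    [*HUAWEI] commit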

Verifying the Configuration

After the VXLAN alarm function is enabled, check the VXLAN alarm status.

Run the display snmp-agent trap feature-name nvo3 all command to check configurations of all alarm functions of the VXLAN module.

Collecting and Checking VXLAN Packet Statistics

To check the network status or locate network faults, you can enable the traffic statistics function to view VXLAN packet statistics.

Procedure
  • Enable VXLAN packet statistics collection for a BD.
    1. Run system-view

      The system view is displayed.

    2. Run bridge-domain bd-id

      A BD is created, and the BD view is displayed.

    3. Run statistic enable

      VXLAN packet statistics collection is enabled for the BD.

    4. Run commit

      The configuration is committed.

  • Enable VXLAN packet statistics collection for a specific VNI.
    1. Run system-view

      The system view is displayed.

    2. Run vni vni-id

      A VNI is created, and the VNI view is displayed.

    3. Run statistic enable

      VXLAN packet statistics collection is enabled.

    4. Run commit

      The configuration is committed.

  • Enable VNI- and IPv4 VXLAN tunnel-based packet statistics collection.
    1. Run system-view

      The system view is displayed.

    2. Run interface nve nve-number

      An NVE interface is created, and the NVE interface view is displayed.

    3. Run source ip-address

      The IP address of the source VTEP is configured.

    4. Run vni vni-id head-end peer-list ip-address &<1-10>

      An ingress replication list is configured for the VNI.

    5. Run vxlan statistics peer peer-ip vni vni-id [ inbound | outbound ] enable

      VNI- and VXLAN tunnel-based packet statistics collection is enabled.

    6. Run vxlan statistic l3-mode peer peer-ip vni vni-id inbound enable

      VNI- and VXLAN tunnel-based Layer 3 uplink traffic statistics collection is enabled.

      Run vxlan statistics l3-mode peer peer-ip [ vni vni-id ] outbound enable

      VNI- and VXLAN tunnel-based Layer 3 downlink traffic statistics collection is enabled.

    7. Run commit

      The configuration is committed.

  • Enable VNI- and IPv6 VXLAN tunnel-based packet statistics collection.
    1. Run the system-view command to enter the system view.
    2. Run the interface nve nve-number command to create an NVE interface and enter its view.
    3. Run the vxlan statistics peer destIpv6Addr vni vni-val [ inbound | outbound ] enable command to enable VNI- and IPv6 VXLAN tunnel-based packet statistics collection.
    4. Run the vxlan statistic l3-mode peer destIpv6Addr vni vni-val inbound enable command to enable VNI- and IPv6 VXLAN tunnel-based Layer 3 uplink traffic statistics collection.
    5. Run the vxlan statistics l3-mode peer destIpv6Addr [ vni vni-val ] outbound enable command to enable VNI- and VXLAN tunnel-based Layer 3 downlink traffic statistics collection.
    6. Run the commit command to commit the configuration.
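
    The IPv4 variant of the steps above can be sketched as follows. The VTEP source address (2.2.2.2), peer address (4.4.4.4), and VNI (5010) are hypothetical values reused from the configuration example later in this chapter:

    <HUAWEI> system-view
    [~HUAWEI] interface nve 1
    [*HUAWEI-Nve1] source 2.2.2.2
    [*HUAWEI-Nve1] vni 5010 head-end peer-list 4.4.4.4
    [*HUAWEI-Nve1] vxlan statistics peer 4.4.4.4 vni 5010 enable
    [*HUAWEI-Nve1] quit
    [*HUAWEI] commit

    The collected statistics can then be checked with display vxlan statistics source 2.2.2.2 peer 4.4.4.4 vni 5010.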
Follow-up Procedure
  • Run the display bridge-domain bd-id statistics command to view VXLAN packet statistics in the BD.
  • Run the display vxlan statistics vni vni-id command to view VXLAN packet statistics collected by VNI.
  • Run the display vxlan statistics source source-ip peer peer-ip vni vni-id command to check VNI- and VXLAN tunnel-based packet statistics.
  • Run the display vxlan statistics source source-ipv6 peer peer-ipv6 vni vni-val command to check VNI- and IPv6 VXLAN tunnel-based packet statistics.
  • Run the display vxlan statistics l3-mode source source-ip peer peer-ip local-vni vni-id command to check VNI- and VXLAN tunnel-based Layer 3 uplink traffic statistics.
  • Run the display vxlan statistics l3-mode source source-ip peer peer-ip remote-vni vni-id command to check VNI- and VXLAN tunnel-based Layer 3 downlink traffic statistics.
  • Run the display vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 local-vni vni-val command to check VNI- and IPv6 VXLAN tunnel-based Layer 3 uplink traffic statistics.
  • Run the display vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 remote-vni vni-val command to check VNI- and IPv6 VXLAN tunnel-based Layer 3 downlink traffic statistics.

Clearing VXLAN Packet Statistics

This section describes how to clear VXLAN packet statistics in a BD, VXLAN packet statistics collected per VNI, or per VNI and VXLAN tunnel.

Context

Packet statistics cannot be restored after they are cleared. Exercise caution when running the reset commands.

Procedure

  • Run the reset bridge-domain bd-id statistics command in the user view to delete packet statistics in a specified BD.
  • Run the reset vxlan statistics vni vni-id command in the user view to delete VXLAN packet statistics collected per VNI.
  • Run the reset vxlan statistics source source-ip peer peer-ip vni vni-id command in the user view to delete packet statistics collected per VNI and VXLAN tunnel.
  • Run the reset vxlan statistics source source-ipv6 peer peer-ipv6 vni vni-val command in the user view to delete VNI- and IPv6 VXLAN tunnel-based packet statistics.
  • Run the reset vxlan statistics source source-ip peer peer-ip local-vni local-vni-id command in the user view to delete uplink VXLAN packet statistics collected based on the local VNI ID.
  • Run the reset vxlan statistics source source-ipv6 peer peer-ipv6 local-vni vni-val command in the user view to delete uplink IPv6 VXLAN packet statistics collected based on the local VNI ID.
  • Run the reset vxlan statistics source source-ip peer peer-ip remote-vni remote-vni-id command in the user view to delete downlink VXLAN packet statistics collected based on the remote VNI ID.
  • Run the reset vxlan statistics source source-ipv6 peer peer-ipv6 remote-vni vni-val command in the user view to delete downlink IPv6 VXLAN packet statistics collected based on the remote VNI ID.
  • Run the reset vxlan statistics l3-mode source source-ip peer peer-ip local-vni vni-id command in the user view to delete VNI- and VXLAN tunnel-based Layer 3 uplink traffic statistics.
  • Run the reset vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 local-vni vni-val command in the user view to delete VNI- and IPv6 VXLAN tunnel-based Layer 3 uplink traffic statistics.
  • Run the reset vxlan statistics l3-mode source source-ip peer peer-ip remote-vni vni-id command in the user view to delete VNI- and VXLAN tunnel-based Layer 3 downlink traffic statistics.
  • Run the reset vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 remote-vni vni-val command in the user view to delete VNI- and IPv6 VXLAN tunnel-based Layer 3 downlink traffic statistics.

Checking Statistics about MAC Address Entries in a BD

Statistics about MAC address entries in a BD can be viewed to monitor the VXLAN operating status.

Context

In routine maintenance, run the following command in any view to check the VXLAN operating status.

Procedure

  • Run the display mac-address [ mac-address ] bridge-domain bd-id command to check statistics about all MAC address entries in a BD.

Clearing Statistics about Dynamic MAC Address Entries in a BD

To view dynamic MAC address entries in a BD within a specified period of time, clear existing dynamic MAC address entry information before starting statistics collection to ensure information accuracy.

Context

Statistics about dynamic MAC address entries in a BD cannot be restored after they are cleared. Exercise caution when running the reset command.

Procedure

  • Run the reset mac-address bridge-domain bd-id command in the user view to clear statistics about dynamic MAC address entries in a BD.

Configuration Examples for VXLAN

This section describes the typical application scenarios of VXLANs, including networking requirements, configuration roadmap, and data preparation, and provides related configuration files.

Example for Configuring Users on the Same Network Segment to Communicate Through a VXLAN Tunnel

This section provides an example for configuring users on the same network segment to communicate through a VXLAN tunnel.

Networking Requirements

On the network shown in Figure 1-1118, an enterprise has VMs deployed in different data centers. VM1 on Server1 belongs to VLAN10, and VM1 on Server2 belongs to VLAN20. VM1 on Server1 and VM1 on Server2 reside on the same network segment. To allow VM1s in different data centers to communicate with each other, configure a VXLAN tunnel between Device1 and Device3.

Figure 1-1118 Configuring users on the same network segment to communicate through a VXLAN tunnel

Interfaces 1 and 2 represent GE 0/1/1 and GE 0/1/2, respectively.



Configuration Roadmap
The configuration roadmap is as follows:
  1. Configure a routing protocol on Device1, Device2, and Device3 to allow them to communicate at Layer 3.
  2. Configure a service access point on Device1 and Device3 to differentiate service traffic.
  3. Configure a VXLAN tunnel on Device1 and Device3 to forward service traffic.
Data Preparation

To complete the configuration, you need the following data:

  • VMs' VLAN IDs (10 and 20)
  • IP addresses of interfaces connecting devices
  • Interior Gateway Protocol (IGP) running between devices (OSPF in this example)
  • BD ID (10)
  • VNI ID (5010)

Procedure

  1. Configure a routing protocol.

    Assign an IP address to each interface on Device1, Device2, and Device3 according to Figure 1-1118.

    # Configure Device1.
    <HUAWEI> system-view
    [~HUAWEI] sysname Device1
    [*HUAWEI] commit
    [~Device1] interface loopback 1
    [*Device1-LoopBack1] ip address 2.2.2.2 32
    [*Device1-LoopBack1] quit
    [*Device1] interface gigabitethernet 0/1/1
    [*Device1-GigabitEthernet0/1/1] ip address 192.168.1.1 24
    [*Device1-GigabitEthernet0/1/1] quit
    [*Device1] ospf
    [*Device1-ospf-1] area 0
    [*Device1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
    [*Device1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
    [*Device1-ospf-1-area-0.0.0.0] quit
    [*Device1-ospf-1] quit
    [*Device1] commit

    Repeat these steps for Device2 and Device3. For configuration details, see Configuration Files in this section.

    After OSPF is configured, the devices can use OSPF to learn the IP addresses of loopback interfaces of each other and successfully ping each other. The following example shows the command output on Device1 after it pings Device3:
    [~Device1] ping 4.4.4.4
      PING 4.4.4.4: 56  data bytes, press CTRL_C to break
        Reply from 4.4.4.4: bytes=56 Sequence=1 ttl=254 time=5 ms
        Reply from 4.4.4.4: bytes=56 Sequence=2 ttl=254 time=2 ms
        Reply from 4.4.4.4: bytes=56 Sequence=3 ttl=254 time=2 ms
        Reply from 4.4.4.4: bytes=56 Sequence=4 ttl=254 time=3 ms
        Reply from 4.4.4.4: bytes=56 Sequence=5 ttl=254 time=3 ms
    
      --- 4.4.4.4 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 2/3/5 ms

  2. Configure a service access point on Device1 and Device3.

    # Configure Device1.
    [~Device1] bridge-domain 10
    [*Device1-bd10] quit
    [*Device1] interface gigabitethernet0/1/2.1 mode l2
    [*Device1-GigabitEthernet0/1/2.1] encapsulation dot1q vid 10
    [*Device1-GigabitEthernet0/1/2.1] rewrite pop single
    [*Device1-GigabitEthernet0/1/2.1] bridge-domain 10
    [*Device1-GigabitEthernet0/1/2.1] quit
    [*Device1] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  3. Configure a VXLAN tunnel on Device1 and Device3.

    # Configure Device1.
    [~Device1] bridge-domain 10
    [~Device1-bd10] vxlan vni 5010
    [*Device1-bd10] quit
    [*Device1] interface nve 1
    [*Device1-Nve1] source 2.2.2.2
    [*Device1-Nve1] vni 5010 head-end peer-list 4.4.4.4
    [*Device1-Nve1] quit
    [*Device1] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  4. Verify the configuration.

    After completing the configurations, run the display vxlan vni and display vxlan tunnel commands on Device1 and Device3 to check the VNI status and VXLAN tunnel information, respectively. The VNIs are Up on Device1 and Device3. The following example shows the command output on Device1.

    [~Device1] display vxlan vni
    Number of vxlan vni: 1
    VNI            BD-ID            State
    ---------------------------------------
    5010           10               up
    [~Device1] display vxlan tunnel
    Number of vxlan tunnel : 1
    Tunnel ID   Source           Destination      State  Type    Uptime
    -------------------------------------------------------------------
    4026531842  2.2.2.2          4.4.4.4          up     static 0028h16m

    Users on the same network segment can now communicate through the VXLAN tunnel.
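VXLAN's MAC-in-UDP encapsulation prepends an 8-byte VXLAN header carrying the 24-bit VNI (5010 in this example) to the original Ethernet frame; the result is then carried in UDP to the peer VTEP. The following is a minimal Python sketch of the header layout defined in RFC 7348, for illustration only (it is not device code):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    I bit set, 24 reserved bits, the 24-bit VNI, and a reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)


# VNI 5010, as configured in this example
hdr = vxlan_header(5010)
print(hdr.hex())                          # flags/reserved, then VNI, then a reserved byte
print(int.from_bytes(hdr[4:7], "big"))    # recovers the VNI: 5010
```

The peer VTEP strips the outer IP/UDP and VXLAN headers, then forwards the inner frame in the bridge domain mapped to the received VNI.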

Configuration Files
  • Device1 configuration file

    #
    sysname Device1
    #
    bridge-domain 10
     vxlan vni 5010
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.1.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
    #
    interface GigabitEthernet0/1/2.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #
    interface Nve1
     source 2.2.2.2
     vni 5010 head-end peer-list 4.4.4.4
    #
    ospf 1
     area 0.0.0.0
      network 2.2.2.2 0.0.0.0
      network 192.168.1.0 0.0.0.255
    #
    return
  • Device2 configuration file

    #
    sysname Device2
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.1.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 192.168.2.1 255.255.255.0
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #
    ospf 1
     area 0.0.0.0
      network 3.3.3.3 0.0.0.0
      network 192.168.1.0 0.0.0.255
      network 192.168.2.0 0.0.0.255
    #
    return
  • Device3 configuration file

    #
    sysname Device3
    #
    bridge-domain 10
     vxlan vni 5010
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.2.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
    #
    interface GigabitEthernet0/1/2.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #
    interface Nve1
     source 4.4.4.4
     vni 5010 head-end peer-list 2.2.2.2
    #
    ospf 1
     area 0.0.0.0
      network 4.4.4.4 0.0.0.0
      network 192.168.2.0 0.0.0.255
    #
    return

Example for Configuring Users on Different Network Segments to Communicate Through a VXLAN Layer 3 Gateway

This section provides an example for configuring users on different network segments to communicate through a VXLAN Layer 3 gateway. To achieve this, the users' default gateway address must be the IP address of the VBDIF interface on the Layer 3 gateway.

Networking Requirements

On the network shown in Figure 1-1119, an enterprise has VMs deployed in different data centers. VM1 on Server1 belongs to VLAN10, and VM1 on Server2 belongs to VLAN20. VM1 on Server1 and VM1 on Server2 reside on different network segments. To allow VM1s in different data centers to communicate with each other, configure a VXLAN tunnel between Device1 and Device2 and one between Device2 and Device3.

Figure 1-1119 Configuring users on different network segments to communicate through a VXLAN Layer 3 gateway

Interfaces 1 and 2 in this example represent GE0/1/1 and GE0/1/2, respectively.


Configuration Roadmap
The configuration roadmap is as follows:
  1. Configure a routing protocol on Device1, Device2, and Device3 to allow them to communicate at Layer 3.
  2. Configure a service access point on Device1 and Device3 to differentiate service traffic.
  3. Configure a VXLAN tunnel on Device1, Device2, and Device3 to forward service traffic.
  4. Configure Device2 as a VXLAN Layer 3 gateway to allow users on different network segments to communicate.
Data Preparation

To complete the configuration, you need the following data:

  • VMs' VLAN IDs (10 and 20)
  • IP addresses of interfaces connecting devices
  • Interior Gateway Protocol (IGP) running between devices (OSPF in this example)
  • BD IDs (10 and 20)
  • VNI IDs (5010 and 5020)

Procedure

  1. Configure a routing protocol.

    Assign an IP address to each interface on Device1, Device2, and Device3 according to Figure 1-1119.

    # Configure Device1.
    <HUAWEI> system-view
    [~HUAWEI] sysname Device1
    [*HUAWEI] commit
    [~Device1] interface loopback 1
    [*Device1-LoopBack1] ip address 2.2.2.2 32
    [*Device1-LoopBack1] quit
    [*Device1] interface gigabitethernet 0/1/1
    [*Device1-GigabitEthernet0/1/1] ip address 192.168.1.1 24
    [*Device1-GigabitEthernet0/1/1] quit
    [*Device1] ospf
    [*Device1-ospf-1] area 0
    [*Device1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
    [*Device1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
    [*Device1-ospf-1-area-0.0.0.0] quit
    [*Device1-ospf-1] quit
    [*Device1] commit

    The configurations of Device2 and Device3 are similar to the configuration of Device1. For configuration details, see Configuration Files in this section.

    After OSPF is configured, the devices can use OSPF to learn each other's loopback interface addresses and successfully ping each other. The following example shows the command output on Device1 after it pings Device3:
    [~Device1] ping 4.4.4.4
      PING 4.4.4.4: 56  data bytes, press CTRL_C to break
        Reply from 4.4.4.4: bytes=56 Sequence=1 ttl=254 time=5 ms
        Reply from 4.4.4.4: bytes=56 Sequence=2 ttl=254 time=2 ms
        Reply from 4.4.4.4: bytes=56 Sequence=3 ttl=254 time=2 ms
        Reply from 4.4.4.4: bytes=56 Sequence=4 ttl=254 time=3 ms
        Reply from 4.4.4.4: bytes=56 Sequence=5 ttl=254 time=3 ms
    
      --- 4.4.4.4 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 2/3/5 ms

  2. Configure a service access point on Device1 and Device3.

    # Configure Device1.
    [~Device1] bridge-domain 10
    [*Device1-bd10] quit
    [*Device1] interface gigabitethernet0/1/2.1 mode l2
    [*Device1-GigabitEthernet0/1/2.1] encapsulation dot1q vid 10
    [*Device1-GigabitEthernet0/1/2.1] rewrite pop single
    [*Device1-GigabitEthernet0/1/2.1] bridge-domain 10
    [*Device1-GigabitEthernet0/1/2.1] quit
    [*Device1] commit

    The configuration of Device3 is similar to the configuration of Device1. For configuration details, see Configuration Files in this section.

  3. Configure VXLAN tunnels on Device1, Device2, and Device3.

    # Configure Device1.
    [~Device1] bridge-domain 10
    [~Device1-bd10] vxlan vni 5010
    [*Device1-bd10] quit
    [*Device1] interface nve 1
    [*Device1-Nve1] source 2.2.2.2
    [*Device1-Nve1] vni 5010 head-end peer-list 3.3.3.3
    [*Device1-Nve1] quit
    [*Device1] commit
    # Configure Device2.
    [~Device2] bridge-domain 10
    [*Device2-bd10] vxlan vni 5010
    [*Device2-bd10] quit
    [*Device2] interface nve 1
    [*Device2-Nve1] source 3.3.3.3
    [*Device2-Nve1] vni 5010 head-end peer-list 2.2.2.2
    [*Device2-Nve1] quit
    [*Device2] bridge-domain 20
    [*Device2-bd20] vxlan vni 5020
    [*Device2-bd20] quit
    [*Device2] interface nve 1
    [*Device2-Nve1] vni 5020 head-end peer-list 4.4.4.4
    [*Device2-Nve1] quit
    [*Device2] commit
    # Configure Device3.
    [~Device3] bridge-domain 20
    [*Device3-bd20] vxlan vni 5020
    [*Device3-bd20] quit
    [*Device3] interface nve 1
    [*Device3-Nve1] source 4.4.4.4
    [*Device3-Nve1] vni 5020 head-end peer-list 3.3.3.3
    [*Device3-Nve1] quit
    [*Device3] commit
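The vni ... head-end peer-list commands above build an ingress replication list: the ingress VTEP sends a separate unicast VXLAN-encapsulated copy of every broadcast, unknown-unicast, or multicast (BUM) frame to each listed peer. A conceptual Python sketch of this behavior, using Device2's values from this example (the frame payload is made up for illustration):

```python
# Ingress (head-end) replication list, mirroring Device2's Nve1:
#   vni 5010 head-end peer-list 2.2.2.2
#   vni 5020 head-end peer-list 4.4.4.4
replication_list = {
    5010: ["2.2.2.2"],   # BD10 peers
    5020: ["4.4.4.4"],   # BD20 peers
}


def replicate_bum(vni, frame, source_vtep="3.3.3.3"):
    """Return one (source, destination, vni, frame) tuple per peer VTEP,
    i.e. one unicast-encapsulated copy of the BUM frame per list entry."""
    return [(source_vtep, peer, vni, frame)
            for peer in replication_list.get(vni, [])]


copies = replicate_bum(5010, b"broadcast-arp-request")
print(copies)  # one copy, unicast toward peer 2.2.2.2
```

With only one peer per VNI here the list holds a single entry, but on a larger network each additional peer in the list receives its own unicast copy.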

  4. Configure Device2 as a VXLAN Layer 3 gateway.

    [~Device2] interface vbdif 10
    [*Device2-Vbdif10] ip address 192.168.10.10 24
    [*Device2-Vbdif10] quit
    [*Device2] interface vbdif 20
    [*Device2-Vbdif20] ip address 192.168.20.10 24
    [*Device2-Vbdif20] quit
    [*Device2] commit

  5. Verify the configuration.

    After completing the configurations, run the display vxlan vni and display vxlan tunnel commands on Device1, Device2, and Device3 to check the VNI status and VXLAN tunnel information, respectively. The VNIs are Up on Device1, Device2, and Device3. The following example shows the command output on Device2.

    [~Device2] display vxlan vni
    Number of vxlan vni: 2
    VNI            BD-ID            State
    ---------------------------------------
    5010           10               up
    5020           20               up
    [~Device2] display vxlan tunnel
    Number of Vxlan tunnel : 2
    Tunnel ID   Source           Destination        State  Type    Uptime
    ---------------------------------------------------------------------
    4026531841  3.3.3.3          2.2.2.2            up     static 0029h30m
    4026531842  3.3.3.3          4.4.4.4            up     static 0029h44m

    Configure 192.168.10.10/24 as the default gateway IP address of VM1 in VLAN 10 on Server1.

    Configure 192.168.20.10/24 as the default gateway IP address of VM1 in VLAN 20 on Server2.

    After the configuration is complete, the VM1s on different network segments can communicate with each other. In addition, to enable Device1 and Device3 to communicate on the overlay network, configure static routes or an IGP to advertise the routes to 192.168.10.0/24 and 192.168.20.0/24 to each other, with the corresponding VBDIF interface address on Device2 as the next hop.
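The Layer 3 gateway is needed because the two VM1s sit in different /24 subnets: each VM must send off-net traffic to its on-link VBDIF address, and Device2 then routes between Vbdif10 and Vbdif20. A small Python sketch of the subnet relationships in this example (the VM host addresses ending in .1 are assumed for illustration):

```python
import ipaddress

# Assumed VM addresses on the two subnets of this example
vm1_server1 = ipaddress.ip_interface("192.168.10.1/24")
vm1_server2 = ipaddress.ip_interface("192.168.20.1/24")

# VBDIF addresses configured on Device2 (the Layer 3 gateway)
gw10 = ipaddress.ip_address("192.168.10.10")
gw20 = ipaddress.ip_address("192.168.20.10")

# The VMs are on different networks, so traffic between them must be routed.
print(vm1_server1.network != vm1_server2.network)  # True: routing required

# Each gateway address is on-link for its own subnet, so it is a valid
# default gateway for the VMs on that subnet.
print(gw10 in vm1_server1.network)  # True
print(gw20 in vm1_server2.network)  # True
```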

Configuration Files
  • Device1 configuration file

    #
    sysname Device1
    #
    bridge-domain 10
     vxlan vni 5010
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.1.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
    #
    interface GigabitEthernet0/1/2.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #
    interface Nve1
     source 2.2.2.2
     vni 5010 head-end peer-list 3.3.3.3
    #
    ospf 1
     area 0.0.0.0
      network 2.2.2.2 0.0.0.0
      network 192.168.1.0 0.0.0.255
    #
    return
  • Device2 configuration file

    #
    sysname Device2
    #
    bridge-domain 10
     vxlan vni 5010
    #
    bridge-domain 20
     vxlan vni 5020
    #
    interface Vbdif10
     ip address 192.168.10.10 255.255.255.0
    #
    interface Vbdif20
     ip address 192.168.20.10 255.255.255.0
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.1.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 192.168.2.1 255.255.255.0
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #
    interface Nve1
     source 3.3.3.3
     vni 5010 head-end peer-list 2.2.2.2
     vni 5020 head-end peer-list 4.4.4.4
    #
    ospf 1
     area 0.0.0.0
      network 3.3.3.3 0.0.0.0
      network 192.168.1.0 0.0.0.255
      network 192.168.2.0 0.0.0.255
    #
    return
  • Device3 configuration file

    #
    sysname Device3
    #
    bridge-domain 20
     vxlan vni 5020
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.2.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
    #
    interface GigabitEthernet0/1/2.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #
    interface Nve1
     source 4.4.4.4
     vni 5020 head-end peer-list 3.3.3.3
    #
    ospf 1
     area 0.0.0.0
      network 4.4.4.4 0.0.0.0
      network 192.168.2.0 0.0.0.255
    #
    return

Example for Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN

This section provides an example for configuring VXLAN in centralized gateway mode for dynamic tunnel establishment so that users on the same network segment or different network segments can communicate.

Networking Requirements

On the network shown in Figure 1-1120, an enterprise has VMs deployed in different areas of a data center. VM 1 on Server 1 belongs to VLAN 10, VM 1 on Server 2 belongs to VLAN 20, and VM 1 on Server 3 belongs to VLAN 30. Server 1 and Server 2 reside on different network segments, whereas Server 2 and Server 3 reside on the same network segment. To allow the VM 1s on different servers to communicate with each other, configure VXLAN in centralized gateway mode.

Figure 1-1120 Networking for configuring VXLAN in centralized gateway mode for dynamic tunnel establishment

In this example, interfaces 1 through 3 represent GigabitEthernet0/1/1, GigabitEthernet0/1/2, and GigabitEthernet0/1/3, respectively.


Configuration Roadmap
The configuration roadmap is as follows:
  1. Configure a routing protocol on Device 1, Device 2, and Device 3 to allow them to communicate at Layer 3.
  2. Configure a service access point on Device 1 and Device 3 to differentiate service traffic.
  3. Configure a BGP EVPN peer relationship.
  4. Configure EVPN instances.
  5. Configure an ingress replication list.
  6. Configure Device 2 as a Layer 3 VXLAN gateway.

Data Preparation

To complete the configuration, you need the following data:

  • VMs' VLAN IDs (10, 20, and 30)
  • IP addresses of interfaces connecting devices
  • Interior Gateway Protocol (IGP) running between devices (OSPF in this example)
  • BD IDs (10 and 20)
  • VNI IDs (5010 and 5020)
  • EVPN instances' RDs (11:1, 12:1, 21:1, 23:1, and 31:2) and RTs (1:1 and 2:2)

Procedure

  1. Configure a routing protocol.

    Assign an IP address to each interface on Device 1, Device 2, and Device 3 according to Figure 1-1120.

    # Configure Device 1.
    <HUAWEI> system-view
    [~HUAWEI] sysname Device1
    [*HUAWEI] commit
    [~Device1] interface loopback 1
    [*Device1-LoopBack1] ip address 2.2.2.2 32
    [*Device1-LoopBack1] quit
    [*Device1] interface gigabitethernet 0/1/1
    [*Device1-GigabitEthernet0/1/1] ip address 192.168.1.1 24
    [*Device1-GigabitEthernet0/1/1] quit
    [*Device1] ospf
    [*Device1-ospf-1] area 0
    [*Device1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
    [*Device1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
    [*Device1-ospf-1-area-0.0.0.0] quit
    [*Device1-ospf-1] quit
    [*Device1] commit

    The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.

    After OSPF is configured, the devices can use OSPF to learn the IP addresses of each other's loopback interfaces and successfully ping each other. The following example shows the command output on Device 1 after it pings Device 3:
    [~Device1] ping 4.4.4.4
      PING 4.4.4.4: 56  data bytes, press CTRL_C to break
        Reply from 4.4.4.4: bytes=56 Sequence=1 ttl=254 time=5 ms
        Reply from 4.4.4.4: bytes=56 Sequence=2 ttl=254 time=2 ms
        Reply from 4.4.4.4: bytes=56 Sequence=3 ttl=254 time=2 ms
        Reply from 4.4.4.4: bytes=56 Sequence=4 ttl=254 time=3 ms
        Reply from 4.4.4.4: bytes=56 Sequence=5 ttl=254 time=3 ms
    
      --- 4.4.4.4 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 2/3/5 ms

  2. Configure a service access point on Device 1 and Device 3.

    # Configure Device 1.
    [~Device1] bridge-domain 10
    [*Device1-bd10] quit
    [*Device1] interface gigabitethernet0/1/2.1 mode l2
    [*Device1-GigabitEthernet0/1/2.1] encapsulation dot1q vid 10
    [*Device1-GigabitEthernet0/1/2.1] rewrite pop single
    [*Device1-GigabitEthernet0/1/2.1] bridge-domain 10
    [*Device1-GigabitEthernet0/1/2.1] quit
    [*Device1] bridge-domain 20
    [*Device1-bd20] quit
    [*Device1] interface gigabitethernet0/1/3.1 mode l2
    [*Device1-GigabitEthernet0/1/3.1] encapsulation dot1q vid 30
    [*Device1-GigabitEthernet0/1/3.1] rewrite pop single
    [*Device1-GigabitEthernet0/1/3.1] bridge-domain 20
    [*Device1-GigabitEthernet0/1/3.1] quit
    [*Device1] commit

    The configuration of Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.

  3. Configure a BGP EVPN peer relationship.

    # Configure Device 1.

    [~Device1] bgp 100
    [*Device1-bgp] peer 3.3.3.3 as-number 100
    [*Device1-bgp] peer 3.3.3.3 connect-interface LoopBack1
    [*Device1-bgp] peer 4.4.4.4 as-number 100
    [*Device1-bgp] peer 4.4.4.4 connect-interface LoopBack1
    [*Device1-bgp] l2vpn-family evpn
    [*Device1-bgp-af-evpn] peer 3.3.3.3 enable
    [*Device1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
    [*Device1-bgp-af-evpn] peer 4.4.4.4 enable
    [*Device1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
    [*Device1-bgp-af-evpn] quit
    [*Device1-bgp] quit
    [*Device1] commit

    The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.

  4. Configure an EVPN instance on Device 1, Device 2, and Device 3.

    # Configure Device 1.
    [~Device1] evpn vpn-instance evrf3 bd-mode
    [*Device1-evpn-instance-evrf3] route-distinguisher 11:1
    [*Device1-evpn-instance-evrf3] vpn-target 1:1
    [*Device1-evpn-instance-evrf3] quit
    [*Device1] bridge-domain 10
    [*Device1-bd10] vxlan vni 5010 split-horizon-mode
    [*Device1-bd10] evpn binding vpn-instance evrf3
    [*Device1-bd10] quit
    [*Device1] evpn vpn-instance evrf4 bd-mode
    [*Device1-evpn-instance-evrf4] route-distinguisher 12:1
    [*Device1-evpn-instance-evrf4] vpn-target 2:2
    [*Device1-evpn-instance-evrf4] quit
    [*Device1] bridge-domain 20
    [*Device1-bd20] vxlan vni 5020 split-horizon-mode
    [*Device1-bd20] evpn binding vpn-instance evrf4
    [*Device1-bd20] quit
    [*Device1] commit

    The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.

  5. Configure an ingress replication list.

    # Configure Device 1.
    [~Device1] interface nve 1
    [*Device1-Nve1] source 2.2.2.2
    [*Device1-Nve1] vni 5010 head-end peer-list protocol bgp
    [*Device1-Nve1] vni 5020 head-end peer-list protocol bgp
    [*Device1-Nve1] quit
    [*Device1] commit

    The configuration of Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.

  6. Configure Device 2 as a Layer 3 VXLAN gateway.

    [~Device2] interface vbdif 10
    [*Device2-Vbdif10] ip address 192.168.10.10 24
    [*Device2-Vbdif10] quit
    [*Device2] interface vbdif 20
    [*Device2-Vbdif20] ip address 192.168.20.10 24
    [*Device2-Vbdif20] quit
    [*Device2-Vbdif20] commit

  7. Verify the configuration.

    After completing the configurations, run the display vxlan tunnel and display vxlan vni commands on Device 1, Device 2, and Device 3 to check the VXLAN tunnel and VNI information, respectively. The VNIs are Up. The following example shows the command output on Device 1.

    [~Device1] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source           Destination      State  Type     Uptime
    -------------------------------------------------------------------
    4026531843  2.2.2.2          4.4.4.4          up     dynamic  0035h21m
    4026531844  2.2.2.2          3.3.3.3          up     dynamic  0036h10m
    [~Device1] display vxlan vni
    Number of vxlan vni : 2
    VNI            BD-ID            State
    ---------------------------------------
    5010           10               up
    5020           20               up

    Run the display bgp evpn all routing-table command to check EVPN route information.

    [~Device1] display bgp evpn all routing-table
     Local AS number : 100
    
     BGP Local router ID is 192.168.1.1
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
    
     EVPN address family:
     Number of Inclusive Multicast Routes: 5
     Route Distinguisher: 11:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    0:32:2.2.2.2                                           0.0.0.0
     Route Distinguisher: 12:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    0:32:2.2.2.2                                           0.0.0.0
     Route Distinguisher: 21:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>i   0:32:3.3.3.3                                           3.3.3.3
     Route Distinguisher: 23:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>i   0:32:3.3.3.3                                           3.3.3.3
     Route Distinguisher: 31:2
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>i   0:32:4.4.4.4                                           4.4.4.4

    The VM 1s on different servers can now communicate. For example, you can ping VM 1 on Server 1 from the Layer 3 gateway Device 2.

    [~Device2] ping 192.168.10.1
      PING 192.168.10.1: 56  data bytes, press CTRL_C to break
        Reply from 192.168.10.1: bytes=56 Sequence=1 ttl=254 time=15 ms
        Reply from 192.168.10.1: bytes=56 Sequence=2 ttl=254 time=5 ms
        Reply from 192.168.10.1: bytes=56 Sequence=3 ttl=254 time=5 ms
        Reply from 192.168.10.1: bytes=56 Sequence=4 ttl=254 time=10 ms
        Reply from 192.168.10.1: bytes=56 Sequence=5 ttl=254 time=10 ms
    
      --- 192.168.10.1 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 5/10/15 ms
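Whether a device imports one of the EVPN routes shown above is decided by vpn-target (route-target) matching: a route is imported into an EVPN instance when it carries at least one RT that appears in the instance's import RT list, while the RD keeps routes from different instances distinct. A conceptual Python sketch using this example's RDs and RTs:

```python
# Routes received by Device 1, keyed by RD, with the export RTs they
# carry (values taken from this example's configuration files).
received_routes = {
    "21:1": {"1:1"},   # Device 2, EVPN instance evrf3
    "23:1": {"2:2"},   # Device 2, EVPN instance evrf4
    "31:2": {"2:2"},   # Device 3, EVPN instance evrf4
}

# Import RT lists of Device 1's local EVPN instances.
import_rts = {"evrf3": {"1:1"}, "evrf4": {"2:2"}}


def imported(instance):
    """Return the RDs of received routes that the instance imports,
    i.e. routes sharing at least one RT with the instance's import list."""
    rts = import_rts[instance]
    return sorted(rd for rd, route_rts in received_routes.items()
                  if rts & route_rts)


print(imported("evrf3"))  # only the route from Device 2's evrf3
print(imported("evrf4"))  # the evrf4 routes from Device 2 and Device 3
```

This is why BD10 state is exchanged only between Device 1 and Device 2, while BD20 state is shared among all three devices.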

Configuration Files
  • Device 1 configuration file

    #
    sysname Device1
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 11:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 5010 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 12:1
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    bridge-domain 20
     vxlan vni 5020 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.1.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 30
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #
    interface Nve1
     source 2.2.2.2
     vni 5010 head-end peer-list protocol bgp
     vni 5020 head-end peer-list protocol bgp
    #
    bgp 100
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      peer 3.3.3.3 enable
      peer 4.4.4.4 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 2.2.2.2 0.0.0.0
      network 192.168.1.0 0.0.0.255
    #
    return
  • Device 2 configuration file

    #
    sysname Device2
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 21:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 5010 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 23:1
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    bridge-domain 20
     vxlan vni 5020 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip address 192.168.10.10 255.255.255.0
    #
    interface Vbdif20
     ip address 192.168.20.10 255.255.255.0
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.1.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 192.168.2.1 255.255.255.0
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #
    interface Nve1
     source 3.3.3.3
     vni 5010 head-end peer-list protocol bgp
     vni 5020 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      peer 2.2.2.2 enable
      peer 4.4.4.4 enable
     #
     l2vpn-family evpn
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 3.3.3.3 0.0.0.0
      network 192.168.1.0 0.0.0.255
      network 192.168.2.0 0.0.0.255
    #
    return
  • Device 3 configuration file

    #
    sysname Device3
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 31:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity 
    #
    bridge-domain 20
     vxlan vni 5020 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.2.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #
    interface Nve1
     source 4.4.4.4
     vni 5020 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      peer 2.2.2.2 enable
      peer 3.3.3.3 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 4.4.4.4 0.0.0.0
      network 192.168.2.0 0.0.0.255
    #
    return

Example for Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN

This section provides an example for configuring VXLAN in distributed gateway mode using BGP EVPN.

Networking Requirements

Distributed VXLAN gateways can be configured to address problems that occur in legacy centralized VXLAN gateway networking, such as sub-optimal forwarding paths and the ARP entry capacity of the centralized gateway becoming a bottleneck.

On the network shown in Figure 1-1121, an enterprise has VMs deployed in different data centers. VM 1 on Server 1 belongs to VLAN 10, and VM 1 on Server 2 belongs to VLAN 20. VM 1 on Server 1 and VM 1 on Server 2 reside on different network segments. To allow VM1s in different data centers to communicate with each other, configure distributed VXLAN gateways.

Figure 1-1121 Networking for configuring VXLAN in distributed gateway mode using BGP EVPN

In this example, Interface1 and Interface2 represent GE 0/1/0 and GE 0/1/1, respectively.


Table 1-483 Interface IP addresses

Device      Interface     IP Address
----------  ------------  ----------------
Device 1    GE 0/1/0      192.168.3.2/24
            GE 0/1/1      192.168.2.2/24
            Loopback 0    1.1.1.1/32
Device 2    GE 0/1/0      192.168.2.1/24
            Loopback 0    2.2.2.2/32
Device 3    GE 0/1/0      192.168.3.1/24
            Loopback 0    3.3.3.3/32

Configuration Roadmap
The configuration roadmap is as follows:
  1. Configure an IGP to run between Device 1 and Device 2 and between Device 1 and Device 3.
  2. Configure a service access point on Device 2 and Device 3 to differentiate service traffic.
  3. Specify Device 1 as a BGP EVPN peer for Device 2 and Device 3.
  4. Specify Device 2 and Device 3 as BGP EVPN peers for Device 1 and configure Device 2 and Device 3 as RR clients.
  5. Configure VPN and EVPN instances on Device 2 and Device 3.
  6. Configure an ingress replication list on Device 2 and Device 3.
  7. Configure Device 2 and Device 3 as Layer 3 VXLAN gateways.
  8. Configure IRB route advertisement on Device 1, Device 2, and Device 3.
Data Preparation

To complete the configuration, you need the following data:

  • VMs' VLAN IDs (10 and 20)
  • IP addresses of interfaces connecting devices
  • BD IDs (10 and 20)
  • VNI IDs (10 and 20)
  • VNI ID in VPN instance (5010)

Procedure

  1. Configure an IGP.

    Assign an IP address to each interface on Device 1, Device 2, and Device 3 according to Figure 1-1121.

    # Configure Device 1.

    <HUAWEI> system-view
    [~HUAWEI] sysname Device1
    [*HUAWEI] commit
    [~Device1] isis 1
    [*Device1-isis-1] network-entity 10.0000.0000.0001.00
    [*Device1-isis-1] quit
    [*Device1] commit
    [~Device1] interface loopback 0
    [*Device1-LoopBack0] ip address 1.1.1.1 32
    [*Device1-LoopBack0] isis enable 1
    [*Device1-LoopBack0] quit
    [*Device1] interface GigabitEthernet0/1/0
    [*Device1-GigabitEthernet0/1/0] ip address 192.168.3.2 24
    [*Device1-GigabitEthernet0/1/0] isis enable 1
    [*Device1-GigabitEthernet0/1/0] quit
    [*Device1] interface GigabitEthernet0/1/1
    [*Device1-GigabitEthernet0/1/1] ip address 192.168.2.2 24
    [*Device1-GigabitEthernet0/1/1] isis enable 1
    [*Device1-GigabitEthernet0/1/1] quit
    [*Device1] commit

    The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.

  2. Configure a service access point on Device 2 and Device 3.

    # Configure Device 2.

    [~Device2] bridge-domain 10
    [*Device2-bd10] quit
    [*Device2] interface GigabitEthernet0/1/1.1 mode l2
    [*Device2-GigabitEthernet0/1/1.1] encapsulation dot1q vid 10
    [*Device2-GigabitEthernet0/1/1.1] rewrite pop single
    [*Device2-GigabitEthernet0/1/1.1] bridge-domain 10
    [*Device2-GigabitEthernet0/1/1.1] quit
    [*Device2] commit

    The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.

  3. Specify Device 1 as a BGP EVPN peer for Device 2 and Device 3.

    # Specify Device 1 as a BGP EVPN peer for Device 2.
    [~Device2] bgp 100
    [*Device2-bgp] peer 1.1.1.1 as-number 100
    [*Device2-bgp] peer 1.1.1.1 connect-interface LoopBack0
    [*Device2-bgp] l2vpn-family evpn
    [*Device2-bgp-af-evpn] policy vpn-target
    [*Device2-bgp-af-evpn] peer 1.1.1.1 enable
    [*Device2-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
    [*Device2-bgp-af-evpn] quit
    [*Device2-bgp] quit
    [*Device2] commit

    The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
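The per-leaf BGP EVPN peering stanza above differs only in the peer address, so when many leaf nodes must be configured it can be generated from a template. A minimal Python sketch (the function name and defaults are illustrative, not part of the device CLI):

```python
def evpn_peer_config(rr_address: str, local_as: int = 100) -> list[str]:
    """Render the BGP EVPN peering commands a leaf needs to peer with
    a route reflector, mirroring the manual CLI sequence above."""
    return [
        f"bgp {local_as}",
        f" peer {rr_address} as-number {local_as}",
        f" peer {rr_address} connect-interface LoopBack0",
        " l2vpn-family evpn",
        "  policy vpn-target",
        f"  peer {rr_address} enable",
        f"  peer {rr_address} advertise encap-type vxlan",
    ]

# Generate the stanza for a leaf peering with RR 1.1.1.1 (Device 1).
for line in evpn_peer_config("1.1.1.1"):
    print(line)
```

The same template serves Device 2 and Device 3, since both peer with the same RR loopback.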

  4. Specify Device 2 and Device 3 as BGP EVPN peers for Device 1 and configure them as RR clients.

    # Specify BGP EVPN peers for Device 1.
    [~Device1] bgp 100
    [*Device1-bgp] peer 2.2.2.2 as-number 100
    [*Device1-bgp] peer 2.2.2.2 connect-interface LoopBack0
    [*Device1-bgp] peer 3.3.3.3 as-number 100
    [*Device1-bgp] peer 3.3.3.3 connect-interface LoopBack0
    [*Device1-bgp] l2vpn-family evpn
    [*Device1-bgp-af-evpn] peer 2.2.2.2 enable
    [*Device1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*Device1-bgp-af-evpn] peer 2.2.2.2 reflect-client
    [*Device1-bgp-af-evpn] peer 3.3.3.3 enable
    [*Device1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
    [*Device1-bgp-af-evpn] peer 3.3.3.3 reflect-client
    [*Device1-bgp-af-evpn] undo policy vpn-target
    [*Device1-bgp-af-evpn] quit
    [*Device1-bgp] quit
    [*Device1] commit

  5. Configure VPN and EVPN instances on Device 2 and Device 3.

    # Configure Device 2.

    [~Device2] ip vpn-instance vpn1
    [*Device2-vpn-instance-vpn1] vxlan vni 5010
    [*Device2-vpn-instance-vpn1] ipv4-family
    [*Device2-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
    [*Device2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
    [*Device2-vpn-instance-vpn1-af-ipv4] quit
    [*Device2-vpn-instance-vpn1] quit
    [*Device2] evpn vpn-instance evrf3 bd-mode
    [*Device2-evpn-instance-evrf3] route-distinguisher 10:1
    [*Device2-evpn-instance-evrf3] vpn-target 11:1
    [*Device2-evpn-instance-evrf3] quit
    [*Device2] bridge-domain 10
    [*Device2-bd10] vxlan vni 10 split-horizon-mode
    [*Device2-bd10] evpn binding vpn-instance evrf3
    [*Device2-bd10] quit
    [*Device2] commit

    The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
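Route exchange between the instances configured above is driven by route-target matching: a route carrying export RT 11:1 is imported by any instance whose import RT list contains 11:1, which is why Device 2 and Device 3 both use RT 11:1. A much-simplified Python model of this matching rule (purely illustrative):

```python
def imports_route(route_export_rts: set[str], import_rts: set[str]) -> bool:
    """An instance imports a route if at least one of the route's
    export RTs appears in the instance's import RT list."""
    return bool(route_export_rts & import_rts)

# evrf3 on Device 2 and Device 3 both use RT 11:1, so routes exported
# by one instance are imported by the other.
device3_route_rts = {"11:1"}
device2_import_rts = {"11:1"}
assert imports_route(device3_route_rts, device2_import_rts)
# A route carrying only an unrelated RT is not imported.
assert not imports_route({"22:2"}, device2_import_rts)
```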

  6. Configure an ingress replication list on Device 2 and Device 3.

    # Configure an ingress replication list on Device 2.
    [~Device2] interface nve 1
    [*Device2-Nve1] source 2.2.2.2
    [*Device2-Nve1] vni 10 head-end peer-list protocol bgp
    [*Device2-Nve1] quit
    [*Device2] commit

    The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.

  7. Configure Device 2 and Device 3 as Layer 3 VXLAN gateways.

    # Configure Device 2.

    [~Device2] interface Vbdif10
    [*Device2-Vbdif10] ip binding vpn-instance vpn1
    [*Device2-Vbdif10] ip address 10.1.1.1 255.255.255.0
    [*Device2-Vbdif10] vxlan anycast-gateway enable
    [*Device2-Vbdif10] arp collect host enable
    [*Device2-Vbdif10] quit
    [*Device2] commit

    The configuration of Device 3 is similar to the configuration of Device 2. Note that the IP addresses of VBDIF interfaces on Device 2 and Device 3 must belong to different network segments. For configuration details, see Configuration Files in this section.
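The requirement that the two VBDIF interfaces use different network segments (10.1.1.0/24 on Device 2 and 10.2.1.0/24 on Device 3 in this example) can be checked mechanically. A small sketch using Python's ipaddress module:

```python
import ipaddress

def distinct_segments(cidrs: list[str]) -> bool:
    """Return True if no two of the given interface subnets overlap."""
    nets = [ipaddress.ip_network(c, strict=False) for c in cidrs]
    return all(
        not a.overlaps(b)
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
    )

# VBDIF addresses from this example: Device 2 vs. Device 3.
assert distinct_segments(["10.1.1.1/24", "10.2.1.1/24"])
# Two gateways on the same /24 would violate the requirement.
assert not distinct_segments(["10.1.1.1/24", "10.1.1.9/24"])
```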

  8. Configure IRB route advertisement on Device 1, Device 2, and Device 3.

    # Configure Device 1.

    [~Device1] bgp 100
    [~Device1-bgp] l2vpn-family evpn
    [~Device1-bgp-af-evpn] peer 2.2.2.2 advertise irb
    [*Device1-bgp-af-evpn] peer 3.3.3.3 advertise irb
    [*Device1-bgp-af-evpn] quit
    [*Device1-bgp] quit
    [*Device1] commit

    # Configure Device 2.

    [~Device2] bgp 100
    [~Device2-bgp] l2vpn-family evpn
    [~Device2-bgp-af-evpn] peer 1.1.1.1 advertise irb
    [*Device2-bgp-af-evpn] quit
    [*Device2-bgp] quit
    [*Device2] commit

    The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.

  9. Verify the configuration.

    After completing the configurations, run the display vxlan tunnel command on Device 2 and Device 3 to check VXLAN tunnel information. The following example uses the command output on Device 2.

    [*Device2] display vxlan tunnel
    Number of vxlan tunnel : 1
    Tunnel ID   Source           Destination      State  Type     Uptime
    --------------------------------------------------------------------
    4026531841  2.2.2.2          3.3.3.3          up     dynamic  0026h29m
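For automated verification, the tunnel table can be parsed instead of read by eye. A hedged sketch that extracts tunnel entries from display vxlan tunnel output (the column layout is assumed to match the sample above):

```python
def parse_vxlan_tunnels(output: str) -> list[dict]:
    """Parse 'display vxlan tunnel' output into one dict per tunnel.
    Data lines are those whose first field is a numeric tunnel ID."""
    tunnels = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 6 and fields[0].isdigit():
            tunnels.append(dict(zip(
                ["id", "source", "destination", "state", "type", "uptime"],
                fields)))
    return tunnels

sample = """Number of vxlan tunnel : 1
Tunnel ID   Source           Destination      State  Type     Uptime
--------------------------------------------------------------------
4026531841  2.2.2.2          3.3.3.3          up     dynamic  0026h29m"""

tunnels = parse_vxlan_tunnels(sample)
assert len(tunnels) == 1
assert tunnels[0]["state"] == "up" and tunnels[0]["destination"] == "3.3.3.3"
```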

    Run the display bgp evpn all routing-table command to check EVPN route information.

    [*Device2] display bgp evpn all routing-table
     Local AS number : 100
    
     BGP Local router ID is 2.2.2.2
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
    
     EVPN address family:
     Number of Mac Routes: 2
     Route Distinguisher: 10:1
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>    0:48:00e0-fc00-0002:0:0.0.0.0                          0.0.0.0
     Route Distinguisher: 20:1
           Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)  NextHop
     *>i   0:48:00e0-fc00-0003:0:0.0.0.0                          3.3.3.3
    
     EVPN address family:
     Number of Inclusive Multicast Routes: 2
     Route Distinguisher: 10:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    0:32:2.2.2.2                                           0.0.0.0
     Route Distinguisher: 20:1
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>i   0:32:3.3.3.3                                           3.3.3.3
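The Network column of a MAC route in the output above concatenates its fields as EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr. A sketch that splits an IPv4-host entry into named fields (IPv6 host addresses would need different handling, since they also contain colons):

```python
def parse_mac_route(network: str) -> dict:
    """Split an EVPN MAC route Network field of the form
    EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr (IPv4 host part)."""
    eth_tag, mac_len, mac, ip_len, ip = network.split(":", 4)
    return {
        "eth_tag": int(eth_tag),
        "mac_len": int(mac_len),
        "mac": mac,
        "ip_len": int(ip_len),
        "ip": ip,
    }

# A MAC route from the sample output above.
route = parse_mac_route("0:48:00e0-fc00-0002:0:0.0.0.0")
assert route["mac"] == "00e0-fc00-0002" and route["mac_len"] == 48
```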

    VM1s on different servers can communicate. You can ping VM1 of Server 2 from the distributed gateway Device 2.

    [~Device2] ping -vpn-instance vpn1 10.2.1.10 
      PING 10.2.1.10: 300  data bytes, press CTRL_C to break
        Reply from 10.2.1.10: bytes=300 Sequence=1 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=2 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=3 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=4 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=5 ttl=254 time=30 ms
    
      --- 10.2.1.10 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 30/30/30 ms
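When connectivity checks like the one above are scripted, the statistics block can be evaluated programmatically. A sketch that extracts the loss percentage from VRP-style ping output (the regular expression assumes the "% packet loss" wording shown above):

```python
import re

def ping_loss_percent(output: str) -> float:
    """Extract the packet-loss percentage from ping statistics output."""
    match = re.search(r"([\d.]+)% packet loss", output)
    if match is None:
        raise ValueError("no packet-loss line found")
    return float(match.group(1))

sample = """--- 10.2.1.10 ping statistics ---
  5 packet(s) transmitted
  5 packet(s) received
  0.00% packet loss
  round-trip min/avg/max = 30/30/30 ms"""

assert ping_loss_percent(sample) == 0.0  # all five replies received
```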

Configuration Files
  • Device 1 configuration file

    #
    sysname Device1
    #
    isis 1
     network-entity 10.0000.0000.0001.00
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 192.168.3.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 192.168.2.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack0
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack0
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack0
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
      peer 2.2.2.2 advertise irb
      peer 2.2.2.2 reflect-client
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
      peer 3.3.3.3 advertise irb
      peer 3.3.3.3 reflect-client
    #
    return
  • Device 2 configuration file

    #
    sysname Device2
    #
    isis 1
     network-entity 10.0000.0000.0002.00
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 192.168.2.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/1.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack0
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 2.2.2.2
     vni 10 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack0
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise encap-type vxlan
      peer 1.1.1.1 advertise irb
    #
    return
  • Device 3 configuration file

    #
    sysname Device3
    #
    isis 1
     network-entity 10.0000.0000.0003.00
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 22:22
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 20:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ip address 192.168.3.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/1.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack0
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     vni 20 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack0
     #
     l2vpn-family evpn
      policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise encap-type vxlan
      peer 1.1.1.1 advertise irb
    #
    return

Example for Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN

This section provides an example for deploying IPv6 VXLAN in distributed gateway mode using BGP EVPN.

Networking Requirements

In IPv6 VXLAN, distributed gateways can be configured to address problems that occur in centralized gateway networking, such as sub-optimal forwarding paths and ARP/ND entry capacity bottlenecks on Layer 3 gateways.

As shown in Figure 1-1122, an enterprise deploys IPv4 VMs in different areas of an IPv6 DC. IPv4 VM1 on Server1 belongs to VLAN 10, and IPv4 VM1 on Server2 belongs to VLAN 20. The two VMs reside on different network segments. IPv6 VXLAN in distributed gateway mode is required for communication between IPv4 VM1s on different servers.

Figure 1-1122 Configuring IPv6 VXLAN in distributed gateway mode

Interface1 and Interface2 in this example represent GigabitEthernet 0/1/0 and GigabitEthernet 0/1/1, respectively.


Table 1-484 Interface IP addresses and masks

Device     Interface               IP Address and Mask
Device1    GigabitEthernet 0/1/0   2001:DB8:3::2/64
           GigabitEthernet 0/1/1   2001:DB8:2::2/64
           LoopBack0               2001:DB8:11::1/128
Device2    GigabitEthernet 0/1/0   2001:DB8:2::1/64
           LoopBack0               2001:DB8:22::2/128
Device3    GigabitEthernet 0/1/0   2001:DB8:3::1/64
           LoopBack0               2001:DB8:33::3/128

Configuration Roadmap
The configuration roadmap is as follows:
  1. Configure OSPFv3 to run between Device1 and Device2 and between Device1 and Device3.
  2. Configure a service access point on Device2 and Device3 to differentiate service traffic.
  3. Configure Device2 and Device3 to establish BGP EVPN peer relationships with Device1.
  4. Configure Device1 to establish BGP EVPN peer relationships with Device2 and Device3. Then, configure Device1 as the RR.
  5. Configure a VPN instance and an EVPN instance on Device2 and Device3.
  6. Enable ingress replication on Device2 and Device3.
  7. Configure an IPv6 VXLAN Layer 3 gateway on Device2 and Device3, and configure an IPv4 address for the gateway interface.
  8. Configure BGP to advertise IRB routes between Device1 and Device2 and between Device1 and Device3.
Data Preparation

To complete the configuration, you need the following data:

  • Router IDs of Device1, Device2, and Device3 used by OSPFv3 (1.1.1.1, 2.2.2.2, and 3.3.3.3)
  • VM1 VLAN IDs (10 and 20)
  • IPv6 addresses of interconnection interfaces between network devices and IPv4 addresses of the VBDIF interfaces that function as Layer 3 gateway interfaces
  • BD IDs (10 and 20)
  • VNI IDs (10 and 20)
  • VNI ID in the VPN instance (100)
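Each VNI prepared above is ultimately carried in the 24-bit VNI field of the VXLAN header. As a reminder of the encapsulation format, a sketch that packs the 8-byte VXLAN header defined in RFC 7348 (the devices do this in hardware; this is purely illustrative):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    I bit set (0x08), 3 reserved bytes, a 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Shift the VNI into the upper 3 bytes of the final 32-bit word.
    return struct.pack("!B3xI", 0x08, vni << 8)

# The L3 VNI used by the VPN instance in this example.
hdr = vxlan_header(100)
assert len(hdr) == 8
assert int.from_bytes(hdr[4:7], "big") == 100
```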

Procedure

  1. Assign an IPv6 address to each interface.

    Assign an IPv6 address to each interface on Device1, Device2, and Device3 according to Figure 1-1122. For configuration details, see Configuration Files in this section.

  2. Configure OSPFv3.

    # Configure Device1.

    <HUAWEI> system-view
    [~HUAWEI] sysname Device1
    [*HUAWEI] commit
    [~Device1] ospfv3 1
    [*Device1-ospfv3-1] router-id 1.1.1.1
    [*Device1-ospfv3-1] area 0.0.0.0
    [*Device1-ospfv3-1-area-0.0.0.0] quit
    [*Device1-ospfv3-1] quit
    [*Device1] commit
    [~Device1] interface loopback 0
    [*Device1-LoopBack0] ospfv3 1 area 0.0.0.0
    [*Device1-LoopBack0] quit
    [*Device1] interface GigabitEthernet0/1/0
    [*Device1-GigabitEthernet0/1/0] ospfv3 1 area 0.0.0.0
    [*Device1-GigabitEthernet0/1/0] quit
    [*Device1] interface GigabitEthernet0/1/1
    [*Device1-GigabitEthernet0/1/1] ospfv3 1 area 0.0.0.0
    [*Device1-GigabitEthernet0/1/1] quit
    [*Device1] commit

    The configuration of Device2 and Device3 is similar to the configuration of Device1. For configuration details, see Configuration Files in this section.

  3. Configure a service access point on Device2 and Device3.

    # Configure Device2.

    [~Device2] bridge-domain 10
    [*Device2-bd10] quit
    [*Device2] interface GigabitEthernet0/1/1.1 mode l2
    [*Device2-GigabitEthernet0/1/1.1] encapsulation dot1q vid 10
    [*Device2-GigabitEthernet0/1/1.1] rewrite pop single
    [*Device2-GigabitEthernet0/1/1.1] bridge-domain 10
    [*Device2-GigabitEthernet0/1/1.1] quit
    [*Device2] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  4. Configure Device2 and Device3 to establish BGP EVPN peer relationships with Device1.

    # Configure a BGP EVPN peer relationship on Device2.

    [~Device2] bgp 100
    [*Device2-bgp] peer 2001:DB8:11::1 as-number 100
    [*Device2-bgp] peer 2001:DB8:11::1 connect-interface LoopBack0
    [*Device2-bgp] l2vpn-family evpn
    [*Device2-bgp-af-evpn] policy vpn-target
    [*Device2-bgp-af-evpn] peer 2001:DB8:11::1 enable
    [*Device2-bgp-af-evpn] peer 2001:DB8:11::1 advertise encap-type vxlan
    [*Device2-bgp-af-evpn] quit
    [*Device2-bgp] quit
    [*Device2] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  5. Configure Device1 to establish BGP EVPN peer relationships with Device2 and Device3. Then configure Device1 as the RR and Device2 and Device3 as the RR clients.

    # Configure Device1.

    [~Device1] bgp 100
    [*Device1-bgp] peer 2001:DB8:22::2 as-number 100
    [*Device1-bgp] peer 2001:DB8:22::2 connect-interface LoopBack0
    [*Device1-bgp] peer 2001:DB8:33::3 as-number 100
    [*Device1-bgp] peer 2001:DB8:33::3 connect-interface LoopBack0
    [*Device1-bgp] l2vpn-family evpn
    [*Device1-bgp-af-evpn] peer 2001:DB8:22::2 enable
    [*Device1-bgp-af-evpn] peer 2001:DB8:22::2 advertise encap-type vxlan
    [*Device1-bgp-af-evpn] peer 2001:DB8:22::2 reflect-client
    [*Device1-bgp-af-evpn] peer 2001:DB8:33::3 enable
    [*Device1-bgp-af-evpn] peer 2001:DB8:33::3 advertise encap-type vxlan
    [*Device1-bgp-af-evpn] peer 2001:DB8:33::3 reflect-client
    [*Device1-bgp-af-evpn] undo policy vpn-target
    [*Device1-bgp-af-evpn] quit
    [*Device1-bgp] quit
    [*Device1] commit

  6. Configure a VPN instance and an EVPN instance on Device2 and Device3.

    # Configure Device2.

    [~Device2] ip vpn-instance vpn1
    [*Device2-vpn-instance-vpn1] vxlan vni 100
    [*Device2-vpn-instance-vpn1] ipv4-family
    [*Device2-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
    [*Device2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
    [*Device2-vpn-instance-vpn1-af-ipv4] quit
    [*Device2-vpn-instance-vpn1] quit
    [*Device2] evpn vpn-instance evrf1 bd-mode
    [*Device2-evpn-instance-evrf1] route-distinguisher 10:1
    [*Device2-evpn-instance-evrf1] vpn-target 11:1
    [*Device2-evpn-instance-evrf1] quit
    [*Device2] bridge-domain 10
    [*Device2-bd10] vxlan vni 10 split-horizon-mode
    [*Device2-bd10] evpn binding vpn-instance evrf1
    [*Device2-bd10] quit
    [*Device2] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  7. Enable ingress replication on Device2 and Device3.

    # Enable ingress replication on Device2.

    [~Device2] interface nve 1
    [*Device2-Nve1] source 2001:DB8:22::2
    [*Device2-Nve1] vni 10 head-end peer-list protocol bgp
    [*Device2-Nve1] quit
    [*Device2] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  8. Configure Device2 and Device3 as Layer 3 VXLAN gateways.

    # Configure Device2.

    [~Device2] interface Vbdif10
    [*Device2-Vbdif10] ip binding vpn-instance vpn1
    [*Device2-Vbdif10] ip address 10.1.1.1 255.255.255.0
    [*Device2-Vbdif10] vxlan anycast-gateway enable
    [*Device2-Vbdif10] arp collect host enable
    [*Device2-Vbdif10] quit
    [*Device2] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  9. Configure BGP to advertise IRB routes between Device1 and Device2 and between Device1 and Device3.

    # Configure Device1.

    [~Device1] bgp 100
    [~Device1-bgp] l2vpn-family evpn
    [~Device1-bgp-af-evpn] peer 2001:DB8:22::2 advertise irb
    [*Device1-bgp-af-evpn] peer 2001:DB8:33::3 advertise irb
    [*Device1-bgp-af-evpn] quit
    [*Device1-bgp] quit
    [*Device1] commit

    # Configure Device2.

    [~Device2] bgp 100
    [~Device2-bgp] l2vpn-family evpn
    [~Device2-bgp-af-evpn] peer 2001:DB8:11::1 advertise irb
    [*Device2-bgp-af-evpn] quit
    [*Device2-bgp] quit
    [*Device2] commit

    Repeat these steps for Device3. For configuration details, see Configuration Files in this section.

  10. Verify the configuration.

    After completing the configurations, run the display vxlan tunnel command on Device2 and Device3 to check VXLAN tunnel information. The following example uses the command output on Device2.

    [*Device2] display vxlan tunnel
    Number of vxlan tunnel : 1
    Tunnel ID   Source           Destination      State  Type     Uptime
    --------------------------------------------------------------------
    4026531879  2001:DB8:22::2   2001:DB8:33::3   up     dynamic  00:44:18

    VM1s on different servers can communicate. VM1 on Server2 can be pinged from the distributed gateway Device2.

    [~Device2] ping -vpn-instance vpn1 10.2.1.10 
      PING 10.2.1.10: 300  data bytes, press CTRL_C to break
        Reply from 10.2.1.10: bytes=300 Sequence=1 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=2 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=3 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=4 ttl=254 time=30 ms
        Reply from 10.2.1.10: bytes=300 Sequence=5 ttl=254 time=30 ms
    
      --- 10.2.1.10 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 30/30/30 ms

Configuration Files
  • Device1 configuration file

    #
    sysname Device1
    #
    ospfv3 1
     router-id 1.1.1.1
     area 0.0.0.0
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8:3::2/64
     ospfv3 1 area 0.0.0.0
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8:2::2/64
     ospfv3 1 area 0.0.0.0
    #
    interface LoopBack0
     ipv6 enable
     ipv6 address 2001:DB8:11::1/128
     ospfv3 1 area 0.0.0.0
    #
    bgp 100
     peer 2001:DB8:22::2 as-number 100
     peer 2001:DB8:22::2 connect-interface LoopBack0
     peer 2001:DB8:33::3 as-number 100
     peer 2001:DB8:33::3 connect-interface LoopBack0
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2001:DB8:22::2 enable
      peer 2001:DB8:22::2 advertise encap-type vxlan
      peer 2001:DB8:22::2 advertise irb
      peer 2001:DB8:22::2 reflect-client
      peer 2001:DB8:33::3 enable
      peer 2001:DB8:33::3 advertise encap-type vxlan
      peer 2001:DB8:33::3 advertise irb
      peer 2001:DB8:33::3 reflect-client
    #
    return
  • Device2 configuration file

    #
    sysname Device2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 100
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    ospfv3 1
     router-id 2.2.2.2
     area 0.0.0.0
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8:2::1/64
     ospfv3 1 area 0.0.0.0
    #
    interface GigabitEthernet0/1/1.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack0
     ipv6 enable
     ipv6 address 2001:DB8:22::2/128
     ospfv3 1 area 0.0.0.0
    #
    interface Nve1
     source 2001:DB8:22::2
     vni 10 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2001:DB8:11::1 as-number 100
     peer 2001:DB8:11::1 connect-interface LoopBack0
     #
     l2vpn-family evpn
      policy vpn-target
      peer 2001:DB8:11::1 enable
      peer 2001:DB8:11::1 advertise encap-type vxlan
      peer 2001:DB8:11::1 advertise irb
    #
    return
  • Device3 configuration file

    #
    sysname Device3
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 20:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 22:22
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 100
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    ospfv3 1
     router-id 3.3.3.3
     area 0.0.0.0
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/0
     undo shutdown
     ipv6 enable
     ipv6 address 2001:DB8:3::1/64
     ospfv3 1 area 0.0.0.0
    #
    interface GigabitEthernet0/1/1.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack0
     ipv6 enable
     ipv6 address 2001:DB8:33::3/128
     ospfv3 1 area 0.0.0.0
    #
    interface Nve1
     source 2001:DB8:33::3
     vni 20 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2001:DB8:11::1 as-number 100
     peer 2001:DB8:11::1 connect-interface LoopBack0
     #
     l2vpn-family evpn
      policy vpn-target
      peer 2001:DB8:11::1 enable
      peer 2001:DB8:11::1 advertise encap-type vxlan
      peer 2001:DB8:11::1 advertise irb
    #
    return

Example for Configuring Three-Segment VXLAN to Implement Layer 3 Interworking

This section provides an example for configuring three-segment VXLAN to enable Layer 3 communication between VMs that belong to different DCs.

Networking Requirements

In Figure 1-1123, DC-A and DC-B reside in different BGP ASs. To allow intra-DC VM communication (VMa1 and VMa2 in DC-A, and VMb1 and VMb2 in DC-B), configure BGP EVPN on the devices in the DCs to create VXLAN tunnels between distributed gateways. To allow VMs in different DCs (for example, VMa1 and VMb2) to communicate with each other, configure BGP EVPN on Leaf2 and Leaf3 to create another VXLAN tunnel. In this way, three-segment VXLAN tunnels are established to implement DC interconnection (DCI).

Figure 1-1123 Three-segment VXLAN

Interface1, Interface2, and Interface3 in this example stand for GE 0/1/0, GE 0/2/0, and GE 0/3/0, respectively.



Table 1-485 Interface IP addresses

Device     Interface   IP Address
Device1    GE 0/1/0    192.168.50.1/24
           GE 0/2/0    192.168.1.1/24
           Loopback1   1.1.1.1/32
Device2    GE 0/1/0    192.168.60.1/24
           GE 0/2/0    192.168.1.2/24
           Loopback1   2.2.2.2/32
Spine1     GE 0/1/0    192.168.10.1/24
           GE 0/2/0    192.168.20.1/24
           Loopback1   3.3.3.3/32
Spine2     GE 0/1/0    192.168.30.1/24
           GE 0/2/0    192.168.40.1/24
           Loopback1   4.4.4.4/32
Leaf1      GE 0/1/0    192.168.10.2/24
           GE 0/2/0    -
           Loopback1   5.5.5.5/32
Leaf2      GE 0/1/0    192.168.20.2/24
           GE 0/2/0    -
           GE 0/3/0    192.168.50.2/24
           Loopback1   6.6.6.6/32
Leaf3      GE 0/1/0    192.168.30.2/24
           GE 0/2/0    -
           GE 0/3/0    192.168.60.2/24
           Loopback1   7.7.7.7/32
Leaf4      GE 0/1/0    192.168.40.2/24
           GE 0/2/0    -
           Loopback1   8.8.8.8/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address to each interface.

  2. Configure an IGP to ensure route reachability between nodes.

  3. Configure static routes to achieve interworking between DCs.

  4. Configure BGP EVPN on Leaf1 and Leaf2 in DC-A and Leaf3 and Leaf4 in DC-B to create VXLAN tunnels between distributed gateways.

  5. Configure BGP EVPN on DC edge nodes Leaf2 and Leaf3 to create a VXLAN tunnel between DCs.

Data Preparation

To complete the configuration, you need the following data:

  • VLAN IDs of the VMs

  • BD IDs

  • VXLAN network identifiers (VNIs) in BDs and VNIs in VPN instances

Procedure

  1. Assign an IP address to each interface (including each loopback interface) on each node.

    For configuration details, see Configuration Files in this section.

  2. Configure an IGP. In this example, OSPF is used.

    For configuration details, see Configuration Files in this section.

  3. Configure static routes to achieve interworking between DCs.

    For configuration details, see Configuration Files in this section.

  4. Configure BGP EVPN on Leaf1 and Leaf2 in DC-A and Leaf3 and Leaf4 in DC-B to create VXLAN tunnels between distributed gateways.
    1. Configure a service access point on leaf nodes.

      # Configure Leaf1.

      [~Leaf1] bridge-domain 10
      [*Leaf1-bd10] quit
      [*Leaf1] interface GigabitEthernet 0/2/0.1 mode l2
      [*Leaf1-GigabitEthernet0/2/0.1] encapsulation dot1q vid 10
      [*Leaf1-GigabitEthernet0/2/0.1] rewrite pop single
      [*Leaf1-GigabitEthernet0/2/0.1] bridge-domain 10
      [*Leaf1-GigabitEthernet0/2/0.1] quit
      [*Leaf1] commit

      The configurations of Leaf2, Leaf3, and Leaf4 are similar to that of Leaf1. For configuration details, see Configuration Files in this section.

    2. Configure an IBGP EVPN peer relationship between Leaf1 and Leaf2 in DC-A and between Leaf3 and Leaf4 in DC-B.

      # Configure Leaf1.

      [~Leaf1] bgp 100
      [*Leaf1-bgp] peer 6.6.6.6 as-number 100
      [*Leaf1-bgp] peer 6.6.6.6 connect-interface LoopBack 1
      [*Leaf1-bgp] l2vpn-family evpn
      [*Leaf1-bgp-af-evpn] peer 6.6.6.6 enable
      [*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise encap-type vxlan
      [*Leaf1-bgp-af-evpn] quit
      [*Leaf1-bgp] quit
      [*Leaf1] commit

      The configurations of Leaf2, Leaf3, and Leaf4 are similar to that of Leaf1. For configuration details, see Configuration Files in this section.

    3. Configure VPN instances and EVPN instances on leaf nodes.

      # Configure Leaf1.

      [~Leaf1] ip vpn-instance vpn1
      [*Leaf1-vpn-instance-vpn1] vxlan vni 5010
      [*Leaf1-vpn-instance-vpn1] ipv4-family
      [*Leaf1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
      [*Leaf1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1
      [*Leaf1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
      [*Leaf1-vpn-instance-vpn1-af-ipv4] quit
      [*Leaf1-vpn-instance-vpn1] quit
      [*Leaf1] evpn vpn-instance evrf1 bd-mode
      [*Leaf1-evpn-instance-evrf1] route-distinguisher 10:1
      [*Leaf1-evpn-instance-evrf1] vpn-target 11:1
      [*Leaf1-evpn-instance-evrf1] quit
      [*Leaf1] bridge-domain 10
      [*Leaf1-bd10] vxlan vni 10 split-horizon-mode
      [*Leaf1-bd10] evpn binding vpn-instance evrf1
      [*Leaf1-bd10] quit
      [*Leaf1] commit

      The configurations of Leaf2, Leaf3, and Leaf4 are similar to that of Leaf1. For configuration details, see Configuration Files in this section.

    4. Configure an ingress replication list on leaf nodes.

      # Configure Leaf1.

      [~Leaf1] interface nve 1
      [*Leaf1-Nve1] source 5.5.5.5
      [*Leaf1-Nve1] vni 10 head-end peer-list protocol bgp
      [*Leaf1-Nve1] quit
      [*Leaf1] commit

      The configurations of Leaf2, Leaf3, and Leaf4 are similar to that of Leaf1. For configuration details, see Configuration Files in this section.

    5. Configure leaf nodes as Layer 3 VXLAN gateways.

      # Configure Leaf1.

      [~Leaf1] interface vbdif10
      [*Leaf1-Vbdif10] ip binding vpn-instance vpn1
      [*Leaf1-Vbdif10] ip address 10.1.1.1 24
      [*Leaf1-Vbdif10] vxlan anycast-gateway enable
      [*Leaf1-Vbdif10] arp collect host enable
      [*Leaf1-Vbdif10] quit
      [*Leaf1] commit

      The configurations of Leaf2, Leaf3, and Leaf4 are similar to that of Leaf1. For configuration details, see Configuration Files in this section.

    6. Configure IRB route advertisement between Leaf1 and Leaf2 in DC-A, between Leaf3 and Leaf4 in DC-B, and between Leaf2 and Leaf3.

      # Configure Leaf1.

      [~Leaf1] bgp 100
      [*Leaf1-bgp] l2vpn-family evpn
      [*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise irb
      [*Leaf1-bgp-af-evpn] quit
      [*Leaf1-bgp] quit
      [*Leaf1] commit

      # Configure Leaf2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 5.5.5.5 advertise irb
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise irb
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      The configuration of Leaf4 is similar to that of Leaf1, and the configuration of Leaf3 is similar to that of Leaf2. For configuration details, see Configuration Files in this section.

      After the configurations are complete, run the display vxlan tunnel command on leaf nodes to check VXLAN tunnel information. The following example uses the command output on Leaf1.
      [~Leaf1] display vxlan tunnel
      Number of vxlan tunnel : 1
      Tunnel ID   Source           Destination      State  Type    Uptime
      ---------------------------------------------------------------------
      4026531841  5.5.5.5          6.6.6.6          up     dynamic 00:05:36

  5. Configure BGP EVPN on Leaf2 and Leaf3 to create a VXLAN tunnel.
    1. Configure an EBGP EVPN peer relationship between Leaf2 and Leaf3.

      As VPN and EVPN instances have been configured on Leaf2 and Leaf3, you only need to configure an EBGP EVPN peer relationship between Leaf2 and Leaf3 to ensure IP route reachability.

      # Configure Leaf2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] peer 7.7.7.7 as-number 200
      [*Leaf2-bgp] peer 7.7.7.7 connect-interface LoopBack1
      [*Leaf2-bgp] peer 7.7.7.7 ebgp-max-hop 255
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 enable
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise encap-type vxlan
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      # Configure Leaf3.

      [~Leaf3] bgp 200
      [*Leaf3-bgp] peer 6.6.6.6 as-number 100
      [*Leaf3-bgp] peer 6.6.6.6 connect-interface LoopBack1
      [*Leaf3-bgp] peer 6.6.6.6 ebgp-max-hop 255
      [*Leaf3-bgp] l2vpn-family evpn
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 enable
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 advertise encap-type vxlan
      [*Leaf3-bgp-af-evpn] quit
      [*Leaf3-bgp] quit
      [*Leaf3] commit

    2. Configure re-origination of IRB routes and IP prefix routes in the EVPN address family.

      # Configure Leaf2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 5.5.5.5 import reoriginate
      [*Leaf2-bgp-af-evpn] peer 5.5.5.5 advertise route-reoriginated evpn ip
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 import reoriginate
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise route-reoriginated evpn ip
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      # Configure Leaf3.

      [~Leaf3] bgp 200
      [*Leaf3-bgp] l2vpn-family evpn
      [*Leaf3-bgp-af-evpn] peer 8.8.8.8 import reoriginate
      [*Leaf3-bgp-af-evpn] peer 8.8.8.8 advertise route-reoriginated evpn ip
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 import reoriginate
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 advertise route-reoriginated evpn ip
      [*Leaf3-bgp-af-evpn] quit
      [*Leaf3-bgp] quit
      [*Leaf3] commit
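Functionally, the combination of import reoriginate and advertise route-reoriginated makes a border leaf re-advertise routes learned from one VXLAN segment into the other segment with itself as the next hop, so traffic from the remote DC terminates on the border leaf. The following Python sketch is a simplified, hypothetical model of this behavior; the route fields and function names are illustrative only and do not represent the device's internal implementation.

```python
# Hypothetical model of EVPN route re-origination at a border leaf.
# Field and function names are illustrative only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EvpnRoute:
    prefix: str
    next_hop: str   # VTEP address of the advertising node
    vni: int        # L3VNI carried with the route

def reoriginate(route: EvpnRoute, my_vtep: str, local_vni: int) -> EvpnRoute:
    """Re-advertise a route learned from one VXLAN segment into the other,
    replacing the next hop (and VNI) with the border leaf's own values."""
    return replace(route, next_hop=my_vtep, vni=local_vni)

# Leaf2 (VTEP 6.6.6.6) learns 10.1.1.0/24 from Leaf1 (VTEP 5.5.5.5) ...
learned = EvpnRoute("10.1.1.0/24", next_hop="5.5.5.5", vni=5010)
# ... and re-originates it toward Leaf3, so traffic from DC B terminates on Leaf2.
to_dcb = reoriginate(learned, my_vtep="6.6.6.6", local_vni=5010)
print(to_dcb.next_hop)  # -> 6.6.6.6
```

This is why, in the verification output on Leaf1, routes to DC B subnets (10.30.1.0/24 and 10.40.1.0/24) all point to 6.6.6.6 (Leaf2) rather than to the remote leaf nodes directly.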

  6. Verify the configuration.

    Run the display vxlan tunnel command on leaf nodes to check VXLAN tunnel information. The following example uses the command output on Leaf2. The command output shows that the VXLAN tunnels are Up.

    [~Leaf2] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source           Destination      State  Type    Uptime
    ---------------------------------------------------------------------
    4026531841  6.6.6.6          5.5.5.5          up     dynamic 00:11:01
    4026531842  6.6.6.6          7.7.7.7          up     dynamic 00:12:11

    Run the display ip routing-table vpn-instance vpn1 command on leaf nodes to check IP route information. The following example uses the command output on Leaf1.

    [~Leaf1] display ip routing-table vpn-instance vpn1
     Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpn1
             Destinations : 8        Routes : 8         
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        Vbdif10
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       Vbdif10
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       Vbdif10
          10.20.1.0/24  IBGP    255  0             RD  6.6.6.6         VXLAN
          10.30.1.0/24  IBGP    255  0             RD  6.6.6.6         VXLAN
          10.40.1.0/24  IBGP    255  0             RD  6.6.6.6         VXLAN
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0 

    After the configurations are complete, VMa1 and VMb2 can communicate with each other.

Configuration Files
  • Spine1 configuration file

    #
    sysname Spine1
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.10.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.20.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #               
    ospf 1          
     area 0.0.0.0   
      network 3.3.3.3 0.0.0.0
      network 192.168.10.0 0.0.0.255
      network 192.168.20.0 0.0.0.255
    #               
    return 
  • Leaf1 configuration file

    #
    sysname Leaf1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.10.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 5.5.5.5 255.255.255.255
    #
    interface Nve1
     source 5.5.5.5
     vni 10 head-end peer-list protocol bgp
    #
    bgp 100
     peer 6.6.6.6 as-number 100
     peer 6.6.6.6 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 6.6.6.6 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 6.6.6.6 enable
      peer 6.6.6.6 advertise irb
      peer 6.6.6.6 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 5.5.5.5 0.0.0.0
      network 192.168.10.0 0.0.0.255
    #
    return
  • Leaf2 configuration file

    #
    sysname Leaf2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.20.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.20.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #               
    interface GigabitEthernet0/3/0
     undo shutdown  
     ip address 192.168.50.2 255.255.255.0
    #
    interface LoopBack1
     ip address 6.6.6.6 255.255.255.255
    #
    interface Nve1
     source 6.6.6.6
     vni 20 head-end peer-list protocol bgp
    #
    bgp 100
     peer 5.5.5.5 as-number 100
     peer 5.5.5.5 connect-interface LoopBack1
     peer 7.7.7.7 as-number 200
     peer 7.7.7.7 ebgp-max-hop 255
     peer 7.7.7.7 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 5.5.5.5 enable
      peer 7.7.7.7 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 5.5.5.5 enable
      peer 5.5.5.5 advertise irb
      peer 5.5.5.5 advertise encap-type vxlan
      peer 5.5.5.5 import reoriginate
      peer 5.5.5.5 advertise route-reoriginated evpn ip
      peer 7.7.7.7 enable
      peer 7.7.7.7 advertise irb
      peer 7.7.7.7 advertise encap-type vxlan
      peer 7.7.7.7 import reoriginate
      peer 7.7.7.7 advertise route-reoriginated evpn ip
    #
    ospf 1
     area 0.0.0.0
      network 6.6.6.6 0.0.0.0
      network 192.168.20.0 0.0.0.255
    #
    ip route-static 7.7.7.7 255.255.255.255 192.168.50.1
    ip route-static 192.168.1.0 255.255.255.0 192.168.50.1
    ip route-static 192.168.60.0 255.255.255.0 192.168.50.1
    #
    return
  • Spine2 configuration file

    #
    sysname Spine2
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.30.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.40.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #               
    ospf 1          
     area 0.0.0.0   
      network 4.4.4.4 0.0.0.0
      network 192.168.30.0 0.0.0.255
      network 192.168.40.0 0.0.0.255
    #               
    return
  • Leaf3 configuration file

    #
    sysname Leaf3
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.30.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.30.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface GigabitEthernet0/3/0
     undo shutdown  
     ip address 192.168.60.2 255.255.255.0
    #
    interface LoopBack1
     ip address 7.7.7.7 255.255.255.255
    #
    interface Nve1
     source 7.7.7.7
     vni 10 head-end peer-list protocol bgp
    #
    bgp 200
     peer 6.6.6.6 as-number 100
     peer 6.6.6.6 ebgp-max-hop 255
     peer 6.6.6.6 connect-interface LoopBack1
     peer 8.8.8.8 as-number 200
     peer 8.8.8.8 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 6.6.6.6 enable
      peer 8.8.8.8 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 6.6.6.6 enable
      peer 6.6.6.6 advertise irb
      peer 6.6.6.6 advertise encap-type vxlan
      peer 6.6.6.6 import reoriginate
      peer 6.6.6.6 advertise route-reoriginated evpn ip
      peer 8.8.8.8 enable
      peer 8.8.8.8 advertise irb
      peer 8.8.8.8 advertise encap-type vxlan
      peer 8.8.8.8 import reoriginate
      peer 8.8.8.8 advertise route-reoriginated evpn ip
    #
    ospf 1
     area 0.0.0.0
      network 7.7.7.7 0.0.0.0
      network 192.168.30.0 0.0.0.255
    #
    ip route-static 6.6.6.6 255.255.255.255 192.168.60.1
    ip route-static 192.168.1.0 255.255.255.0 192.168.60.1
    ip route-static 192.168.50.0 255.255.255.0 192.168.60.1
    #
    return
  • Leaf4 configuration file

    #
    sysname Leaf4
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.40.1.1 255.255.255.0
     arp collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.40.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack1
     ip address 8.8.8.8 255.255.255.255
    #
    interface Nve1
     source 8.8.8.8
     vni 20 head-end peer-list protocol bgp
    #               
    bgp 200
     peer 7.7.7.7 as-number 200
     peer 7.7.7.7 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 7.7.7.7 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 7.7.7.7 enable
      peer 7.7.7.7 advertise irb
      peer 7.7.7.7 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 8.8.8.8 0.0.0.0
      network 192.168.40.0 0.0.0.255
    #
    return
  • Device1 configuration file

    #
    sysname Device1
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.50.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.1.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
    #
    ip route-static 6.6.6.6 255.255.255.255 192.168.50.2
    ip route-static 7.7.7.7 255.255.255.255 192.168.1.2
    ip route-static 192.168.60.0 255.255.255.0 192.168.1.2
    #               
    return 
  • Device2 configuration file

    #
    sysname Device2
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.60.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.1.2 255.255.255.0
    #               
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #
    ip route-static 6.6.6.6 255.255.255.255 192.168.1.1
    ip route-static 7.7.7.7 255.255.255.255 192.168.60.2
    ip route-static 192.168.50.0 255.255.255.0 192.168.1.1
    #               
    return 

Example for Configuring Three-Segment VXLAN to Implement Layer 2 Interworking

This section provides an example for configuring three-segment VXLAN tunnels to enable Layer 2 communication between VMs that belong to different DCs.

Networking Requirements

On the network shown in Figure 1-1124, BGP EVPN is configured within DC A and DC B to establish VXLAN tunnels. BGP EVPN is also configured on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them. To enable communication between VM 1 and VM 2, implement Layer 2 communication between DC A and DC B. In this example, the VXLAN tunnel in DC A uses VNI 10, and that in DC B uses VNI 20. VNI conversion must be implemented before a VXLAN tunnel can be established between Leaf 2 and Leaf 3.
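Because the two DCs use different VNIs (10 and 20), the interconnection leaf nodes must rewrite the 24-bit VNI field in the VXLAN header when stitching the tunnel segments. As a rough illustration of what VNI conversion operates on (not the device's actual implementation), the following sketch builds a VXLAN header as defined in RFC 7348 and rewrites its VNI field:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    return struct.pack("!BBBB", VXLAN_FLAGS, 0, 0, 0) + vni.to_bytes(3, "big") + b"\x00"

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from bytes 4-6 of the header."""
    return int.from_bytes(header[4:7], "big")

def rewrite_vni(header: bytes, new_vni: int) -> bytes:
    """Model the VNI conversion done when stitching VXLAN segments."""
    return header[:4] + new_vni.to_bytes(3, "big") + header[7:8]

hdr = build_vxlan_header(10)   # segment in DC A uses VNI 10
hdr = rewrite_vni(hdr, 20)     # interconnection leaf maps it to VNI 20 for DC B
print(parse_vni(hdr))          # -> 20
```

The 24-bit width of the VNI field is also why VXLAN supports about 16 million segments, far more than the 4096 VLANs available with 802.1Q.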

Figure 1-1124 Configuring three-segment VXLAN to implement Layer 2 interworking

Interfaces 1 and 2 in this example represent GE 0/1/0 and GE 0/2/0, respectively.


Table 1-486 Interface IP addresses

Device     Interface    IP Address        Device     Interface    IP Address

Spine 1    GE 0/1/0     192.168.10.1/24   Spine 2    GE 0/1/0     192.168.30.1/24
           GE 0/2/0     192.168.20.1/24              GE 0/2/0     192.168.40.1/24

Leaf 1     GE 0/1/0     192.168.10.2/24   Leaf 4     GE 0/1/0     192.168.40.2/24
           GE 0/2/0     -                            GE 0/2/0     -
           Loopback 1   1.1.1.1/32                   Loopback 1   4.4.4.4/32

Leaf 2     GE 0/1/0     192.168.20.2/24   Leaf 3     GE 0/1/0     192.168.30.2/24
           GE 0/2/0     192.168.50.1/24              GE 0/2/0     192.168.50.2/24
           Loopback 1   2.2.2.2/32                   Loopback 1   3.3.3.3/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Assign an IP address to each interface.

  2. Configure an IGP to allow devices to communicate with each other.

  3. Configure static routes to achieve interworking between DCs.

  4. Configure BGP EVPN within DC A and DC B to establish VXLAN tunnels.

  5. Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.

  6. Configure Leaf 2 and Leaf 3 to advertise routes re-originated in the EVPN address family to BGP EVPN peers.

Data Preparation

To complete the configuration, you need the following data:

  • VLAN IDs of the VMs

  • BD IDs

  • VNIs associated with BDs within DC A and DC B

  • Numbers of the ASs to which DC A and DC B belong

  • Name of the SHG to which Leaf 2 and Leaf 3 belong

Procedure

  1. Assign an IP address to each interface (including the loopback interface) on each node.

    For configuration details, see "Configuration Files" in this section.

  2. Configure an IGP. In this example, OSPF is used.

    For configuration details, see "Configuration Files" in this section.

  3. Configure static routes to achieve interworking between DCs.

    For configuration details, see "Configuration Files" in this section.

  4. Configure BGP EVPN within DC A and DC B to create VXLAN tunnels.
    1. Configure service access points on Leaf 1 and Leaf 4.

      # Configure Leaf 1.

      [~Leaf1] bridge-domain 10
      [*Leaf1-bd10] quit
      [*Leaf1] interface GE 0/2/0.1 mode l2
      [*Leaf1-GE0/2/0.1] encapsulation dot1q vid 10
      [*Leaf1-GE0/2/0.1] rewrite pop single
      [*Leaf1-GE0/2/0.1] bridge-domain 10
      [*Leaf1-GE0/2/0.1] quit
      [*Leaf1] commit

      Repeat these steps for Leaf 4. For configuration details, see "Configuration Files" in this section.

    2. Configure BGP EVPN peer relationships between Leaf 1 and Leaf 2 in DC A and between Leaf 3 and Leaf 4 in DC B.

      # Configure a BGP EVPN peer relationship on Leaf 1.

      [~Leaf1] bgp 100
      [*Leaf1-bgp] peer 2.2.2.2 as-number 100
      [*Leaf1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
      [*Leaf1-bgp] l2vpn-family evpn
      [*Leaf1-bgp-af-evpn] peer 2.2.2.2 enable
      [*Leaf1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
      [*Leaf1-bgp-af-evpn] quit
      [*Leaf1-bgp] quit
      [*Leaf1] commit

      Repeat these steps for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see "Configuration Files" in this section.

    3. Configure an EVPN instance on each leaf node.

      # Configure Leaf 1.

      [~Leaf1] evpn vpn-instance evrf1 bd-mode
      [*Leaf1-evpn-instance-evrf1] route-distinguisher 10:1
      [*Leaf1-evpn-instance-evrf1] vpn-target 11:1
      [*Leaf1-evpn-instance-evrf1] quit
      [*Leaf1] bridge-domain 10
      [*Leaf1-bd10] vxlan vni 10 split-horizon-mode
      [*Leaf1-bd10] evpn binding vpn-instance evrf1
      [*Leaf1-bd10] quit
      [*Leaf1] commit

      Repeat these steps for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see "Configuration Files" in this section.

    4. Configure an ingress replication list on each leaf node.

      # Configure Leaf 1.

      [~Leaf1] interface nve 1
      [*Leaf1-Nve1] source 1.1.1.1
      [*Leaf1-Nve1] vni 10 head-end peer-list protocol bgp
      [*Leaf1-Nve1] quit
      [*Leaf1] commit

      Repeat these steps for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see "Configuration Files" in this section.

  5. Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.
    1. Configure a BGP EVPN peer relationship.

      # Configure Leaf 2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] peer 3.3.3.3 as-number 200
      [*Leaf2-bgp] peer 3.3.3.3 connect-interface LoopBack 1
      [*Leaf2-bgp] peer 3.3.3.3 ebgp-max-hop 255
      [*Leaf2-bgp] network 2.2.2.2 32
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 3.3.3.3 enable
      [*Leaf2-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      # Configure Leaf 3.

      [~Leaf3] bgp 200
      [*Leaf3-bgp] peer 2.2.2.2 as-number 100
      [*Leaf3-bgp] peer 2.2.2.2 connect-interface LoopBack 1
      [*Leaf3-bgp] peer 2.2.2.2 ebgp-max-hop 255
      [*Leaf3-bgp] network 3.3.3.3 32
      [*Leaf3-bgp] l2vpn-family evpn
      [*Leaf3-bgp-af-evpn] peer 2.2.2.2 enable
      [*Leaf3-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
      [*Leaf3-bgp-af-evpn] quit
      [*Leaf3-bgp] quit
      [*Leaf3] commit

  6. Configure Leaf 2 and Leaf 3 to advertise routes re-originated in the EVPN address family to BGP EVPN peers.
    1. Configure an SHG to which the BGP EVPN peers belong.

      # Configure Leaf 2.

      [~Leaf2] bgp 100
      [~Leaf2-bgp] l2vpn-family evpn
      [~Leaf2-bgp-af-evpn] peer 3.3.3.3 split-group sg1
      [*Leaf2-bgp-af-evpn] commit

      # Configure Leaf 3.

      [~Leaf3] bgp 200
      [~Leaf3-bgp] l2vpn-family evpn
      [~Leaf3-bgp-af-evpn] peer 2.2.2.2 split-group sg1
      [*Leaf3-bgp-af-evpn] commit

    2. Enable the function to re-originate MAC routes.

      # Configure Leaf 2.

      [~Leaf2-bgp-af-evpn] peer 1.1.1.1 import reoriginate
      [*Leaf2-bgp-af-evpn] peer 1.1.1.1 advertise route-reoriginated evpn mac
      [*Leaf2-bgp-af-evpn] peer 3.3.3.3 import reoriginate
      [*Leaf2-bgp-af-evpn] peer 3.3.3.3 advertise route-reoriginated evpn mac
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      # Configure Leaf 3.

      [~Leaf3-bgp-af-evpn] peer 4.4.4.4 import reoriginate
      [*Leaf3-bgp-af-evpn] peer 4.4.4.4 advertise route-reoriginated evpn mac
      [*Leaf3-bgp-af-evpn] peer 2.2.2.2 import reoriginate
      [*Leaf3-bgp-af-evpn] peer 2.2.2.2 advertise route-reoriginated evpn mac
      [*Leaf3-bgp-af-evpn] quit
      [*Leaf3-bgp] quit
      [*Leaf3] commit

  7. Verify the configuration.

    Run the display vxlan tunnel command on each leaf node to view information about the VXLAN tunnels. The following example uses the command output on Leaf 2. The command output shows that the VXLAN tunnels are Up.

    [~Leaf2] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source                Destination           State  Type     Uptime
    -----------------------------------------------------------------------------------
    4026531924  2.2.2.2               1.1.1.1               up     dynamic  00:39:19  
    4026531925  2.2.2.2               3.3.3.3               up     dynamic  00:39:09 

    Run the display vxlan peer command on Leaf 2 to view information about the VXLAN peers.

    [~Leaf2] display vxlan peer
    Number of peers : 2
    Vni ID    Source                  Destination            Type      Out Vni ID
    -------------------------------------------------------------------------------
    10        2.2.2.2                 1.1.1.1                dynamic   10         
    10        2.2.2.2                 3.3.3.3                dynamic   20

    After the preceding configurations are complete, Layer 2 communication can be implemented between VM 1 and VM 2.

Configuration Files
  • Spine 1 configuration file

    #
    sysname Spine1
    #
    interface GE0/1/0
     undo shutdown  
     ip address 192.168.10.1 255.255.255.0
    #               
    interface GE0/2/0
     undo shutdown  
     ip address 192.168.20.1 255.255.255.0
    #               
    ospf 1          
     area 0.0.0.0   
      network 192.168.10.0 0.0.0.255
      network 192.168.20.0 0.0.0.255
    #               
    return 
  • Leaf 1 configuration file

    #
    sysname Leaf1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #               
    interface GE0/1/0
     undo shutdown  
     ip address 192.168.10.2 255.255.255.0
    #               
    interface GE0/2/0
     undo shutdown       
    #               
    interface GE0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
    #               
    interface Nve1  
     source 1.1.1.1 
     vni 10 head-end peer-list protocol bgp
    #               
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     #
     ipv4-family unicast
      peer 2.2.2.2 enable
     #              
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
    #               
    ospf 1          
     area 0.0.0.0   
      network 1.1.1.1 0.0.0.0
      network 192.168.10.0 0.0.0.255
    #               
    return
  • Leaf 2 configuration file

    #
    sysname Leaf2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #                         
    interface GE0/1/0
     undo shutdown  
     ip address 192.168.20.2 255.255.255.0
    #               
    interface GE0/2/0
     undo shutdown  
     ip address 192.168.50.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #               
    interface Nve1  
     source 2.2.2.2 
     vni 10 head-end peer-list protocol bgp
    #               
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 3.3.3.3 as-number 200
     peer 3.3.3.3 ebgp-max-hop 255
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      network 2.2.2.2 255.255.255.255
      peer 1.1.1.1 enable
      peer 3.3.3.3 enable
     #              
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise encap-type vxlan
      peer 1.1.1.1 import reoriginate
      peer 1.1.1.1 advertise route-reoriginated evpn mac
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
      peer 3.3.3.3 import reoriginate
      peer 3.3.3.3 advertise route-reoriginated evpn mac
      peer 3.3.3.3 split-group sg1
    #
    ospf 1          
     area 0.0.0.0   
      network 2.2.2.2 0.0.0.0
      network 192.168.20.0 0.0.0.255
    #
    ip route-static 3.3.3.3 255.255.255.255 192.168.50.2
    #               
    return  
  • Spine 2 configuration file

    #
    sysname Spine2
    #
    interface GE0/1/0
     undo shutdown  
     ip address 192.168.30.1 255.255.255.0
    #               
    interface GE0/2/0
     undo shutdown  
     ip address 192.168.40.1 255.255.255.0
    #               
    ospf 1          
     area 0.0.0.0   
      network 192.168.30.0 0.0.0.255
      network 192.168.40.0 0.0.0.255
    #               
    return
  • Leaf 3 configuration file

    #
    sysname Leaf3
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #               
    interface GE0/1/0
     undo shutdown  
     ip address 192.168.30.2 255.255.255.0
    #               
    interface GE0/2/0
     undo shutdown  
     ip address 192.168.50.2 255.255.255.0
    #               
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #               
    interface Nve1  
     source 3.3.3.3 
     vni 20 head-end peer-list protocol bgp
    #               
    bgp 200
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 ebgp-max-hop 255
     peer 2.2.2.2 connect-interface LoopBack1
     peer 4.4.4.4 as-number 200
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      network 3.3.3.3 255.255.255.255
      peer 2.2.2.2 enable
      peer 4.4.4.4 enable
     #              
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
      peer 2.2.2.2 import reoriginate
      peer 2.2.2.2 advertise route-reoriginated evpn mac
      peer 2.2.2.2 split-group sg1
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise encap-type vxlan
      peer 4.4.4.4 import reoriginate
      peer 4.4.4.4 advertise route-reoriginated evpn mac
    #               
    ospf 1          
     area 0.0.0.0   
      network 3.3.3.3 0.0.0.0
      network 192.168.30.0 0.0.0.255
    #
    ip route-static 2.2.2.2 255.255.255.255 192.168.50.1
    #               
    return
  • Leaf 4 configuration file

    #
    sysname Leaf4
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #               
    interface GE0/1/0
     undo shutdown  
     ip address 192.168.40.2 255.255.255.0
    #                              
    interface GE0/2/0
     undo shutdown       
    #               
    interface GE0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #               
    interface Nve1  
     source 4.4.4.4 
     vni 20 head-end peer-list protocol bgp
    #               
    bgp 200
     peer 3.3.3.3 as-number 200
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      peer 3.3.3.3 enable
     #              
     l2vpn-family evpn
      undo policy vpn-target
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
    #               
    ospf 1          
     area 0.0.0.0   
      network 4.4.4.4 0.0.0.0
      network 192.168.40.0 0.0.0.255
    #               
    return

Example for Configuring Static VXLAN in an Active-Active Scenario (Layer 2 Communication)

In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. Carriers can enhance VXLAN access reliability to improve user service stability, enabling rapid convergence if a fault occurs.

Networking Requirements

On the network shown in Figure 1-1125, CE1 is dual-homed to PE1 and PE2 through an Eth-Trunk. PE1 and PE2 use the same virtual address as the source VTEP address of an NVE interface, namely, an anycast VTEP address. In this way, the CPE is aware of only one remote NVE interface and establishes a static VXLAN tunnel with the anycast VTEP address.

The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.
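The forwarding decision described above can be sketched as follows. This is a hypothetical, simplified model of the active-active behavior only: the attachment of the single-homed CEs (CE2 to PE1, CE3 to PE2) is assumed here for illustration, and the function and table names do not correspond to any device feature.

```python
# Hypothetical sketch: a frame from the CPE lands on either PE (both share the
# anycast VTEP). The landing PE delivers locally when the destination CE is
# attached to it; otherwise it relays the frame over the bypass VXLAN tunnel.

LOCAL_CES = {
    "PE1": {"CE1", "CE2"},   # CE1 dual-homed; CE2 assumed single-homed to PE1
    "PE2": {"CE1", "CE3"},   # CE1 dual-homed; CE3 assumed single-homed to PE2
}
BYPASS_PEER = {"PE1": "PE2", "PE2": "PE1"}

def forward(landing_pe: str, dest_ce: str) -> str:
    """Return a description of how the landing PE forwards a frame."""
    if dest_ce in LOCAL_CES[landing_pe]:
        return f"{landing_pe} delivers locally to {dest_ce}"
    peer = BYPASS_PEER[landing_pe]
    return f"{landing_pe} relays over bypass VXLAN tunnel to {peer}"

print(forward("PE1", "CE1"))  # -> PE1 delivers locally to CE1
print(forward("PE1", "CE3"))  # -> PE1 relays over bypass VXLAN tunnel to PE2
```

The bypass tunnel thus only carries traffic whose destination CE is not attached to the landing PE; frames for the dual-homed CE1 are always delivered locally.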

Figure 1-1125 Networking for configuring static VXLAN in an active-active scenario (Layer 2 communication)

In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.



Table 1-487 Interface IP addresses

Device     Interface    IP Address

PE1        GE 0/1/1     10.1.20.1/24
           GE 0/1/2     -
           GE 0/1/3     10.1.1.1/24
           Loopback 1   1.1.1.1/32
           Loopback 2   3.3.3.3/32

PE2        GE 0/1/1     10.1.20.2/24
           GE 0/1/2     -
           GE 0/1/3     10.1.2.1/24
           Loopback 1   2.2.2.2/32
           Loopback 2   3.3.3.3/32

CE1        GE 0/1/1     -
           GE 0/1/2     -

CPE        GE 0/1/1     10.1.1.2/24
           GE 0/1/2     10.1.2.2/24
           GE 0/1/3     -
           Loopback 1   4.4.4.4/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure an IGP on the PEs and CPE to implement network connectivity.
  2. On PE1 and PE2, configure service access points and set the same ESI for the access links of CE1 so that CE1 is dual-homed to PE1 and PE2.
  3. Configure the same anycast VTEP address (a virtual address) on PE1 and PE2 as the NVE interface's source address, and use it to establish static VXLAN tunnels between the PEs and the CPE so that the PEs and CPE can communicate.
  4. Establish a BGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
  5. Configure EVPN instances in BD mode on PE1 and PE2 and bind each BD to the corresponding EVPN instance.
  6. Enable the inter-chassis VXLAN function on PE1 and PE2 and configure a different bypass address on each PE to establish a bypass VXLAN tunnel between them so that PE1 and PE2 can communicate.
  7. (Optional) Configure a UDP port on the PEs to prevent them from receiving duplicate packets.
  8. Configure a BD on PE1 and PE2.
  9. On PE1 and PE2, enable sent routes to carry the VLAN extended community attribute and enable redirection of received routes that carry this attribute.
  10. On PE1 and PE2, enable FRR between local and remote MAC routes so that, if a PE fails, the downstream traffic of the CPE can quickly switch to the other PE.
  11. (Optional) If PE1 and PE2 establish an EBGP peer relationship, configure them not to change the next-hop addresses of routes. This configuration is not required if PE1 and PE2 establish an IBGP peer relationship.
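
The optional UDP port mentioned in the roadmap corresponds to the evpn enhancement port command, which appears only in the configuration files of this section; a minimal sketch on PE1 (the port number 1345 is the value used in those files):

    <PE1> system-view
    [~PE1] evpn enhancement port 1345
    [*PE1] commit
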
Data Preparation

To complete the configuration, you need the following data:

  • Interfaces and their IP addresses

  • Names of VPN and EVPN instances

  • VPN targets of the received and sent routes in VPN and EVPN instances

Procedure

  1. Assign IP addresses to device interfaces, including loopback interfaces.

    For configuration details, see Configuration Files in this section.

  2. Configure an IGP. In this example, IS-IS is used.

    For configuration details, see Configuration Files in this section.
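
    The IS-IS settings are condensed from the PE1 configuration file at the end of this section; the following sketch shows the core commands (IS-IS must also be enabled on each transport interface, as shown in the configuration file):

    <PE1> system-view
    [~PE1] isis 1
    [*PE1-isis-1] network-entity 10.0000.0000.0001.00
    [*PE1-isis-1] frr
    [*PE1-isis-1-frr] quit
    [*PE1-isis-1] quit
    [*PE1] interface loopback 1
    [*PE1-LoopBack1] isis enable 1
    [*PE1-LoopBack1] quit
    [*PE1] commit
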

  3. Enable EVPN capabilities.

    # Configure PE1.

    <PE1> system-view
    [~PE1] evpn
    [*PE1-evpn] vlan-extend private enable
    [*PE1-evpn] vlan-extend redirect enable
    [*PE1-evpn] local-remote frr enable
    [*PE1-evpn] bypass-vxlan enable
    [*PE1-evpn] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  4. Establish an IBGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 2.2.2.2 as-number 100
    [*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*PE1-bgp] ipv4-family unicast
    [*PE1-bgp-af-ipv4] undo synchronization
    [*PE1-bgp-af-ipv4] peer 2.2.2.2 enable
    [*PE1-bgp-af-ipv4] quit
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] undo policy vpn-target
    [*PE1-bgp-af-evpn] peer 2.2.2.2 enable
    [*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  5. Create a VXLAN tunnel.
    1. Configure EVPN instances and bind them to BDs on the PEs.

      # Configure PE1.

      [~PE1] evpn vpn-instance evpn1 bd-mode
      [*PE1-evpn-instance-evpn1] route-distinguisher 11:11
      [*PE1-evpn-instance-evpn1] vpn-target 1:1 export-extcommunity
      [*PE1-evpn-instance-evpn1] vpn-target 1:1 import-extcommunity
      [*PE1-evpn-instance-evpn1] quit
      [*PE1] bridge-domain 10
      [*PE1-bd10] vxlan vni 10 split-horizon-mode
      [*PE1-bd10] evpn binding vpn-instance evpn1
      [*PE1-bd10] quit
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    2. Configure an ingress replication list on each PE and the CPE.

      # Configure the CPE.

      [~CPE] interface nve 1
      [*CPE-Nve1] source 4.4.4.4
      [*CPE-Nve1] vni 10 head-end peer-list 3.3.3.3
      [*CPE-Nve1] quit
      [*CPE] commit

      # Configure PE1.

      [~PE1] interface nve 1
      [*PE1-Nve1] source 3.3.3.3
      [*PE1-Nve1] bypass source 1.1.1.1
      [*PE1-Nve1] mac-address 00e0-fc12-7890
      [*PE1-Nve1] vni 10 head-end peer-list protocol bgp
      [*PE1-Nve1] vni 10 head-end peer-list 4.4.4.4
      [*PE1-Nve1] quit
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  6. Configure PEs to provide access for CEs.

    # Configure PE1.

    [~PE1] e-trunk 1
    [*PE1-e-trunk-1] priority 10
    [*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
    [*PE1-e-trunk-1] quit
    [*PE1] interface eth-trunk 1
    [*PE1-Eth-Trunk1] mac-address 00e0-fc12-3456
    [*PE1-Eth-Trunk1] mode lacp-static
    [*PE1-Eth-Trunk1] e-trunk 1
    [*PE1-Eth-Trunk1] e-trunk mode force-master
    [*PE1-Eth-Trunk1] es track evpn-peer 2.2.2.2
    [*PE1-Eth-Trunk1] esi 0000.0001.0001.0001.0001
    [*PE1-Eth-Trunk1] quit
    [*PE1] interface eth-trunk 1.1 mode l2
    [*PE1-Eth-Trunk1.1] encapsulation dot1q vid 1
    [*PE1-Eth-Trunk1.1] rewrite pop single
    [*PE1-Eth-Trunk1.1] bridge-domain 10
    [*PE1-Eth-Trunk1.1] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  7. Verify the configuration.

    Run the display vxlan tunnel command to view VXLAN tunnel information. The following example uses the command output on PE1.

    [~PE1] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source                Destination           State  Type     Uptime
    -----------------------------------------------------------------------------------
    4026531842  1.1.1.1               2.2.2.2               up     dynamic  00:43:14  
    4026531843  3.3.3.3               4.4.4.4               up     static   00:08:30 

Configuration Files
  • PE1 configuration file

    #
    sysname PE1
    #
    evpn enhancement port 1345
    #
    evpn
     vlan-extend private enable
     vlan-extend redirect enable
     local-remote frr enable
     bypass-vxlan enable
    #
    evpn vpn-instance evpn1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evpn1
    #
    e-trunk 1
     priority 10
     peer-address 2.2.2.2 source-address 1.1.1.1
    #
    isis 1
     network-entity 10.0000.0000.0001.00
     frr
    #
    interface Eth-Trunk1
     mac-address 00e0-fc12-3456
     mode lacp-static
     e-trunk 1
     e-trunk mode force-master
     es track evpn-peer 2.2.2.2
     esi 0000.0001.0001.0001.0001
    #
    interface Eth-Trunk1.1 mode l2
     encapsulation dot1q vid 1
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet 0/1/1
     undo shutdown
     ip address 10.1.20.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet 0/1/3
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 1.1.1.1
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
     vni 10 head-end peer-list 4.4.4.4
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
    #
    return
    
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn enhancement port 1345
    #
    evpn
     vlan-extend redirect enable
     vlan-extend private enable
     local-remote frr enable
     bypass-vxlan enable
    #
    evpn vpn-instance evpn1 bd-mode
     route-distinguisher 22:22
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evpn1
    #
    e-trunk 1
     priority 10
     peer-address 1.1.1.1 source-address 2.2.2.2
    #
    isis 1
     network-entity 10.0000.0000.0002.00
     frr
    #
    interface Eth-Trunk1
     mac-address 00e0-fc12-3456
     mode lacp-static
     e-trunk 1
     e-trunk mode force-master
     es track evpn-peer 1.1.1.1
     esi 0000.0001.0001.0001.0001
    #
    interface Eth-Trunk1.1 mode l2
     encapsulation dot1q vid 1
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet 0/1/1
     undo shutdown
     ip address 10.1.20.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet 0/1/3
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 2.2.2.2
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
     vni 10 head-end peer-list 4.4.4.4
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise encap-type vxlan
    #
    return
    
  • CE1 configuration file

    #
    sysname CE1
    #
    vlan batch 1 to 4094
    #
    interface Eth-Trunk1
     port link-type trunk
     port trunk allow-pass vlan 1
    #
    interface GigabitEthernet 0/1/1
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     eth-trunk 1
    #
    return
    
  • CPE configuration file

    #
    sysname CPE
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
    #
    isis 1
     network-entity 20.0000.0000.0001.00
     frr
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.1.2.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/3
     undo shutdown
     esi 0000.0000.0000.0000.0017
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 4.4.4.4
     vni 10 head-end peer-list 3.3.3.3
    #
    return
    

Example for Configuring Dynamic VXLAN in an Active-Active Scenario (Layer 3 Communication)

In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. Carriers can enhance VXLAN access reliability to stabilize user services and to ensure rapid convergence if a fault occurs.

Networking Requirements

On the network shown in Figure 1-1126, a VXLAN tunnel needs to be dynamically established using BGP EVPN between the CPE and the PE1-PE2 pair. An EVPN peer relationship needs to be established between PE1 and PE2 to deploy a bypass VXLAN tunnel. In addition, a CE needs to be dual-homed to PE1 and PE2, which are configured with the same anycast VTEP address, to implement the active-active function. If one of the PEs fails, this function allows traffic to be quickly switched to the other PE.

Figure 1-1126 Networking for configuring dynamic VXLAN in an active-active scenario

In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.



Table 1-488 Interface IP addresses

Device    Interface     IP Address
PE1       GE 0/1/1      10.1.20.1/24
          GE 0/1/2      -
          GE 0/1/3      10.1.1.1/24
          Loopback 1    1.1.1.1/32
          Loopback 2    3.3.3.3/32
PE2       GE 0/1/1      10.1.20.2/24
          GE 0/1/2      -
          GE 0/1/3      10.1.2.1/24
          Loopback 1    2.2.2.2/32
          Loopback 2    3.3.3.3/32
CE1       GE 0/1/1      -
          GE 0/1/2      -
CPE       GE 0/1/1      10.1.1.2/24
          GE 0/1/2      10.1.2.2/24
          Loopback 1    4.4.4.4/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure a routing protocol on the CPE and each PE for the devices to communicate at Layer 3. In this example, an IGP is configured.
  2. Configure PE1 and PE2 to provide dual-homing access for CE1 through Eth-Trunk Layer 2 sub-interfaces.
  3. Configure EVPN instances on PE1, PE2, and the CPE.
  4. Configure VPN instances on PE1, PE2, and the CPE.
  5. Configure VBDIF interfaces on PE1 and PE2 for Layer 3 access.
  6. Configure the same anycast VTEP address (virtual address) on PE1 and PE2 as the NVE interface's source address, which is used to establish a VXLAN tunnel with the CPE. Create a VXLAN tunnel between the CPE and the PE1-PE2 pair using BGP EVPN, allowing the CPE to communicate with the PEs.
  7. Enable the inter-chassis VXLAN function on PE1 and PE2, configure different bypass addresses for the PEs, establish a BGP EVPN peer relationship, and create a bypass VXLAN tunnel between the PEs, allowing the PEs to communicate with each other.
  8. Enable auto FRR in the BGP VPN address family view on PE1 and PE2. In this way, if a PE fails, the downstream traffic of the CPE can be quickly switched to the other PE and then forwarded to a CE.

Procedure

  1. Assign IP addresses to device interfaces, including loopback interfaces.

    For configuration details, see Configuration Files in this section.

  2. Configure an IGP. In this example, IS-IS is used.

    For configuration details, see Configuration Files in this section.

  3. Configure PE1 and PE2 to provide dual-homing access for CE1 through Eth-Trunk Layer 2 sub-interfaces.

    # Configure PE1.

    <PE1> system-view
    [~PE1] interface eth-trunk 10
    [*PE1-Eth-Trunk10] trunkport gigabitethernet 0/1/2
    [*PE1-Eth-Trunk10] esi 0000.0000.0000.0000.1111
    [*PE1-Eth-Trunk10] quit
    [*PE1] bridge-domain 10
    [*PE1-bd10] vxlan vni 10 split-horizon-mode
    [*PE1-bd10] quit
    [*PE1] interface eth-trunk 10.1 mode l2
    [*PE1-Eth-Trunk10.1] encapsulation dot1q vid 10
    [*PE1-Eth-Trunk10.1] rewrite pop single
    [*PE1-Eth-Trunk10.1] bridge-domain 10
    [*PE1-Eth-Trunk10.1] quit
    [*PE1] commit

    # Configure PE2.

    <PE2> system-view
    [~PE2] interface eth-trunk 10
    [*PE2-Eth-Trunk10] trunkport gigabitethernet 0/1/2
    [*PE2-Eth-Trunk10] esi 0000.0000.0000.0000.1111
    [*PE2-Eth-Trunk10] quit
    [*PE2] bridge-domain 10
    [*PE2-bd10] vxlan vni 10 split-horizon-mode
    [*PE2-bd10] quit
    [*PE2] interface eth-trunk 10.1 mode l2
    [*PE2-Eth-Trunk10.1] encapsulation dot1q vid 10
    [*PE2-Eth-Trunk10.1] rewrite pop single
    [*PE2-Eth-Trunk10.1] bridge-domain 10
    [*PE2-Eth-Trunk10.1] quit
    [*PE2] commit

  4. Enable the inter-chassis VXLAN function.

    # Configure PE1.

    [~PE1] evpn
    [*PE1-evpn] bypass-vxlan enable
    [*PE1-evpn] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  5. Configure EVPN instances.

    # Configure the CPE.

    <CPE> system-view
    [~CPE] evpn vpn-instance evrf1 bd-mode
    [*CPE-evpn-instance-evrf1] route-distinguisher 11:11
    [*CPE-evpn-instance-evrf1] vpn-target 1:1 import-extcommunity
    [*CPE-evpn-instance-evrf1] vpn-target 1:1 export-extcommunity
    [*CPE-evpn-instance-evrf1] quit
    [*CPE] bridge-domain 20
    [*CPE-bd20] vxlan vni 20 split-horizon-mode
    [*CPE-bd20] evpn binding vpn-instance evrf1
    [*CPE-bd20] quit
    [*CPE] commit

    # Configure PE1.

    [~PE1] evpn vpn-instance evrf1 bd-mode
    [*PE1-evpn-instance-evrf1] route-distinguisher 11:11
    [*PE1-evpn-instance-evrf1] vpn-target 1:1 import-extcommunity
    [*PE1-evpn-instance-evrf1] vpn-target 1:1 export-extcommunity
    [*PE1-evpn-instance-evrf1] quit
    [*PE1] bridge-domain 10
    [*PE1-bd10] evpn binding vpn-instance evrf1
    [*PE1-bd10] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  6. Configure VPN instances.

    # Configure the CPE.

    [~CPE] ip vpn-instance vpn1
    [*CPE-vpn-instance-vpn1] vxlan vni 100
    [*CPE-vpn-instance-vpn1] ipv4-family
    [*CPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 1:1
    [*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity
    [*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity
    [*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity evpn
    [*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity evpn
    [*CPE-vpn-instance-vpn1-af-ipv4] quit
    [*CPE-vpn-instance-vpn1] quit
    [*CPE] commit

    # Configure PE1.

    [~PE1] ip vpn-instance vpn1
    [*PE1-vpn-instance-vpn1] vxlan vni 100
    [*PE1-vpn-instance-vpn1] ipv4-family
    [*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 1:1
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity evpn
    [*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity evpn
    [*PE1-vpn-instance-vpn1-af-ipv4] quit
    [*PE1-vpn-instance-vpn1] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  7. Establish BGP and BGP EVPN peer relationships.

    # Configure the CPE.
    [~CPE] bgp 100
    [*CPE-bgp] peer 1.1.1.1 as-number 100
    [*CPE-bgp] peer 1.1.1.1 connect-interface LoopBack 1
    [*CPE-bgp] peer 2.2.2.2 as-number 100
    [*CPE-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*CPE-bgp] ipv4-family vpn-instance vpn1
    [*CPE-bgp-vpn1] advertise l2vpn evpn
    [*CPE-bgp-vpn1] import-route direct
    [*CPE-bgp-vpn1] quit
    [*CPE-bgp] l2vpn-family evpn
    [*CPE-bgp-af-evpn] undo policy vpn-target
    [*CPE-bgp-af-evpn] peer 1.1.1.1 enable
    [*CPE-bgp-af-evpn] peer 1.1.1.1 advertise irb
    [*CPE-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
    [*CPE-bgp-af-evpn] peer 2.2.2.2 enable
    [*CPE-bgp-af-evpn] peer 2.2.2.2 advertise irb
    [*CPE-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*CPE-bgp-af-evpn] quit
    [*CPE-bgp] quit
    [*CPE] commit

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 2.2.2.2 as-number 100
    [*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*PE1-bgp] peer 4.4.4.4 as-number 100
    [*PE1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
    [*PE1-bgp] ipv4-family vpn-instance vpn1
    [*PE1-bgp-vpn1] import-route direct
    [*PE1-bgp-vpn1] auto-frr
    [*PE1-bgp-vpn1] advertise l2vpn evpn
    [*PE1-bgp-vpn1] quit
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] undo policy vpn-target
    [*PE1-bgp-af-evpn] peer 2.2.2.2 enable
    [*PE1-bgp-af-evpn] peer 2.2.2.2 advertise irb
    [*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*PE1-bgp-af-evpn] peer 4.4.4.4 enable
    [*PE1-bgp-af-evpn] peer 4.4.4.4 advertise irb
    [*PE1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  8. Create a VXLAN tunnel between the CPE and the PE1-PE2 pair and a bypass VXLAN tunnel between PE1 and PE2.

    # Configure the CPE.

    [~CPE] interface nve 1
    [*CPE-Nve1] source 4.4.4.4
    [*CPE-Nve1] vni 20 head-end peer-list protocol bgp
    [*CPE-Nve1] quit
    [*CPE] commit

    # Configure PE1.

    [~PE1] interface nve 1
    [*PE1-Nve1] source 3.3.3.3
    [*PE1-Nve1] bypass source 1.1.1.1
    [*PE1-Nve1] mac-address 00e0-fc12-7890
    [*PE1-Nve1] vni 10 head-end peer-list protocol bgp
    [*PE1-Nve1] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  9. Bind VPN instances to VBDIF interfaces.

    # Configure the CPE.

    [~CPE] interface vbdif20
    [*CPE-Vbdif20] ip binding vpn-instance vpn1
    [*CPE-Vbdif20] ip address 10.1.30.1 24
    [*CPE-Vbdif20] arp collect host enable
    [*CPE-Vbdif20] vxlan anycast-gateway enable
    [*CPE-Vbdif20] quit
    [*CPE] commit

    # Configure PE1.

    [~PE1] interface vbdif10
    [*PE1-Vbdif10] ip binding vpn-instance vpn1
    [*PE1-Vbdif10] ip address 10.1.10.1 24
    [*PE1-Vbdif10] arp collect host enable
    [*PE1-Vbdif10] vxlan anycast-gateway enable
    [*PE1-Vbdif10] mac-address 00e0-fc12-3456
    [*PE1-Vbdif10] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  10. Verify the configuration.

    Run the display vxlan tunnel command on a PE to check VXLAN tunnel information. The following example uses the command output on PE1.

    [~PE1] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source           Destination      State  Type     Uptime
    --------------------------------------------------------------------
    4026531841  1.1.1.1          2.2.2.2          up     dynamic  0033h12m
    4026531842  3.3.3.3          4.4.4.4          up     dynamic  0033h12m

Configuration Files
  • PE1 configuration file

    #
    sysname PE1
    #
    evpn
     bypass-vxlan enable
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 1:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 1:1 import-extcommunity
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
     vxlan vni 100
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    isis 1
     network-entity 10.0000.0000.0010.00
     frr
    #
    interface Eth-Trunk10
     trunkport gigabitethernet 0/1/2
     esi 0000.0000.0000.0000.1111
    #
    interface Eth-Trunk10.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.10.1 255.255.255.0
     mac-address 00e0-fc12-3456
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.20.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     eth-trunk 10
    #
    interface GigabitEthernet0/1/3
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 1.1.1.1
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
      peer 4.4.4.4 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      auto-frr
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise irb
      peer 2.2.2.2 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise irb
      peer 4.4.4.4 advertise encap-type vxlan
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn
     bypass-vxlan enable
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 1:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 1:1 import-extcommunity
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
     vxlan vni 100
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    isis 1
     network-entity 10.0000.0000.0020.00
     frr
    #
    interface Eth-Trunk10
     trunkport gigabitethernet 0/1/2
     esi 0000.0000.0000.0000.1111
    #
    interface Eth-Trunk10.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.10.1 255.255.255.0
     mac-address 00e0-fc12-3456
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.20.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     eth-trunk 10
    #
    interface GigabitEthernet0/1/3
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 2.2.2.2
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 4.4.4.4 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      auto-frr
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise irb
      peer 1.1.1.1 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise irb
      peer 4.4.4.4 advertise encap-type vxlan
    #
    return
  • CE1 configuration file

    # 
    sysname CE1
    #
    vlan batch 10
    #
    interface Eth-Trunk10
     port link-type trunk
     port trunk allow-pass vlan 10
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     eth-trunk 10
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     eth-trunk 10
    #
    return
    
  • CPE configuration file

    #
    sysname CPE
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 1:1
      apply-label per-instance
      vpn-target 1:1 export-extcommunity
      vpn-target 1:1 import-extcommunity
      vpn-target 1:1 export-extcommunity evpn
      vpn-target 1:1 import-extcommunity evpn
     vxlan vni 100
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    isis 1
     network-entity 20.0000.0000.0001.00
     frr
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.1.30.1 255.255.255.0
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.1.2.2 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 4.4.4.4 
     vni 20 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 2.2.2.2 enable
     #
     ipv4-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise irb
      peer 1.1.1.1 advertise encap-type vxlan
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise irb
      peer 2.2.2.2 advertise encap-type vxlan
    #
    return

Example for Configuring VXLAN over IPsec in an Active-Active Scenario

In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network, which enhances VXLAN access reliability and enables rapid convergence if a fault occurs. IPsec encapsulation encrypts packets in transit, securing packet transmission.

Networking Requirements

On the network shown in Figure 1-1127, CE1 is dual-homed to PE1 and PE2, and PE1 and PE2 use the same virtual address as the source VTEP address of the NVE interface. In this way, the CPE is aware of only one remote NVE interface and establishes a static VXLAN tunnel with the anycast VTEP address to communicate with the PEs. Because VXLAN packets are transmitted on the network in plain text, which is insecure, IPsec encapsulation is used to encrypt them, securing packet transmission.

Figure 1-1127 Networking for configuring VXLAN over IPsec in an active-active scenario

In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.



Table 1-489 Interface IP addresses

Device    Interface     IP Address
PE1       GE 0/1/1      10.1.20.1/24
          GE 0/1/2      192.168.1.1/24
          GE 0/1/3      10.1.1.1/24
          Loopback 0    1.1.1.1/32
          Loopback 1    3.3.3.3/32
          Loopback 2    5.5.5.5/32
PE2       GE 0/1/1      10.1.20.2/24
          GE 0/1/2      192.168.2.1/24
          GE 0/1/3      10.1.2.1/24
          Loopback 0    2.2.2.2/32
          Loopback 1    3.3.3.3/32
          Loopback 2    5.5.5.5/32
CE1       GE 0/1/1      192.168.1.2/24
          GE 0/1/2      192.168.2.2/24
CPE       GE 0/1/1      10.1.1.2/24
          Loopback 0    4.4.4.4/32
          Loopback 1    6.6.6.6/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure an IGP on the CEs, PEs, and CPE to implement network connectivity.
  2. Configure service access points on PE1 and PE2 so that CE1 can be dual-homed to PE1 and PE2.
  3. Establish static VXLAN tunnels between the PEs and CPE so that the PEs and CPE can communicate.
  4. Establish a bypass VXLAN tunnel between PE1 and PE2 so that PE1 and PE2 can communicate.
  5. (Optional) Configure a UDP port on the PEs to prevent them from receiving duplicate packets.
  6. Configure IPsec on the PEs and CPE and establish IPsec tunnels.
Data Preparation

To complete the configuration, you need the following data:

  • Interfaces and their IP addresses

  • EVPN instance names

  • VPN targets of the received and sent routes in EVPN instances

  • Preshared key

  • SHA2-256 as the ESP authentication algorithm and AES 256 as the ESP encryption algorithm for the IPsec proposal

  • SHA2-256 as the authentication algorithm and HMAC-SHA2-256 as the integrity algorithm for the IKE proposal
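The HMAC-SHA2-256 integrity algorithm selected above can be illustrated with Python's standard library. This is purely an illustration of the primitive (the key and payload below are placeholders, not real IKE material; the router computes this internally):

```python
import hashlib
import hmac

key = b"abcde"                  # placeholder, matching the pre-shared key used later
payload = b"example IKE payload"  # placeholder message

tag = hmac.new(key, payload, hashlib.sha256).digest()
assert len(tag) == 32           # SHA2-256 produces a 256-bit (32-byte) integrity tag

# Both peers must derive the same tag from the same key and payload,
# which is why the pre-shared key must match on both ends.
assert hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())
```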

Procedure

  1. Assign IP addresses to device interfaces, including loopback interfaces.

    For configuration details, see Configuration Files in this section.

  2. Configure an IGP. In this example, IS-IS is used.

    For configuration details, see Configuration Files in this section.

  3. Enable EVPN capabilities.

    # Configure PE1.

    <PE1> system-view
    [~PE1] evpn
    [*PE1-evpn] vlan-extend private enable
    [*PE1-evpn] vlan-extend redirect enable
    [*PE1-evpn] local-remote frr enable
    [*PE1-evpn] bypass-vxlan enable
    [*PE1-evpn] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  4. Establish an IBGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 2.2.2.2 as-number 100
    [*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 0
    [*PE1-bgp] ipv4-family unicast
    [*PE1-bgp-af-ipv4] undo synchronization
    [*PE1-bgp-af-ipv4] peer 2.2.2.2 enable
    [*PE1-bgp-af-ipv4] quit
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] undo policy vpn-target
    [*PE1-bgp-af-evpn] peer 2.2.2.2 enable
    [*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  5. Create a VXLAN tunnel.
    1. Configure EVPN instances and bind them to BDs on the PEs.

      # Configure PE1.

      [~PE1] evpn vpn-instance evpn1 bd-mode
      [*PE1-evpn-instance-evpn1] route-distinguisher 11:11
      [*PE1-evpn-instance-evpn1] vpn-target 1:1 export-extcommunity
      [*PE1-evpn-instance-evpn1] vpn-target 1:1 import-extcommunity
      [*PE1-evpn-instance-evpn1] quit
      [*PE1] bridge-domain 10
      [*PE1-bd10] vxlan vni 10 split-horizon-mode
      [*PE1-bd10] evpn binding vpn-instance evpn1
      [*PE1-bd10] quit
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    2. Configure an ingress replication list on each PE and the CPE.

      # Configure the CPE.

      [~CPE] interface nve 1
      [*CPE-Nve1] source 4.4.4.4
      [*CPE-Nve1] vni 10 head-end peer-list 3.3.3.3
      [*CPE-Nve1] quit
      [*CPE] commit

      # Configure PE1.

      [~PE1] interface nve 1
      [*PE1-Nve1] source 3.3.3.3
      [*PE1-Nve1] bypass source 1.1.1.1
      [*PE1-Nve1] mac-address 00e0-fc12-7890
      [*PE1-Nve1] vni 10 head-end peer-list protocol bgp
      [*PE1-Nve1] vni 10 head-end peer-list 4.4.4.4
      [*PE1-Nve1] quit
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  6. Configure PEs to provide access for CEs.

    # Configure PE1.

    [~PE1] e-trunk 1
    [*PE1-e-trunk-1] priority 10
    [*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
    [*PE1-e-trunk-1] quit
    [*PE1] interface eth-trunk 1
    [*PE1-Eth-Trunk1] mac-address 00e0-fc12-3456
    [*PE1-Eth-Trunk1] mode lacp-static
    [*PE1-Eth-Trunk1] e-trunk 1
    [*PE1-Eth-Trunk1] e-trunk mode force-master
    [*PE1-Eth-Trunk1] es track evpn-peer 2.2.2.2
    [*PE1-Eth-Trunk1] esi 0000.0001.0001.0001.0001
    [*PE1-Eth-Trunk1] quit
    [*PE1] interface eth-trunk1.1 mode l2
    [*PE1-Eth-Trunk1.1] encapsulation dot1q vid 1
    [*PE1-Eth-Trunk1.1] rewrite pop single
    [*PE1-Eth-Trunk1.1] bridge-domain 10
    [*PE1-Eth-Trunk1.1] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
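The ESI configured above identifies the Ethernet segment that PE1 and PE2 share toward CE1. A small sketch of how the dotted notation maps to the 10-byte value carried in EVPN Ethernet segment routes (`esi_bytes` is a hypothetical helper for illustration, not a device API):

```python
def esi_bytes(esi: str) -> bytes:
    """Convert a dotted ESI (five 4-hex-digit groups) to its 10-byte wire form."""
    groups = esi.split(".")
    if len(groups) != 5 or any(len(g) != 4 for g in groups):
        raise ValueError("expected five 4-hex-digit groups")
    return bytes.fromhex("".join(groups))


# Both PEs configure the same ESI on Eth-Trunk 1, so remote devices see
# one Ethernet segment reachable through two peers.
raw = esi_bytes("0000.0001.0001.0001.0001")
assert len(raw) == 10
assert raw.hex() == "00000001000100010001"
```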

  7. (Optional) Configure a UDP port.

    # Configure PE1.

    [~PE1] evpn enhancement port 1345
    [*PE1] commit

    The same UDP port number must be set on both PEs in the active-active group.

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  8. Configure IPsec on PE1.
    1. Configure advanced ACL 3000.

      [~PE1] acl 3000
      [*PE1-acl-adv-3000] rule 5 permit ip source 3.3.3.3 0 destination 4.4.4.4 0
      [*PE1-acl-adv-3000] quit
      [*PE1] commit

    2. Configure an IPsec proposal named tran1.

      [~PE1] ipsec proposal tran1
      [*PE1-ipsec-proposal-tran1] encapsulation-mode tunnel
      [*PE1-ipsec-proposal-tran1] transform esp
      [*PE1-ipsec-proposal-tran1] esp authentication-algorithm sha2-256
      [*PE1-ipsec-proposal-tran1] esp encryption-algorithm aes 256
      [*PE1-ipsec-proposal-tran1] quit
      [*PE1] commit

    3. Configure an IKE proposal numbered 10.

      [~PE1] ike proposal 10
      [*PE1-ike-proposal-10] authentication-method pre-share
      [*PE1-ike-proposal-10] authentication-algorithm sha2-256
      [*PE1-ike-proposal-10] integrity-algorithm hmac-sha2-256
      [*PE1-ike-proposal-10] dh group14
      [*PE1-ike-proposal-10] quit
      [*PE1] commit

    4. Configure an IKE peer named b.

      [~PE1] ike peer b
      [*PE1-ike-peer-b] ike-proposal 10
      [*PE1-ike-peer-b] remote-address 4.4.4.4
      [*PE1-ike-peer-b] pre-shared-key abcde
      [*PE1-ike-peer-b] quit
      [*PE1] commit

      The pre-shared key configured on the local device must be the same as that configured on the IKE peer.

    5. Configure an IPsec policy named map1 and numbered 10.

      [~PE1] ipsec policy map1 10 isakmp
      [*PE1-ipsec-policy-isakmp-map1-10] security acl 3000
      [*PE1-ipsec-policy-isakmp-map1-10] proposal tran1
      [*PE1-ipsec-policy-isakmp-map1-10] ike-peer b
      [*PE1-ipsec-policy-isakmp-map1-10] local-address 3.3.3.3
      [*PE1-ipsec-policy-isakmp-map1-10] quit
      [*PE1] commit

    6. Configure an IPsec service-instance group.

      [~PE1] service-location 1
      [*PE1-service-location-1] location follow-forwarding-mode   //Use this configuration in 1:1 board protection mode.
      [*PE1-service-location-1] location slot 9   //Use this configuration in non-1:1 board protection mode.
      [*PE1-service-location-1] commit
      [~PE1-service-location-1] quit
      [~PE1] service-instance-group group1
      [*PE1-service-instance-group-group1] service-location 1
      [*PE1-service-instance-group-group1] commit
      [~PE1-service-instance-group-group1] quit

    7. Create and configure an IPsec tunnel.

      [~PE1] interface Tunnel 1
      [*PE1-Tunnel1] ip address 10.11.1.1 255.255.255.255
      [*PE1-Tunnel1] tunnel-protocol ipsec
      [*PE1-Tunnel1] ipsec policy map1 service-instance-group group1
      [*PE1-Tunnel1] quit
      [*PE1] commit

    8. Configure static routes that import traffic into the tunnel.

      [~PE1] ip route-static 6.6.6.6 255.255.255.255 GigabitEthernet0/1/3 10.1.1.2
      [*PE1] ip route-static 4.4.4.4 255.255.255.255 Tunnel1 6.6.6.6
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
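The two static routes above work together through recursive resolution: traffic to the remote VTEP 4.4.4.4 is steered into Tunnel 1, whose next hop 6.6.6.6 in turn resolves over GE 0/1/3. A longest-prefix-match sketch of this lookup (the `routes` table structure is hypothetical, for illustration only):

```python
import ipaddress

# Hypothetical routing-table structure mirroring PE1's two static routes:
# destination network -> (outbound interface, next hop)
routes = {
    ipaddress.ip_network("6.6.6.6/32"): ("GigabitEthernet0/1/3", "10.1.1.2"),
    ipaddress.ip_network("4.4.4.4/32"): ("Tunnel1", "6.6.6.6"),
}


def lookup(dst: str):
    """Return the longest-prefix-match route for dst, or None."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    if not matches:
        return None
    return routes[max(matches, key=lambda net: net.prefixlen)]


# Traffic to the CPE's VTEP address enters the IPsec tunnel; the tunnel's
# next hop 6.6.6.6 then recursively resolves over GE 0/1/3.
assert lookup("4.4.4.4") == ("Tunnel1", "6.6.6.6")
assert lookup("6.6.6.6") == ("GigabitEthernet0/1/3", "10.1.1.2")
```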

  9. Configure IPsec on the CPE.
    1. Configure advanced ACL 3000.

      [~CPE] acl 3000
      [*CPE-acl-adv-3000] rule 5 permit ip
      [*CPE-acl-adv-3000] quit
      [*CPE] commit

    2. Configure an IPsec proposal named tran1.

      [~CPE] ipsec proposal tran1
      [*CPE-ipsec-proposal-tran1] encapsulation-mode tunnel
      [*CPE-ipsec-proposal-tran1] transform esp
      [*CPE-ipsec-proposal-tran1] esp authentication-algorithm sha2-256
      [*CPE-ipsec-proposal-tran1] esp encryption-algorithm aes 256
      [*CPE-ipsec-proposal-tran1] quit
      [*CPE] commit

    3. Configure an IKE proposal numbered 10.

      [~CPE] ike proposal 10
      [*CPE-ike-proposal-10] authentication-method pre-share
      [*CPE-ike-proposal-10] authentication-algorithm sha2-256
      [*CPE-ike-proposal-10] integrity-algorithm hmac-sha2-256
      [*CPE-ike-proposal-10] dh group14
      [*CPE-ike-proposal-10] quit
      [*CPE] commit

    4. Configure an IKE peer named 1.

      [~CPE] ike peer 1
      [*CPE-ike-peer-1] ike-proposal 10
      [*CPE-ike-peer-1] remote-address 5.5.5.5
      [*CPE-ike-peer-1] pre-shared-key abcde
      [*CPE-ike-peer-1] quit
      [*CPE] commit

      The pre-shared key configured on the local device must be the same as that configured on the IKE peer.

    5. Configure an IPsec policy template named temp1 and numbered 1.

      [~CPE] ipsec policy-template temp1 1
      [*CPE-ipsec-policy-templet-temp1-1] security acl 3000
      [*CPE-ipsec-policy-templet-temp1-1] proposal tran1
      [*CPE-ipsec-policy-templet-temp1-1] ike-peer 1
      [*CPE-ipsec-policy-templet-temp1-1] local-address 6.6.6.6
      [*CPE-ipsec-policy-templet-temp1-1] quit
      [*CPE] commit

    6. Create a security policy based on the policy template.

      [~CPE] ipsec policy 1 1 isakmp template temp1
      [*CPE] commit

    7. Configure an IPsec service instance group.

      [~CPE] service-location 1
      [*CPE-service-location-1] location follow-forwarding-mode   //Use this configuration in 1:1 board protection mode.
      [*CPE-service-location-1] location slot 9   //Use this configuration in non-1:1 board protection mode.
      [*CPE-service-location-1] commit
      [~CPE-service-location-1] quit
      [~CPE] service-instance-group group1
      [*CPE-service-instance-group-group1] service-location 1
      [*CPE-service-instance-group-group1] commit
      [~CPE-service-instance-group-group1] quit

    8. Create and configure an IPsec tunnel.

      [~CPE] interface Tunnel 1
      [*CPE-Tunnel1] ip address 10.22.2.2 255.255.255.255
      [*CPE-Tunnel1] tunnel-protocol ipsec
      [*CPE-Tunnel1] ipsec policy 1 service-instance-group group1
      [*CPE-Tunnel1] quit
      [*CPE] commit

    9. Configure static routes that import traffic into the tunnel.

      [~CPE] ip route-static 5.5.5.5 255.255.255.255 GigabitEthernet0/1/1 10.1.1.1
      [*CPE] commit

Configuration Files
  • PE1 configuration file

    #
    sysname PE1
    #
    evpn enhancement port 1345
    #
    evpn
     vlan-extend private enable
     vlan-extend redirect enable
     local-remote frr enable
     bypass-vxlan enable
    #
    evpn vpn-instance evpn1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evpn1
    #  
    acl number 3000
      rule 5 permit ip source 3.3.3.3 0 destination 4.4.4.4 0
    #
    e-trunk 1
     priority 10
     peer-address 2.2.2.2 source-address 1.1.1.1
    #
    isis 1
     network-entity 10.0000.0000.0001.00
     frr
    #
    ike proposal 10
     encryption-algorithm aes-cbc 256
     dh group14
     authentication-algorithm sha2-256 
     integrity-algorithm hmac-sha2-256
    #
    ike peer b
     pre-shared-key %$%$THBGMJK2659z"C(T{J"-,.2n%$%$
     ike-proposal 10
     remote-address 4.4.4.4
    #
    service-location 1
     location follow-forwarding-mode   //Use this configuration in 1:1 board protection mode.
     location slot 9   //Use this configuration in non-1:1 board protection mode.
    #
    service-instance-group group1
     service-location 1
    #
    ipsec proposal tran1
     esp authentication-algorithm sha2-256  
     esp encryption-algorithm aes 256
    #                                         
    ipsec policy map1 10 isakmp
     security acl 3000
     ike-peer b
     proposal tran1
     local-address 3.3.3.3
    # 
    interface Eth-Trunk1
     mac-address 00e0-fc12-3456
     mode lacp-static
     e-trunk 1
     e-trunk mode force-master
     es track evpn-peer 2.2.2.2
     esi 0000.0001.0001.0001.0001
    #
    interface Eth-Trunk1.1 mode l2
     encapsulation dot1q vid 1
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet 0/1/1
     undo shutdown
     ip address 10.1.20.1 255.255.255.0
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet 0/1/3
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    interface LoopBack0
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 5.5.5.5 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 1.1.1.1
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
     vni 10 head-end peer-list 4.4.4.4
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack0
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
    #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
    #
    interface Tunnel1 
     ip address 10.11.1.1 255.255.255.255
     tunnel-protocol ipsec
     ipsec policy map1 service-instance-group group1
    #
    ip route-static 6.6.6.6 255.255.255.255 GigabitEthernet0/1/3 10.1.1.2        
    ip route-static 4.4.4.4 255.255.255.255 Tunnel1 6.6.6.6   
    #
    return
    
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn enhancement port 1345
    #
    evpn
     vlan-extend redirect enable
     vlan-extend private enable
     local-remote frr enable
     bypass-vxlan enable
    #
    evpn vpn-instance evpn1 bd-mode
     route-distinguisher 22:22
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evpn1
    #  
    acl number 3000
      rule 5 permit ip source 3.3.3.3 0 destination 4.4.4.4 0
    #
    ike proposal 10
     encryption-algorithm aes-cbc 256
     dh group14
     authentication-algorithm sha2-256 
     integrity-algorithm hmac-sha2-256
    #
    ike peer b
     pre-shared-key %$%$THBGMJK2659z"C(T{J"-,.2n%$%$
     ike-proposal 10
     remote-address 4.4.4.4
    #
    service-location 1
     location follow-forwarding-mode   //Use this configuration in 1:1 board protection mode.
     location slot 9   //Use this configuration in non-1:1 board protection mode.
    #
    service-instance-group group1
     service-location 1
    #
    ipsec proposal tran1
     esp authentication-algorithm sha2-256  
     esp encryption-algorithm aes 256
    #                                         
    ipsec policy map1 10 isakmp
     security acl 3000
     ike-peer b
     proposal tran1
     local-address 5.5.5.5
    
    #
    e-trunk 1
     priority 10
     peer-address 1.1.1.1 source-address 2.2.2.2
    #
    isis 1
     network-entity 10.0000.0000.0002.00
     frr
    #
    interface Eth-Trunk1
     mac-address 00e0-fc12-3456
     mode lacp-static
     e-trunk 1
     e-trunk mode force-master
     es track evpn-peer 1.1.1.1
     esi 0000.0001.0001.0001.0001
    #
    interface Eth-Trunk1.1 mode l2
     encapsulation dot1q vid 1
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet 0/1/1
     undo shutdown
     ip address 10.1.20.2 255.255.255.0
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet 0/1/3
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
    #
    interface LoopBack0
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 5.5.5.5 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 2.2.2.2
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
     vni 10 head-end peer-list 4.4.4.4
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack0
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise encap-type vxlan
     #
    interface Tunnel1 
     ip address 10.11.1.1 255.255.255.255
     tunnel-protocol ipsec
     ipsec policy map1 service-instance-group group1
    #
    ip route-static 6.6.6.6 255.255.255.255 GigabitEthernet0/1/3 10.1.2.2        
    ip route-static 4.4.4.4 255.255.255.255 Tunnel1 6.6.6.6   
    #
    return
  • CE configuration file

    #
    sysname CE
    #
    vlan batch 1 to 4094
    #
    interface Eth-Trunk1
     portswitch
     port link-type trunk
     port trunk allow-pass vlan 1
    #
    interface GigabitEthernet 0/1/1
      undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet 0/1/2
     undo shutdown
     eth-trunk 1
    #
    return
  • CPE configuration file

    #
    sysname CPE
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
    #
    acl number 3000
      rule 5 permit ip
    #
    ike proposal 10
     encryption-algorithm aes-cbc 256
     dh group14
     authentication-algorithm sha2-256 
     integrity-algorithm hmac-sha2-256
    #
    ike peer 1
     pre-shared-key %$%$THBGMJK2659z"C(T{J"-,.2n%$%$
     ike-proposal 10
     remote-address 5.5.5.5
    #
    service-location 1
     location follow-forwarding-mode   //Use this configuration in 1:1 board protection mode.
     location slot 9   //Use this configuration in non-1:1 board protection mode.
    #
    service-instance-group group1
     service-location 1
    #
    ipsec proposal tran1
     esp authentication-algorithm sha2-256  
     esp encryption-algorithm aes 256
    #                                         
    ipsec policy-template temp1 1
     security acl 3000
     ike-peer 1
     proposal tran1
     local-address 6.6.6.6
    #
    ipsec policy 1 1 isakmp template temp1
    
    #
    isis 1
     network-entity 20.0000.0000.0001.00
     frr
    #
    interface GigabitEthernet 0/1/1
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet 0/1/1.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack0
     ip address 4.4.4.4 255.255.255.255
     isis enable 1
    #
    interface LoopBack1
     ip address 6.6.6.6 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 4.4.4.4
     vni 10 head-end peer-list 3.3.3.3
    #
    
    interface Tunnel1 
     ip address 10.22.2.2 255.255.255.255                                             
     tunnel-protocol ipsec 
     ipsec policy 1 service-instance-group group1                                                                         
    #
     ip route-static 5.5.5.5 255.255.255.255 GigabitEthernet0/1/1 10.1.1.1
    #
    return
    

Example for Configuring the Static VXLAN Active-Active Scenario (in VLAN-Aware Bundle Mode)

In scenarios where a data center interconnects with an enterprise site, a CE can be dual-homed to a VXLAN network. This design enhances VXLAN access reliability, improves the stability of user services, and enables rapid convergence if a fault occurs. The VLAN-aware bundle access mode allows different VLANs configured on different physical interfaces to access the same EVPN instance while keeping traffic from these VLANs isolated.

Networking Requirements

On the network shown in Figure 1-1128, CE1 is dual-homed to PE1 and PE2 through an Eth-Trunk. PE1 and PE2 use the same virtual address as the source virtual tunnel end point (VTEP) address of the Network Virtualization Edge (NVE) interface; this shared address is known as an anycast VTEP address. In this way, the CPE detects only one remote NVE interface, and a static VXLAN tunnel is established between the CPE and the anycast VTEP address.

The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.

To allow different VLANs configured on different physical interfaces to access the same EVPN instance while ensuring that traffic from these VLANs remains isolated, configure the CE to access PEs in VLAN-aware mode.
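The VLAN-aware bundle mapping can be pictured as a table from dot1q VID to bridge domain: each VID gets its own BD/VNI, and every BD binds to the same EVPN instance with a distinct BD tag. The dictionary below is a hypothetical model of that mapping (names taken from the configuration in this example, structure invented for illustration):

```python
# Hypothetical model of the VLAN-aware bundle mapping used in this example:
# dot1q VID -> per-VLAN bridge domain, all bound to one EVPN instance.
bundle = {
    100: {"bridge_domain": 10, "vni": 10, "evpn_instance": "evpn1", "bd_tag": 100},
    200: {"bridge_domain": 20, "vni": 20, "evpn_instance": "evpn1", "bd_tag": 200},
}

evis = {entry["evpn_instance"] for entry in bundle.values()}
vnis = {entry["vni"] for entry in bundle.values()}

assert evis == {"evpn1"}   # both VLANs share one EVPN instance
assert len(vnis) == 2      # but each VLAN keeps its own BD/VNI, so traffic stays isolated
```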

Figure 1-1128 Networking for configuring static VXLAN in an active-active scenario (Layer 2 communication)

In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.



Table 1-490 Interface IP addresses

Device   Interface   IP Address
-------  ----------  -------------
PE1      GE 0/1/1    10.1.20.1/24
         GE 0/1/2    -
         GE 0/1/3    10.1.1.1/24
         Loopback1   1.1.1.1/32
         Loopback2   3.3.3.3/32
PE2      GE 0/1/1    10.1.20.2/24
         GE 0/1/2    -
         GE 0/1/3    10.1.2.1/24
         Loopback1   2.2.2.2/32
         Loopback2   3.3.3.3/32
CE1      GE 0/1/1    -
         GE 0/1/2    -
CPE      GE 0/1/1    10.1.1.2/24
         GE 0/1/2    10.1.2.2/24
         GE 0/1/3    -
         Loopback1   4.4.4.4/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure an IGP on each PE and the CPE to ensure Layer 3 connectivity.
  2. Configure fast traffic switching on PE1 and PE2. If a PE fails, this configuration allows downstream traffic on the CPE to be switched to the other PE, which then forwards the traffic to a CE.
  3. Establish a BGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
  4. On each of PE1 and PE2, create an EVPN instance in BD mode, create BDs, and bind each BD to the EVPN instance with a distinct BD tag.
  5. Configure the same anycast VTEP address (virtual address) on PE1 and PE2 as the NVE interface's source address, which is used to establish a VXLAN tunnel with the CPE. Establish static VXLAN tunnels between the PEs and CPE so that the PEs and CPE can communicate.
  6. On PE1 and PE2, configure service access points and set the same ESI for the access links of CE1 so that CE1 is dual-homed to PE1 and PE2.
  7. Enable inter-chassis VXLAN on PE1 and PE2, configure different bypass addresses for the PEs, and establish a bypass VXLAN tunnel between the PEs, allowing communication between PE1 and PE2.
Data Preparation

To complete the configuration, you need the following data:

  • Interfaces and their IP addresses

  • VPN and EVPN instance names

  • Import and export VPN targets for the VPN and EVPN instances

Procedure

  1. Assign IP addresses to device interfaces, including loopback interfaces.

    For configuration details, see Configuration Files in this section.

  2. Configure an IGP on each PE and the CPE. IS-IS is used in this example.

    For configuration details, see Configuration Files in this section.

  3. Configure fast traffic switching on each PE.

    # Configure PE1.

    [~PE1] evpn
    [*PE1-evpn] vlan-extend private enable
    [*PE1-evpn] vlan-extend redirect enable
    [*PE1-evpn] local-remote frr enable
    [*PE1-evpn] bypass-vxlan enable
    [*PE1-evpn] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  4. Establish an IBGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.

    # Configure PE1.

    [~PE1] bgp 100
    [*PE1-bgp] peer 2.2.2.2 as-number 100
    [*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*PE1-bgp] l2vpn-family evpn
    [*PE1-bgp-af-evpn] peer 2.2.2.2 enable
    [*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*PE1-bgp-af-evpn] quit
    [*PE1-bgp] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  5. Establish VXLAN tunnels.
    1. Configure an EVI and bind the EVI to a BD on each PE.

      # Configure PE1.

      [~PE1] evpn vpn-instance evpn1 bd-mode
      [*PE1-evpn-instance-evpn1] route-distinguisher 11:11
      [*PE1-evpn-instance-evpn1] vpn-target 1:1 export-extcommunity
      [*PE1-evpn-instance-evpn1] vpn-target 1:1 import-extcommunity
      [*PE1-evpn-instance-evpn1] quit
      [*PE1] bridge-domain 10
      [*PE1-bd10] vxlan vni 10 split-horizon-mode
      [*PE1-bd10] evpn binding vpn-instance evpn1 bd-tag 100
      [*PE1-bd10] quit
      [*PE1] bridge-domain 20
      [*PE1-bd20] vxlan vni 20 split-horizon-mode
      [*PE1-bd20] evpn binding vpn-instance evpn1 bd-tag 200
      [*PE1-bd20] quit
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

    2. Configure an ingress replication list on each PE and the CPE.

      # Configure the CPE.

      [~CPE] bridge-domain 10
      [*CPE-bd10] vxlan vni 10 split-horizon-mode
      [*CPE-bd10] quit
      [*CPE] bridge-domain 20
      [*CPE-bd20] vxlan vni 20 split-horizon-mode
      [*CPE-bd20] quit
      [*CPE] interface nve 1
      [*CPE-Nve1] source 4.4.4.4
      [*CPE-Nve1] vni 10 head-end peer-list 3.3.3.3
      [*CPE-Nve1] vni 20 head-end peer-list 3.3.3.3
      [*CPE-Nve1] quit
      [*CPE] commit

      # Configure PE1.

      [~PE1] interface nve 1
      [*PE1-Nve1] source 3.3.3.3
      [*PE1-Nve1] bypass source 1.1.1.1
      [*PE1-Nve1] mac-address 00e0-fc12-7890
      [*PE1-Nve1] vni 10 head-end peer-list protocol bgp
      [*PE1-Nve1] vni 10 head-end peer-list 4.4.4.4
      [*PE1-Nve1] vni 20 head-end peer-list protocol bgp
      [*PE1-Nve1] vni 20 head-end peer-list 4.4.4.4
      [*PE1-Nve1] quit
      [*PE1] commit

      The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  6. Configure PEs to provide access for CEs.

    # Configure PE1.

    [~PE1] e-trunk 1
    [*PE1-e-trunk-1] priority 10
    [*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
    [*PE1-e-trunk-1] quit
    [*PE1] interface eth-trunk 1
    [*PE1-Eth-Trunk1] mac-address 00e0-fc12-3456
    [*PE1-Eth-Trunk1] mode lacp-static
    [*PE1-Eth-Trunk1] e-trunk 1
    [*PE1-Eth-Trunk1] e-trunk mode force-master
    [*PE1-Eth-Trunk1] es track evpn-peer 2.2.2.2
    [*PE1-Eth-Trunk1] esi 0000.0001.0001.0001.0001
    [*PE1-Eth-Trunk1] quit
    [*PE1] interface eth-trunk1.1 mode l2
    [*PE1-Eth-Trunk1.1] encapsulation dot1q vid 100
    [*PE1-Eth-Trunk1.1] bridge-domain 10
    [*PE1-Eth-Trunk1.1] quit
    [*PE1] interface eth-trunk1.2 mode l2
    [*PE1-Eth-Trunk1.2] encapsulation dot1q vid 200
    [*PE1-Eth-Trunk1.2] bridge-domain 20
    [*PE1-Eth-Trunk1.2] quit
    [*PE1] commit

    The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.

  7. Verify the configuration.

    Run the display vxlan tunnel command to check VXLAN tunnel information. The following example uses the command output on PE1.

    [~PE1] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source                Destination           State  Type     Uptime
    -----------------------------------------------------------------------------------
    4026531841  3.3.3.3               4.4.4.4               up     static   00:30:12  
    4026531842  1.1.1.1               2.2.2.2               up     dynamic  00:12:28 
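    When automating such checks, output like the above can be parsed into records. The sketch below assumes the field layout shown in the sample (real output may vary by software version):

```python
import re

# Sample output copied from the verification step above.
sample = """\
Tunnel ID   Source                Destination           State  Type     Uptime
-----------------------------------------------------------------------------------
4026531841  3.3.3.3               4.4.4.4               up     static   00:30:12
4026531842  1.1.1.1               2.2.2.2               up     dynamic  00:12:28"""

# One record per data line; header and separator lines do not match.
pattern = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(static|dynamic)\s+(\S+)", re.M
)
tunnels = [
    dict(zip(("id", "source", "destination", "state", "type", "uptime"), m.groups()))
    for m in pattern.finditer(sample)
]

assert len(tunnels) == 2
assert tunnels[0]["type"] == "static"    # CPE-facing anycast-VTEP tunnel
assert tunnels[1]["type"] == "dynamic"   # bypass tunnel between the PEs
```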

    Run the display bgp evpn all routing-table command on PE1. The command output shows that EVPN routes carrying Ethernet tag IDs are received from PE2.

    [~PE1] display bgp evpn all routing-table
     Local AS number : 100
    
     BGP Local router ID is 1.1.1.1
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
    
    
     EVPN address family:
     Number of A-D Routes: 4
     Route Distinguisher: 11:11
           Network(ESI/EthTagId)                                  NextHop
     *>    0000.0001.0001.0001.0001:100                          0.0.0.0
     * i                                                          3.3.3.3
     *>    0000.0001.0001.0001.0001:200                          0.0.0.0
     * i                                                          3.3.3.3
        
    
     EVPN-Instance evpn1:
     Number of A-D Routes: 4
           Network(ESI/EthTagId)                                  NextHop
     *>    0000.0001.0001.0001.0001:100                          0.0.0.0
       i                                                          3.3.3.3
     *>    0000.0001.0001.0001.0001:200                          0.0.0.0
       i                                                          3.3.3.3
    
     EVPN address family:
     Number of Inclusive Multicast Routes: 4
     Route Distinguisher: 11:11
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    100:32:3.3.3.3                                        0.0.0.0
     * i                                                          3.3.3.3
     *>    200:32:3.3.3.3                                        0.0.0.0
     * i                                                          3.3.3.3
        
    
     EVPN-Instance evpn1:
     Number of Inclusive Multicast Routes: 4
           Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
     *>    100:32:3.3.3.3                                        0.0.0.0
     * i                                                          3.3.3.3
     *>    200:32:3.3.3.3                                        0.0.0.0
     * i                                                          3.3.3.3

Configuration Files
  • PE1 configuration file

    #
    sysname PE1
    #
    evpn
     vlan-extend private enable
     vlan-extend redirect enable
     local-remote frr enable
     bypass-vxlan enable
    #
    evpn vpn-instance evpn1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evpn1 bd-tag 100
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evpn1 bd-tag 200
    #
    e-trunk 1
     priority 10
     peer-address 2.2.2.2 source-address 1.1.1.1
    #
    isis 1
     network-entity 10.0000.0000.0001.00
    #
    interface Eth-Trunk1
     mac-address 00e0-fc12-3456
     mode lacp-static
     e-trunk 1
     e-trunk mode force-master
     es track evpn-peer 2.2.2.2
     esi 0000.0001.0001.0001.0001
    #
    interface Eth-Trunk1.1 mode l2
     encapsulation dot1q vid 100
     bridge-domain 10
    #
    interface Eth-Trunk1.2 mode l2
     encapsulation dot1q vid 200
     bridge-domain 20
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.20.1 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet0/1/3
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 1.1.1.1
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
     vni 10 head-end peer-list 4.4.4.4
     vni 20 head-end peer-list protocol bgp
     vni 20 head-end peer-list 4.4.4.4
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise encap-type vxlan
    #
    return
  • PE2 configuration file

    #
    sysname PE2
    #
    evpn
     vlan-extend private enable
     vlan-extend redirect enable
     local-remote frr enable
     bypass-vxlan enable
    #
    evpn vpn-instance evpn1 bd-mode
     route-distinguisher 11:11
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evpn1 bd-tag 100
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evpn1 bd-tag 200
    #
    e-trunk 1
     priority 10
     peer-address 1.1.1.1 source-address 2.2.2.2
    #
    isis 1
     network-entity 10.0000.0000.0002.00
    #
    interface Eth-Trunk1
     mac-address 00e0-fc12-3456
     mode lacp-static
     e-trunk 1
     e-trunk mode force-master
     es track evpn-peer 1.1.1.1
     esi 0000.0001.0001.0001.0001
    #
    interface Eth-Trunk1.1 mode l2
     encapsulation dot1q vid 100
     bridge-domain 10
    #
    interface Eth-Trunk1.2 mode l2
     encapsulation dot1q vid 200
     bridge-domain 20
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.20.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet0/1/3
     undo shutdown
     ip address 10.1.2.1 255.255.255.0
     isis enable 1
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
     isis enable 1
    #
    interface LoopBack2
     ip address 3.3.3.3 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 3.3.3.3
     bypass source 2.2.2.2
     mac-address 00e0-fc12-7890
     vni 10 head-end peer-list protocol bgp
     vni 10 head-end peer-list 4.4.4.4
     vni 20 head-end peer-list protocol bgp
     vni 20 head-end peer-list 4.4.4.4
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise encap-type vxlan
    #
    return
  • CE configuration file

    #
    sysname CE
    #
    vlan batch 1 to 4094
    #
    interface Eth-Trunk1
     port link-type trunk
     port trunk allow-pass vlan 100 200
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     eth-trunk 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     eth-trunk 1
    #
    return
    
  • CPE configuration file

    #
    sysname CPE
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
    #
    isis 1
     network-entity 20.0000.0000.0001.00
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.1.2.2 255.255.255.0
     isis enable 1
    #
    interface GigabitEthernet0/1/3
     undo shutdown
     esi 0000.0000.0000.0000.0017
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 100
     bridge-domain 10
    #
    interface GigabitEthernet0/1/3.2 mode l2
     encapsulation dot1q vid 200
     bridge-domain 20
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
     isis enable 1
    #
    interface Nve1
     source 4.4.4.4
     vni 10 head-end peer-list 3.3.3.3
     vni 20 head-end peer-list 3.3.3.3
    #
    return

Example for Configuring IPv4 NFVI Distributed Gateway

This section provides an example for configuring an IPv4 NFVI distributed gateway in a typical usage scenario.

Networking Requirements

Huawei's NFVI telecommunications (telco) cloud is a networking solution that incorporates Data Center Interconnect (DCI) and data center network (DCN) technologies. Mobile phone IPv4 traffic enters the DCN and is processed by the DCN's virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE) before being forwarded through the DCN and over the Internet to the destination devices. Likewise, response traffic sent over the Internet from the destination devices to the mobile phones undergoes the same processing. To enable this forwarding and to balance traffic within the DCN, you need to deploy the NFVI distributed gateway function on the DCN.

Figure 1-1129 Configuring IPv4 NFVI distributed gateway

Interfaces 1 through 5 in this example represent GE 0/1/1, GE 0/1/2, GE 0/1/3, GE 0/1/4, and GE 0/1/5, respectively.



Figure 1-1129 shows the network on which the NFVI distributed gateway function is deployed. DCGW1 and DCGW2 are the DCN's border gateways. The DCGWs exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions (VNFs). As virtualized NEs, VNF1 and VNF2 can be deployed separately to implement the functions of the vUGW and vMSE. VNF1 and VNF2 are connected to L2GW/L3GW1 and L2GW/L3GW2 through respective interface process units (IPUs).

This networking combines the distributed gateway function and the EVPN VXLAN active-active gateway function:
  • The EVPN VXLAN active-active gateway function is deployed on DCGW1 and DCGW2. Specifically, a bypass VXLAN tunnel is set up between DCGW1 and DCGW2. In addition, they use a virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.

  • The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between them.
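Both tunnel types carry Layer 2 frames using VXLAN's MAC-in-UDP encapsulation (RFC 7348): the inner Ethernet frame is wrapped in a VXLAN header, UDP (destination port 4789), an outer IP header, and an outer Ethernet header. As an illustration only (not device code), the 8-byte VXLAN header that carries the 24-bit VNI can be sketched in Python as follows:

```python
def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348: a flags byte with the
    I bit set (VNI valid), 3 reserved bytes, the 24-bit VNI, and one more
    reserved byte."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return b"\x08\x00\x00\x00" + vni.to_bytes(3, "big") + b"\x00"

# Header for VNI 100 (bridge domain 10 in this example); the full packet is
# outer Ethernet/IP/UDP (destination port 4789) + this header + inner frame.
hdr = vxlan_header(100)
assert len(hdr) == 8 and hdr[4:7] == b"\x00\x00\x64"
```

The 24-bit VNI field is what allows VXLAN to identify far more virtual networks than the 12-bit VLAN ID.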

The NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can be deployed as a DCGW or an L2GW/L3GW in this networking.

Table 1-491 Interface IP addresses and masks

Device        Interface    IP Address and Mask
DCGW1         GE 0/1/1     10.6.1.1/24
              GE 0/1/2     10.6.2.1/24
              Loopback0    9.9.9.9/32
              Loopback1    3.3.3.3/32
              Loopback2    33.33.33.33/32
DCGW2         GE 0/1/1     10.6.1.2/24
              GE 0/1/2     10.6.3.1/24
              Loopback0    9.9.9.9/32
              Loopback1    4.4.4.4/32
              Loopback2    44.44.44.44/32
L2GW/L3GW1    GE 0/1/1     10.6.4.1/24
              GE 0/1/2     10.6.2.2/24
              GE 0/1/3     -
              GE 0/1/4     -
              GE 0/1/5     -
              Loopback1    1.1.1.1/32
L2GW/L3GW2    GE 0/1/1     10.6.4.2/24
              GE 0/1/2     10.6.3.2/24
              GE 0/1/3     -
              GE 0/1/4     -
              Loopback1    2.2.2.2/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.
  2. Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.
  3. Configure an L3VPN instance and bind it to a VBDIF interface on each DCGW and each L2GW/L3GW.
  4. Configure BGP EVPN on each DCGW and each L2GW/L3GW.
  5. Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.
  6. On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.
  7. On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to keep the original next hop of the static VPN routes.
  8. On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.
  9. Configure each DCGW to establish a VPN BGP peer relationship with a VNF.
  10. Configure load balancing on each DCGW and each L2GW/L3GW.

Procedure

  1. Assign an IP address to each device interface, including the loopback interfaces.

    For configuration details, see Configuration Files in this section.

  2. Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.

    For configuration details, see Configuration Files in this section.

  3. Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] evpn vpn-instance evrf1 bd-mode
    [*DCGW1-evpn-instance-evrf1] route-distinguisher 1:1
    [*DCGW1-evpn-instance-evrf1] vpn-target 1:1
    [*DCGW1-evpn-instance-evrf1] quit
    [*DCGW1] evpn vpn-instance evrf2 bd-mode
    [*DCGW1-evpn-instance-evrf2] route-distinguisher 2:2
    [*DCGW1-evpn-instance-evrf2] vpn-target 2:2
    [*DCGW1-evpn-instance-evrf2] quit
    [*DCGW1] evpn vpn-instance evrf3 bd-mode
    [*DCGW1-evpn-instance-evrf3] route-distinguisher 3:3
    [*DCGW1-evpn-instance-evrf3] vpn-target 3:3
    [*DCGW1-evpn-instance-evrf3] quit
    [*DCGW1] evpn vpn-instance evrf4 bd-mode
    [*DCGW1-evpn-instance-evrf4] route-distinguisher 4:4
    [*DCGW1-evpn-instance-evrf4] vpn-target 4:4
    [*DCGW1-evpn-instance-evrf4] quit
    [*DCGW1] bridge-domain 10
    [*DCGW1-bd10] vxlan vni 100 split-horizon-mode
    [*DCGW1-bd10] evpn binding vpn-instance evrf1
    [*DCGW1-bd10] quit
    [*DCGW1] bridge-domain 20
    [*DCGW1-bd20] vxlan vni 110 split-horizon-mode
    [*DCGW1-bd20] evpn binding vpn-instance evrf2
    [*DCGW1-bd20] quit
    [*DCGW1] bridge-domain 30
    [*DCGW1-bd30] vxlan vni 120 split-horizon-mode
    [*DCGW1-bd30] evpn binding vpn-instance evrf3
    [*DCGW1-bd30] quit
    [*DCGW1] bridge-domain 40
    [*DCGW1-bd40] vxlan vni 130 split-horizon-mode
    [*DCGW1-bd40] evpn binding vpn-instance evrf4
    [*DCGW1-bd40] quit
    [*DCGW1] commit

    Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.

  4. Configure an L3VPN instance on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] ip vpn-instance vpn1
    [*DCGW1-vpn-instance-vpn1] vxlan vni 200
    [*DCGW1-vpn-instance-vpn1] ipv4-family
    [*DCGW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
    [*DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
    [*DCGW1-vpn-instance-vpn1-af-ipv4] quit
    [*DCGW1-vpn-instance-vpn1] quit
    [*DCGW1] interface vbdif10
    [*DCGW1-Vbdif10] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif10] ip address 10.1.1.1 24
    [*DCGW1-Vbdif10] arp generate-rd-table enable
    [*DCGW1-Vbdif10] vxlan anycast-gateway enable
    [*DCGW1-Vbdif10] mac-address 00e0-fc00-0002
    [*DCGW1-Vbdif10] quit
    [*DCGW1] interface vbdif20
    [*DCGW1-Vbdif20] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif20] ip address 10.2.1.1 24
    [*DCGW1-Vbdif20] arp generate-rd-table enable
    [*DCGW1-Vbdif20] vxlan anycast-gateway enable
    [*DCGW1-Vbdif20] mac-address 00e0-fc00-0003
    [*DCGW1-Vbdif20] quit
    [*DCGW1] interface vbdif30
    [*DCGW1-Vbdif30] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif30] ip address 10.3.1.1 24
    [*DCGW1-Vbdif30] arp generate-rd-table enable
    [*DCGW1-Vbdif30] vxlan anycast-gateway enable
    [*DCGW1-Vbdif30] mac-address 00e0-fc00-0001
    [*DCGW1-Vbdif30] quit
    [*DCGW1] interface vbdif40
    [*DCGW1-Vbdif40] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif40] ip address 10.4.1.1 24
    [*DCGW1-Vbdif40] arp generate-rd-table enable
    [*DCGW1-Vbdif40] vxlan anycast-gateway enable
    [*DCGW1-Vbdif40] mac-address 00e0-fc00-0004
    [*DCGW1-Vbdif40] quit
    [*DCGW1] commit

    Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.

  5. Configure BGP EVPN on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] ip ip-prefix uIP index 10 permit 10.10.10.10 32
    [*DCGW1] route-policy stopuIP deny node 10
    [*DCGW1-route-policy] if-match ip-prefix uIP
    [*DCGW1-route-policy] quit
    [*DCGW1] route-policy stopuIP permit node 20
    [*DCGW1-route-policy] quit
    [*DCGW1] bgp 100
    [*DCGW1-bgp] peer 1.1.1.1 as-number 100
    [*DCGW1-bgp] peer 1.1.1.1 connect-interface LoopBack 1
    [*DCGW1-bgp] peer 2.2.2.2 as-number 100
    [*DCGW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*DCGW1-bgp] peer 4.4.4.4 as-number 100
    [*DCGW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
    [*DCGW1-bgp] l2vpn-family evpn
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
    [*DCGW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
    [*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
    [*DCGW1-bgp-af-evpn] quit
    [*DCGW1-bgp] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] bgp 100
    [*L2GW/L3GW1-bgp] peer 2.2.2.2 as-number 100
    [*L2GW/L3GW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*L2GW/L3GW1-bgp] peer 3.3.3.3 as-number 100
    [*L2GW/L3GW1-bgp] peer 3.3.3.3 connect-interface LoopBack 1
    [*L2GW/L3GW1-bgp] peer 4.4.4.4 as-number 100
    [*L2GW/L3GW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
    [*L2GW/L3GW1-bgp] l2vpn-family evpn
    [*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 enable
    [*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise arp
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise arp
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise arp
    [*L2GW/L3GW1-bgp-af-evpn] quit
    [*L2GW/L3GW1-bgp] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.

  6. Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] evpn
    [*DCGW1-evpn] bypass-vxlan enable
    [*DCGW1-evpn] quit
    [*DCGW1] interface nve 1
    [*DCGW1-Nve1] source 9.9.9.9
    [*DCGW1-Nve1] bypass source 3.3.3.3
    [*DCGW1-Nve1] mac-address 00e0-fc00-0009
    [*DCGW1-Nve1] vni 100 head-end peer-list protocol bgp
    [*DCGW1-Nve1] vni 110 head-end peer-list protocol bgp
    [*DCGW1-Nve1] vni 120 head-end peer-list protocol bgp
    [*DCGW1-Nve1] vni 130 head-end peer-list protocol bgp
    [*DCGW1-Nve1] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] interface nve 1
    [*L2GW/L3GW1-Nve1] source 1.1.1.1
    [*L2GW/L3GW1-Nve1] vni 100 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] vni 110 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] vni 120 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] vni 130 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.

  7. On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] interface GigabitEthernet0/1/3.1 mode l2
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] encapsulation dot1q vid 10
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] rewrite pop single
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] bridge-domain 10
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] quit
    [*L2GW/L3GW1] interface GigabitEthernet0/1/4.1 mode l2
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] encapsulation dot1q vid 20
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] rewrite pop single
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] bridge-domain 20
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] quit
    [*L2GW/L3GW1] interface GigabitEthernet0/1/5.1 mode l2
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] encapsulation dot1q vid 10
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] rewrite pop single
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] bridge-domain 10
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] quit
    [*L2GW/L3GW1] ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.1.1.2 tag 1000
    [*L2GW/L3GW1] ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.2.1.2 tag 1000
    [*L2GW/L3GW1] ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.1.1.3 tag 1000
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
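For reference, the `rewrite pop single` action strips the single dot1q tag matched by `encapsulation dot1q vid` before the frame enters the bridge domain. A minimal Python sketch of the pop operation (illustration only, not device code):

```python
import struct

TPID_DOT1Q = 0x8100  # EtherType/TPID of an 802.1Q tag

def pop_single_tag(frame):
    """Remove one 802.1Q tag from an Ethernet frame, as 'rewrite pop single'
    does on ingress. Returns (vlan_id, untagged_frame); vlan_id is None if
    the frame carried no tag."""
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != TPID_DOT1Q:
        return None, frame
    tci = struct.unpack_from("!H", frame, 14)[0]
    vlan_id = tci & 0x0FFF                   # VLAN ID is the low 12 bits of the TCI
    return vlan_id, frame[:12] + frame[16:]  # drop the 4-byte TPID + TCI

# Frame tagged with VLAN 10, as matched by 'encapsulation dot1q vid 10'
tagged = bytes(12) + struct.pack("!HH", TPID_DOT1Q, 10) + b"\x08\x00" + b"payload"
vid, untagged = pop_single_tag(tagged)
assert vid == 10 and untagged[12:14] == b"\x08\x00"
```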

  8. On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to keep the original next hop of the static VPN routes.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] bgp 100
    [*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
    [*L2GW/L3GW1-bgp-vpn1] import-route static
    [*L2GW/L3GW1-bgp-vpn1] advertise l2vpn evpn import-route-multipath
    [*L2GW/L3GW1-bgp-vpn1] quit
    [*L2GW/L3GW1-bgp] quit
    [*L2GW/L3GW1] route-policy sp permit node 10
    [*L2GW/L3GW1-route-policy] if-match tag 1000
    [*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
    [*L2GW/L3GW1-route-policy] quit
    [*L2GW/L3GW1] route-policy sp deny node 20
    [*L2GW/L3GW1-route-policy] quit
    [*L2GW/L3GW1] ip vpn-instance vpn1
    [*L2GW/L3GW1-vpn-instance-vpn1] export route-policy sp evpn
    [*L2GW/L3GW1-vpn-instance-vpn1] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
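Route-policy nodes are evaluated in ascending order, and the first matching node decides the result. Here, node 10 permits routes tagged 1000 and, through `apply gateway-ip origin-nexthop`, preserves their configured next hop as the gateway IP of the advertised EVPN routes; node 20 denies everything else. A simplified Python model of this evaluation (illustration only):

```python
def apply_policy_sp(route):
    """Simplified model of route-policy sp: nodes are tried in ascending
    order and the first match decides whether the route is exported."""
    if route.get("tag") == 1000:                   # node 10: if-match tag 1000
        exported = dict(route)
        exported["gateway_ip"] = route["nexthop"]  # apply gateway-ip origin-nexthop
        return exported                            # permitted
    return None                                    # node 20: deny everything else

# Static VPN route toward VNF1, tagged 1000 as in the configuration above
static_route = {"prefix": "5.5.5.5/32", "nexthop": "10.1.1.2", "tag": 1000}
assert apply_policy_sp(static_route)["gateway_ip"] == "10.1.1.2"
assert apply_policy_sp({"prefix": "8.8.8.8/32", "nexthop": "10.0.0.1"}) is None
```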

  9. On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.

    # Configure DCGW1.

    [~DCGW1] ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
    [*DCGW1] interface LoopBack2
    [*DCGW1-LoopBack2] ip binding vpn-instance vpn1
    [*DCGW1-LoopBack2] ip address 33.33.33.33 255.255.255.255
    [*DCGW1-LoopBack2] quit
    [*DCGW1] bgp 100
    [*DCGW1-bgp] ipv4-family vpn-instance vpn1
    [*DCGW1-bgp-vpn1] advertise l2vpn evpn
    [*DCGW1-bgp-vpn1] import-route direct
    [*DCGW1-bgp-vpn1] network 0.0.0.0 0
    [*DCGW1-bgp-vpn1] quit
    [*DCGW1-bgp] quit
    [*DCGW1] ip ip-prefix lp index 10 permit 33.33.33.33 32
    [*DCGW1] route-policy dp permit node 10
    [*DCGW1-route-policy] if-match tag 2000
    [*DCGW1-route-policy] quit
    [*DCGW1] route-policy dp permit node 15
    [*DCGW1-route-policy] if-match ip-prefix lp
    [*DCGW1-route-policy] quit
    [*DCGW1] route-policy dp deny node 20
    [*DCGW1-route-policy] quit
    [*DCGW1] ip vpn-instance vpn1
    [*DCGW1-vpn-instance-vpn1] export route-policy dp evpn
    [*DCGW1-vpn-instance-vpn1] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
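Route-policy dp evaluates its nodes in ascending order: node 10 permits the default route (tagged 2000), node 15 permits the loopback prefix matched by ip-prefix lp, and node 20 denies all other VPN routes, so peers receive only the default route and the loopback route. A simplified model (illustration only, not device code):

```python
import ipaddress

LP = ipaddress.ip_network("33.33.33.33/32")  # ip-prefix lp, index 10

def apply_policy_dp(route):
    """Simplified model of route-policy dp: export only the tagged default
    route and the loopback prefix used for the VPN BGP session."""
    if route.get("tag") == 2000:                     # node 10: if-match tag 2000
        return True
    if ipaddress.ip_network(route["prefix"]) == LP:  # node 15: if-match ip-prefix lp
        return True
    return False                                     # node 20: deny everything else

assert apply_policy_dp({"prefix": "0.0.0.0/0", "tag": 2000})  # default route
assert apply_policy_dp({"prefix": "33.33.33.33/32"})          # loopback route
assert not apply_policy_dp({"prefix": "10.10.10.10/32"})      # filtered out
```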

  10. Configure each DCGW to establish a VPN BGP peer relationship with a VNF.

    # Configure DCGW1.

    [~DCGW1] route-policy p1 deny node 10
    [*DCGW1-route-policy] quit
    [*DCGW1] bgp 100
    [*DCGW1-bgp] ipv4-family vpn-instance vpn1
    [*DCGW1-bgp-vpn1] peer 5.5.5.5 as-number 100
    [*DCGW1-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
    [*DCGW1-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
    [*DCGW1-bgp-vpn1] peer 6.6.6.6 as-number 100
    [*DCGW1-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
    [*DCGW1-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
    [*DCGW1-bgp-vpn1] quit
    [*DCGW1-bgp] quit
    [*DCGW1] commit

    # Configure DCGW2.

    [~DCGW2] route-policy p1 deny node 10
    [*DCGW2-route-policy] quit
    [*DCGW2] bgp 100
    [*DCGW2-bgp] ipv4-family vpn-instance vpn1
    [*DCGW2-bgp-vpn1] peer 5.5.5.5 as-number 100
    [*DCGW2-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
    [*DCGW2-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
    [*DCGW2-bgp-vpn1] peer 6.6.6.6 as-number 100
    [*DCGW2-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
    [*DCGW2-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
    [*DCGW2-bgp-vpn1] quit
    [*DCGW2-bgp] quit
    [*DCGW2] commit

  11. Configure load balancing on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] bgp 100
    [*DCGW1-bgp] ipv4-family vpn-instance vpn1
    [*DCGW1-bgp-vpn1] maximum load-balancing 16
    [*DCGW1-bgp-vpn1] quit
    [*DCGW1-bgp] l2vpn-family evpn
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
    [*DCGW1-bgp-af-evpn] quit
    [*DCGW1-bgp] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] bgp 100
    [*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
    [*L2GW/L3GW1-bgp-vpn1] maximum load-balancing 16
    [*L2GW/L3GW1-bgp-vpn1] quit
    [*L2GW/L3GW1-bgp] l2vpn-family evpn
    [*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
    [*L2GW/L3GW1-bgp-af-evpn] quit
    [*L2GW/L3GW1-bgp] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
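With `maximum load-balancing 16` and the Add-Path settings above, a device can install up to 16 equal-cost next hops for one route and typically hashes each flow onto one of them, so packets of a flow stay on one path while different flows spread across paths. The hash used by real hardware is platform-specific; the sketch below only illustrates per-flow selection:

```python
import hashlib

def pick_path(flow, nexthops):
    """Hash a flow's 5-tuple onto one of the equal-cost next hops so that a
    given flow always takes the same path while flows spread across paths."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return nexthops[int.from_bytes(digest[:4], "big") % len(nexthops)]

# Two equal-cost next hops toward VNF1 (5.5.5.5), as created by the static
# routes configured on the L2GW/L3GWs
paths = ["10.1.1.2", "10.2.1.2"]
flow = ("10.10.10.10", "192.0.2.1", 17, 5000, 53)  # src, dst, proto, sport, dport
assert pick_path(flow, paths) in paths
assert pick_path(flow, paths) == pick_path(flow, paths)  # per-flow stickiness
```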

  12. Verify the configuration.

    Run the display bgp vpnv4 vpn-instance vpn1 peer command on each DCGW. The command output shows that the VPN BGP peer relationship between the DCGW and VNF is in Established state. The following example uses the command output on DCGW1:

    [~DCGW1] display bgp vpnv4 vpn-instance vpn1 peer
     BGP local router ID : 10.6.1.1
    
     Local AS number : 100
    
     VPN-Instance vpn1, Router ID 10.6.1.1:
     Total number of peers : 2                 Peers in established state : 2
    
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down        State  PrefRcv
      5.5.5.5         4         100     8136     8135     0 0118h05m Established        4
      6.6.6.6         4         100     8140     8167     0 0118h07m Established        0

    Run the display bgp vpnv4 vpn-instance vpn1 routing-table command on each DCGW. The command output shows that the DCGW has received the mobile phone route (destined for 10.10.10.10 in this example) from the VNF and the next hop of the route is the VNF IP address. The following example uses the command output on DCGW1:

    [~DCGW1] display bgp vpnv4 vpn-instance vpn1 routing-table
     BGP Local router ID is 10.6.1.1
    
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
     RPKI validation codes: V - valid, I - invalid, N - not-found
    
     VPN-Instance vpn1, Router ID 10.6.1.1:
    
    
     Total Number of Routes: 20
            Network            NextHop                       MED        LocPrf    PrefVal Path/Ogn
    
     *>i    5.5.5.5/32         1.1.1.1                        0          100        0       ?
     * i                       1.1.1.1                        0          100        0       ?
       i                       5.5.5.5                        0          100        0       ?
     *>i    6.6.6.6/32         1.1.1.1                        0          100        0       ?
     * i                       2.2.2.2                        0          100        0       ?
     * i                       2.2.2.2                        0          100        0       ?
     *>     10.1.1.0/24        0.0.0.0                        0                     0       ?
     * i                       5.5.5.5                        0          100        0       ?
     *>     10.1.1.1/32        0.0.0.0                        0                     0       ?
     *>     10.2.1.0/24        0.0.0.0                        0                     0       ?
     * i                       5.5.5.5                        0          100        0       ?
     *>     10.2.1.1/32        0.0.0.0                        0                     0       ?
     *>     10.3.1.0/24        0.0.0.0                        0                     0       ?
     *>     10.3.1.1/32        0.0.0.0                        0                     0       ?
     *>     10.4.1.0/24        0.0.0.0                        0                     0       ?
     *>     10.4.1.1/32        0.0.0.0                        0                     0       ?
     *>i    10.10.10.10/32     5.5.5.5                        0          100        0       ?
     *>     33.33.33.33/32     0.0.0.0                        0                     0       ?
     *>i    44.44.44.44/32     9.9.9.9                        0          100        0       ?
     *>     127.0.0.0/8        0.0.0.0                        0                     0       ?

    Run the display ip routing-table vpn-instance vpn1 command on each DCGW. The command output shows the mobile phone routes in the VPN routing table on the DCGW and the outbound interfaces of the routes are VBDIF interfaces.

    [~DCGW1] display ip routing-table vpn-instance vpn1
    Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
    ------------------------------------------------------------------------------
    Routing Table : vpn1
             Destinations : 20       Routes : 23        
    
    Destination/Mask    Proto   Pre  Cost        Flags NextHop         Interface
    
            0.0.0.0/0   Static  60   0             DB  0.0.0.0         NULL0
            5.5.5.5/32  IBGP    255  0             RD  10.2.1.2        Vbdif20
                        IBGP    255  0             RD  10.1.1.2        Vbdif10
            6.6.6.6/32  IBGP    255  0             RD  10.1.1.3        Vbdif10
                        IBGP    255  0             RD  10.3.1.2        Vbdif30
                        IBGP    255  0             RD  10.4.1.2        Vbdif40
           10.1.1.0/24  Direct  0    0             D   10.1.1.1        Vbdif10
           10.1.1.1/32  Direct  0    0             D   127.0.0.1       Vbdif10
         10.1.1.255/32  Direct  0    0             D   127.0.0.1       Vbdif10
           10.2.1.0/24  Direct  0    0             D   10.2.1.1        Vbdif20
           10.2.1.1/32  Direct  0    0             D   127.0.0.1       Vbdif20
         10.2.1.255/32  Direct  0    0             D   127.0.0.1       Vbdif20
           10.3.1.0/24  Direct  0    0             D   10.3.1.1        Vbdif30
           10.3.1.1/32  Direct  0    0             D   127.0.0.1       Vbdif30
         10.3.1.255/32  Direct  0    0             D   127.0.0.1       Vbdif30
           10.4.1.0/24  Direct  0    0             D   10.4.1.1        Vbdif40
           10.4.1.1/32  Direct  0    0             D   127.0.0.1       Vbdif40
         10.4.1.255/32  Direct  0    0             D   127.0.0.1       Vbdif40
        10.10.10.10/32  IBGP    255  0             RD  5.5.5.5         Vbdif20
                        IBGP    255  0             RD  5.5.5.5         Vbdif10
        33.33.33.33/32  Direct  0    0             D   127.0.0.1       LoopBack2
        44.44.44.44/32  IBGP    255  0             RD  4.4.4.4         VXLAN
          127.0.0.0/8   Direct  0    0             D   127.0.0.1       InLoopBack0
    255.255.255.255/32  Direct  0    0             D   127.0.0.1       InLoopBack0

Configuration Files
  • DCGW1 configuration file

    #
    sysname DCGW1
    #
    evpn
     bypass-vxlan enable
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 11:11
      apply-label per-instance
      export route-policy dp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200  
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0002
     vxlan anycast-gateway enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0003
     vxlan anycast-gateway enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ip address 10.3.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0001
     vxlan anycast-gateway enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ip address 10.4.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0004
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.1.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.2.1 255.255.255.0
    #
    interface LoopBack0
     ip address 9.9.9.9 255.255.255.255
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #
    interface LoopBack2
     ip binding vpn-instance vpn1
     ip address 33.33.33.33 255.255.255.255
    #
    interface Nve1
     source 9.9.9.9
     bypass source 3.3.3.3
     mac-address 00e0-fc00-0009
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 2.2.2.2 enable
      peer 4.4.4.4 enable
     #
     ipv4-family vpn-instance vpn1
      network 0.0.0.0 0
      import-route direct
      maximum load-balancing 16  
      advertise l2vpn evpn
      peer 5.5.5.5 as-number 100
      peer 5.5.5.5 connect-interface LoopBack2
      peer 5.5.5.5 route-policy p1 export
      peer 6.6.6.6 as-number 100
      peer 6.6.6.6 connect-interface LoopBack2
      peer 6.6.6.6 route-policy p1 export
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 capability-advertise add-path both
      peer 1.1.1.1 advertise add-path path-number 16
      peer 1.1.1.1 advertise encap-type vxlan
      peer 2.2.2.2 enable
      peer 2.2.2.2 capability-advertise add-path both
      peer 2.2.2.2 advertise add-path path-number 16
      peer 2.2.2.2 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise encap-type vxlan
      peer 4.4.4.4 route-policy stopuIP export
    #
    ospf 1
     area 0.0.0.0
      network 3.3.3.3 0.0.0.0
      network 9.9.9.9 0.0.0.0
      network 10.6.1.0 0.0.0.255
      network 10.6.2.0 0.0.0.255
    #
    route-policy dp permit node 10
     if-match tag 2000
    #
    route-policy dp permit node 15
     if-match ip-prefix lp
    #
    route-policy dp deny node 20
    #
    route-policy p1 deny node 10
    #
    route-policy stopuIP deny node 10
     if-match ip-prefix uIP
    #
    route-policy stopuIP permit node 20
    #
    ip ip-prefix lp index 10 permit 33.33.33.33 32
    ip ip-prefix uIP index 10 permit 10.10.10.10 32
    #
    ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
    #
    return
  • DCGW2 configuration file

    #
    sysname DCGW2
    #
    evpn
     bypass-vxlan enable
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 22:22
      apply-label per-instance
      export route-policy dp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200  
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0002
     vxlan anycast-gateway enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0003
     vxlan anycast-gateway enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ip address 10.3.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0001
     vxlan anycast-gateway enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ip address 10.4.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0004
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.1.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.3.1 255.255.255.0
    #
    interface LoopBack0
     ip address 9.9.9.9 255.255.255.255
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #
    interface LoopBack2
     ip binding vpn-instance vpn1
     ip address 44.44.44.44 255.255.255.255
    #
    interface Nve1
     source 9.9.9.9
     bypass source 4.4.4.4
     mac-address 00e0-fc00-0009
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 2.2.2.2 enable
      peer 3.3.3.3 enable
     #
     ipv4-family vpn-instance vpn1
      network 0.0.0.0 0
      import-route direct
      maximum load-balancing 16  
      advertise l2vpn evpn
      peer 5.5.5.5 as-number 100
      peer 5.5.5.5 connect-interface LoopBack2
      peer 5.5.5.5 route-policy p1 export
      peer 6.6.6.6 as-number 100
      peer 6.6.6.6 connect-interface LoopBack2
      peer 6.6.6.6 route-policy p1 export
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 capability-advertise add-path both
      peer 1.1.1.1 advertise add-path path-number 16
      peer 1.1.1.1 advertise encap-type vxlan
      peer 2.2.2.2 enable
      peer 2.2.2.2 capability-advertise add-path both
      peer 2.2.2.2 advertise add-path path-number 16
      peer 2.2.2.2 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
      peer 3.3.3.3 route-policy stopuIP export
    #
    ospf 1
     area 0.0.0.0
      network 4.4.4.4 0.0.0.0
      network 9.9.9.9 0.0.0.0
      network 10.6.1.0 0.0.0.255
      network 10.6.3.0 0.0.0.255
    #
    route-policy dp permit node 10
     if-match tag 2000
    #
    route-policy dp permit node 15
     if-match ip-prefix lp
    #
    route-policy dp deny node 20
    #
    route-policy p1 deny node 10
    #
    route-policy stopuIP deny node 10
     if-match ip-prefix uIP
    #
    route-policy stopuIP permit node 20
    #
    ip ip-prefix lp index 10 permit 44.44.44.44 32
    ip ip-prefix uIP index 10 permit 10.10.10.10 32
    #
    ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
    #
    return
  • L2GW/L3GW1 configuration file

    #
    sysname L2GW/L3GW1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 33:33
      apply-label per-instance
      export route-policy sp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0002
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0003
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ip address 10.3.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0001
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ip address 10.4.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0004
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.4.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.2.2 255.255.255.0
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet0/1/4.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface GigabitEthernet0/1/5.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
    #
    interface Nve1
     source 1.1.1.1
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
      peer 3.3.3.3 enable
      peer 4.4.4.4 enable
     #
     ipv4-family vpn-instance vpn1
      import-route static
      maximum load-balancing 16  
      advertise l2vpn evpn import-route-multipath
     #
     l2vpn-family evpn
      undo policy vpn-target
      bestroute add-path path-number 16
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise arp
      peer 2.2.2.2 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise arp
      peer 3.3.3.3 capability-advertise add-path both
      peer 3.3.3.3 advertise add-path path-number 16
      peer 3.3.3.3 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise arp
      peer 4.4.4.4 capability-advertise add-path both
      peer 4.4.4.4 advertise add-path path-number 16
      peer 4.4.4.4 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 1.1.1.1 0.0.0.0
      network 10.6.2.0 0.0.0.255
      network 10.6.4.0 0.0.0.255
    #
    route-policy sp permit node 10
     if-match tag 1000
     apply gateway-ip origin-nexthop
    #
    route-policy sp deny node 20
    #
    ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.1.1.2 tag 1000
    ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.2.1.2 tag 1000
    ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.1.1.3 tag 1000
    #
    return
  • L2GW/L3GW2 configuration file

    #
    sysname L2GW/L3GW2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv4-family
      route-distinguisher 44:44
      apply-label per-instance
      export route-policy sp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ip address 10.1.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0002
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ip address 10.2.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0003
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ip address 10.3.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0001
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ip address 10.4.1.1 255.255.255.0
     arp generate-rd-table enable
     mac-address 00e0-fc00-0004
     vxlan anycast-gateway enable
     arp collect host enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.4.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.3.2 255.255.255.0
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 30
     rewrite pop single
     bridge-domain 30
    #
    interface GigabitEthernet0/1/4.1 mode l2
     encapsulation dot1q vid 40
     rewrite pop single
     bridge-domain 40
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #               
    interface Nve1
     source 2.2.2.2
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 3.3.3.3 enable
      peer 4.4.4.4 enable
     #
     ipv4-family vpn-instance vpn1
      import-route static
      maximum load-balancing 16  
      advertise l2vpn evpn import-route-multipath
     #
     l2vpn-family evpn
      undo policy vpn-target
      bestroute add-path path-number 16
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise arp
      peer 1.1.1.1 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise arp
      peer 3.3.3.3 capability-advertise add-path both
      peer 3.3.3.3 advertise add-path path-number 16
      peer 3.3.3.3 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise arp
      peer 4.4.4.4 capability-advertise add-path both
      peer 4.4.4.4 advertise add-path path-number 16
      peer 4.4.4.4 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 2.2.2.2 0.0.0.0
      network 10.6.3.0 0.0.0.255
      network 10.6.4.0 0.0.0.255
    #
    route-policy sp permit node 10
     if-match tag 1000
     apply gateway-ip origin-nexthop
    #
    route-policy sp deny node 20
    #
    ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.3.1.2 tag 1000
    ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.4.1.2 tag 1000
    #
    return
  • VNF1 configuration file

    For details, see the configuration file of a specific device model.

  • VNF2 configuration file

    For details, see the configuration file of a specific device model.

Example for Configuring IPv6 NFVI Distributed Gateway

This section provides an example for configuring an IPv6 NFVI distributed gateway in a typical usage scenario.

Networking Requirements

Huawei's NFVI telecommunications (telco) cloud is a networking solution that incorporates Data Center Interconnect (DCI) and data center network (DCN) technologies. Mobile phone IPv6 traffic enters the DCN and accesses its virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE). After being processed by the vUGW and vMSE, the traffic is forwarded through the DCN and over the Internet to the destination devices. Similarly, response traffic sent over the Internet from the destination devices back to the mobile phones traverses the same path in reverse. To enable this forwarding and to balance the traffic within the DCN, you need to deploy the NFVI distributed gateway function on the DCN.

Figure 1-1130 Configuring IPv6 NFVI distributed gateway

Interfaces 1 through 5 in this example represent GE 0/1/1, GE 0/1/2, GE 0/1/3, GE 0/1/4, and GE 0/1/5, respectively.



Figure 1-1130 shows the DCN on which the NFVI distributed gateway is deployed. DCGW1 and DCGW2 are the DCN's border gateways. The DCGWs exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions (VNFs). As virtualized NEs, VNF1 and VNF2 can be deployed separately to implement the functions of the vUGW and vMSE. VNF1 and VNF2 are connected to L2GW/L3GW1 and L2GW/L3GW2 through respective interface process units (IPUs).

This networking combines the distributed gateway function and the EVPN VXLAN active-active gateway function:
  • The EVPN VXLAN active-active gateway function is deployed on DCGW1 and DCGW2. Specifically, a bypass VXLAN tunnel is set up between DCGW1 and DCGW2. In addition, they use a virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.

  • The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between them.

In this networking, the NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can each be deployed as a DCGW or an L2GW/L3GW.

Table 1-492 Interface IP addresses and masks

Device       Interface    IP Address and Mask
-----------  -----------  -------------------
DCGW1        GE 0/1/1     10.6.1.1/24
             GE 0/1/2     10.6.2.1/24
             Loopback 0   9.9.9.9/32
             Loopback 1   3.3.3.3/32
             Loopback 2   2001:db8:33::33/128
DCGW2        GE 0/1/1     10.6.1.2/24
             GE 0/1/2     10.6.3.1/24
             Loopback 0   9.9.9.9/32
             Loopback 1   4.4.4.4/32
             Loopback 2   2001:db8:44::44/128
L2GW/L3GW1   GE 0/1/1     10.6.4.1/24
             GE 0/1/2     10.6.2.2/24
             GE 0/1/3     -
             GE 0/1/4     -
             GE 0/1/5     -
             Loopback 1   1.1.1.1/32
L2GW/L3GW2   GE 0/1/1     10.6.4.2/24
             GE 0/1/2     10.6.3.2/24
             GE 0/1/3     -
             GE 0/1/4     -
             Loopback 1   2.2.2.2/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.
  2. Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.
  3. Configure an L3VPN instance and bind it to a VBDIF interface on each DCGW and each L2GW/L3GW.
  4. Configure BGP EVPN on each DCGW and each L2GW/L3GW.
  5. Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.
  6. On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.
  7. On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to retain the original next hops of the static VPN routes.
  8. On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.
  9. Configure each DCGW to establish a VPN BGP peer relationship with a VNF.
  10. Configure load balancing on each DCGW and each L2GW/L3GW.

Procedure

  1. Assign an IP address to each device interface, including the loopback interfaces.

    For configuration details, see Configuration Files in this section.
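
    As an illustration, the addresses planned for DCGW1 in Table 1-492 can be assigned as follows. This is a partial sketch; configure the remaining interfaces and devices in the same way.

    [~DCGW1] interface gigabitethernet 0/1/1
    [~DCGW1-GigabitEthernet0/1/1] undo shutdown
    [*DCGW1-GigabitEthernet0/1/1] ip address 10.6.1.1 255.255.255.0
    [*DCGW1-GigabitEthernet0/1/1] quit
    [*DCGW1] interface loopback 0
    [*DCGW1-LoopBack0] ip address 9.9.9.9 255.255.255.255
    [*DCGW1-LoopBack0] quit
    [*DCGW1] interface loopback 1
    [*DCGW1-LoopBack1] ip address 3.3.3.3 255.255.255.255
    [*DCGW1-LoopBack1] quit
    [*DCGW1] commit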

  2. Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.

    For configuration details, see Configuration Files in this section.
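
    For example, the following sketch enables OSPF on DCGW1 and advertises its loopback and link addresses. The other devices advertise their own addresses in the same way.

    [~DCGW1] ospf 1
    [*DCGW1-ospf-1] area 0
    [*DCGW1-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
    [*DCGW1-ospf-1-area-0.0.0.0] network 9.9.9.9 0.0.0.0
    [*DCGW1-ospf-1-area-0.0.0.0] network 10.6.1.0 0.0.0.255
    [*DCGW1-ospf-1-area-0.0.0.0] network 10.6.2.0 0.0.0.255
    [*DCGW1-ospf-1-area-0.0.0.0] quit
    [*DCGW1-ospf-1] quit
    [*DCGW1] commit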

  3. Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] evpn vpn-instance evrf1 bd-mode
    [*DCGW1-evpn-instance-evrf1] route-distinguisher 1:1
    [*DCGW1-evpn-instance-evrf1] vpn-target 1:1
    [*DCGW1-evpn-instance-evrf1] quit
    [*DCGW1] evpn vpn-instance evrf2 bd-mode
    [*DCGW1-evpn-instance-evrf2] route-distinguisher 2:2
    [*DCGW1-evpn-instance-evrf2] vpn-target 2:2
    [*DCGW1-evpn-instance-evrf2] quit
    [*DCGW1] evpn vpn-instance evrf3 bd-mode
    [*DCGW1-evpn-instance-evrf3] route-distinguisher 3:3
    [*DCGW1-evpn-instance-evrf3] vpn-target 3:3
    [*DCGW1-evpn-instance-evrf3] quit
    [*DCGW1] evpn vpn-instance evrf4 bd-mode
    [*DCGW1-evpn-instance-evrf4] route-distinguisher 4:4
    [*DCGW1-evpn-instance-evrf4] vpn-target 4:4
    [*DCGW1-evpn-instance-evrf4] quit
    [*DCGW1] bridge-domain 10
    [*DCGW1-bd10] vxlan vni 100 split-horizon-mode
    [*DCGW1-bd10] evpn binding vpn-instance evrf1
    [*DCGW1-bd10] quit
    [*DCGW1] bridge-domain 20
    [*DCGW1-bd20] vxlan vni 110 split-horizon-mode
    [*DCGW1-bd20] evpn binding vpn-instance evrf2
    [*DCGW1-bd20] quit
    [*DCGW1] bridge-domain 30
    [*DCGW1-bd30] vxlan vni 120 split-horizon-mode
    [*DCGW1-bd30] evpn binding vpn-instance evrf3
    [*DCGW1-bd30] quit
    [*DCGW1] bridge-domain 40
    [*DCGW1-bd40] vxlan vni 130 split-horizon-mode
    [*DCGW1-bd40] evpn binding vpn-instance evrf4
    [*DCGW1-bd40] quit
    [*DCGW1] commit

    Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.

  4. Configure an L3VPN instance on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] ip vpn-instance vpn1
    [*DCGW1-vpn-instance-vpn1] vxlan vni 200
    [*DCGW1-vpn-instance-vpn1] ipv6-family
    [*DCGW1-vpn-instance-vpn1-af-ipv6] route-distinguisher 11:11
    [*DCGW1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
    [*DCGW1-vpn-instance-vpn1-af-ipv6] quit
    [*DCGW1-vpn-instance-vpn1] quit
    [*DCGW1] interface vbdif10
    [*DCGW1-Vbdif10] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif10] ipv6 enable
    [*DCGW1-Vbdif10] ipv6 address 2001:db8:1::1 64
    [*DCGW1-Vbdif10] ipv6 nd generate-rd-table enable
    [*DCGW1-Vbdif10] vxlan anycast-gateway enable
    [*DCGW1-Vbdif10] mac-address 00e0-fc00-0002
    [*DCGW1-Vbdif10] quit
    [*DCGW1] interface vbdif20
    [*DCGW1-Vbdif20] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif20] ipv6 enable
    [*DCGW1-Vbdif20] ipv6 address 2001:db8:2::1 64
    [*DCGW1-Vbdif20] ipv6 nd generate-rd-table enable
    [*DCGW1-Vbdif20] vxlan anycast-gateway enable
    [*DCGW1-Vbdif20] mac-address 00e0-fc00-0003
    [*DCGW1-Vbdif20] quit
    [*DCGW1] interface vbdif30
    [*DCGW1-Vbdif30] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif30] ipv6 enable
    [*DCGW1-Vbdif30] ipv6 address 2001:db8:3::1 64
    [*DCGW1-Vbdif30] ipv6 nd generate-rd-table enable
    [*DCGW1-Vbdif30] vxlan anycast-gateway enable
    [*DCGW1-Vbdif30] mac-address 00e0-fc00-0001
    [*DCGW1-Vbdif30] quit
    [*DCGW1] interface vbdif40
    [*DCGW1-Vbdif40] ip binding vpn-instance vpn1
    [*DCGW1-Vbdif40] ipv6 enable
    [*DCGW1-Vbdif40] ipv6 address 2001:db8:4::1 64
    [*DCGW1-Vbdif40] ipv6 nd generate-rd-table enable
    [*DCGW1-Vbdif40] vxlan anycast-gateway enable
    [*DCGW1-Vbdif40] mac-address 00e0-fc00-0004
    [*DCGW1-Vbdif40] quit
    [*DCGW1] commit

    Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.

  5. Configure BGP EVPN on DCGW1 and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] ip ipv6-prefix uIP index 10 permit 2001:db8:10::10 128
    [*DCGW1] route-policy stopuIP deny node 10
    [*DCGW1-route-policy] if-match ipv6 address prefix-list uIP
    [*DCGW1-route-policy] quit
    [*DCGW1] route-policy stopuIP permit node 20
    [*DCGW1-route-policy] quit
    [*DCGW1] bgp 100
    [*DCGW1-bgp] peer 1.1.1.1 as-number 100
    [*DCGW1-bgp] peer 1.1.1.1 connect-interface LoopBack 1
    [*DCGW1-bgp] peer 2.2.2.2 as-number 100
    [*DCGW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*DCGW1-bgp] peer 4.4.4.4 as-number 100
    [*DCGW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
    [*DCGW1-bgp] l2vpn-family evpn
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
    [*DCGW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
    [*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
    [*DCGW1-bgp-af-evpn] quit
    [*DCGW1-bgp] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] bgp 100
    [*L2GW/L3GW1-bgp] peer 2.2.2.2 as-number 100
    [*L2GW/L3GW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
    [*L2GW/L3GW1-bgp] peer 3.3.3.3 as-number 100
    [*L2GW/L3GW1-bgp] peer 3.3.3.3 connect-interface LoopBack 1
    [*L2GW/L3GW1-bgp] peer 4.4.4.4 as-number 100
    [*L2GW/L3GW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
    [*L2GW/L3GW1-bgp] l2vpn-family evpn
    [*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 enable
    [*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise nd
    [*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise nd
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise nd
    [*L2GW/L3GW1-bgp-af-evpn] quit
    [*L2GW/L3GW1-bgp] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.

  6. Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] evpn
    [*DCGW1-evpn] bypass-vxlan enable
    [*DCGW1-evpn] quit
    [*DCGW1] interface nve 1
    [*DCGW1-Nve1] source 9.9.9.9
    [*DCGW1-Nve1] bypass source 3.3.3.3
    [*DCGW1-Nve1] mac-address 00e0-fc00-0009
    [*DCGW1-Nve1] vni 100 head-end peer-list protocol bgp
    [*DCGW1-Nve1] vni 110 head-end peer-list protocol bgp
    [*DCGW1-Nve1] vni 120 head-end peer-list protocol bgp
    [*DCGW1-Nve1] vni 130 head-end peer-list protocol bgp
    [*DCGW1-Nve1] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] interface nve 1
    [*L2GW/L3GW1-Nve1] source 1.1.1.1
    [*L2GW/L3GW1-Nve1] vni 100 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] vni 110 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] vni 120 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] vni 130 head-end peer-list protocol bgp
    [*L2GW/L3GW1-Nve1] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.

  7. On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] interface GigabitEthernet0/1/3.1 mode l2
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] encapsulation dot1q vid 10
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] rewrite pop single
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] bridge-domain 10
    [*L2GW/L3GW1-GigabitEthernet0/1/3.1] quit
    [*L2GW/L3GW1] interface GigabitEthernet0/1/4.1 mode l2
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] encapsulation dot1q vid 20
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] rewrite pop single
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] bridge-domain 20
    [*L2GW/L3GW1-GigabitEthernet0/1/4.1] quit
    [*L2GW/L3GW1] interface GigabitEthernet0/1/5.1 mode l2
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] encapsulation dot1q vid 10
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] rewrite pop single
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] bridge-domain 10
    [*L2GW/L3GW1-GigabitEthernet0/1/5.1] quit
    [*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:1::2 tag 1000
    [*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:2::2 tag 1000
    [*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:1::3 tag 1000
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.

  8. On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to retain the original next hops of the static VPN routes.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] bgp 100
    [*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
    [*L2GW/L3GW1-bgp-6-vpn1] import-route static
    [*L2GW/L3GW1-bgp-6-vpn1] advertise l2vpn evpn import-route-multipath
    [*L2GW/L3GW1-bgp-6-vpn1] quit
    [*L2GW/L3GW1-bgp] quit
    [*L2GW/L3GW1] route-policy sp permit node 10
    [*L2GW/L3GW1-route-policy] if-match tag 1000
    [*L2GW/L3GW1-route-policy] apply ipv6 gateway-ip origin-nexthop
    [*L2GW/L3GW1-route-policy] quit
    [*L2GW/L3GW1] route-policy sp deny node 20
    [*L2GW/L3GW1-route-policy] quit
    [*L2GW/L3GW1] ip vpn-instance vpn1
    [*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
    [*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] export route-policy sp evpn
    [*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
    [*L2GW/L3GW1-vpn-instance-vpn1] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.

  9. On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.

    # Configure DCGW1.

    [~DCGW1] ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
    [*DCGW1] interface LoopBack2
    [*DCGW1-LoopBack2] ip binding vpn-instance vpn1
    [*DCGW1-LoopBack2] ipv6 enable
    [*DCGW1-LoopBack2] ipv6 address 2001:db8:33::33 128
    [*DCGW1-LoopBack2] quit
    [*DCGW1] bgp 100
    [*DCGW1-bgp] ipv6-family vpn-instance vpn1
    [*DCGW1-bgp-6-vpn1] advertise l2vpn evpn
    [*DCGW1-bgp-6-vpn1] import-route direct
    [*DCGW1-bgp-6-vpn1] network :: 0
    [*DCGW1-bgp-6-vpn1] quit
    [*DCGW1-bgp] quit
    [*DCGW1] ip ipv6-prefix lp index 10 permit 2001:db8:33::33 128
    [*DCGW1] route-policy dp permit node 10
    [*DCGW1-route-policy] if-match tag 2000
    [*DCGW1-route-policy] quit
    [*DCGW1] route-policy dp permit node 15
    [*DCGW1-route-policy] if-match ipv6 address prefix-list lp
    [*DCGW1-route-policy] quit
    [*DCGW1] route-policy dp deny node 20
    [*DCGW1-route-policy] quit
    [*DCGW1] ip vpn-instance vpn1
    [*DCGW1-vpn-instance-vpn1] ipv6-family
    [*DCGW1-vpn-instance-vpn1-af-ipv6] export route-policy dp evpn
    [*DCGW1-vpn-instance-vpn1-af-ipv6] quit
    [*DCGW1-vpn-instance-vpn1] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

  10. Configure each DCGW to establish a VPN BGP peer relationship with a VNF.

    # Configure DCGW1.

    [~DCGW1] route-policy p1 deny node 10
    [*DCGW1-route-policy] quit
    [*DCGW1] bgp 100
    [*DCGW1-bgp] ipv6-family vpn-instance vpn1
    [*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
    [*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
    [*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
    [*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
    [*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
    [*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
    [*DCGW1-bgp-6-vpn1] quit
    [*DCGW1-bgp] quit
    [*DCGW1] commit

    # Configure DCGW2.

    [~DCGW2] route-policy p1 deny node 10
    [*DCGW2-route-policy] quit
    [*DCGW2] bgp 100
    [*DCGW2-bgp] ipv6-family vpn-instance vpn1
    [*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
    [*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
    [*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
    [*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
    [*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
    [*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
    [*DCGW2-bgp-6-vpn1] quit
    [*DCGW2-bgp] quit
    [*DCGW2] commit

  11. Configure load balancing on each DCGW and each L2GW/L3GW.

    # Configure DCGW1.

    [~DCGW1] bgp 100
    [*DCGW1-bgp] ipv6-family vpn-instance vpn1
    [*DCGW1-bgp-6-vpn1] maximum load-balancing 16
    [*DCGW1-bgp-6-vpn1] quit
    [*DCGW1-bgp] l2vpn-family evpn
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
    [*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
    [*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
    [*DCGW1-bgp-af-evpn] quit
    [*DCGW1-bgp] quit
    [*DCGW1] commit

    Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.

    # Configure L2GW/L3GW1.

    [~L2GW/L3GW1] bgp 100
    [*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
    [*L2GW/L3GW1-bgp-6-vpn1] maximum load-balancing 16
    [*L2GW/L3GW1-bgp-6-vpn1] quit
    [*L2GW/L3GW1-bgp] l2vpn-family evpn
    [*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
    [*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
    [*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
    [*L2GW/L3GW1-bgp-af-evpn] quit
    [*L2GW/L3GW1-bgp] quit
    [*L2GW/L3GW1] commit

    Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
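With Add-Path, a peer advertises up to path-number paths per prefix instead of only the best one, and maximum load-balancing lets the receiver install several equal-cost next hops for the same route. The following Python sketch models that interaction, under the simplifying assumption that the paths tie on all BGP attributes and differ only in IGP metric:

```python
# Illustrative model: from the paths advertised via Add-Path, keep those
# tied with the best (lowest) IGP metric and install up to
# "maximum load-balancing" equal-cost next hops.
def install_ecmp(advertised_paths, max_load_balancing=16):
    best = min(p["metric"] for p in advertised_paths)
    equal_cost = [p for p in advertised_paths if p["metric"] == best]
    return [p["nexthop"] for p in equal_cost[:max_load_balancing]]

paths = [
    {"nexthop": "2001:db8:1::2", "metric": 10},
    {"nexthop": "2001:db8:2::2", "metric": 10},
    {"nexthop": "2001:db8:3::2", "metric": 20},
]
print(install_ecmp(paths))  # ['2001:db8:1::2', '2001:db8:2::2']
```

Without Add-Path, the reflecting peer would advertise only its single best path, and the receiver could never see the alternate next hops it needs for load balancing.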

  12. Verify the configuration.

    Run the display bgp vpnv6 vpn-instance vpn1 peer command on each DCGW. The command output shows that the VPN BGP peer relationship between the DCGW and each VNF is Established. The following example uses the command output on DCGW1:

    [~DCGW1] display bgp vpnv6 vpn-instance vpn1 peer
     BGP local router ID : 9.9.9.9
    
     Local AS number : 100
 Total number of peers : 2                 Peers in established state : 2
    
      VPN-Instance vpn1, Router ID 9.9.9.9:
      Peer            V          AS  MsgRcvd  MsgSent  OutQ  Up/Down        State   PrefRcv
      2001:DB8:5::5   4         100     7136     7135     0 0118h05m Established         4
      2001:DB8:6::6   4         100     7140     7167     0 01:59:11 Established         0

    Run the display bgp vpnv6 vpn-instance vpn1 routing-table command on each DCGW. The command output shows that the DCGW has received the mobile phone route (destined for 2001:DB8:10::10 in this example) from the VNF, with the VNF's IP address as the next hop. The following example uses the command output on DCGW1:

    [~DCGW1] display bgp vpnv6 vpn-instance vpn1 routing-table
     BGP Local router ID is 9.9.9.9
    
     Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
                   h - history,  i - internal, s - suppressed, S - Stale
                   Origin : i - IGP, e - EGP, ? - incomplete
     RPKI validation codes: V - valid, I - invalid, N - not-found
    
     VPN-Instance vpn1, Router ID 9.9.9.9:
    
    
     Total Number of Routes: 19
     *>     Network  : ::                                       PrefixLen : 0   
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 32768
            Label    : 
            Path/Ogn :  i
     * i     
            NextHop  : ::FFFF:9.9.9.9                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 200/NULL
            Path/Ogn :  i
     *>     Network  : 2001:DB8:1::                             PrefixLen : 64  
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:1::1                            PrefixLen : 128 
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:2::                             PrefixLen : 64  
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:2::1                            PrefixLen : 128 
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:3::                             PrefixLen : 64  
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:3::1                            PrefixLen : 128 
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:4::                             PrefixLen : 64  
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:4::1                            PrefixLen : 128 
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>i    Network  : 2001:DB8:5::5                            PrefixLen : 128 
            NextHop  : ::FFFF:1.1.1.1                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     * i     
            NextHop  : ::FFFF:1.1.1.1                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>i    Network  : 2001:DB8:6::6                            PrefixLen : 128 
            NextHop  : ::FFFF:1.1.1.1                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     * i     
            NextHop  : ::FFFF:2.2.2.2                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     * i     
            NextHop  : ::FFFF:2.2.2.2                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
 *>     Network  : 2001:DB8:10::10                          PrefixLen : 128 
        NextHop  : 2001:DB8:5::5                            LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>     Network  : 2001:DB8:33::33                          PrefixLen : 128 
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?
     *>i    Network  : 2001:DB8:44::44                          PrefixLen : 128 
            NextHop  : ::FFFF:9.9.9.9                           LocPrf    : 100 
            MED      : 0                                        PrefVal   : 0
            Label    : 200/NULL
            Path/Ogn :  ?
     *>     Network  : FE80::                                   PrefixLen : 10  
            NextHop  : ::                                       LocPrf    :   
            MED      : 0                                        PrefVal   : 0
            Label    : 
            Path/Ogn :  ?

    Run the display ipv6 routing-table vpn-instance vpn1 command on each DCGW. The command output shows that the VPN routing table on the DCGW contains the mobile phone routes and that the outbound interfaces of these routes are VBDIF interfaces.

    [~DCGW1] display ipv6 routing-table vpn-instance vpn1
    Routing Table : vpn1
             Destinations : 15       Routes : 19        
    
    Destination  : ::                                      PrefixLength : 0
    NextHop      : ::                                      Preference   : 60
    Cost         : 0                                       Protocol     : Static
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : NULL0                                   Flags        : DB
    
    Destination  : 2001:DB8:1::                            PrefixLength : 64
    NextHop      : 2001:DB8:1::1                           Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : D
    
    Destination  : 2001:DB8:1::1                           PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : D
    
    Destination  : 2001:DB8:2::                            PrefixLength : 64
    NextHop      : 2001:DB8:2::1                           Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif20                                 Flags        : D
    
    Destination  : 2001:DB8:2::1                           PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif20                                 Flags        : D
    
    Destination  : 2001:DB8:3::                            PrefixLength : 64
    NextHop      : 2001:DB8:3::1                           Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif30                                 Flags        : D
    
    Destination  : 2001:DB8:3::1                           PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif30                                 Flags        : D
    
    Destination  : 2001:DB8:4::                            PrefixLength : 64
    NextHop      : 2001:DB8:4::1                           Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif40                                 Flags        : D
    
    Destination  : 2001:DB8:4::1                           PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif40                                 Flags        : D
    
    Destination  : 2001:DB8:5::5                           PrefixLength : 128
    NextHop      : 2001:DB8:2::2                           Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : 2001:DB8:2::2                           TunnelID     : 0x0
    Interface    : Vbdif20                                 Flags        : RD
    
    Destination  : 2001:DB8:5::5                           PrefixLength : 128
    NextHop      : 2001:DB8:1::2                           Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : 2001:DB8:1::2                           TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : RD
    
    Destination  : 2001:DB8:6::6                           PrefixLength : 128
    NextHop      : 2001:DB8:1::3                           Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : 2001:DB8:1::3                           TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : RD
    
    Destination  : 2001:DB8:6::6                           PrefixLength : 128
    NextHop      : 2001:DB8:4::2                           Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : 2001:DB8:4::2                           TunnelID     : 0x0
    Interface    : Vbdif40                                 Flags        : RD
    
    Destination  : 2001:DB8:6::6                           PrefixLength : 128
    NextHop      : 2001:DB8:3::2                           Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : 2001:DB8:3::2                           TunnelID     : 0x0
    Interface    : Vbdif30                                 Flags        : RD
    
    Destination  : 2001:DB8:10::10                         PrefixLength : 128
    NextHop      : 2001:DB8:5::5                           Preference   : 0
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : D
    
    Destination  : 2001:DB8:10::10                         PrefixLength : 128
    NextHop      : 2001:DB8:5::5                           Preference   : 0
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif20                                 Flags        : D
    
    Destination  : 2001:DB8:33::33                         PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : LoopBack2                               Flags        : D
    
    Destination  : 2001:DB8:44::44                         PrefixLength : 128
    NextHop      : ::FFFF:4.4.4.4                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::                                      TunnelID     : 0x0000000027f0000001
    Interface    : VXLAN                                   Flags        : RD
    
    Destination  : FE80::                                  PrefixLength : 10
    NextHop      : ::                                      Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : NULL0                                   Flags        : DB
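The ::FFFF:a.b.c.d next hops in the preceding output are IPv4-mapped IPv6 addresses: when IPv6 VPN routes are exchanged over IPv4 BGP sessions, the IPv4 peer (VTEP) address is represented in this mapped form. Python's standard ipaddress module can be used to inspect the mapping:

```python
import ipaddress

# ::FFFF:a.b.c.d embeds an IPv4 address in the low-order 32 bits of an
# IPv6 address (the IPv4-mapped format defined in RFC 4291).
nh = ipaddress.IPv6Address("::ffff:4.4.4.4")
print(nh.ipv4_mapped)  # 4.4.4.4
print(nh.exploded)     # 0000:0000:0000:0000:0000:ffff:0404:0404
```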

Configuration Files
  • DCGW1 configuration file

    #
    sysname DCGW1
    #
    evpn
     bypass-vxlan enable
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 11:11
      apply-label per-instance
      export route-policy dp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200  
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:1::1/64
     mac-address 00e0-fc00-0002
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:2::1/64
     mac-address 00e0-fc00-0003
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:3::1/64
     mac-address 00e0-fc00-0001
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:4::1/64
     mac-address 00e0-fc00-0004
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.1.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.2.1 255.255.255.0
    #
    interface LoopBack0
     ip address 9.9.9.9 255.255.255.255
    #
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #
    interface LoopBack2
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:33::33/128
    #
    interface Nve1
     source 9.9.9.9
     bypass source 3.3.3.3
     mac-address 00e0-fc00-0009
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 2.2.2.2 enable
      peer 4.4.4.4 enable
     #
     ipv6-family vpn-instance vpn1
      network :: 0
      import-route direct
      maximum load-balancing 16  
      advertise l2vpn evpn
      peer 2001:db8:5::5 as-number 100
      peer 2001:db8:5::5 connect-interface LoopBack2
      peer 2001:db8:5::5 route-policy p1 export
      peer 2001:db8:6::6 as-number 100
      peer 2001:db8:6::6 connect-interface LoopBack2
      peer 2001:db8:6::6 route-policy p1 export
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 capability-advertise add-path both
      peer 1.1.1.1 advertise add-path path-number 16
      peer 1.1.1.1 advertise encap-type vxlan
      peer 2.2.2.2 enable
      peer 2.2.2.2 capability-advertise add-path both
      peer 2.2.2.2 advertise add-path path-number 16
      peer 2.2.2.2 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise encap-type vxlan
      peer 4.4.4.4 route-policy stopuIP export
    #
    ospf 1
     area 0.0.0.0
      network 3.3.3.3 0.0.0.0
      network 9.9.9.9 0.0.0.0
      network 10.6.1.0 0.0.0.255
      network 10.6.2.0 0.0.0.255
    #
    route-policy dp permit node 10
     if-match tag 2000
    #
    route-policy dp permit node 15
     if-match ipv6 address prefix-list lp
    #
    route-policy dp deny node 20
    #
    route-policy p1 deny node 10
    #
    route-policy stopuIP deny node 10
     if-match ipv6 address prefix-list uIP
    #
    route-policy stopuIP permit node 20
    #
    ip ipv6-prefix lp index 10 permit 2001:db8:33::33 128
    ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128
    #
    ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
    #
    return
  • DCGW2 configuration file

    #
    sysname DCGW2
    #
    evpn
     bypass-vxlan enable
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 22:22
      apply-label per-instance
      export route-policy dp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200  
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:1::1/64
     mac-address 00e0-fc00-0002
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:2::1/64
     mac-address 00e0-fc00-0003
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:3::1/64
     mac-address 00e0-fc00-0001
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:4::1/64
     mac-address 00e0-fc00-0004
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.1.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.3.1 255.255.255.0
    #
    interface LoopBack0
     ip address 9.9.9.9 255.255.255.255
    #
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #
    interface LoopBack2
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:44::44/128
    #
    interface Nve1
     source 9.9.9.9
     bypass source 4.4.4.4
     mac-address 00e0-fc00-0009
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 2.2.2.2 enable
      peer 3.3.3.3 enable
     #
     ipv6-family vpn-instance vpn1
      network :: 0
      import-route direct
      maximum load-balancing 16  
      advertise l2vpn evpn
      peer 2001:db8:5::5 as-number 100
      peer 2001:db8:5::5 connect-interface LoopBack2
      peer 2001:db8:5::5 route-policy p1 export
      peer 2001:db8:6::6 as-number 100
      peer 2001:db8:6::6 connect-interface LoopBack2
      peer 2001:db8:6::6 route-policy p1 export
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 1.1.1.1 enable
      peer 1.1.1.1 capability-advertise add-path both
      peer 1.1.1.1 advertise add-path path-number 16
      peer 1.1.1.1 advertise encap-type vxlan
      peer 2.2.2.2 enable
      peer 2.2.2.2 capability-advertise add-path both
      peer 2.2.2.2 advertise add-path path-number 16
      peer 2.2.2.2 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise encap-type vxlan
      peer 3.3.3.3 route-policy stopuIP export
    #
    ospf 1
     area 0.0.0.0
      network 4.4.4.4 0.0.0.0
      network 9.9.9.9 0.0.0.0
      network 10.6.1.0 0.0.0.255
      network 10.6.3.0 0.0.0.255
    #
    route-policy dp permit node 10
     if-match tag 2000
    #
    route-policy dp permit node 15
     if-match ipv6 address prefix-list lp
    #
    route-policy dp deny node 20
    #
    route-policy p1 deny node 10
    #
    route-policy stopuIP deny node 10
     if-match ipv6 address prefix-list uIP
    #
    route-policy stopuIP permit node 20
    #
    ip ipv6-prefix lp index 10 permit 2001:db8:44::44 128
    ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128
    #
    ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
    #
    return
  • L2GW/L3GW1 configuration file

    #
    sysname L2GW/L3GW1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 33:33
      apply-label per-instance
      export route-policy sp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:1::1/64
     mac-address 00e0-fc00-0002
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:2::1/64
     mac-address 00e0-fc00-0003
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:3::1/64
     mac-address 00e0-fc00-0001
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:4::1/64
     mac-address 00e0-fc00-0004
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.4.1 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.2.2 255.255.255.0
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface GigabitEthernet0/1/4.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface GigabitEthernet0/1/5.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
    #
    interface Nve1
     source 1.1.1.1
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 2.2.2.2 as-number 100
     peer 2.2.2.2 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 2.2.2.2 enable
      peer 3.3.3.3 enable
      peer 4.4.4.4 enable
     #
     ipv6-family vpn-instance vpn1
      import-route static
      maximum load-balancing 16  
      advertise l2vpn evpn import-route-multipath
     #
     l2vpn-family evpn
      undo policy vpn-target
      bestroute add-path path-number 16
      peer 2.2.2.2 enable
      peer 2.2.2.2 advertise nd
      peer 2.2.2.2 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise nd
      peer 3.3.3.3 capability-advertise add-path both
      peer 3.3.3.3 advertise add-path path-number 16
      peer 3.3.3.3 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise nd
      peer 4.4.4.4 capability-advertise add-path both
      peer 4.4.4.4 advertise add-path path-number 16
      peer 4.4.4.4 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 1.1.1.1 0.0.0.0
      network 10.6.2.0 0.0.0.255
      network 10.6.4.0 0.0.0.255
    #
    route-policy sp permit node 10
     if-match tag 1000
     apply ipv6 gateway-ip origin-nexthop
    #
    route-policy sp deny node 20
    #
    ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:1::2 tag 1000
    ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:2::2 tag 1000
    ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:1::3 tag 1000
    #
    return
  • L2GW/L3GW2 configuration file

    #
    sysname L2GW/L3GW2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 1:1
     vpn-target 1:1 export-extcommunity
     vpn-target 1:1 import-extcommunity
    #
    evpn vpn-instance evrf2 bd-mode
     route-distinguisher 2:2
     vpn-target 2:2 export-extcommunity
     vpn-target 2:2 import-extcommunity
    #
    evpn vpn-instance evrf3 bd-mode
     route-distinguisher 3:3
     vpn-target 3:3 export-extcommunity
     vpn-target 3:3 import-extcommunity
    #
    evpn vpn-instance evrf4 bd-mode
     route-distinguisher 4:4
     vpn-target 4:4 export-extcommunity
     vpn-target 4:4 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 44:44
      apply-label per-instance
      export route-policy sp evpn
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 200
    #
    bridge-domain 10
     vxlan vni 100 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    bridge-domain 20
     vxlan vni 110 split-horizon-mode
     evpn binding vpn-instance evrf2
    #
    bridge-domain 30
     vxlan vni 120 split-horizon-mode
     evpn binding vpn-instance evrf3
    #
    bridge-domain 40
     vxlan vni 130 split-horizon-mode
     evpn binding vpn-instance evrf4
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:1::1/64
     mac-address 00e0-fc00-0002
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:2::1/64
     mac-address 00e0-fc00-0003
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif30
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:3::1/64
     mac-address 00e0-fc00-0001
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface Vbdif40
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:db8:4::1/64
     mac-address 00e0-fc00-0004
     ipv6 nd collect host enable
     ipv6 nd generate-rd-table enable
     vxlan anycast-gateway enable
    #
    interface GigabitEthernet0/1/1
     undo shutdown
     ip address 10.6.4.2 255.255.255.0
    #
    interface GigabitEthernet0/1/2
     undo shutdown
     ip address 10.6.3.2 255.255.255.0
    #
    interface GigabitEthernet0/1/3.1 mode l2
     encapsulation dot1q vid 30
     rewrite pop single
     bridge-domain 30
    #
    interface GigabitEthernet0/1/4.1 mode l2
     encapsulation dot1q vid 40
     rewrite pop single
     bridge-domain 40
    #
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #               
    interface Nve1
     source 2.2.2.2
     vni 100 head-end peer-list protocol bgp
     vni 110 head-end peer-list protocol bgp
     vni 120 head-end peer-list protocol bgp
     vni 130 head-end peer-list protocol bgp
    #
    bgp 100
     peer 1.1.1.1 as-number 100
     peer 1.1.1.1 connect-interface LoopBack1
     peer 3.3.3.3 as-number 100
     peer 3.3.3.3 connect-interface LoopBack1
     peer 4.4.4.4 as-number 100
     peer 4.4.4.4 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 1.1.1.1 enable
      peer 3.3.3.3 enable
      peer 4.4.4.4 enable
     #
     ipv6-family vpn-instance vpn1
      import-route static
      maximum load-balancing 16  
      advertise l2vpn evpn import-route-multipath
     #
     l2vpn-family evpn
      undo policy vpn-target
      bestroute add-path path-number 16
      peer 1.1.1.1 enable
      peer 1.1.1.1 advertise nd
      peer 1.1.1.1 advertise encap-type vxlan
      peer 3.3.3.3 enable
      peer 3.3.3.3 advertise nd
      peer 3.3.3.3 capability-advertise add-path both
      peer 3.3.3.3 advertise add-path path-number 16
      peer 3.3.3.3 advertise encap-type vxlan
      peer 4.4.4.4 enable
      peer 4.4.4.4 advertise nd
      peer 4.4.4.4 capability-advertise add-path both
      peer 4.4.4.4 advertise add-path path-number 16
      peer 4.4.4.4 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 2.2.2.2 0.0.0.0
      network 10.6.3.0 0.0.0.255
      network 10.6.4.0 0.0.0.255
    #
    route-policy sp permit node 10
     if-match tag 1000
     apply ipv6 gateway-ip origin-nexthop
    #
    route-policy sp deny node 20
    #
    ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:3::2 tag 1000
    ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:4::2 tag 1000
    #
    return
  • VNF1 configuration file

    For details, see the configuration file of a specific device model.

  • VNF2 configuration file

    For details, see the configuration file of a specific device model.

Example for Configuring Three-Segment VXLAN to Implement Layer 3 Interworking (IPv6 Services)

This section provides an example for configuring three-segment VXLAN to implement Layer 3 interworking between VMs in different DCs.

Networking Requirements

In Figure 1-1131, DC A and DC B reside in different BGP ASs. To allow intra-DC VM communication (between VMa1 and VMa2 in DC A, and between VMb1 and VMb2 in DC B), configure BGP EVPN on the devices in each DC to create VXLAN tunnels between distributed gateways. To allow IPv6 service interworking across DCs (for example, between VMa1 and VMb2), configure BGP EVPN on Leaf 2 and Leaf 3 to create another VXLAN tunnel between them. In this way, three VXLAN tunnel segments are established end to end to implement DC interconnection.

Figure 1-1131 Networking for three-segment VXLAN configuration

Interfaces 1 through 3 in this example represent GE 0/1/0, GE 0/2/0, and GE 0/3/0, respectively.



Table 1-493 Interface IP addresses

Device  | Interface | IP Address      | Device  | Interface | IP Address
--------|-----------|-----------------|---------|-----------|----------------
Device1 | GE 0/1/0  | 192.168.50.1/24 | Device2 | GE 0/1/0  | 192.168.60.1/24
        | GE 0/2/0  | 192.168.1.1/24  |         | GE 0/2/0  | 192.168.1.2/24
        | LoopBack1 | 1.1.1.1/32      |         | LoopBack1 | 2.2.2.2/32
Spine1  | GE 0/1/0  | 192.168.10.1/24 | Spine2  | GE 0/1/0  | 192.168.30.1/24
        | GE 0/2/0  | 192.168.20.1/24 |         | GE 0/2/0  | 192.168.40.1/24
        | LoopBack1 | 3.3.3.3/32      |         | LoopBack1 | 4.4.4.4/32
Leaf1   | GE 0/1/0  | 192.168.10.2/24 | Leaf4   | GE 0/1/0  | 192.168.40.2/24
        | GE 0/2/0  | -               |         | GE 0/2/0  | -
        | LoopBack1 | 5.5.5.5/32      |         | LoopBack1 | 8.8.8.8/32
Leaf2   | GE 0/1/0  | 192.168.20.2/24 | Leaf3   | GE 0/1/0  | 192.168.30.2/24
        | GE 0/2/0  | -               |         | GE 0/2/0  | -
        | GE 0/3/0  | 192.168.50.2/24 |         | GE 0/3/0  | 192.168.60.2/24
        | LoopBack1 | 6.6.6.6/32      |         | LoopBack1 | 7.7.7.7/32

Configuration Roadmap

The configuration roadmap is as follows:

  1. Configure IP addresses for each node.

  2. Configure an IGP for nodes to communicate with each other.

  3. Configure static routes for DCs to communicate with each other.

  4. Configure BGP EVPN on DC A and DC B to create VXLAN tunnels between distributed gateways.

  5. Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.

Data Preparation

To complete the configuration, you need the following data:

  • VLAN IDs of VMs

  • BD IDs

  • VNI IDs of BDs and VPN instances
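Each VNI prepared above is carried on the wire as a 24-bit value inside the 8-byte VXLAN header defined in RFC 7348. The following Python sketch (illustrative only, not device code) packs and unpacks that header:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(8) + reserved(24) + VNI(24) + reserved(8)."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header[:8])
    if not (flags_word >> 24) & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return vni_word >> 8
```

The 24-bit width is what allows roughly 16 million VNIs, compared with the 4096-VLAN limit mentioned in the introduction.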

Procedure

  1. Assign an IP address to each node interface, including the loopback interface.

    For configuration details, see Configuration Files in this section.

  2. Configure an IGP. In this example, OSPF is used.

    For configuration details, see Configuration Files in this section.

  3. Configure static routes for DCs to communicate with each other.

    For configuration details, see Configuration Files in this section.

  4. Configure BGP EVPN on DC A and DC B to create VXLAN tunnels between distributed gateways.
    1. Configure service access points on leaf nodes.

      # Configure Leaf 1.

      [~Leaf1] bridge-domain 10
      [*Leaf1-bd10] quit
      [*Leaf1] interface GigabitEthernet 0/2/0.1 mode l2
      [*Leaf1-GigabitEthernet0/2/0.1] encapsulation dot1q vid 10
      [*Leaf1-GigabitEthernet0/2/0.1] rewrite pop single
      [*Leaf1-GigabitEthernet0/2/0.1] bridge-domain 10
      [*Leaf1-GigabitEthernet0/2/0.1] quit
      [*Leaf1] commit

      Repeat this step for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see Configuration Files in this section.
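The rewrite pop single action strips the single dot1q tag from an incoming frame before the frame is mapped into the bridge domain. A toy Python illustration of that tag removal (assuming the standard 802.1Q TPID of 0x8100):

```python
def pop_single_vlan(frame: bytes) -> bytes:
    """Remove one 802.1Q tag (4 bytes after the two MAC addresses), if present."""
    # Ethernet layout: dst MAC (6) + src MAC (6) + TPID/EtherType (2) + ...
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != 0x8100:              # not a single-tagged frame; leave unchanged
        return frame
    return frame[:12] + frame[16:]  # drop TPID + TCI (the VLAN ID lives in the TCI)

# single-tagged frame: MACs + TPID 0x8100 + TCI (VID 10) + inner EtherType 0x0800
tagged = bytes(12) + b"\x81\x00" + (10).to_bytes(2, "big") + b"\x08\x00"
```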

    2. Establish IBGP EVPN peer relationships between Leaf 1 and Leaf 2 in DC A and between Leaf 3 and Leaf 4 in DC B.

      # Configure Leaf 1.

      [~Leaf1] bgp 100
      [*Leaf1-bgp] peer 6.6.6.6 as-number 100
      [*Leaf1-bgp] peer 6.6.6.6 connect-interface LoopBack 1
      [*Leaf1-bgp] l2vpn-family evpn
      [*Leaf1-bgp-af-evpn] peer 6.6.6.6 enable
      [*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise encap-type vxlan
      [*Leaf1-bgp-af-evpn] quit
      [*Leaf1-bgp] quit
      [*Leaf1] commit

      Repeat this step for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see Configuration Files in this section.

    3. Configure VPN and EVPN instances on leaf nodes.

      # Configure Leaf 1.

      [~Leaf1] ip vpn-instance vpn1
      [*Leaf1-vpn-instance-vpn1] vxlan vni 5010
      [*Leaf1-vpn-instance-vpn1] ipv6-family
      [*Leaf1-vpn-instance-vpn1-af-ipv6] route-distinguisher 11:11
      [*Leaf1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
      [*Leaf1-vpn-instance-vpn1-af-ipv6] quit
      [*Leaf1-vpn-instance-vpn1] quit
      [*Leaf1] evpn vpn-instance evrf1 bd-mode
      [*Leaf1-evpn-instance-evrf1] route-distinguisher 10:1
      [*Leaf1-evpn-instance-evrf1] vpn-target 11:1
      [*Leaf1-evpn-instance-evrf1] quit
      [*Leaf1] bridge-domain 10
      [*Leaf1-bd10] vxlan vni 10 split-horizon-mode
      [*Leaf1-bd10] evpn binding vpn-instance evrf1
      [*Leaf1-bd10] quit
      [*Leaf1] commit

      Repeat this step for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see Configuration Files in this section.

    4. Enable ingress replication on leaf nodes.

      # Configure Leaf 1.

      [~Leaf1] interface nve 1
      [*Leaf1-Nve1] source 5.5.5.5
      [*Leaf1-Nve1] vni 10 head-end peer-list protocol bgp
      [*Leaf1-Nve1] quit
      [*Leaf1] commit

      Repeat this step for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see Configuration Files in this section.

    5. Configure Layer 3 VXLAN gateway information on leaf nodes.

      # Configure Leaf 1.

      [~Leaf1] interface vbdif10
      [*Leaf1-Vbdif10] ip binding vpn-instance vpn1
      [*Leaf1-Vbdif10] ipv6 enable
      [*Leaf1-Vbdif10] ipv6 address 2001:DB8:10::1 64
      [*Leaf1-Vbdif10] vxlan anycast-gateway enable
      [*Leaf1-Vbdif10] ipv6 nd collect host enable
      [*Leaf1-Vbdif10] quit
      [*Leaf1] commit

      Repeat this step for Leaf 2, Leaf 3, and Leaf 4. For configuration details, see Configuration Files in this section.

    6. Configure IRB route advertisement between Leaf 1 and Leaf 2 in DC A, between Leaf 3 and Leaf 4 in DC B, and between Leaf 2 and Leaf 3 across the two DCs.

      # Configure Leaf 1.

      [~Leaf1] bgp 100
      [*Leaf1-bgp] l2vpn-family evpn
      [*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise irbv6
      [*Leaf1-bgp-af-evpn] quit
      [*Leaf1-bgp] quit
      [*Leaf1] commit

      # Configure Leaf 2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 5.5.5.5 advertise irbv6
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise irbv6
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      Configuring Leaf 4 is similar to configuring Leaf 1, and configuring Leaf 3 is similar to configuring Leaf 2. For configuration details, see Configuration Files in this section.

      Run the display vxlan tunnel command on a leaf node to check VXLAN tunnel information. The following example uses the command output on Leaf 1.
      [~Leaf1] display vxlan tunnel
      Number of vxlan tunnel : 1
      Tunnel ID   Source           Destination      State  Type    Uptime
      ---------------------------------------------------------------------
      4026531841  5.5.5.5          6.6.6.6          up     dynamic 00:05:36
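When collecting such output in an automation script, the tunnel table can be extracted with a few lines of Python (a hypothetical helper, not part of the device software):

```python
def parse_vxlan_tunnels(output: str):
    """Parse 'display vxlan tunnel' output into a list of tunnel dicts."""
    tunnels = []
    for line in output.splitlines():
        fields = line.split()
        # data rows have six columns and start with a numeric tunnel ID
        if len(fields) == 6 and fields[0].isdigit():
            tunnels.append({
                "id": int(fields[0]), "source": fields[1],
                "destination": fields[2], "state": fields[3],
                "type": fields[4], "uptime": fields[5],
            })
    return tunnels

sample = """Number of vxlan tunnel : 1
Tunnel ID   Source           Destination      State  Type    Uptime
---------------------------------------------------------------------
4026531841  5.5.5.5          6.6.6.6          up     dynamic 00:05:36"""
```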

  5. Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.
    1. Establish an EBGP EVPN peer relationship between Leaf 2 and Leaf 3.

      Because VPN and EVPN instances have already been configured on Leaf 2 and Leaf 3, you only need to establish an EBGP EVPN peer relationship between them; IP reachability between their loopback addresses is provided by the static routes configured earlier.

      # Configure Leaf 2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] peer 7.7.7.7 as-number 200
      [*Leaf2-bgp] peer 7.7.7.7 connect-interface LoopBack1
      [*Leaf2-bgp] peer 7.7.7.7 ebgp-max-hop 255
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 enable
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise encap-type vxlan
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      # Configure Leaf 3.

      [~Leaf3] bgp 200
      [*Leaf3-bgp] peer 6.6.6.6 as-number 100
      [*Leaf3-bgp] peer 6.6.6.6 connect-interface LoopBack1
      [*Leaf3-bgp] peer 6.6.6.6 ebgp-max-hop 255
      [*Leaf3-bgp] l2vpn-family evpn
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 enable
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 advertise encap-type vxlan
      [*Leaf3-bgp-af-evpn] quit
      [*Leaf3-bgp] quit
      [*Leaf3] commit

    2. Configure the regeneration of IRB routes and IP prefix routes in EVPN routing tables.

      # Configure Leaf 2.

      [~Leaf2] bgp 100
      [*Leaf2-bgp] l2vpn-family evpn
      [*Leaf2-bgp-af-evpn] peer 5.5.5.5 import reoriginate
      [*Leaf2-bgp-af-evpn] peer 5.5.5.5 advertise route-reoriginated evpn ipv6
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 import reoriginate
      [*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise route-reoriginated evpn ipv6
      [*Leaf2-bgp-af-evpn] quit
      [*Leaf2-bgp] quit
      [*Leaf2] commit

      # Configure Leaf 3.

      [~Leaf3] bgp 200
      [*Leaf3-bgp] l2vpn-family evpn
      [*Leaf3-bgp-af-evpn] peer 8.8.8.8 import reoriginate
      [*Leaf3-bgp-af-evpn] peer 8.8.8.8 advertise route-reoriginated evpn ipv6
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 import reoriginate
      [*Leaf3-bgp-af-evpn] peer 6.6.6.6 advertise route-reoriginated evpn ipv6
      [*Leaf3-bgp-af-evpn] quit
      [*Leaf3-bgp] quit
      [*Leaf3] commit
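Conceptually, import reoriginate plus advertise route-reoriginated makes each border leaf re-advertise a learned EVPN route with itself as the next hop, so devices in one DC only ever resolve remote prefixes through their local border leaf. A toy model of that next-hop rewrite (illustrative names only, not the device implementation):

```python
def reoriginate(route: dict, local_vtep: str) -> dict:
    """Re-advertise a learned route with the border leaf's own VTEP as next hop."""
    regenerated = dict(route)            # keep prefix and other attributes unchanged
    regenerated["nexthop"] = local_vtep  # rewrite the next hop to the local VTEP
    return regenerated

# Leaf 3 re-advertises a DC B host route toward DC A with its own VTEP (7.7.7.7)
learned = {"prefix": "2001:DB8:40::/64", "nexthop": "8.8.8.8"}
```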

  6. Verify the configuration.

    After completing the configurations, run the display vxlan tunnel command on each leaf node to view VXLAN tunnel information. The following example uses the command output on Leaf 2.

    [~Leaf2] display vxlan tunnel
    Number of vxlan tunnel : 2
    Tunnel ID   Source           Destination      State  Type    Uptime
    ---------------------------------------------------------------------
    4026531841  6.6.6.6          5.5.5.5          up     dynamic 00:11:01
    4026531842  6.6.6.6          7.7.7.7          up     dynamic 00:12:11

    Run the display ipv6 routing-table vpn-instance vpn1 command on each leaf node to view IP route information. The following example uses the command output on Leaf 1.

    [~Leaf1] display ipv6 routing-table vpn-instance vpn1
    Routing Table : vpn1
             Destinations : 6        Routes : 6         
    
    Destination  : 2001:DB8:10::                           PrefixLength : 64
    NextHop      : 2001:DB8:10::1                          Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : D
    
    Destination  : 2001:DB8:10::1                          PrefixLength : 128
    NextHop      : ::1                                     Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : Vbdif10                                 Flags        : D
    
    Destination  : 2001:DB8:20::                           PrefixLength : 64
    NextHop      : ::FFFF:6.6.6.6                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::                                      TunnelID     : 0x0000000027f0000001
    Interface    : VXLAN                                   Flags        : RD
    
    Destination  : 2001:DB8:30::                           PrefixLength : 64
    NextHop      : ::FFFF:6.6.6.6                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::                                      TunnelID     : 0x0000000027f0000001
    Interface    : VXLAN                                   Flags        : RD
    
    Destination  : 2001:DB8:40::                           PrefixLength : 64
    NextHop      : ::FFFF:6.6.6.6                          Preference   : 255
    Cost         : 0                                       Protocol     : IBGP
    RelayNextHop : ::                                      TunnelID     : 0x0000000027f0000001
    Interface    : VXLAN                                   Flags        : RD
    
    Destination  : FE80::                                  PrefixLength : 10
    NextHop      : ::                                      Preference   : 0
    Cost         : 0                                       Protocol     : Direct
    RelayNextHop : ::                                      TunnelID     : 0x0
    Interface    : NULL0                                   Flags        : DB

    After configurations are complete, VMa1 and VMb2 can communicate with each other.
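In the routing table above, every remote subnet resolves over the VXLAN tunnel interface. When scripting this verification, a hypothetical helper could pull those entries out of the display ipv6 routing-table output:

```python
def remote_prefixes(output: str):
    """Return destinations whose outgoing interface is the VXLAN tunnel."""
    prefixes, current = [], None
    for line in output.splitlines():
        if line.startswith("Destination"):
            # e.g. "Destination  : 2001:DB8:20::    PrefixLength : 64"
            parts = line.split()
            current = parts[2] + "/" + parts[-1]
        elif line.startswith("Interface") and current:
            if line.split()[2] == "VXLAN":
                prefixes.append(current)
            current = None
    return prefixes

sample = """Destination  : 2001:DB8:10::                           PrefixLength : 64
NextHop      : 2001:DB8:10::1                          Preference   : 0
Interface    : Vbdif10                                 Flags        : D
Destination  : 2001:DB8:20::                           PrefixLength : 64
NextHop      : ::FFFF:6.6.6.6                          Preference   : 255
Interface    : VXLAN                                   Flags        : RD"""
```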

Configuration Files
  • Spine 1 configuration file

    #
    sysname Spine1
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.10.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.20.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 3.3.3.3 255.255.255.255
    #               
    ospf 1          
     area 0.0.0.0   
      network 3.3.3.3 0.0.0.0
      network 192.168.10.0 0.0.0.255
      network 192.168.20.0 0.0.0.255
    #               
    return 
  • Leaf 1 configuration file

    #
    sysname Leaf1
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:DB8:10::1/64
     ipv6 nd collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.10.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #
    interface LoopBack1
     ip address 5.5.5.5 255.255.255.255
    #
    interface Nve1
     source 5.5.5.5
     vni 10 head-end peer-list protocol bgp
    #
    bgp 100
     peer 6.6.6.6 as-number 100
     peer 6.6.6.6 connect-interface LoopBack1
     #              
     ipv4-family unicast
      undo synchronization
      peer 6.6.6.6 enable
     #
     ipv6-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 6.6.6.6 enable
      peer 6.6.6.6 advertise irbv6
      peer 6.6.6.6 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 5.5.5.5 0.0.0.0
      network 192.168.10.0 0.0.0.255
    #
    return
  • Leaf 2 configuration file

    #
    sysname Leaf2
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:DB8:20::1/64
     ipv6 nd collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.20.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #               
    interface GigabitEthernet0/3/0
     undo shutdown  
     ip address 192.168.50.2 255.255.255.0
    #
    interface LoopBack1
     ip address 6.6.6.6 255.255.255.255
    #
    interface Nve1
     source 6.6.6.6
     vni 20 head-end peer-list protocol bgp
    #
    bgp 100
     peer 5.5.5.5 as-number 100
     peer 5.5.5.5 connect-interface LoopBack1
     peer 7.7.7.7 as-number 200
     peer 7.7.7.7 ebgp-max-hop 255
     peer 7.7.7.7 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 5.5.5.5 enable
      peer 7.7.7.7 enable
     #
     ipv6-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 5.5.5.5 enable
      peer 5.5.5.5 advertise irbv6
      peer 5.5.5.5 advertise encap-type vxlan
      peer 5.5.5.5 import reoriginate
      peer 5.5.5.5 advertise route-reoriginated evpn ipv6
      peer 7.7.7.7 enable
      peer 7.7.7.7 advertise irbv6
      peer 7.7.7.7 advertise encap-type vxlan
      peer 7.7.7.7 import reoriginate
      peer 7.7.7.7 advertise route-reoriginated evpn ipv6
    #
    ospf 1
     area 0.0.0.0
      network 6.6.6.6 0.0.0.0
      network 192.168.20.0 0.0.0.255
    #
    ip route-static 7.7.7.7 255.255.255.255 192.168.50.1
    ip route-static 192.168.1.0 255.255.255.0 192.168.50.1
    ip route-static 192.168.60.0 255.255.255.0 192.168.50.1
    #
    return
  • Spine 2 configuration file

    #
    sysname Spine2
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.30.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.40.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 4.4.4.4 255.255.255.255
    #               
    ospf 1          
     area 0.0.0.0   
      network 4.4.4.4 0.0.0.0
      network 192.168.30.0 0.0.0.255
      network 192.168.40.0 0.0.0.255
    #               
    return
  • Leaf 3 configuration file

    #
    sysname Leaf3
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 10
     vxlan vni 10 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif10
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:DB8:30::1/64
     ipv6 nd collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.30.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 10
     rewrite pop single
     bridge-domain 10
    #               
    interface GigabitEthernet0/3/0
     undo shutdown  
     ip address 192.168.60.2 255.255.255.0
    #
    interface LoopBack1
     ip address 7.7.7.7 255.255.255.255
    #
    interface Nve1
     source 7.7.7.7
     vni 10 head-end peer-list protocol bgp
    #
    bgp 200
     peer 6.6.6.6 as-number 100
     peer 6.6.6.6 ebgp-max-hop 255
     peer 6.6.6.6 connect-interface LoopBack1
     peer 8.8.8.8 as-number 200
     peer 8.8.8.8 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 6.6.6.6 enable
      peer 8.8.8.8 enable
     #
     ipv6-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 6.6.6.6 enable
      peer 6.6.6.6 advertise irbv6
      peer 6.6.6.6 advertise encap-type vxlan
      peer 6.6.6.6 import reoriginate
      peer 6.6.6.6 advertise route-reoriginated evpn ipv6
      peer 8.8.8.8 enable
      peer 8.8.8.8 advertise irbv6
      peer 8.8.8.8 advertise encap-type vxlan
      peer 8.8.8.8 import reoriginate
      peer 8.8.8.8 advertise route-reoriginated evpn ipv6
    #
    ospf 1
     area 0.0.0.0
      network 7.7.7.7 0.0.0.0
      network 192.168.30.0 0.0.0.255
    #
    ip route-static 6.6.6.6 255.255.255.255 192.168.60.1
    ip route-static 192.168.1.0 255.255.255.0 192.168.60.1
    ip route-static 192.168.50.0 255.255.255.0 192.168.60.1
    #
    return
  • Leaf 4 configuration file

    #
    sysname Leaf4
    #
    evpn vpn-instance evrf1 bd-mode
     route-distinguisher 10:1
     vpn-target 11:1 export-extcommunity
     vpn-target 11:1 import-extcommunity
    #
    ip vpn-instance vpn1
     ipv6-family
      route-distinguisher 11:11
      apply-label per-instance
      vpn-target 11:1 export-extcommunity evpn
      vpn-target 11:1 import-extcommunity evpn
     vxlan vni 5010
    #
    bridge-domain 20
     vxlan vni 20 split-horizon-mode
     evpn binding vpn-instance evrf1
    #
    interface Vbdif20
     ip binding vpn-instance vpn1
     ipv6 enable
     ipv6 address 2001:DB8:40::1/64
     ipv6 nd collect host enable
     vxlan anycast-gateway enable
    #               
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.40.2 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown       
    #               
    interface GigabitEthernet0/2/0.1 mode l2
     encapsulation dot1q vid 20
     rewrite pop single
     bridge-domain 20
    #
    interface LoopBack1
     ip address 8.8.8.8 255.255.255.255
    #
    interface Nve1
     source 8.8.8.8
     vni 20 head-end peer-list protocol bgp
    #
    bgp 200
     peer 7.7.7.7 as-number 200
     peer 7.7.7.7 connect-interface LoopBack1
     #
     ipv4-family unicast
      undo synchronization
      peer 7.7.7.7 enable
     #
     ipv6-family vpn-instance vpn1
      import-route direct
      advertise l2vpn evpn
     #
     l2vpn-family evpn
      undo policy vpn-target
      peer 7.7.7.7 enable
      peer 7.7.7.7 advertise irbv6
      peer 7.7.7.7 advertise encap-type vxlan
    #
    ospf 1
     area 0.0.0.0
      network 8.8.8.8 0.0.0.0
      network 192.168.40.0 0.0.0.255
    #
    return
  • Device 1 configuration file

    #
    sysname Device1
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.50.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.1.1 255.255.255.0
    #               
    interface LoopBack1
     ip address 1.1.1.1 255.255.255.255
    #
    ip route-static 6.6.6.6 255.255.255.255 192.168.50.2
    ip route-static 7.7.7.7 255.255.255.255 192.168.1.2
    ip route-static 192.168.60.0 255.255.255.0 192.168.1.2
    #               
    return 
  • Device 2 configuration file

    #
    sysname Device2
    #
    interface GigabitEthernet0/1/0
     undo shutdown  
     ip address 192.168.60.1 255.255.255.0
    #               
    interface GigabitEthernet0/2/0
     undo shutdown  
     ip address 192.168.1.2 255.255.255.0
    #               
    interface LoopBack1
     ip address 2.2.2.2 255.255.255.255
    #
    ip route-static 6.6.6.6 255.255.255.255 192.168.1.1
    ip route-static 7.7.7.7 255.255.255.255 192.168.60.2
    ip route-static 192.168.50.0 255.255.255.0 192.168.1.1
    #               
    return 
Update Date: 2024-04-01
Document ID: EDOC1100335691