NetEngine 8000 M14, M8 and M4 V800R023C00SPC500 Configuration Guide
VXLAN Configuration
- VXLAN Feature Description
- VXLAN Configuration
- Overview of VXLAN
- Configuration Precautions for VXLAN
- Configuring IPv6 VXLAN in Centralized Gateway Mode for Static Tunnel Establishment
- Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN
- Configuring a VXLAN Service Access Point
- Configuring a VXLAN Tunnel
- Configuring a Layer 3 VXLAN Gateway
- (Optional) Configuring a Static MAC Address Entry
- (Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
- Verifying the Configuration of VXLAN in Centralized Gateway Mode Using BGP EVPN
- Configuring IPv6 VXLAN in Centralized Gateway Mode Using BGP EVPN
- Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN
- Configuring a VXLAN Service Access Point
- Configuring a VXLAN Tunnel
- (Optional) Configuring a VPN Instance for Route Leaking with an EVPN Instance
- (Optional) Configuring a Layer 3 Gateway on the VXLAN
- (Optional) Configuring VXLAN Gateways to Advertise Specific Types of Routes
- (Optional) Configuring a Static MAC Address Entry
- (Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
- Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN
- Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN
- Configuring a VXLAN Service Access Point
- Configuring an IPv6 VXLAN Tunnel
- (Optional) Configuring a VPN Instance for Route Leaking with an EVPN Instance
- (Optional) Configuring a Layer 3 Gateway on the IPv6 VXLAN
- (Optional) Configuring IPv6 VXLAN Gateways to Advertise Specific Types of Routes
- (Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
- Verifying the Configuration
- Configuring Three-Segment VXLAN to Implement DCI
- Configuring the Static VXLAN Active-Active Scenario
- Configuring the Dynamic VXLAN Active-Active Scenario
- Configuring the Dynamic IPv6 VXLAN Active-Active Scenario
- Configuring VXLAN Accessing BRAS
- Configuring NFVI Distributed Gateways (Asymmetric Mode)
- Configuring NFVI Distributed Gateways (Symmetric Mode)
- Maintaining VXLAN
- Configuration Examples for VXLAN
- Example for Configuring Users on the Same Network Segment to Communicate Through a VXLAN Tunnel
- Example for Configuring Users on Different Network Segments to Communicate Through a VXLAN Layer 3 Gateway
- Example for Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN
- Example for Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN
- Example for Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN
- Example for Configuring Three-Segment VXLAN to Implement Layer 3 Interworking
- Example for Configuring Three-Segment VXLAN to Implement Layer 2 Interworking
- Example for Configuring Static VXLAN in an Active-Active Scenario (Layer 2 Communication)
- Example for Configuring Dynamic VXLAN in an Active-Active Scenario (Layer 3 Communication)
- Example for Configuring VXLAN over IPsec in an Active-Active Scenario
- Example for Configuring the Static VXLAN Active-Active Scenario (in VLAN-Aware Bundle Mode)
- Example for Configuring IPv4 NFVI Distributed Gateway
- Example for Configuring IPv6 NFVI Distributed Gateway
- Example for Configuring Three-Segment VXLAN to Implement Layer 3 Interworking (IPv6 Services)
VXLAN Feature Description
VXLAN Introduction
Definition
Virtual extensible local area network (VXLAN) is a Network Virtualization over Layer 3 (NVO3) technology that uses MAC-in-UDP encapsulation.
Purpose
VM scale is limited by the network specification.
On a legacy large Layer 2 network, data packets are forwarded at Layer 2 based on MAC entries. However, there is a limit on the MAC table capacity, which subsequently limits the number of VMs.
Network isolation capabilities are limited.
Most networks currently use VLANs to implement network isolation. However, the deployment of VLANs on large-scale virtualized networks has the following limitations:
- The VLAN tag field defined in IEEE 802.1Q has only 12 bits and supports a maximum of 4096 VLANs, which cannot meet the user identification requirements of large Layer 2 networks.
- VLANs on legacy Layer 2 networks cannot adapt to dynamic network adjustment.
VM migration scope is limited by the network architecture.
After a VM is started, it may need to be migrated to a new server because of resource issues on the original server, for example, high CPU usage or insufficient memory. To ensure uninterrupted services during VM migration, the IP address of the VM must remain unchanged. To achieve this, the service network must be a Layer 2 network that also provides multipath redundancy and reliability.
Eliminates VM scale limitations imposed by network specifications.
VXLAN encapsulates data packets sent from VMs into UDP packets and encapsulates IP and MAC addresses used on the physical network into the outer headers. Then the network is only aware of the encapsulated parameters and not the inner data. This greatly reduces the MAC address specification requirements of large Layer 2 networks.
Provides greater network isolation capabilities.
VXLAN uses a 24-bit network segment ID, called the VXLAN network identifier (VNI), to identify users. A VNI is similar to a VLAN ID and supports a maximum of 16M (2^24 = 16,777,216) VXLAN segments.
Eliminates VM migration scope limitations imposed by network architecture.
VXLAN uses MAC-in-UDP encapsulation to extend Layer 2 networks. It encapsulates Ethernet packets into IP packets for these Ethernet packets to be transmitted over routes, and does not need to be aware of VMs' MAC addresses. There is no limitation on Layer 3 network architecture, and therefore Layer 3 networks are scalable and have strong automatic fault rectification and load balancing capabilities. This allows for VM migration irrespective of the network architecture.
Benefits
- A maximum of 16M VXLAN segments are supported using 24-bit VNIs, which allows a data center to accommodate multiple tenants.
- Non-VXLAN network edge devices do not need to identify the VM's MAC address, which reduces the number of MAC addresses that have to be learned and enhances network performance.
- MAC-in-UDP encapsulation extends Layer 2 networks and decouples physical and virtual networks. Tenants can plan their own virtual networks without being limited by physical network IP addresses or broadcast domains. This greatly simplifies network management.
VXLAN Basics
VXLAN Basic Concepts
Virtual extensible local area network (VXLAN) is an NVO3 network virtualization technology that encapsulates data packets sent from virtual machines (VMs) into UDP packets and encapsulates IP and MAC addresses used on the physical network in outer headers before sending the packets over an IP network. The egress tunnel endpoint then decapsulates the packets and sends the packets to the destination VM.
VXLAN allows a virtual network to provide access services to a large number of tenants. In addition, tenants are able to plan their own virtual networks, not limited by the physical network IP addresses or broadcast domains. This greatly simplifies network management. Table 1-465 describes VXLAN concepts.
| Concept | Description |
| --- | --- |
| Underlay and overlay networks | VXLAN allows virtual Layer 2 or Layer 3 networks (overlay networks) to be built over existing physical networks (underlay networks). Overlay networks use encapsulation technologies to transmit tenant packets between sites over Layer 3 forwarding paths provided by underlay networks. Tenants are aware of only overlay networks. |
| Network virtualization edge (NVE) | A network entity that is deployed at the network edge and implements network virtualization functions. NOTE: vSwitches on devices and servers can function as NVEs. |
| VXLAN tunnel endpoint (VTEP) | A VXLAN tunnel endpoint that encapsulates and decapsulates VXLAN packets. It is represented by an NVE. A VTEP connects to a physical network and is assigned a physical network IP address, which is irrelevant to virtual networks. In VXLAN packets, the source IP address is the local node's VTEP address, and the destination IP address is the remote node's VTEP address. This pair of VTEP addresses corresponds to a VXLAN tunnel. |
| VXLAN network identifier (VNI) | A VXLAN segment identifier similar to a VLAN ID. VMs on different VXLAN segments cannot communicate directly at Layer 2. A VNI identifies only one tenant. Even if multiple terminal users belong to the same VNI, they are considered one tenant. A VNI consists of 24 bits and supports a maximum of 16M tenants. A VNI can be a Layer 2 or Layer 3 VNI. |
| Bridge domain (BD) | A Layer 2 broadcast domain through which VXLAN data packets are forwarded. VNIs identifying virtual networks must be mapped to BDs so that a BD can function as a VXLAN network entity to transmit VXLAN traffic. |
| Virtual Bridge Domain Interface (VBDIF) | A Layer 3 logical interface created for a BD. Configuring IP addresses for VBDIF interfaces allows communication between VXLANs on different network segments and between VXLANs and non-VXLANs, and implements Layer 2 network access to a Layer 3 network. |
| Gateway | A device that ensures communication between VXLANs identified by different VNIs and between VXLANs and non-VXLANs. A VXLAN gateway can be a Layer 2 or Layer 3 gateway. |
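As a rough illustration of how these concepts map to configuration on this device family, the following sketch binds a bridge domain to a VNI and attaches a Layer 2 sub-interface (the VXLAN service access point) to the bridge domain. The interface number, BD ID, VNI, and VLAN ID are assumptions made for illustration only; see "Configuring a VXLAN Service Access Point" for the actual procedure.

```
# Bridge domain 10 functions as the VXLAN network entity and is bound to VNI 10.
bridge-domain 10
 vxlan vni 10
#
# Layer 2 sub-interface that accepts VLAN 10 frames from the access side,
# strips the VLAN tag, and maps the traffic into bridge domain 10.
interface GigabitEthernet0/1/0.1 mode l2
 encapsulation dot1q vid 10
 rewrite pop single
 bridge-domain 10
```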
Combinations of Underlay and Overlay Networks
The infrastructure network over which VXLAN tunnels are established is called the underlay network, and the service network carried over VXLAN tunnels is called the overlay network. The following combinations of underlay and overlay networks exist in VXLAN scenarios.
| Category | Description | Example |
| --- | --- | --- |
| IPv4 over IPv4 | The overlay and underlay networks are both IPv4 networks. | In Figure 1-1023, Server IP and VTEP IP are both IPv4 addresses. |
| IPv6 over IPv4 | The overlay network is an IPv6 network, and the underlay network is an IPv4 network. | In Figure 1-1023, Server IP is an IPv6 address, and VTEP IP is an IPv4 address. |
| IPv4 over IPv6 | The overlay network is an IPv4 network, and the underlay network is an IPv6 network. | In Figure 1-1023, Server IP is an IPv4 address, and VTEP IP is an IPv6 address. |
| IPv6 over IPv6 | The overlay and underlay networks are both IPv6 networks. | In Figure 1-1023, Server IP and VTEP IP are both IPv6 addresses. |
VXLAN Packet Format
VXLAN is a network virtualization technique that uses MAC-in-UDP encapsulation by adding a UDP header and a VXLAN header before a raw Ethernet packet.
Figure 1-1025 shows VXLAN packet format details.
| Field | Description |
| --- | --- |
| VXLAN header | Contains an 8-bit Flags field (the I bit must be set to 1 to indicate a valid VNI), a 24-bit VNI that identifies the VXLAN segment, and reserved fields. |
| Outer UDP header | The destination port number is 4789 (the UDP port assigned to VXLAN), and the source port number is calculated by hashing the inner Ethernet frame header. |
| Outer IP header | The source IP address is the IP address of the local VTEP, and the destination IP address is the IP address of the remote VTEP. |
| Outer Ethernet header | The source MAC address is the MAC address of the outbound interface on the local VTEP, and the destination MAC address is the MAC address of the next hop toward the destination IP address. |
EVPN VXLAN Fundamentals
Introduction
Ethernet virtual private network (EVPN) is a VPN technology used for Layer 2 internetworking. EVPN is similar to BGP/MPLS IP VPN. EVPN defines a new type of BGP network layer reachability information (NLRI), called the EVPN NLRI. The EVPN NLRI defines new BGP EVPN routes to implement MAC address learning and advertisement between Layer 2 networks at different sites.
VXLAN does not provide a control plane, and VTEP discovery and host information (IP and MAC addresses, VNIs, and gateway VTEP IP address) learning are implemented by traffic flooding on the data plane, resulting in high traffic volumes on DC networks. To address this problem, VXLAN uses EVPN as the control plane. EVPN allows VTEPs to exchange BGP EVPN routes to implement automatic VTEP discovery and host information advertisement, preventing unnecessary traffic flooding.
In summary, EVPN introduces several new types of BGP EVPN routes through BGP extension to advertise VTEP addresses and host information. In this way, EVPN applied to VXLAN networks enables VTEP discovery and host information learning on the control plane instead of on the data plane.
BGP EVPN Routes
EVPN NLRI defines the following BGP EVPN route types applicable to the VXLAN control plane:
Type 2 Route: MAC/IP Route
Figure 1-1026 shows the format of a MAC/IP route.
Table 1-467 describes the meaning of each field.
| Field | Description |
| --- | --- |
| Route Distinguisher | RD value set in an EVI |
| Ethernet Segment Identifier | Unique ID for defining the connection between local and remote devices |
| Ethernet Tag ID | VLAN ID configured on the device |
| MAC Address Length | Length of the host MAC address carried in the route |
| MAC Address | Host MAC address carried in the route |
| IP Address Length | Length of the host IP address carried in the route |
| IP Address | Host IP address carried in the route |
| MPLS Label1 | L2VNI carried in the route |
| MPLS Label2 | L3VNI carried in the route |
MAC/IP routes function as follows on the VXLAN control plane:
MAC address advertisement
To implement Layer 2 communication between intra-subnet hosts, the source and remote VTEPs must learn the MAC addresses of the hosts. The VTEPs function as BGP EVPN peers to exchange MAC/IP routes so that they can obtain the host MAC addresses. The MAC Address field identifies the MAC address of a host.
ARP advertisement
A MAC/IP route can carry both the MAC and IP addresses of a host, and therefore can be used to advertise ARP entries between VTEPs. The MAC Address field identifies the MAC address of the host, whereas the IP Address field identifies the IP address of the host. This type of MAC/IP route is called the ARP route.
IP route advertisement
In distributed VXLAN gateway scenarios, to implement Layer 3 communication between inter-subnet hosts, the source and remote VTEPs that function as Layer 3 gateways must learn the host IP routes. The VTEPs function as BGP EVPN peers to exchange MAC/IP routes so that they can obtain the host IP routes. The IP Address field identifies the destination address of the IP route. In addition, the MPLS Label2 field must carry the L3VNI. This type of MAC/IP route is called the integrated routing and bridging (IRB) route.
An ARP route carries host MAC and IP addresses and an L2VNI. An IRB route carries host MAC and IP addresses, an L2VNI, and an L3VNI. An IRB route therefore contains the information of an ARP route and can be used to advertise both IP routes and ARP entries.
Host IPv6 route advertisement
In a distributed gateway scenario, to implement Layer 3 communication between hosts on different subnets, the VTEPs (functioning as Layer 3 gateways) must learn host IPv6 routes from each other. To achieve this, VTEPs functioning as BGP EVPN peers exchange MAC/IP routes to advertise host IPv6 routes to each other. The IP Address field carried in the MAC/IP routes indicates the destination addresses of host IPv6 routes, and the MPLS Label2 field must carry an L3VNI. MAC/IP routes in this case are also called IRBv6 routes.
An ND route carries host MAC and IPv6 addresses and an L2VNI. An IRBv6 route carries host MAC and IPv6 addresses, an L2VNI, and an L3VNI. Therefore, IRBv6 routes carry ND routes and can be used to advertise both host IPv6 routes and ND entries.
Type 3 Route: Inclusive Multicast Route
An inclusive multicast route comprises a prefix and a PMSI attribute. Figure 1-1027 shows the format of an inclusive multicast route.
Table 1-468 describes the meaning of each field.
| Field | Description |
| --- | --- |
| Route Distinguisher | RD value set in an EVI. |
| Ethernet Tag ID | VLAN ID, which is all 0s in this type of route. |
| IP Address Length | Length of the local VTEP's IP address carried in the route. |
| Originating Router's IP Address | Local VTEP's IP address carried in the route. |
| Flags | Flags indicating whether leaf node information is required for the tunnel. This field is inapplicable in VXLAN scenarios. |
| Tunnel Type | Tunnel type carried in the route. The value can only be 6, representing ingress replication in VXLAN scenarios. It is used for BUM packet forwarding. |
| MPLS Label | L2VNI carried in the route. |
| Tunnel Identifier | Tunnel identifier carried in the route. This field is the local VTEP's IP address in VXLAN scenarios. |
Inclusive multicast routes are used on the VXLAN control plane for automatic VTEP discovery and dynamic VXLAN tunnel establishment. VTEPs that function as BGP EVPN peers transmit L2VNIs and VTEP IP addresses through inclusive multicast routes. The Originating Router's IP Address field identifies the local VTEP's IP address, and the MPLS Label field identifies an L2VNI. If the remote VTEP's IP address is reachable at Layer 3, a VXLAN tunnel to the remote VTEP is established. In addition, the local end creates a VNI-based ingress replication list and adds the peer VTEP IP address to the list for subsequent BUM packet forwarding.
Type 5 Route: IP Prefix Route
Figure 1-1028 shows the format of an IP prefix route.
Table 1-469 describes the meaning of each field.
| Field | Description |
| --- | --- |
| Route Distinguisher | RD value set in a VPN instance |
| Ethernet Segment Identifier | Unique ID for defining the connection between local and remote devices |
| Ethernet Tag ID | Currently, this field can only be set to 0 |
| IP Prefix Length | Length of the IP prefix carried in the route |
| IP Prefix | IP prefix carried in the route |
| GW IP Address | Default gateway address |
| MPLS Label | L3VNI carried in the route |
An IP prefix route can carry either a host IP address or a network segment address.
- When carrying a host IP address, the route is used for IP route advertisement in distributed VXLAN gateway scenarios, and functions the same as an IRB route on the VXLAN control plane.
- When carrying a network segment address, the route can be advertised to allow hosts on a VXLAN network to access the specified network segment or external network.
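To see these route types in practice, the following display commands can be used to check received BGP EVPN routes and the VXLAN tunnels derived from them. This is only a quick verification sketch; the exact output format depends on the software version.

```
# View BGP EVPN routes (MAC/IP, inclusive multicast, and IP prefix routes).
display bgp evpn all routing-table
#
# Check whether VXLAN tunnels have been established and are up.
display vxlan tunnel
#
# Check VNI status and the bridge domains to which VNIs are bound.
display vxlan vni
```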
VXLAN Gateway Deployment
To implement Layer 3 interworking, a Layer 3 gateway must be deployed on a VXLAN. VXLAN gateways can be deployed in centralized or distributed mode.
Centralized VXLAN Gateway Mode
In this mode, Layer 3 gateways are configured on one device. On the network shown in Figure 1-1029, traffic across network segments is forwarded through Layer 3 gateways to implement centralized traffic management.
- Advantage: Inter-segment traffic can be centrally managed, and gateway deployment and management is easy.
- Disadvantages:
- Forwarding paths are not optimal. Inter-segment Layer 3 traffic of data centers connected to the same Layer 2 gateway must be transmitted to the centralized Layer 3 gateway for forwarding.
- The ARP entry specification is a bottleneck. ARP entries must be generated for tenants on the Layer 3 gateway. However, only a limited number of ARP entries are allowed by the Layer 3 gateway, impeding data center network expansion.
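The following is a minimal sketch of a centralized Layer 3 gateway on the spine node, assuming two bridge domains (BD 10 and BD 20, bound to VNIs 10 and 20) whose VBDIF interfaces act as the default gateways of the two tenant subnets. The BD IDs, VNIs, and IP addresses are assumptions for illustration; see "Configuring a Layer 3 VXLAN Gateway" for the actual procedure.

```
# Bridge domains for the two subnets, each bound to a Layer 2 VNI.
bridge-domain 10
 vxlan vni 10
bridge-domain 20
 vxlan vni 20
#
# VBDIF interfaces on the spine node act as the centralized Layer 3 gateways.
interface Vbdif10
 ip address 192.168.10.1 255.255.255.0
interface Vbdif20
 ip address 192.168.20.1 255.255.255.0
```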
Distributed VXLAN Gateway Mode
Deploying distributed VXLAN gateways addresses the problems of centralized VXLAN gateway networking. Distributed VXLAN gateways are deployed on a spine-leaf network. In this networking, leaf nodes, which can function as Layer 3 VXLAN gateways, are used as VTEPs to establish VXLAN tunnels. Spine nodes are unaware of the VXLAN tunnels and only forward VXLAN packets between different leaf nodes. On the network shown in Figure 1-1030, Server 1 and Server 2 on different network segments both connect to Leaf 1. When Server 1 and Server 2 communicate, traffic is forwarded only through Leaf 1, not through any spine node.
The nodes on such a network play the following roles:
- Spine node: provides high-speed IP forwarding capabilities and is unaware of VXLAN tunnels.
- Leaf node:
  - Functions as a Layer 2 VXLAN gateway to connect to physical servers or VMs and allow tenants to access VXLANs.
  - Functions as a Layer 3 VXLAN gateway to perform VXLAN encapsulation and decapsulation, allowing inter-subnet VXLAN communication and access to external networks.
Distributed VXLAN gateways have the following advantages:
- Flexible deployment: a leaf node can function as both a Layer 2 and a Layer 3 VXLAN gateway.
- Improved network expansion capabilities: a leaf node only needs to learn the ARP or ND entries of servers attached to it. A centralized Layer 3 gateway in the same scenario, however, has to learn the ARP or ND entries of all servers on the network. Therefore, the ARP or ND entry specification is no longer a bottleneck on a distributed VXLAN gateway.
Functional Scenarios
Centralized VXLAN Gateway Deployment in Static Mode
In centralized VXLAN gateway deployment in static mode, the control plane is responsible for VXLAN tunnel establishment and dynamic MAC address learning; the forwarding plane is responsible for intra-subnet known unicast packet forwarding, intra-subnet BUM packet forwarding, and inter-subnet packet forwarding.
Deploying centralized VXLAN gateways in static mode involves a heavy configuration workload and is inflexible, making it unsuitable for large-scale networks. As such, deploying centralized VXLAN gateways using BGP EVPN is recommended.
| Combination Category | Implementation Difference |
| --- | --- |
| IPv6 over IPv4 | |
| IPv4 over IPv6 | The VTEPs at both ends of a VXLAN tunnel use IPv6 addresses, and IPv6 Layer 3 route reachability must be implemented between the VTEPs. |
| IPv6 over IPv6 | |
VXLAN Tunnel Establishment
A VXLAN tunnel is identified by a pair of VTEP IP addresses. A VXLAN tunnel can be statically created after you configure local and remote VNIs, VTEP IP addresses, and an ingress replication list, and the tunnel goes Up when the pair of VTEPs is reachable at Layer 3.
On the network shown in Figure 1-1031, Leaf 1 connects to Host 1 and Host 3; Leaf 2 connects to Host 2; Spine functions as a Layer 3 gateway.
To allow Host 3 and Host 2 to communicate, Layer 2 VNIs and an ingress replication list must be configured on Leaf 1 and Leaf 2. The peer VTEPs' IP addresses must be specified in the ingress replication list. A VXLAN tunnel can be established between Leaf 1 and Leaf 2 if their VTEPs have Layer 3 routes to each other.
To allow Host 1 and Host 2 to communicate, Layer 2 VNIs and an ingress replication list must be configured on Leaf 1, Leaf 2, and also Spine. The peer VTEPs' IP addresses must be specified in the ingress replication list. A VXLAN tunnel can be established between Leaf 1 and Spine and between Leaf 2 and Spine if they have Layer 3 routes to the IP addresses of the VTEPs of each other.
Although Host 1 and Host 3 both connect to Leaf 1, they belong to different subnets and must communicate through the Layer 3 gateway (Spine). Therefore, a VXLAN tunnel is also required between Leaf 1 and Spine.
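The following is a minimal sketch of such a static tunnel configuration on Leaf 1, assuming VNI 20, a local VTEP address of 1.1.1.1, and a remote VTEP (Leaf 2) address of 2.2.2.2 that is reachable over the underlay. All values are assumptions for illustration; see "Configuring a VXLAN Tunnel" for the actual procedure.

```
# Bind the Layer 2 VNI to the bridge domain.
bridge-domain 20
 vxlan vni 20
#
# NVE interface: the local VTEP address and a static ingress replication list
# that specifies the remote VTEP.
interface Nve1
 source 1.1.1.1
 vni 20 head-end peer-list 2.2.2.2
#
# Leaf 2 is configured symmetrically (source 2.2.2.2, peer-list 1.1.1.1).
```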
Dynamic MAC Address Learning
VXLAN supports dynamic MAC address learning to allow communication between tenants. MAC address entries are dynamically created and do not need to be manually maintained, greatly reducing maintenance workload. The following example illustrates dynamic MAC address learning for intra-subnet communication on the network shown in Figure 1-1032.
Host 3 sends an ARP request to obtain Host 2's MAC address. In the ARP request, the source MAC address is MAC3, the destination MAC address is all Fs (broadcast), the source IP address is IP3, and the destination IP address is IP2.
Upon receipt of the ARP request, Leaf 1 determines that the Layer 2 sub-interface receiving the ARP request belongs to a BD that has been bound to a VNI (20), meaning that the ARP request packet must be transmitted over the VXLAN tunnel identified by VNI 20. Leaf 1 then learns the mapping between Host 3's MAC address, BDID (Layer 2 broadcast domain ID), and inbound interface (Port1 for the Layer 2 sub-interface) that has received the ARP request and generates a MAC address entry for Host 3. The MAC address entry's outbound interface is Port1.
Leaf 1 then performs VXLAN encapsulation on the ARP request. The VNI is the one bound to the BD. In the outer IP header, the source IP address is Leaf 1's VTEP IP address, and the destination IP address is Leaf 2's VTEP IP address. In the outer Ethernet header, the source MAC address is the MAC address of NVE1 on Leaf 1, and the destination MAC address is the MAC address of the next hop toward the destination IP address. Figure 1-1033 shows the VXLAN packet format. The VXLAN packet is then transmitted over the IP network based on the IP and MAC addresses in the outer headers and finally reaches Leaf 2.
After Leaf 2 receives the VXLAN packet, it decapsulates the packet and obtains the ARP request originated by Host 3. Leaf 2 then learns the mapping between Host 3's MAC address, the BDID, and Leaf 1's VTEP IP address and generates a MAC address entry for Host 3. Based on the next hop (Leaf 1's VTEP IP address), the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Leaf 1.
Leaf 2 broadcasts the ARP request in the Layer 2 domain. Upon receipt of the ARP request, Host 2 finds that the destination IP address is its own IP address and saves Host 3's MAC address to the local MAC address table. Host 2 then responds with an ARP reply.
So far, Host 2 has learned Host 3's MAC address. Therefore, Host 2 responds with a unicast ARP reply. The ARP reply is transmitted to Host 3 in the same manner. After Host 2 and Host 3 learn the MAC address of each other, they will subsequently communicate with each other in unicast mode.
Dynamic MAC address learning is required only between hosts and Layer 3 gateways in inter-subnet communication scenarios. The process is the same as that for intra-subnet communication.
Intra-Subnet Known Unicast Packet Forwarding
Intra-subnet known unicast packets are forwarded only through Layer 2 VXLAN gateways and are unknown to Layer 3 VXLAN gateways. Figure 1-1034 shows the intra-subnet known unicast packet forwarding process.
- After Leaf 1 receives Host 3's packet, it determines the Layer 2 BD of the packet based on the access interface and VLAN information and searches for the outbound interface and encapsulation information in the BD.
- Leaf 1's VTEP performs VXLAN encapsulation based on the encapsulation information obtained and forwards the packets through the outbound interface obtained.
- Upon receipt of the VXLAN packet, Leaf 2's VTEP verifies the VXLAN packet based on the UDP destination port number, source and destination IP addresses, and VNI. Leaf 2 obtains the Layer 2 BD based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
- Leaf 2 obtains the destination MAC address of the inner Layer 2 packet, adds VLAN tags to the packets based on the outbound interface and encapsulation information in the local MAC address table, and forwards the packets to Host 2.
Host 2 sends packets to Host 3 in the same manner.
Intra-Subnet BUM Packet Forwarding
Intra-subnet BUM packet forwarding is completed between Layer 2 VXLAN gateways in ingress replication mode. Layer 3 VXLAN gateways do not need to be aware of the process. In ingress replication mode, when a BUM packet enters a VXLAN tunnel, the ingress VTEP uses ingress replication to perform VXLAN encapsulation and send a copy of the BUM packet to every egress VTEP in the list. When the BUM packet leaves the VXLAN tunnel, the egress VTEP decapsulates the BUM packet. Figure 1-1035 shows the BUM packet forwarding process.
- After Leaf 1 receives Terminal A's packet, it determines the Layer 2 BD of the packet based on the access interface and VLAN information.
- Leaf 1's VTEP obtains the ingress replication list for the VNI, replicates packets based on the list, and performs VXLAN encapsulation by adding outer headers. Leaf 1 then forwards the VXLAN packet through the outbound interface.
- Upon receipt of the VXLAN packet, Leaf 2's VTEP and Leaf 3's VTEP verify the VXLAN packet based on the UDP destination port number, source and destination IP addresses, and VNI. Leaf 2/Leaf 3 obtains the Layer 2 BD based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
- Leaf 2/Leaf 3 checks the destination MAC address of the inner Layer 2 packet and finds it a BUM MAC address. Therefore, Leaf 2/Leaf 3 broadcasts the packet onto the network connected to the terminals (not the VXLAN tunnel side) in the Layer 2 broadcast domain. Specifically, Leaf 2/Leaf 3 finds the outbound interfaces and encapsulation information not related to the VXLAN tunnel, adds VLAN tags to the packet, and forwards the packet to Terminal B/Terminal C.
Terminal B/Terminal C responds to Terminal A in the same process as intra-subnet known unicast packet forwarding.
Inter-Subnet Packet Forwarding
Inter-subnet packets must be forwarded through a Layer 3 gateway. Figure 1-1036 shows inter-subnet packet forwarding in centralized VXLAN gateway scenarios.
- After Leaf 1 receives Host 1's packet, it determines the Layer 2 BD of the packet based on the access interface and VLAN information and searches for the outbound interface and encapsulation information in the BD.
- Leaf 1's VTEP performs VXLAN encapsulation based on the outbound interface and encapsulation information and forwards the packets to Spine.
- After Spine receives the VXLAN packet, it decapsulates the packet and finds that the destination MAC address of the inner packet is the MAC address (MAC3) of the Layer 3 gateway interface (VBDIF10) so that the packet must be forwarded at Layer 3.
- Spine removes the inner Ethernet header, parses the destination IP address, and searches the routing table for a next hop address. Spine then searches the ARP table based on the next hop address to obtain the destination MAC address, VXLAN tunnel's outbound interface, and VNI.
- Spine performs VXLAN encapsulation on the inner packet again and forwards the VXLAN packet to Leaf 2, with the source MAC address in the inner Ethernet header being the MAC address (MAC4) of the Layer 3 gateway interface (VBDIF20).
- Upon receipt of the VXLAN packet, Leaf 2's VTEP verifies the VXLAN packet based on the UDP destination port number, source and destination IP addresses, and VNI. Leaf 2 then obtains the Layer 2 broadcast domain based on the VNI and removes the outer headers to obtain the inner Layer 2 packet. It then searches for the outbound interface and encapsulation information in the Layer 2 broadcast domain.
- Leaf 2 adds VLAN tags to the packets based on the outbound interface and encapsulation information and forwards the packets to Host 2.
Host 2 sends packets to Host 1 in the same manner.
Establishment of a VXLAN in Centralized Gateway Mode Using BGP EVPN
During the establishment of a VXLAN in centralized gateway mode using BGP EVPN, the control plane process includes VXLAN tunnel establishment and dynamic MAC address learning.
The forwarding plane process includes intra-subnet known unicast packet forwarding, intra-subnet BUM packet forwarding, and inter-subnet packet forwarding.
This mode uses EVPN to automatically discover VTEPs and dynamically establish VXLAN tunnels. It is highly flexible and suitable for large-scale VXLAN networking scenarios, and is therefore recommended for establishing VXLANs with centralized gateways.
| Combination Type | Implementation Difference |
| --- | --- |
| IPv6 over IPv4 | |
| IPv4 over IPv6 | |
| IPv6 over IPv6 | |
VXLAN Tunnel Establishment
A VXLAN tunnel is identified by a pair of VTEP IP addresses. During VXLAN tunnel establishment, the local and remote VTEPs attempt to obtain IP addresses of each other. A VXLAN tunnel can be established if the IP addresses obtained are routable at Layer 3. When BGP EVPN is used to dynamically establish a VXLAN tunnel, the local and remote VTEPs first establish a BGP EVPN peer relationship and then exchange BGP EVPN routes to transmit VNIs and VTEP IP addresses.
As shown in Figure 1-1037, two hosts connect to Leaf1, one host connects to Leaf2, and a Layer 3 gateway is deployed on the spine node. A VXLAN tunnel needs to be established between Leaf1 and Leaf2 to implement communication between Host3 and Host2. To implement communication between Host1 and Host2, a VXLAN tunnel needs to be established between Leaf1 and Spine and between Spine and Leaf2. Though Host1 and Host3 both connect to Leaf1, they belong to different subnets and need to communicate through the Layer 3 gateway deployed on Spine. Therefore, a VXLAN tunnel needs to be created between Leaf1 and Spine.
A VXLAN tunnel is determined by a pair of VTEP IP addresses. When a local VTEP receives the same remote VTEP IP address repeatedly, only one VXLAN tunnel can be established, but packets are encapsulated with different VNIs before being forwarded through the tunnel.
The following example illustrates how to dynamically establish a VXLAN tunnel using BGP EVPN between Leaf1 and Leaf2 on the network shown in Figure 1-1038.
First, a BGP EVPN peer relationship is established between Leaf1 and Leaf2. Then, Layer 2 broadcast domains are created on Leaf1 and Leaf2, and VNIs are bound to the Layer 2 broadcast domains. Next, an EVPN instance is configured in each Layer 2 broadcast domain, and an RD, export VPN target (ERT), and import VPN target (IRT) are configured for the EVPN instance. After the local VTEP IP address is configured on Leaf1 and Leaf2, they generate a BGP EVPN route and send it to each other. The BGP EVPN route carries the local EVPN instance's ERT, Next_Hop attribute, and an inclusive multicast route (Type 3 route defined in BGP EVPN). Figure 1-1039 shows the format of an inclusive multicast route, which comprises a prefix and a PMSI attribute. VTEP IP addresses are stored in the Originating Router's IP Address field in the inclusive multicast route prefix, and VNIs are stored in the MPLS Label field in the PMSI attribute. The VTEP IP address is also included in the Next_Hop attribute.
After Leaf1 and Leaf2 receive a BGP EVPN route from each other, they match the ERT of the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. Leaf1 and Leaf2 obtain the peer VTEP IP address (from the Next_Hop attribute) and VNI carried in the route. If the peer VTEP IP address is reachable at Layer 3, they establish a VXLAN tunnel to the peer end. Moreover, the local end creates a VNI-based ingress replication table and adds the peer VTEP IP address to the table for forwarding BUM packets.
The process of dynamically establishing VXLAN tunnels between Leaf1 and Spine and between Leaf2 and Spine using BGP EVPN is similar to the preceding process.
A VPN target is an extended community attribute of BGP. An EVPN instance can have the IRT and ERT configured. The local EVPN instance's ERT must match the remote EVPN instance's IRT for EVPN route advertisement. If not, VXLAN tunnels cannot be dynamically established. If only one end can successfully accept the BGP EVPN route, this end can establish a VXLAN tunnel to the other end, but cannot exchange data packets with the other end. The other end drops packets after confirming that there is no VXLAN tunnel to the end that has sent these packets.
For details about VPN targets, see Basic BGP/MPLS IP VPN.
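The following minimal sketch outlines the Leaf1 configuration that the preceding process relies on, assuming VNI 20, a VTEP address of 1.1.1.1 on LoopBack0, and a BGP EVPN peer Leaf2 at 2.2.2.2 in AS 100. The RD, VPN targets, and addresses are assumptions for illustration; see "Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN" for the actual procedure.

```
# Enable EVPN as the VXLAN control plane.
evpn-overlay enable
#
# Bind the Layer 2 VNI to the bridge domain and create an EVPN instance with
# an RD and matching import/export VPN targets.
bridge-domain 20
 vxlan vni 20
 evpn
  route-distinguisher 10:1
  vpn-target 1:1 export-extcommunity
  vpn-target 1:1 import-extcommunity
#
# Learn remote VTEPs from BGP EVPN routes instead of a static peer list.
interface Nve1
 source 1.1.1.1
 vni 20 head-end peer-list protocol bgp
#
# BGP EVPN peer relationship with Leaf2.
bgp 100
 peer 2.2.2.2 as-number 100
 peer 2.2.2.2 connect-interface LoopBack0
 l2vpn-family evpn
  peer 2.2.2.2 enable
```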
Dynamic MAC Address Learning
VXLAN supports dynamic MAC address learning to allow communication between tenants. MAC address entries are dynamically created and do not need to be manually maintained, greatly reducing maintenance workload. The following example illustrates dynamic MAC address learning for intra-subnet communication of hosts on the network shown in Figure 1-1040.
Host3 sends dynamic ARP packets when it first communicates with Leaf1. Leaf1 learns the MAC address of Host3 and the mapping between the BDID and packet inbound interface (that is, the physical interface Port 1 corresponding to the Layer 2 sub-interface), and generates a MAC address entry about Host3 in the local MAC address table, with the outbound interface being Port 1. Leaf1 generates a BGP EVPN route based on the ARP entry of Host3 and sends it to Leaf2. The BGP EVPN route carries the local EVPN instance's ERT, Next_Hop attribute, and a Type 2 route (MAC/IP route) defined in BGP EVPN. The Next_Hop attribute carries the local VTEP's IP address. The MAC Address Length and MAC Address fields identify Host3's MAC address. The Layer 2 VNI is stored in the MPLS Label1 field. Figure 1-1041 shows the format of a MAC route or an IP route.
After receiving the BGP EVPN route from Leaf1, Leaf2 matches the ERT carried in the route against the IRT of the local EVPN instance. If a match is found, the route is accepted; if not, the route is discarded. After accepting the route, Leaf2 obtains the MAC address of Host3 and the mapping between the BDID and Leaf1's VTEP IP address (Next_Hop attribute), and generates the MAC address entry of Host3 in the local MAC address table. The outbound interface is obtained through recursion based on the next hop; the final recursion result is the VXLAN tunnel destined for Leaf1.
Leaf1 learns the MAC route of Host2 in a similar process.
When hosts on different subnets communicate with each other, only the hosts and Layer 3 gateway need to dynamically learn MAC addresses from each other. This process is similar to the preceding process.
Leaf nodes can learn the MAC addresses of hosts during data forwarding, depending on their capabilities to learn MAC addresses from data packets. If VXLAN tunnels are established using BGP EVPN, leaf nodes can dynamically learn the MAC addresses of hosts through BGP EVPN routes, rather than during data forwarding.
Intra-subnet Forwarding of Known Unicast Packets
Intra-subnet known unicast packets are forwarded only between Layer 2 VXLAN gateways and are unknown to Layer 3 VXLAN gateways. Figure 1-1042 shows the forwarding process of known unicast packets.
- After Leaf1 receives a packet from Host3, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information, and searches for the outbound interface and encapsulation information in the broadcast domain.
- Leaf1's VTEP performs VXLAN encapsulation based on the obtained encapsulation information and forwards the packet through the outbound interface obtained.
- After the VTEP on Leaf2 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
- Leaf2 obtains the destination MAC address of the inner Layer 2 packet, adds a VLAN tag to the packet based on the outbound interface and encapsulation information in the local MAC address table, and forwards the packet to Host2.
Host2 sends packets to Host3 in the same process.
Intra-subnet Forwarding of BUM Packets
Intra-subnet BUM packets are forwarded only between Layer 2 VXLAN gateways, and are unknown to Layer 3 VXLAN gateways. Intra-subnet BUM packets can be forwarded in ingress replication mode. In this mode, when a BUM packet enters a VXLAN tunnel, the access-side VTEP performs VXLAN encapsulation, and then forwards the packet to all egress VTEPs that are in the ingress replication list. When the BUM packet leaves the VXLAN tunnel, the egress VTEP decapsulates the packet. Figure 1-1043 shows the forwarding process of BUM packets.
- After Leaf1 receives a packet from TerminalA, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information in the packet.
- Leaf1's VTEP obtains the ingress replication list for the VNI, replicates the packet based on the list, and performs VXLAN encapsulation. Leaf1 then forwards the VXLAN packet through the outbound interface.
- After the VTEP on Leaf2 or Leaf3 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 or Leaf3 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
- Leaf2 or Leaf3 checks the destination MAC address of the inner Layer 2 packet and finds it a BUM MAC address. Therefore, Leaf2 or Leaf3 broadcasts the packet onto the network connected to terminals (not the VXLAN tunnel side) in the Layer 2 broadcast domain. Specifically, Leaf2 or Leaf3 finds the outbound interfaces and encapsulation information not related to the VXLAN tunnel, adds VLAN tags to the packet, and forwards the packet to TerminalB or TerminalC.
The forwarding process of a response packet from TerminalB/TerminalC to TerminalA is similar to the intra-subnet forwarding process of known unicast packets.
Inter-subnet Packet Forwarding
Inter-subnet packets must be forwarded through a Layer 3 gateway. Figure 1-1044 shows the inter-subnet packet forwarding process in centralized VXLAN gateway scenarios.
- After Leaf1 receives a packet from Host1, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN in the packet, and searches for the outbound interface and encapsulation information in the Layer 2 broadcast domain.
- The VTEP on Leaf1 performs VXLAN tunnel encapsulation based on the outbound interface and encapsulation information, and forwards the packet to Spine.
- Spine decapsulates the received VXLAN packet, finds that the destination MAC address in the inner packet is MAC3 of the Layer 3 gateway interface VBDIF10, and determines that the packet needs to be forwarded at Layer 3.
- Spine removes the Ethernet header of the inner packet and parses the destination IP address. It then searches the routing table based on the destination IP address to obtain the next hop address, and searches ARP entries based on the next hop to obtain the destination MAC address, VXLAN tunnel outbound interface, and VNI.
- Spine re-encapsulates the VXLAN packet and forwards it to Leaf2. The source MAC address in the Ethernet header of the inner packet is MAC4 of the Layer 3 gateway interface VBDIF20.
- After the VTEP on Leaf2 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. The VTEP then obtains the Layer 2 broadcast domain based on the VNI, decapsulates the packet to obtain the inner Layer 2 packet, and searches for the outbound interface and encapsulation information in the corresponding Layer 2 broadcast domain.
- Leaf2 adds a VLAN tag to the packet based on the outbound interface and encapsulation information, and forwards the packet to Host2.
Host2 sends packets to Host1 through a similar process.
Establishment of a VXLAN in Distributed Gateway Mode Using BGP EVPN
During the establishment of a VXLAN in distributed gateway mode using BGP EVPN, the control plane process includes VXLAN tunnel establishment and dynamic MAC address learning.
The forwarding plane process includes intra-subnet known unicast packet forwarding, intra-subnet BUM packet forwarding, and inter-subnet packet forwarding.
This mode supports the advertisement of host IP routes, MAC addresses, and ARP entries. For details, see EVPN VXLAN Fundamentals. This mode is recommended for establishing VXLANs with distributed gateways.
| Combination Type | Implementation Difference |
| --- | --- |
| IPv6 over IPv4 | |
| IPv4 over IPv6 | |
| IPv6 over IPv6 | |
VXLAN Tunnel Establishment
A VXLAN tunnel is identified by a pair of VTEP IP addresses. During VXLAN tunnel establishment, the local and remote VTEPs attempt to obtain IP addresses of each other. A VXLAN tunnel can be established if the IP addresses obtained are routable at Layer 3. When BGP EVPN is used to dynamically establish a VXLAN tunnel, the local and remote VTEPs first establish a BGP EVPN peer relationship and then exchange BGP EVPN routes to transmit VNIs and VTEP IP addresses.
In distributed VXLAN gateway scenarios, leaf nodes function as both Layer 2 and Layer 3 VXLAN gateways. Spine nodes are unaware of the VXLAN tunnels and only forward VXLAN packets between different leaf nodes. On the control plane, a VXLAN tunnel only needs to be set up between leaf nodes. In Figure 1-1045, a VXLAN tunnel is established between Leaf1 and Leaf2 for Host1 and Host2 or Host3 and Host2 to communicate. Because Host1 and Host3 both connect to Leaf1, they can directly communicate through Leaf1 instead of over a VXLAN tunnel.
A VXLAN tunnel is determined by a pair of VTEP IP addresses. When a local VTEP receives the same remote VTEP IP address repeatedly, only one VXLAN tunnel can be established, but packets are encapsulated with different VNIs before being forwarded through the tunnel.
In distributed gateway scenarios, BGP EVPN can be used to dynamically establish VXLAN tunnels in either of the following situations:
Intra-subnet Communication
On the network shown in Figure 1-1046, intra-subnet communication between Host2 and Host3 requires only Layer 2 forwarding. The process for establishing a VXLAN tunnel using BGP EVPN is as follows.
First, a BGP EVPN peer relationship is established between Leaf1 and Leaf2. Then, Layer 2 broadcast domains are created on Leaf1 and Leaf2, and VNIs are bound to the Layer 2 broadcast domains. Next, an EVPN instance is configured in each Layer 2 broadcast domain, and an RD, an ERT, and an IRT are configured for the EVPN instance. After the local VTEP IP address is configured on Leaf1 and Leaf2, they generate a BGP EVPN route and send it to each other. The BGP EVPN route carries the local EVPN instance's ERT and an inclusive multicast route (Type 3 route defined in BGP EVPN). Figure 1-1047 shows the format of an inclusive multicast route, which comprises a prefix and a PMSI attribute. VTEP IP addresses are stored in the Originating Router's IP Address field in the inclusive multicast route prefix, and VNIs are stored in the MPLS Label field in the PMSI attribute. The VTEP IP address is also included in the Next_Hop attribute.
After Leaf1 and Leaf2 receive a BGP EVPN route from each other, they match the ERT of the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. Leaf1 and Leaf2 obtain the peer VTEP IP address (from the Next_Hop attribute) and VNI carried in the route. If the peer VTEP IP address is reachable at Layer 3, they establish a VXLAN tunnel to the peer end. Moreover, the local end creates a VNI-based ingress replication table and adds the peer VTEP IP address to the table for forwarding BUM packets.
A VPN target is an extended community attribute of BGP. An EVPN instance can have the IRT and ERT configured. The local EVPN instance's ERT must match the remote EVPN instance's IRT for EVPN route advertisement. If not, VXLAN tunnels cannot be dynamically established. If only one end can successfully accept the BGP EVPN route, this end can establish a VXLAN tunnel to the other end, but cannot exchange data packets with the other end. The other end drops packets after confirming that there is no VXLAN tunnel to the end that has sent these packets.
For details about VPN targets, see Basic BGP/MPLS IP VPN.
Inter-Subnet Communication
Inter-subnet communication between Host1 and Host2 requires Layer 3 forwarding. When VXLAN tunnels are established using BGP EVPN, Leaf1 and Leaf2 must advertise host IP routes, typically 32-bit host routes. Because different leaf nodes may connect to the same network segment on the VXLAN network, the network segment routes advertised by the leaf nodes may conflict, which can make hosts attached to some leaf nodes unreachable. Leaf nodes can advertise network segment routes in the following scenarios:
- The network segment that a leaf node connects to is unique on a VXLAN, and a large number of specific host routes are available. In this case, the route of the network segment to which the host IP routes belong can be advertised so that leaf nodes do not have to store all these host routes.
- When hosts on a VXLAN need to access external networks, leaf nodes can advertise routes destined for external networks onto the VXLAN to allow other leaf nodes to learn the routes.
Before establishing a VXLAN tunnel, perform configurations listed in the following table on Leaf1 and Leaf2.
| Step | Function |
| --- | --- |
| Create a Layer 2 broadcast domain and associate a Layer 2 VNI with the Layer 2 broadcast domain. | A broadcast domain functions as a VXLAN network entity to transmit VXLAN data packets. |
| Establish a BGP EVPN peer relationship between Leaf1 and Leaf2. | This configuration is used to exchange BGP EVPN routes. |
| Configure an EVPN instance in a Layer 2 broadcast domain, and configure an RD, an ERT, and an IRT for the EVPN instance. | This configuration is used to generate BGP EVPN routes. |
| Configure L3VPN instances for tenants and bind the L3VPN instances to the VBDIF interfaces of the Layer 2 broadcast domain. | This configuration is used to differentiate and isolate IP routing tables of different tenants. |
| Specify a Layer 3 VNI for an L3VPN instance. | This configuration allows the leaf nodes to determine the L3VPN routing table for forwarding data packets. |
| Configure the export VPN target (eERT) and import VPN target (eIRT) for EVPN routes in the L3VPN instance. | This configuration controls the local L3VPN instance to advertise and receive BGP EVPN routes. |
| Configure the type of route to be advertised between Leaf1 and Leaf2. | This configuration is used to advertise IP routes between Host1 and Host2. Two types of routes are available, IRB routes and IP prefix routes, which can be selected as needed. |
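A minimal sketch of these steps on Leaf1 is shown below, assuming Layer 2 VNI 10 in bridge domain 10, Layer 3 VNI 1000, an L3VPN instance named vpn1, a VTEP address of 1.1.1.1 on LoopBack0, and a BGP EVPN peer Leaf2 at 2.2.2.2 in AS 100. All values are assumptions for illustration, and command placement may differ slightly between versions; see "Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN" for the actual procedure.

```
# Tenant L3VPN instance associated with Layer 3 VNI 1000; the eERT/eIRT values
# control which BGP EVPN routes the instance advertises and accepts.
ip vpn-instance vpn1
 ipv4-family
  route-distinguisher 11:11
  vpn-target 11:1 evpn export-extcommunity
  vpn-target 11:1 evpn import-extcommunity
 vxlan vni 1000
#
# Layer 2 broadcast domain bound to Layer 2 VNI 10, with an EVPN instance.
bridge-domain 10
 vxlan vni 10
 evpn
  route-distinguisher 10:1
  vpn-target 1:1 export-extcommunity
  vpn-target 1:1 import-extcommunity
#
# The VBDIF interface of the broadcast domain is the distributed Layer 3
# gateway and is bound to the tenant L3VPN instance.
interface Vbdif10
 ip binding vpn-instance vpn1
 ip address 192.168.10.1 255.255.255.0
 arp collect host enable
#
# BGP EVPN peering; "advertise irb" makes Leaf1 advertise host IP addresses
# together with MAC addresses (IRB routes).
bgp 100
 peer 2.2.2.2 as-number 100
 peer 2.2.2.2 connect-interface LoopBack0
 l2vpn-family evpn
  peer 2.2.2.2 enable
  peer 2.2.2.2 advertise irb
```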
Dynamic VXLAN tunnel establishment varies depending on how host IP routes are advertised.
Host IP routes are advertised through IRB routes. (Figure 1-1048 shows the process.)
When Host1 communicates with Leaf1 for the first time, Leaf1 learns the ARP entry of Host1 after receiving dynamic ARP packets. Leaf1 then finds the L3VPN instance bound to the VBDIF interface of the Layer 2 broadcast domain where Host1 resides, and obtains the Layer 3 VNI associated with the L3VPN instance. The EVPN instance of Leaf1 then generates an IRB route based on the information obtained. Figure 1-1049 shows the IRB route. The host IP address is stored in the IP Address Length and IP Address fields; the Layer 3 VNI is stored in the MPLS Label2 field.
Leaf1 generates and sends a BGP EVPN route to Leaf2. The BGP EVPN route carries the local EVPN instance's ERT, extended community attribute, Next_Hop attribute, and the IRB route. The extended community attribute carries the tunnel type (VXLAN tunnel) and local VTEP MAC address; the Next_Hop attribute carries the local VTEP IP address.
After Leaf2 receives the BGP EVPN route from Leaf1, Leaf2 processes the route as follows:
If the ERT carried in the route is the same as the IRT of the local EVPN instance, the route is accepted. After the EVPN instance obtains IRB routes, it can extract ARP routes from the IRB routes for the advertisement of host ARP entries.
If the ERT carried in the route is the same as the eIRT of the local L3VPN instance, the route is accepted. Then, the L3VPN instance obtains the IRB route carried in the route, extracts the host IP address and Layer 3 VNI of Host1, and saves the host IP route of Host1 to the routing table. The outbound interface is obtained through recursion based on the next hop of the route. The final recursion result is the VXLAN tunnel to Leaf1, as shown in Figure 1-1050.
A BGP EVPN route is discarded only when the ERT in the route is different from the local EVPN instance's IRT and local L3VPN instance's eIRT.
If the route is accepted by the EVPN instance or L3VPN instance, Leaf2 obtains Leaf1's VTEP IP address from the Next_Hop attribute. If the VTEP IP address is routable at Layer 3, a VXLAN tunnel to Leaf1 is established.
Leaf1 establishes a VXLAN tunnel to Leaf2 through a similar process.
Host IP routes are advertised through IP prefix routes, as shown in Figure 1-1051.
Leaf1 generates a direct route to Host1's IP address. An L3VPN instance on Leaf1 is then configured to import this direct route, so that Host1's IP route is saved to the routing table of the L3VPN instance and associated with the Layer 3 VNI of that instance. Figure 1-1052 shows the local host IP route.
If network segment route advertisement is required, use a dynamic routing protocol, such as OSPF. Then configure an L3VPN instance to import the routes of the dynamic routing protocol.
Leaf1 is configured to advertise IP prefix routes in the L3VPN instance. Figure 1-1053 shows the IP prefix route. The host IP address is stored in the IP Prefix Length and IP Prefix fields; the Layer 3 VNI is stored in the MPLS Label field. Leaf1 generates and sends a BGP EVPN route to Leaf2. The BGP EVPN route carries the local L3VPN instance's eERT, extended community attribute, Next_Hop attribute, and the IP prefix route. The extended community attribute carries the tunnel type (VXLAN tunnel) and local VTEP MAC address; the Next_Hop attribute carries the local VTEP IP address.
After Leaf2 receives the BGP EVPN route from Leaf1, Leaf2 processes the route as follows:
Matches the eERT of the route against the eIRT of the local L3VPN instance. If a match is found, the route is accepted. The L3VPN instance then obtains the IP prefix route carried in the BGP EVPN route, extracts Host1's IP address and Layer 3 VNI, and saves the host IP route of Host1 to the routing table. The outbound interface is obtained through recursion based on the next hop of the route. The final recursion result is the VXLAN tunnel to Leaf1, as shown in Figure 1-1054.
If the route is accepted by the EVPN instance or L3VPN instance, Leaf2 obtains Leaf1's VTEP IP address from the Next_Hop attribute. If the VTEP IP address is routable at Layer 3, a VXLAN tunnel to Leaf1 is established.
Leaf1 establishes a VXLAN tunnel to Leaf2 through a similar process.
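As a hedged sketch of this alternative, the following BGP configuration on Leaf1 imports the direct route to Host1's subnet into the L3VPN instance and advertises it to BGP EVPN peers as an IP prefix (Type 5) route. The instance name, peer address, and AS number are the same assumptions used in the earlier sketch.

```
bgp 100
 # IPv4 address family of the tenant VPN instance: import local direct routes
 # and advertise them to BGP EVPN peers as IP prefix routes.
 ipv4-family vpn-instance vpn1
  import-route direct
  advertise l2vpn evpn
```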
Dynamic MAC address learning
VXLAN supports dynamic MAC address learning to allow communication between tenants. MAC address entries are dynamically created and do not need to be manually maintained, greatly reducing maintenance workload. In distributed VXLAN gateway scenarios, inter-subnet communication requires Layer 3 forwarding; MAC address learning is implemented using dynamic ARP packets between the local host and gateway. The following example illustrates dynamic MAC address learning for intra-subnet communication of hosts on the network shown in Figure 1-1055.
Host3 sends dynamic ARP packets when it first communicates with Leaf1. Leaf1 learns the MAC address of Host3 and the mapping between the BDID and packet inbound interface (that is, the physical interface Port 1 corresponding to the Layer 2 sub-interface), and generates a MAC address entry about Host3 in the local MAC address table, with the outbound interface being Port 1. Leaf1 generates a BGP EVPN route based on the ARP entry of Host3 and sends it to Leaf2. The BGP EVPN route carries the local EVPN instance's ERT, Next_Hop attribute, and a Type 2 route (MAC/IP route) defined in BGP EVPN. The Next_Hop attribute carries the local VTEP's IP address. The MAC Address Length and MAC Address fields identify Host3's MAC address. The Layer 2 VNI is stored in the MPLS Label1 field. Figure 1-1056 shows the format of a MAC route or an IP route.
After receiving the BGP EVPN route from Leaf1, Leaf2 matches the ERT of the EVPN instance carried in the route against the IRT of the local EVPN instance. If a match is found, the route is accepted. If no match is found, the route is discarded. After accepting the route, Leaf2 obtains the MAC address of Host3 and the mapping between the BDID and the VTEP IP address (Next_Hop attribute) of Leaf1, and generates the MAC address entry of the Host3 in the local MAC address table. The outbound interface is obtained through recursion based on the next hop, and the final recursion result is the VXLAN tunnel destined for Leaf1.
Leaf1 learns the MAC route of Host2 through a similar process.
Leaf nodes can learn the MAC addresses of hosts during data forwarding, depending on their capabilities to learn MAC addresses from data packets. If VXLAN tunnels are established using BGP EVPN, leaf nodes can dynamically learn the MAC addresses of hosts through BGP EVPN routes, rather than during data forwarding.
Intra-subnet Forwarding of Known Unicast Packets
Intra-subnet known unicast packets are forwarded only between Layer 2 VXLAN gateways and are unknown to Layer 3 VXLAN gateways. Figure 1-1057 shows the forwarding process of known unicast packets.
- After Leaf1 receives a packet from Host3, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information, and searches for the outbound interface and encapsulation information in the broadcast domain.
- Leaf1's VTEP performs VXLAN encapsulation based on the obtained encapsulation information and forwards the packet through the outbound interface obtained.
- After the VTEP on Leaf2 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
- Leaf2 obtains the destination MAC address of the inner Layer 2 packet, adds a VLAN tag to the packet based on the outbound interface and encapsulation information in the local MAC address table, and forwards the packet to Host2.
Host2 sends packets to Host3 through a similar process.
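The following minimal Python sketch models the known unicast forwarding decision on the ingress Layer 2 gateway described above. The MAC table contents, VNI, and VTEP addresses are invented for illustration; a real device performs this lookup against its BD MAC address table in hardware.

```python
# Hypothetical MAC table keyed by (BD, destination MAC); values are either a
# local port or VXLAN tunnel encapsulation information.
mac_table = {
    (10, "00e0-fc00-0002"): {"type": "port", "port": "Port2", "vlan": 20},
    (10, "00e0-fc00-0003"): {"type": "vxlan", "vni": 5010,
                             "src_vtep": "1.1.1.1", "dst_vtep": "2.2.2.2"},
}

def forward_known_unicast(bd, frame):
    entry = mac_table.get((bd, frame["dst_mac"]))
    if entry is None:
        return ("flood", frame)          # unknown unicast: handled as BUM traffic
    if entry["type"] == "vxlan":
        # The VTEP adds outer Ethernet/IP/UDP headers; UDP destination port 4789.
        return ("vxlan", {"outer_sip": entry["src_vtep"],
                          "outer_dip": entry["dst_vtep"],
                          "udp_dport": 4789,
                          "vni": entry["vni"],
                          "inner": frame})
    return ("local", entry["port"], frame)

print(forward_known_unicast(10, {"dst_mac": "00e0-fc00-0003", "payload": b"..."}))
```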
Intra-subnet Forwarding of BUM Packets
Intra-subnet BUM packets are forwarded only between Layer 2 VXLAN gateways, without involving Layer 3 VXLAN gateways. Intra-subnet BUM packets can be forwarded in ingress replication mode. In this mode, when a BUM packet enters a VXLAN tunnel, the access-side VTEP performs VXLAN encapsulation and then forwards the packet to all egress VTEPs in the ingress replication list. When the BUM packet leaves the VXLAN tunnel, the egress VTEP decapsulates the packet. Figure 1-1058 shows the forwarding process of BUM packets; a replication sketch follows the process description below.
- After Leaf1 receives a packet from TerminalA, it determines the Layer 2 broadcast domain of the packet based on the access interface and VLAN information in the packet.
- Leaf1's VTEP obtains the ingress replication list for the VNI, replicates the packet based on the list, and performs VXLAN encapsulation. Leaf1 then forwards the VXLAN packet through the outbound interface.
- After the VTEP on Leaf2 or Leaf3 receives the VXLAN packet, it checks the UDP destination port number, source and destination IP addresses, and VNI of the packet to determine the packet validity. Leaf2 or Leaf3 obtains the Layer 2 broadcast domain based on the VNI and performs VXLAN decapsulation to obtain the inner Layer 2 packet.
- Leaf2 or Leaf3 checks the destination MAC address of the inner Layer 2 packet and finds that it is a BUM MAC address. Therefore, Leaf2 or Leaf3 broadcasts the packet onto the network connected to terminals (not the VXLAN tunnel side) in the Layer 2 broadcast domain. Specifically, Leaf2 or Leaf3 finds the outbound interfaces and encapsulation information not related to the VXLAN tunnel, adds VLAN tags to the packet, and forwards the packet to TerminalB or TerminalC.
The forwarding process of a response packet from TerminalB/TerminalC to TerminalA is similar to the intra-subnet forwarding process of known unicast packets.
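The ingress replication behavior can be illustrated with a short sketch. The peer list and VNI below are hypothetical; on a real device the list is either statically configured or built from BGP EVPN inclusive multicast routes.

```python
# Hypothetical ingress replication list: for each local Layer 2 VNI, the set of
# remote VTEPs that must receive a copy of every BUM packet.
ingress_replication_list = {5010: ["2.2.2.2", "3.3.3.3"]}

def replicate_bum(vni, local_vtep, frame):
    """Return one VXLAN-encapsulated copy per egress VTEP in the list."""
    copies = []
    for peer_vtep in ingress_replication_list.get(vni, []):
        copies.append({"outer_sip": local_vtep, "outer_dip": peer_vtep,
                       "udp_dport": 4789, "vni": vni, "inner": frame})
    return copies

# A broadcast frame received from TerminalA is replicated toward both egress VTEPs.
for pkt in replicate_bum(5010, "1.1.1.1", {"dst_mac": "ffff-ffff-ffff"}):
    print(pkt["outer_dip"])
```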
Inter-subnet Packet Forwarding
Inter-subnet packets must be forwarded through a Layer 3 gateway. Figure 1-1059 shows the inter-subnet packet forwarding process in distributed VXLAN gateway scenarios. A conceptual sketch of the gateway's lookup and encapsulation logic follows the process description.
- After Leaf1 receives a packet from Host1, it finds that the destination MAC address of the packet is a gateway MAC address, which means that the packet must be forwarded at Layer 3.
- Leaf1 first determines the Layer 2 broadcast domain of the packet based on the inbound interface and then finds the L3VPN instance to which the VBDIF interface of the Layer 2 broadcast domain is bound. Leaf1 searches the routing table of the L3VPN instance for a matching host route based on the destination IP address of the packet and obtains the Layer 3 VNI and next hop address corresponding to the route. Figure 1-1060 shows the host route in the L3VPN routing table. If the outbound interface is a VXLAN tunnel, Leaf1 determines that VXLAN encapsulation is required and then:
- Obtains MAC addresses based on the VXLAN tunnel's source and destination IP addresses and replaces the source and destination MAC addresses in the inner Ethernet header.
- Encapsulates the Layer 3 VNI into the packet.
- Encapsulates the VXLAN tunnel's destination and source IP addresses in the outer header. The source MAC address is the MAC address of the outbound interface on Leaf1, and the destination MAC address is the MAC address of the next hop.
- The VXLAN packet is then transmitted over the IP network based on the IP and MAC addresses in the outer headers and finally reaches Leaf2.
- After Leaf2 receives the VXLAN packet, it decapsulates the packet and finds that the destination MAC address is its own MAC address. It then determines that the packet must be forwarded at Layer 3.
- Leaf2 finds the corresponding L3VPN instance based on the Layer 3 VNI carried in the packet. Then, Leaf2 searches the routing table of the L3VPN instance and finds that the next hop of the packet is the gateway interface address. Leaf2 then replaces the destination MAC address with the MAC address of Host2, replaces the source MAC address with the MAC address of Leaf2, and forwards the packet to Host2.
Host2 sends packets to Host1 in a similar process.
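The Layer 3 gateway lookup described in the preceding process can be approximated by the following sketch. The route contents (Layer 3 VNI, next-hop VTEP, router MAC) are invented for illustration; the snippet only models the recursion and header-rewrite decisions, not an actual forwarding implementation.

```python
# Hypothetical L3VPN routing table: host route -> (Layer 3 VNI, next-hop VTEP,
# inner destination MAC, i.e. the remote gateway's router MAC).
l3vpn_routes = {
    "10.2.1.2/32": {"l3_vni": 1000, "next_hop_vtep": "2.2.2.2",
                    "router_mac": "00e0-fc00-2222"},
}

def l3_forward(pkt, local_vtep, local_router_mac):
    route = l3vpn_routes.get(pkt["dst_ip"] + "/32")
    if route is None:
        return None                       # no host route: drop or use another route
    # Rewrite the inner Ethernet header and add the Layer 3 VNI before
    # VXLAN-encapsulating the packet toward the remote gateway.
    return {"outer_sip": local_vtep, "outer_dip": route["next_hop_vtep"],
            "vni": route["l3_vni"],
            "inner_src_mac": local_router_mac,
            "inner_dst_mac": route["router_mac"],
            "inner": pkt}

print(l3_forward({"dst_ip": "10.2.1.2"}, "1.1.1.1", "00e0-fc00-1111"))
```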
When Huawei devices need to communicate with non-Huawei devices, ensure that the non-Huawei devices use the same forwarding mode. Otherwise, the Huawei devices may fail to communicate with non-Huawei devices.
Function Enhancements
Establishment of a Three-Segment VXLAN for Layer 3 Communication Between DCs
Background
To meet the requirements of inter-regional operations, user access, geographical redundancy, and other scenarios, an increasing number of enterprises deploy DCs across regions. Data Center Interconnect (DCI) is a solution that enables communication between VMs in different DCs. Using technologies such as VXLAN and BGP EVPN, DCI securely and reliably transmits DC packets over carrier networks. Three-segment VXLAN can be configured to enable inter-subnet communication between VMs in different DCs.
Benefits
Three-segment VXLAN enables Layer 3 communication between DCs and offers the following benefits to users:
- Hosts in different DCs can communicate at Layer 3.
- Different DCs do not need to run the same routing protocol for communication.
- Different DCs do not require information orchestration for communication.
Implementation
Three-segment VXLAN establishes one VXLAN tunnel segment in each of the DCs and also establishes one VXLAN tunnel segment between the DCs. As shown in Figure 1-1061, BGP EVPN is used to create VXLAN tunnels in distributed gateway mode within both DC A and DC B so that the VMs in each DC can communicate with each other. Leaf2 and Leaf3 are the edge devices within the DCs that connect to the backbone network. BGP EVPN is used to configure a VXLAN tunnel between Leaf2 and Leaf3, so that the VXLAN packets received by one DC can be decapsulated, re-encapsulated, and sent to the peer DC. This process provides E2E transport for inter-DC VXLAN packets and ensures that VMs in different DCs can communicate with each other.
This function applies only to IPv4 over IPv4 networks.
In three-segment VXLAN scenarios, only VXLAN tunnels in distributed gateway mode can be deployed in DCs.
Control Plane
The following describes how three-segment VXLAN tunnels are established; a conceptual sketch of the route re-origination step follows the list.
The process of advertising routes on Leaf1 and Leaf4 is not described in this section. For details, see VXLAN Tunnel Establishment.
- Leaf4 learns the IP address of VMb2 in DC B and saves it to the routing table for the L3VPN instance. Leaf4 then sends a BGP EVPN route to Leaf3.
- As shown in Figure 1-1062, Leaf3 receives the BGP EVPN route and obtains the host IP route contained in it. Leaf3 then establishes a VXLAN tunnel to Leaf4 according to the process described in VXLAN Tunnel Establishment. Leaf3 sets the next hop of the route to its own VTEP address, re-encapsulates the route with the Layer 3 VNI of the L3VPN instance, and sets the source MAC address of the route to its own MAC address. Finally, Leaf3 sends the re-encapsulated BGP EVPN route to Leaf2.
- Leaf2 receives the BGP EVPN route and obtains the host IP route contained in it. Leaf2 then establishes a VXLAN tunnel to Leaf3 according to the process described in VXLAN Tunnel Establishment. Leaf2 sets the next hop of the route to its own VTEP address, re-encapsulates the route with the Layer 3 VNI of the L3VPN instance, and sets the source MAC address of the route to its own MAC address. Finally, Leaf2 sends the re-encapsulated BGP EVPN route to Leaf1.
- Leaf1 receives the BGP EVPN route and establishes a VXLAN tunnel to Leaf2 according to the process described in VXLAN Tunnel Establishment.
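The re-origination step performed by Leaf2 and Leaf3 can be modeled as a small transformation on the received route, as sketched below. The field names and values are hypothetical and only illustrate which attributes are rewritten (next hop, Layer 3 VNI, and source MAC address) before the route is advertised to the next segment.

```python
def reoriginate(route, own_vtep_ip, own_router_mac, local_l3_vni):
    """Rewrite the attributes of a received BGP EVPN IP prefix route before
    readvertising it toward the next VXLAN segment."""
    new_route = dict(route)
    new_route["next_hop"] = own_vtep_ip       # next hop set to the local VTEP address
    new_route["l3_vni"] = local_l3_vni        # re-encapsulated with the local L3 VNI
    new_route["router_mac"] = own_router_mac  # source MAC set to the local MAC address
    return new_route

route_from_leaf4 = {"prefix": "10.2.1.2/32", "next_hop": "4.4.4.4",
                    "l3_vni": 4000, "router_mac": "00e0-fc00-4444"}
# Leaf3 re-originates the route before sending it to Leaf2.
print(reoriginate(route_from_leaf4, "3.3.3.3", "00e0-fc00-3333", 3000))
```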
Data Packet Forwarding
A general overview of the packet forwarding process on Leaf1 and Leaf4 is provided below. For detailed information, see Inter-subnet Packet Forwarding.
- Leaf1 receives Layer 2 packets destined for VMb2 from VMa1 and determines that the destination MAC addresses in these packets are all gateway interface MAC addresses. Leaf1 then terminates these Layer 2 packets and finds the L3VPN instance corresponding to the BDIF interface through which VMa1 accesses the broadcast domain. Leaf1 then searches the L3VPN instance routing table for the VMb2 host route, encapsulates the received packets as VXLAN packets, and sends them to Leaf2 over the VXLAN tunnel.
- As shown in Figure 1-1063, Leaf2 receives and parses these VXLAN packets. After finding the L3VPN instance corresponding to the Layer 3 VNI of the packets, Leaf2 searches the L3VPN instance routing table for the VMb2 host route. Leaf2 then re-encapsulates these VXLAN packets (setting the Layer 3 VNI and inner destination MAC address to the Layer 3 VNI and MAC address carried in the VMb2 host route sent by Leaf3). Finally, Leaf2 sends these packets to Leaf3.
- As shown in Figure 1-1063, Leaf3 receives and parses these VXLAN packets. After finding the L3VPN instance corresponding to the Layer 3 VNI of the packets, Leaf3 searches the L3VPN instance routing table for the VMb2 host route. Leaf3 then re-encapsulates these VXLAN packets (setting the Layer 3 VNI and inner destination MAC address to the Layer 3 VNI and MAC address carried in the VMb2 host route sent by Leaf4). Finally, Leaf3 sends these packets to Leaf4.
- Leaf4 receives and parses these VXLAN packets. After finding the L3VPN instance corresponding to the Layer 3 VNI of the packets, Leaf4 searches the L3VPN instance routing table for the VMb2 host route. Using this routing information, Leaf4 forwards these packets to VMb2.
Other Functions
Local leaking of EVPN routes is needed in scenarios where different VPN instances are used for the access of different services in a DC but an external VPN instance is used to communicate with other DCs, so that the VPN instance allocation within the DC is hidden from the outside. Depending on route sources, this function can be used in the following scenarios (a sketch of the underlying RT matching follows the scenario descriptions):
Local VPN routes are advertised through EVPN after being locally leaked
- The function to import VPN routes to a local VPN instance named vpn1 is configured in the BGP VPN instance IPv4 or IPv6 address family.
- vpn1 sends received routes to the VPNv4 or VPNv6 component, which then checks whether the ERT of vpn1 is the same as the IRT of the external VPN instance vpn2. If they are the same, the VPNv4 or VPNv6 component imports these routes to vpn2.
- vpn2 sends locally leaked routes to the EVPN component and advertises these routes as BGP EVPN routes to peers. In this case, vpn2 must be able to advertise locally leaked routes as BGP EVPN routes.
Remote public network routes are advertised through EVPN after being locally leaked
- The EVPN component receives public network routes from a remote peer.
- The EVPN component imports the received routes to vpn1.
- vpn1 sends received routes to the VPNv4 or VPNv6 component, which then checks whether the ERT of vpn1 is the same as the IRT of vpn2. If they are the same, the VPNv4 or VPNv6 component imports these routes to vpn2. In this case, vpn1 must be able to perform remote and local route leaking in succession.
- vpn2 sends locally leaked routes to the EVPN component and advertises these routes as BGP EVPN routes to peers. In this case, vpn2 must be able to advertise locally leaked routes as BGP EVPN routes.
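The RT comparison that drives both scenarios can be reduced to a simple set intersection, as in the following illustrative sketch. The VPN instance names and RT values are hypothetical.

```python
# Hypothetical sketch of the comparison performed by the VPNv4/VPNv6 component:
# a route leaked from vpn1 is imported into vpn2 only if one of vpn1's ERTs
# matches one of vpn2's IRTs.
def leak_routes(vpn1_routes, vpn1_erts, vpn2_irts):
    if not (set(vpn1_erts) & set(vpn2_irts)):
        return []                        # no RT match: nothing is leaked
    return list(vpn1_routes)             # match found: routes are imported into vpn2

vpn1_routes = ["192.168.1.0/24", "192.168.2.0/24"]
leaked = leak_routes(vpn1_routes, vpn1_erts=["100:1"], vpn2_irts=["100:1", "200:1"])
# The leaked routes can then be advertised by vpn2 as BGP EVPN routes to peers.
print(leaked)
```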
Using Three-Segment VXLAN to Implement Layer 2 Interconnection Between DCs
Background
Figure 1-1066 shows the scenario where three-segment VXLAN is deployed to implement Layer 2 interconnection between DCs. VXLAN tunnels are configured both within DC A and DC B and between the transit leaf nodes in both DCs. To enable VM1 and VM2 to communicate, Layer 2 interconnection must be implemented between DC A and DC B. If the VXLAN tunnels within DC A and DC B use the same VXLAN Network Identifier (VNI), this VNI can also be used to establish a VXLAN tunnel between Transit Leaf1 and Transit Leaf2. In practice, however, different DCs have their own VNI spaces. Therefore, the VXLAN tunnels within DC A and DC B tend to use different VNIs. In this case, to establish a VXLAN tunnel between Transit Leaf1 and Transit Leaf2, VNI conversion must be implemented.
Benefits
This solution offers the following benefits to users:
- Implements Layer 2 interconnection between hosts in different DCs.
- Decouples the VNI space of the network within a DC from that of the network between DCs, simplifying network maintenance.
- Isolates network faults within a DC from those between DCs, facilitating fault location.
Principles
Currently, this solution is implemented in the local VNI mode. It is similar to downstream label allocation. The local VNI of the peer transit leaf node functions as the outbound VNI, which is used by packets that the local transit leaf node sends to the peer transit leaf node for VXLAN encapsulation.
Control Plane
This function is only supported for IPv4 over IPv4 networks.
The establishment of VXLAN tunnels between leaf nodes is the same as VXLAN tunnel establishment for intra-subnet interworking in common VXLAN scenarios. Therefore, the detailed process is not described here. Regarding the control plane, MAC address learning by a host is described here.
On the network shown in Figure 1-1067, the control plane is implemented as follows:
Server Leaf1 learns VM1's MAC address, generates a BGP EVPN route, and sends it to Transit Leaf1. The BGP EVPN route contains the following information:
- Type 2 route: EVPN instance's RD value, VM1's MAC address, and Server Leaf1's local VNI.
- Next hop: Server Leaf1's VTEP IP address.
- Extended community attribute: encapsulated tunnel type (VXLAN).
- ERT: EVPN instance's export RT value.
Upon receipt, Transit Leaf1 adds the BGP EVPN route to its local EVPN instance and generates a MAC address entry for VM1 in the EVPN instance-bound BD. Based on the next hop and encapsulated tunnel type, the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Server Leaf1. The VNI in VXLAN tunnel encapsulation information is Transit Leaf1's local VNI.
Transit Leaf1 re-originates the BGP EVPN route and then advertises the route to Transit Leaf2. The re-originated BGP EVPN route contains the following information:
- Type 2 route: EVPN instance's RD value, VM1's MAC address, and Transit Leaf1's local VNI.
- Next hop: Transit Leaf1's VTEP IP address.
- Extended community attribute: encapsulated tunnel type (VXLAN).
- ERT: EVPN instance's export RT value.
Upon receipt, Transit Leaf2 adds the re-originated BGP EVPN route to its local EVPN instance and generates a MAC address entry for VM1 in the EVPN instance-bound BD. Based on the next hop and encapsulated tunnel type, the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Transit Leaf1. The outbound VNI in VXLAN tunnel encapsulation information is Transit Leaf1's local VNI.
Transit Leaf2 re-originates the BGP EVPN route and then advertises the route to Server Leaf2. The re-originated BGP EVPN route contains the following information:
- Type 2 route: EVPN instance's RD value, VM1's MAC address, and Transit Leaf2's local VNI.
- Next hop: Transit Leaf2's VTEP IP address.
- Extended community attribute: encapsulated tunnel type (VXLAN).
- ERT: EVPN instance's export RT value.
Upon receipt, Server Leaf2 adds the re-originated BGP EVPN route to its local EVPN instance and generates a MAC address entry for VM1 in the EVPN instance-bound BD. Based on the next hop and encapsulated tunnel type, the MAC address entry's outbound interface recurses to the VXLAN tunnel destined for Transit Leaf2. The VNI in VXLAN tunnel encapsulation information is Server Leaf2's local VNI.
The preceding process uses MAC address learning for VM1 as an example. MAC address learning for VM2 is similar and is not described here. A conceptual sketch of how the local and outbound VNIs are handled during route re-origination follows.
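The following sketch illustrates, with hypothetical values, how a transit leaf records the peer's VNI as the outbound VNI for the MAC entry while re-originating the route with its own local VNI.

```python
def install_and_reoriginate(route, local_vni, own_vtep_ip):
    """Install a MAC entry whose encapsulation uses the peer's VNI as the
    outbound VNI, and re-originate the route with the local VNI."""
    mac_entry = {"mac": route["mac"],
                 "outbound": ("vxlan-tunnel", route["next_hop"]),
                 "outbound_vni": route["vni"]}        # the advertising peer's local VNI
    reoriginated = {"mac": route["mac"],
                    "vni": local_vni,                 # replaced with this node's local VNI
                    "next_hop": own_vtep_ip}
    return mac_entry, reoriginated

# Example with invented values: a route from Server Leaf1 (local VNI 100) is
# re-originated by Transit Leaf1 (local VNI 200) before being sent to Transit Leaf2.
route_from_server_leaf1 = {"mac": "00e0-fc00-0001", "vni": 100, "next_hop": "1.1.1.1"}
entry, readvertised = install_and_reoriginate(route_from_server_leaf1,
                                              local_vni=200, own_vtep_ip="11.11.11.11")
print(entry, readvertised)
```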
Forwarding Plane
Figure 1-1068 shows how known unicast packets are forwarded. The following example shows how VM2 sends Layer 2 packets to VM1:
After receiving a Layer 2 packet from VM2 through a BD Layer 2 sub-interface, Server Leaf2 searches the BD's MAC address table based on the destination MAC address for the VXLAN tunnel's outbound interface and obtains VXLAN tunnel encapsulation information (local VNI, destination VTEP IP address, and source VTEP IP address). Based on the obtained information, the Layer 2 packet is encapsulated through the VXLAN tunnel and then forwarded to Transit Leaf2.
Upon receipt, Transit Leaf2 decapsulates the VXLAN packet, finds the target BD based on the VNI, searches the BD's MAC address table based on the destination MAC address for the VXLAN tunnel's outbound interface, and obtains the VXLAN tunnel encapsulation information (outbound VNI, destination VTEP IP address, and source VTEP IP address). Based on the obtained information, the Layer 2 packet is encapsulated through the VXLAN tunnel and then forwarded to Transit Leaf1.
Upon receipt, Transit Leaf1 decapsulates the VXLAN packet. Because the packet's VNI is Transit Leaf1's local VNI, the target BD can be found based on this VNI. Transit Leaf1 also searches the BD's MAC address table based on the destination MAC address for the VXLAN tunnel's outbound interface and obtains the VXLAN tunnel encapsulation information (local VNI, destination VTEP IP address, and source VTEP IP address). Based on the obtained information, the Layer 2 packet is encapsulated through the VXLAN tunnel and then forwarded to Server Leaf1.
Upon receipt, Server Leaf1 decapsulates the VXLAN packet and forwards it at Layer 2 to VM1.
In the scenario with three-segment VXLAN for Layer 2 interworking, BUM packet forwarding is the same as that in the common VXLAN scenario except that the split horizon group is used to prevent loops. The similarities are not described here.
After receiving BUM packets from a Server Leaf node in the same DC, a Transit Leaf node obtains the split horizon group to which the source VTEP belongs. Because all nodes in the same DC belong to the default split horizon group, BUM packets will not be replicated to other Server Leaf nodes within the DC. Because the peer Transit Leaf node belongs to a different split horizon group, BUM packets will be replicated to the peer Transit Leaf node.
Upon receipt, the peer Transit Leaf node obtains the split horizon group to which the source VTEP belongs. Because the Transit Leaf nodes at both ends belong to the same split horizon group, BUM packets will not be replicated to the peer Transit Leaf node. Because the Server Leaf nodes within the DC belong to a different split horizon group, BUM packets will be replicated to them.
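The split horizon decision can be expressed as a simple group comparison. The following sketch uses hypothetical node names and group labels; it only illustrates that a BUM copy is replicated to a peer VTEP when the peer's group differs from the group of the VTEP from which the packet was received.

```python
# Hypothetical split horizon check from the perspective of one transit leaf:
# each remote VTEP is assigned to a split horizon group, and a BUM copy is sent
# to a peer only if the peer is in a different group from the source VTEP.
peer_groups = {
    "server-leaf-1": "default",    # intra-DC peers share the default group
    "server-leaf-3": "default",
    "transit-leaf-2": "dci",       # the peer transit leaf sits in a separate group
}

def bum_targets(source_vtep, peer_groups):
    src_group = peer_groups[source_vtep]
    return [peer for peer, group in peer_groups.items()
            if peer != source_vtep and group != src_group]

# BUM received from an intra-DC server leaf is replicated only to the peer transit
# leaf, while BUM received from the peer transit leaf goes only to the server leafs.
print(bum_targets("server-leaf-1", peer_groups))
print(bum_targets("transit-leaf-2", peer_groups))
```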
VXLAN Active-Active Reliability
Basic Concepts
Figure 1-1069 shows a scenario where an enterprise site (CPE) connects to a data center. The VPN GWs (PE1 and PE2) and the CPE are connected through VXLAN tunnels to exchange Layer 2 and Layer 3 service traffic between the CPE and the data center. The data center gateway (CE1) is dual-homed to PE1 and PE2 to access the VXLAN network for enhanced network access reliability. If one PE fails, services can be rapidly switched to the other PE, minimizing service loss.
PE1 and PE2 on the network use the same virtual address as an NVE interface address (Anycast VTEP address) at the network side. In this way, the CPE is aware of only one remote NVE interface. After the CPE establishes a VXLAN tunnel with this virtual address, the packets from the CPE can reach CE1 through either PE1 or PE2. However, when a single-homed CE, such as CE2 or CE3, exists on the network, the packets from the CPE to the single-homed CE may need to detour to the other PE after reaching one PE. To achieve PE1-PE2 reachability, a bypass VXLAN tunnel needs to be established between PE1 and PE2. To establish this tunnel, an EVPN peer relationship is established between PE1 and PE2, and different addresses, namely, bypass VTEP addresses, are configured for PE1 and PE2.
Control Plane
PE1 and PE2 exchange Inclusive Multicast routes (Type 3) whose source IP address is their shared anycast VTEP address. Each route carries a bypass VXLAN extended community attribute, which contains the bypass VTEP address of PE1 or PE2.
After receiving the Inclusive Multicast route from each other, PE1 and PE2 consider that they form an anycast relationship based on the following details: The source IP address (anycast VTEP address) of the route is identical to PE1's and PE2's local virtual addresses, and the route carries a bypass VXLAN extended community attribute. PE1 and PE2 then establish a bypass VXLAN tunnel between them.
PE1 and PE2 learn the MAC addresses of the CEs through the upstream packets from the AC side and advertise the MAC/IP routes (Type 2) to each other. The routes carry the ESIs of the access links of the CEs, information about the VLANs that the CEs access, and the bypass VXLAN extended community attribute.
PE1 and PE2 learn the MAC address of the CPE through downstream packets from the network side. After learning that the next-hop address of the MAC route recurses to a static VXLAN tunnel, PE1 and PE2 advertise the route to each other through a MAC/IP route, without changing the next-hop address.
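The anycast-peer detection performed on receipt of an inclusive multicast route can be sketched as follows. The addresses are hypothetical; the check simply combines the two conditions described above (matching anycast VTEP address and presence of the bypass VXLAN extended community attribute).

```python
def process_type3(route, local_anycast_vtep, local_bypass_vtep):
    """Decide whether the sender of an Inclusive Multicast (Type 3) route is the
    anycast peer, and if so return the bypass VXLAN tunnel to establish."""
    is_anycast_peer = (route["source_ip"] == local_anycast_vtep
                       and "bypass_vtep" in route)
    if not is_anycast_peer:
        return None
    # The bypass tunnel is set up between the two bypass VTEP addresses.
    return {"bypass_tunnel": (local_bypass_vtep, route["bypass_vtep"])}

route_from_pe2 = {"source_ip": "10.10.10.10",    # shared anycast VTEP address
                  "bypass_vtep": "10.2.2.2"}      # PE2's bypass VTEP address
print(process_type3(route_from_pe2, "10.10.10.10", "10.1.1.1"))
```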
Data Packet Processing
Layer 2 unicast packet forwarding
Uplink
As shown in Figure 1-1070, after receiving Layer 2 unicast packets destined for the CPE from CE1, CE2, and CE3, PE1 and PE2 search their local MAC address tables to obtain outbound interfaces, perform VXLAN encapsulation on the packets, and forward them to the CPE.
Downlink
As shown in Figure 1-1071:
After receiving a Layer 2 unicast packet sent by the CPE to CE1, PE1 performs VXLAN decapsulation on the packet, searches the local MAC address table for the destination MAC address, obtains the outbound interface, and forwards the packet to CE1.
After receiving a Layer 2 unicast packet sent by the CPE to CE2, PE1 performs VXLAN decapsulation on the packet, searches the local MAC address table for the destination MAC address, obtains the outbound interface, and forwards the packet to CE2.
After receiving a Layer 2 unicast packet sent by the CPE to CE3, PE1 performs VXLAN decapsulation on the packet, searches the local MAC address table for the destination MAC address, and forwards it to PE2 over the bypass VXLAN tunnel. After the packet reaches PE2, PE2 searches the destination MAC address, obtains the outbound interface, and forwards the packet to CE3.
The process for PE2 to forward packets from the CPE is the same as that for PE1 to forward packets from the CPE.
BUM packet forwarding
As shown in Figure 1-1072, if the destination address of a BUM packet from the CPE is the Anycast VTEP address of PE1 and PE2, the BUM packet may be forwarded to either PE1 or PE2. If the BUM packet reaches PE2 first, PE2 sends a copy of the packet to CE3 and CE1. In addition, PE2 sends a copy of the packet to PE1 through the bypass VXLAN tunnel between PE1 and PE2. After the copy of the packet reaches PE1, PE1 sends it to CE2, not to the CPE or CE1. In this way, CE1 receives only one copy of the packet.
As shown in Figure 1-1073, after a BUM packet from CE2 reaches PE1, PE1 sends a copy of the packet to CE1 and the CPE. In addition, PE1 sends a copy of the packet to PE2 through the bypass VXLAN tunnel between PE1 and PE2. After the copy of the packet reaches PE2, PE2 sends it to CE3, not to the CPE or CE1.
As shown in Figure 1-1074, after a BUM packet from CE1 reaches PE1, PE1 sends a copy of the packet to CE2 and the CPE. In addition, PE1 sends a copy of the packet to PE2 through the bypass VXLAN tunnel between PE1 and PE2. After the copy of the packet reaches PE2, PE2 sends it to CE3, not to the CPE or CE1.
Layer 3 packets transmitted on the same subnet
Uplink
As shown in Figure 1-1070, after receiving Layer 3 unicast packets destined for the CPE from CE1, CE2, and CE3, PE1 and PE2 search for the destination address and directly forward them to the CPE because they are on the same network segment.
Downlink
As shown in Figure 1-1071:
After the Layer 3 unicast packet sent from the CPE to CE1 reaches PE1, PE1 searches for the destination address and directly sends it to CE1 because they are on the same network segment.
After the Layer 3 unicast packet sent from the CPE to CE2 reaches PE1, PE1 searches for the destination address and directly sends it to CE2 because they are on the same network segment.
After the Layer 3 unicast packet sent from the CPE to CE3 reaches PE1, PE1 looks up the destination address and sends the packet to PE2, which then sends it to CE3, because they are on the same network segment.
The process for PE2 to forward packets from the CPE is the same as that for PE1 to forward packets from the CPE.
Layer 3 packets transmitted across subnets
Uplink
As shown in Figure 1-1070:
Because the CPE is on a different network segment from PE1 and PE2, the destination MAC address of a Layer 3 unicast packet sent from CE1, CE2, or CE3 to the CPE is the MAC address of the BDIF interface on the Layer 3 gateway of PE1 or PE2. After receiving the packet, PE1 or PE2 removes the Layer 2 tag from the packet, searches for a matching Layer 3 routing entry, and obtains the outbound interface that is the BDIF interface connecting the CPE to the Layer 3 gateway. The BDIF interface searches the ARP table, obtains the destination MAC address, encapsulates the packet into a VXLAN packet, and sends it to the CPE through the VXLAN tunnel.
After receiving the Layer 3 packet from PE1 or PE2, the CPE removes the Layer 2 tag from the packet because the destination MAC address is the MAC address of the BDIF interface on the CPE. Then the CPE searches the Layer 3 routing table to obtain a next-hop address to forward the packet.
Downlink
As shown in Figure 1-1071:
Before sending a Layer 3 unicast packet to CE1 across subnets, the CPE searches its Layer 3 routing table and obtains the outbound interface that is the BDIF interface on the Layer 3 gateway connecting to PE1. The BDIF interface searches the ARP table to obtain the destination MAC address, encapsulates the packet into a VXLAN packet, and forwards it to PE1 over the VXLAN tunnel.
After receiving the packet from the CPE, PE1 removes the Layer 2 tag from the packet because the destination address of the packet is the MAC address of PE1's BDIF interface. Then PE1 searches the Layer 3 routing table and obtains the outbound interface, which is the BDIF interface connecting PE1 to its attached CE. The BDIF interface searches its ARP table, obtains the destination MAC address, performs Layer 2 encapsulation for the packet, and sends it to CE1.
The process for PE2 to forward packets from the CPE is the same as that for PE1 to forward packets from the CPE.
NFVI Distributed Gateway (Asymmetric Mode)
Huawei's network functions virtualization infrastructure (NFVI) telco cloud solution incorporates Data Center Interconnect (DCI) and data center network (DCN) solutions. A large volume of UE traffic enters the DCN and accesses the vUGW and vMSE on the DCN. After being processed by the vUGW and vMSE, the UE traffic (IPv4 or IPv6) is forwarded over the DCN to destination devices on the Internet. Likewise, return traffic sent from the destination devices to UEs also undergoes this process. To meet the preceding requirements and ensure that the UE traffic is load-balanced within the DCN, you need to deploy the NFVI distributed gateway function on DCN devices.
The vUGW is a unified packet gateway developed based on Huawei's CloudEdge solution. It can be used for 3rd Generation Partnership Project (3GPP) access in general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), and Long Term Evolution (LTE) modes. The vUGW can function as a gateway GPRS support node (GGSN), serving gateway (S-GW), or packet data network gateway (P-GW) to meet carriers' various networking requirements in different phases and operational scenarios.
The vMSE is developed based on Huawei's multi-service engine (MSE). The carrier's network has multiple functional boxes deployed, such as the firewall box, video acceleration box, header enrichment box, and URL filtering box. All functions are added through patch installation. As time goes by, the network becomes increasingly slow, complicating service rollout and maintenance. To solve this problem, the vMSE integrates the functions of these boxes and manages these functions in a unified manner, providing value-added services for the data services initiated by users.
Networking Overview
Figure 1-1075 and Figure 1-1076 show NFVI distributed gateway networking. The DC gateways are the DCN's border gateways, which exchange Internet routes with the external network through PEs. L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions (VNFs). VNF1 and VNF2 can be deployed as virtualized NEs to implement the vUGW and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through the interface processing unit (IPU).
The VXLAN active-active/quad-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. All DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between them.
In the NFVI distributed gateway scenario, the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M can function as either a DC gateway or an L2GW/L3GW. However, if the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M is used as an L2GW/L3GW, east-west traffic cannot be balanced.
Each L2GW/L3GW in Figure 1-1075 represents two devices on the live network. Anycast VXLAN active-active is configured on the devices so that they function as one, improving network reliability.
The method of deploying the VXLAN quad-active gateway function on DC gateways is similar to that of deploying the VXLAN active-active gateway function on DC gateways. This section uses the VXLAN active-active gateway function as an example.
Function Deployment
A VPN BGP peer relationship is set up between each VNF and DC gateway, so that the VNF can advertise UE routes to the DC gateway.
Static VPN routes are configured on L2GW/L3GW1 and L2GW/L3GW2 for them to access VNFs. The routes' destination IP addresses are the VNFs' IP addresses, and the next hop addresses are the IP addresses of the IPUs.
A BGP EVPN peer relationship is established between each DC gateway and L2GW/L3GW. An L2GW/L3GW can flood static routes to the VNFs to other devices through BGP EVPN peer relationships. A DC gateway can advertise local loopback routes and default routes to the L2GWs/L3GWs through the BGP EVPN peer relationships.
Traffic exchanged between a UE and the Internet through a VNF is called north-south traffic, whereas traffic exchanged between VNF1 and VNF2 is called east-west traffic. Load balancing is configured on DC gateways and L2GWs/L3GWs to balance both north-south and east-west traffic.
Generation of Forwarding Entries
BDs are deployed on each L2GW/L3GW and bound to links connecting to the IPU interfaces on the associated network segments. Then, VBDIF interfaces are configured as the gateways of these IPU interfaces. The number of BDs is the same as that of network segments to which the IPU interfaces belong. A static VPN route is configured on each L2GW/L3GW, so that the L2GW/L3GW can generate a route forwarding entry with the destination address being the VNF address, next hop being the IPU address, and outbound interface being the associated VBDIF interface.
Figure 1-1077 Static route forwarding entry on an L2GW/L3GW
An L2GW/L3GW learns IPU MAC address and ARP information through the data plane, and then advertises the information as an EVPN route to DC gateways. The information is then used to generate an ARP entry and MAC forwarding entry for Layer 2 forwarding.
The destination MAC addresses in MAC forwarding entries on the L2GW/L3GW are the MAC addresses of the IPUs. For IPUs directly connecting to an L2GW/L3GW (for example, in Figure 1-1075, IPU1, IPU2, and IPU3 directly connect to L2GW/L3GW1), these IPUs are used as outbound interfaces in the MAC forwarding entries on the L2GW/L3GW. For IPUs connecting to the other L2GW/L3GW (for example, IPU4 and IPU5 connect to L2GW/L3GW2 in Figure 1-1075), the MAC forwarding entries use the VTEP address of the other L2GW/L3GW (L2GW/L3GW2) as the next hop and carry the L2 VNI used for Layer 2 forwarding.
In MAC forwarding entries on a DC gateway, the destination MAC address is the IPU MAC address, and the next hop is the L2GW/L3GW VTEP address. These MAC forwarding entries also store the L2 VNI information of the corresponding BDs.
To forward incoming traffic only at Layer 2, you are advised to configure devices to advertise only ARP (ND) routes to each other. In this way, the DC gateway and L2GW/L3GW do not generate IP prefix routes based on IP addresses. If the devices are configured to advertise IRB (IRBv6) routes to each other, enable the IRB asymmetric mode on devices that receive routes.
Figure 1-1078 MAC forwarding entries on the DC gateway and L2GW/L3GW
After static VPN routes are configured on the L2GW/L3GW, they are imported into the BGP EVPN routing table and then sent as IP prefix routes to the DC gateway through the BGP EVPN peer relationship.
There are multiple links and static routes between the L2GW/L3GW and VNF. To implement load balancing, you need to enable the Add-Path function when configuring static routes to be imported to the BGP EVPN routing table.
By default, the next hop address of an IP prefix route received by the DC gateway is the IP address of the L2GW/L3GW, and the route recurses to a VXLAN tunnel. In this case, incoming traffic is forwarded at Layer 3. To forward incoming traffic at Layer 2, a routing policy must be configured on the L2GW/L3GW to add the Gateway IP attribute to the static routes destined for the DC gateway. Gateway IP addresses are the IP addresses of IPU interfaces. After receiving an IP prefix route carrying the Gateway IP attribute, the DC gateway does not recurse the route to a VXLAN tunnel. Instead, it performs IP recursion. Finally, the destination address of a route forwarding entry on the DC gateway is the IP address of the VNF, the next hop is the IP address of an IPU interface, and the outbound interface is the VBDIF interface corresponding to the network segment on which the IPU resides. If traffic needs to be sent to the VNF, the forwarding entry can be used to find the corresponding VBDIF interface, which then can be used to find the corresponding ARP entry and MAC entry for Layer 2 forwarding.
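The recursion decision triggered by the Gateway IP attribute can be sketched as follows. The subnet handling is deliberately simplified, and the route fields, interface name, and addresses are hypothetical; the point is only that a route carrying a gateway IP address is resolved through the corresponding VBDIF interface rather than recursed to a VXLAN tunnel.

```python
def resolve_ip_prefix_route(route, vbdif_by_subnet):
    """Resolve a received IP prefix route: with a Gateway IP attribute, perform
    IP recursion to the IPU address; without it, recurse to the VXLAN tunnel."""
    gw_ip = route.get("gateway_ip")
    if gw_ip is None:
        # Default behaviour: next hop is the L2GW/L3GW VTEP, Layer 3 over VXLAN.
        return {"mode": "layer3", "tunnel_dst": route["next_hop"]}
    subnet = gw_ip.rsplit(".", 1)[0] + ".0/24"        # simplified /24 subnet match
    return {"mode": "layer2", "next_hop": gw_ip,
            "out_if": vbdif_by_subnet[subnet]}

vbdif_by_subnet = {"172.16.1.0/24": "Vbdif10"}        # hypothetical VBDIF mapping
route = {"prefix": "10.1.1.1/32", "next_hop": "3.3.3.3", "gateway_ip": "172.16.1.10"}
print(resolve_ip_prefix_route(route, vbdif_by_subnet))
```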
Figure 1-1079 Forwarding entries on the DC gateway and L2GW/L3GW
To establish a VPN BGP peer relationship with the VNF, the DC gateway needs to advertise its loopback address to the L2GW/L3GW. In addition, because the DC gateway uses the anycast VTEP address for the L2GW/L3GW, the VNF1-to-DCGW1 loopback protocol packets may be sent to DCGW2. Therefore, the DC gateway needs to advertise its loopback address to the other DC gateway. Finally, each L2GW/L3GW has a forwarding entry for the VPN route to the loopback addresses of DC gateways, and each DC gateway has a forwarding entry for the VPN route to the loopback address of the other DC gateway. After the VNF and DC gateways establish BGP peer relationships, the VNF can send UE routes to the DC gateways, and the next hops of these routes are the VNF IP address.
Figure 1-1080 Forwarding entries on the DC gateway and L2GW/L3GW
The DCN does not need to be aware of external routes. Therefore, a route policy must be configured on the DC gateway, so that the DC gateway can send default routes and loopback routes to the L2GW/L3GW.
Figure 1-1081 Forwarding entries on the DC gateway and L2GW/L3GW
As the border gateway of the DCN, the DC gateway can exchange Internet routes with external PEs, such as routes to server IP addresses on the Internet.
Figure 1-1082 Forwarding entries on the DC gateway and L2GW/L3GW
To implement load balancing during traffic transmission, load balancing and Add-Path can be configured on the DC gateway and L2GW/L3GW. This balances both north-south and east-west traffic.
North-south traffic balancing: Take DCGW1 in Figure 1-1075 as an example. DCGW1 can receive EVPN routes to VNF2 from L2GW/L3GW1 and L2GW/L3GW2. By default, after load balancing is configured, DCGW1 sends half of traffic destined for VNF2 to L2GW/L3GW1 and half of traffic destined for VNF2 to L2GW/L3GW2. However, L2GW/L3GW1 has only one link to VNF2, while L2GW/L3GW2 has two links to VNF2. As a result, the traffic is not evenly balanced. To address this issue, the Add-Path function must be configured on the L2GW/L3GWs. After Add-Path is configured, L2GW/L3GW2 advertises two routes with the same destination address to DCGW1 to implement load balancing.
East-west traffic balancing: Take L2GW/L3GW1 in Figure 1-1075 as an example. Because Add-Path is configured on L2GW/L3GW2, L2GW/L3GW1 receives two EVPN routes from L2GW/L3GW2. In addition, L2GW/L3GW1 has a static route with the next hop being IPU3. The destination address of these three routes is the IP address of VNF2. To implement load balancing, load balancing among static and EVPN routes must be configured.
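The effect of Add-Path on load balancing can be illustrated with a toy hash-based path selection. The path list and flow identifiers below are hypothetical; without the second and third entries (which are only advertised when Add-Path is enabled on L2GW/L3GW2), traffic toward VNF2 would be split 1:1 between the two L2GWs/L3GWs rather than evenly across the three physical links.

```python
import hashlib

# Paths to VNF2 as seen by DCGW1 once Add-Path is enabled on L2GW/L3GW2.
paths_to_vnf2 = [
    ("L2GW/L3GW1", "link-to-IPU3"),
    ("L2GW/L3GW2", "link-to-IPU4"),    # advertised only because of Add-Path
    ("L2GW/L3GW2", "link-to-IPU5"),
]

def pick_path(flow_id, paths):
    """Stand-in for a per-flow hash used to spread traffic over equal-cost paths."""
    digest = hashlib.md5(flow_id.encode()).digest()
    return paths[digest[0] % len(paths)]

for flow in ("flow-a", "flow-b", "flow-c", "flow-d"):
    print(flow, "->", pick_path(flow, paths_to_vnf2))
```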
Traffic Forwarding Process
Upon receipt of UE traffic, the base station encapsulates the packets and redirects them to a GPRS tunneling protocol (GTP) tunnel whose destination address is the VNF IP address. The encapsulated packets reach the DC gateway through IP forwarding.
Upon receipt, the DC gateway searches its virtual routing and forwarding (VRF) table and finds a matching forwarding entry whose next hop is an IPU IP address and outbound interface is a VBDIF interface. Therefore, the received packets match the network segment on which the VBDIF interface resides. The DC gateway searches for the desired ARP entry on the network segment, finds a matching MAC forwarding entry based on the ARP entry, and recurses the route to a VXLAN tunnel based on the MAC forwarding entry. Then, the packets are forwarded to the L2GW/L3GW over a VXLAN tunnel.
Upon receipt, the L2GW/L3GW finds the target BD based on the L2 VNI, searches for a matching MAC forwarding entry in the BD, and then forwards the packets to the VNF based on the MAC forwarding entry.
After the packets reach the VNF, the VNF removes their GTP tunnel header, searches the routing table based on their destination IP addresses, and forwards them to the L2GW/L3GW through the VNF's default gateway.
After the packets reach the L2GW/L3GW, the L2GW/L3GW searches their VRF table for a matching forwarding entry. Over the default route advertised by the DC gateway to the L2GW/L3GW, the packets are encapsulated with the L3 VNI and then forwarded to the DC gateway through the VXLAN tunnel.
Upon receipt, the DC gateway searches the corresponding VRF table for a matching forwarding entry based on the L3 VNI and forwards these packets to the Internet.
A device on the Internet sends response traffic to a UE. The destination address of the response traffic is the destination address of the UE route. The route is advertised by the VNF to the DC gateway through the VPN BGP peer relationship, and the DC gateway in turn advertises the route to the Internet. Therefore, the response traffic must first be forwarded to the VNF.
Upon receipt, the DC gateway searches the routing table for a forwarding entry that matches the UE route. The route is advertised over the VPN BGP peer relationship between the DC gateway and VNF and recurses to one or more VBDIF interfaces. Traffic is load-balanced among these VBDIF interfaces. A matching MAC forwarding entry is found based on the ARP information on these VBDIF interfaces. Based on the MAC forwarding entry, the response packets are encapsulated with the L2 VNI and then forwarded to the L2GW/L3GW over a VXLAN tunnel.
Upon receipt, the L2GW/L3GW finds the target BD based on the L2 VNI, searches for a matching MAC forwarding entry in the BD, obtains the outbound interface information from the MAC forwarding entry, and forwards these packets to the VNF.
Upon receipt, the VNF processes them and finds the base station corresponding to the destination address of the UE. The VNF then encapsulates tunnel information into these packets (with the base station as the destination) and forwards these packets to the L2GW/L3GW through the default gateway.
Upon receipt, the L2GW/L3GW searches its VRF table for the default route advertised by the DC gateway to the L2GW/L3GW. Then, the L2GW/L3GW encapsulates these packets with the L3 VNI and forwards them to the DC gateway over a VXLAN tunnel.
Upon receipt, the DC gateway searches its VRF table for the default (or specific) route based on the L3 VNI and forwards these packets to the destination base station. The base station then decapsulates these packets and sends them to the target UE.
VNF1 needs to send a received packet to VNF2 for processing. VNF1 re-encapsulates the packet by using VNF2's address as the destination address and sends the packet to the L2GW/L3GW over the default route.
Upon receipt, the L2GW/L3GW searches its VRF table and finds that multiple load-balancing forwarding entries exist. Some entries use the IPU as the outbound interface, and some entries use the L2GW/L3GW as the next hop.
If the path to the other L2GW/L3GW (L2GW/L3GW2) is selected preferentially, the packet is encapsulated with the L2 VNI and forwarded to L2GW/L3GW2 over a VXLAN tunnel. L2GW/L3GW2 finds the target BD based on the L2 VNI and the destination MAC address, and forwards the packet to VNF2.
Upon receipt, VNF2 processes the packet and forwards it to the Internet server. The subsequent forwarding process is the same as the process for forwarding north-south traffic.
NFVI Distributed Gateway (Symmetric Mode)
Huawei's network functions virtualization infrastructure (NFVI) telco cloud solution incorporates Data Center Interconnect (DCI) and data center network (DCN) solutions. A large volume of UE traffic enters the DCN and accesses the vUGW and vMSE on the DCN. After being processed by the vUGW and vMSE, the UE traffic (IPv4 or IPv6) is forwarded over the DCN to destination devices on the Internet. Likewise, return traffic sent from the destination devices to UEs also undergoes this process. To meet the preceding requirements and ensure that the UE traffic is load-balanced within the DCN, you need to deploy the NFVI distributed gateway function on DCN devices.
The vUGW is a unified packet gateway developed based on Huawei's CloudEdge solution. It can be used for 3rd Generation Partnership Project (3GPP) access in general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), and Long Term Evolution (LTE) modes. The vUGW can function as a gateway GPRS support node (GGSN), serving gateway (S-GW), or packet data network gateway (P-GW) to meet carriers' various networking requirements in different phases and operational scenarios.
The vMSE is developed based on Huawei's multi-service engine (MSE). The carrier's network has multiple functional boxes deployed, such as the firewall box, video acceleration box, header enrichment box, and URL filtering box. All functions are added through patch installation. As time goes by, the network becomes increasingly slow, complicating service rollout and maintenance. To solve this problem, the vMSE integrates the functions of these boxes and manages these functions in a unified manner, providing value-added services for the data services initiated by users.
Networking
Figure 1-1086 and Figure 1-1087 show NFVI distributed gateway networking. The DC gateways are the DCN's border gateways, which exchange Internet routes with the external network through PEs. L2GW/L3GW1 and L2GW/L3GW2 connect to virtualized network functions (VNFs). VNF1 and VNF2 can be deployed as virtualized NEs to respectively provide vUGW and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through interface processing units (IPUs).
The VXLAN active-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. Both DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between L2GW/L3GW1 and L2GW/L3GW2.
In the NFVI distributed gateway scenario, the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M functions as either a DCGW or an L2GW/L3GW. However, if the NetEngine 8100 M, NetEngine 8000E M, NetEngine 8000 M is used as an L2GW/L3GW, east-west traffic cannot be balanced.
Each L2GW/L3GW in Figure 1-1086 represents two devices on the live network. Anycast VXLAN active-active is configured on the devices so that they function as one, improving network reliability.
Function Deployment
Establish VPN BGP peer relationships between VNFs and DC gateways, so that VNFs can advertise UE routes to DC gateways.
Configure VPN static routes on L2GW/L3GW1 and L2GW/L3GW2, or configure L2GWs/L3GWs to establish VPN IGP neighbor relationships with VNFs to obtain VNF routes with next hop addresses being IPU addresses.
Establish BGP EVPN peer relationships between any two of the DC gateways and L2GWs/L3GWs. L2GWs/L3GWs can then advertise VNF routes to DC gateways and other L2GWs/L3GWs through BGP EVPN peer relationships. DC gateways can advertise the local loopback route and default route as well as obtained UE routes to L2GWs/L3GWs through BGP EVPN peer relationships.
Traffic forwarded between the UE and Internet through VNFs is called north-south traffic, and traffic forwarded between VNF1 and VNF2 is called east-west traffic. To balance both types of traffic, you need to configure load balancing on DC gateways and L2GWs/L3GWs.
Generation of Forwarding Entries
Asymmetric mode: All traffic is forwarded at Layer 2 from DC gateways to VNFs after entering the DCN, regardless of whether it is from UEs to the Internet or vice versa. However, after traffic leaves the DCN, it is forwarded at Layer 3 from VNFs to DC gateways. This prevents traffic loops between DC gateways and L2GWs/L3GWs. On the network shown in Figure 1-1087, IPUs connect to multiple L2GWs/L3GWs. If Layer 3 forwarding is used between DC gateways and VNFs, some traffic forwarded by an L2GW/L3GW to the VNF will be forwarded to another L2GW/L3GW due to load balancing. For example, L2GW/L3GW2 forwards some of the traffic to L2GW/L3GW1 and vice versa. As a result, a traffic loop occurs. If Layer 2 forwarding is used, the L2GW/L3GW does not forward the Layer 2 traffic received from another L2GW/L3GW back, preventing traffic loops.
Symmetric mode: After traffic enters the DCN, the traffic is forwarded from DC gateways to the VNF at Layer 3. The traffic from the VNF to DC gateways and then out of the DCN is also forwarded at Layer 3. On the network shown in Figure 1-1087, IPUs connect to multiple L2GWs/L3GWs. Layer 3 forwarding is used between DC gateways and VNFs, and some traffic forwarded by an L2GW/L3GW to the VNF will be forwarded over a VXLAN tunnel to another L2GW/L3GW due to load balancing. After receiving VXLAN traffic, an L2GW/L3GW searches for matching routes. If these routes work in hybrid load-balancing mode, the L2GW/L3GW preferentially selects the access-side outbound interface to forward the traffic, preventing loops.
BDs are deployed on each L2GW/L3GW and bound to links connecting to the IPU interfaces on the associated network segments. Then, VBDIF interfaces are configured as the gateways of these IPU interfaces. The number of BDs is the same as that of network segments to which the IPU interfaces belong. A VPN static route is configured on each L2GW/L3GW or a VPN IGP neighbor relationship is established between each L2GW/L3GW and the VNF, so that the L2GW/L3GW can generate a route forwarding entry with the destination address being the VNF address, next hop being the IPU address, and outbound interface being the associated VBDIF interface.
Figure 1-1088 Route forwarding entry for traffic from an L2GW/L3GW to the VNF
After VPN static or IGP routes are configured on the L2GW/L3GW, they are imported into the BGP EVPN routing table and then sent as IP prefix routes to the DC gateway through the BGP EVPN peer relationship.
There are multiple links and routes between the L2GW/L3GW and VNF. To implement load balancing, you need to enable the Add-Path function when configuring routes to be imported into the BGP EVPN routing table.
The next hop address of an IP prefix route received by the DC gateway is the IP address of the L2GW/L3GW, and the route recurses to a VXLAN tunnel. In this case, incoming traffic is forwarded at Layer 3.
Figure 1-1089 Forwarding entries on the DC gateway and L2GW/L3GW
To establish a VPN BGP peer relationship with the VNF, the DC gateway needs to advertise its loopback address to the L2GW/L3GW. In addition, because the DC gateway uses the anycast VTEP address for the L2GW/L3GW, the VNF1-to-DCGW1 loopback protocol packets may be sent to DCGW2. Therefore, the DC gateway needs to advertise its loopback address to the other DC gateway. Finally, each L2GW/L3GW has a forwarding entry for the VPN route to the loopback addresses of DC gateways, and each DC gateway has a forwarding entry for the VPN route to the loopback address of the other DC gateway. After the VNF and DC gateways establish BGP peer relationships, the VNF can send UE routes to the DC gateways, and the next hops of these routes are the VNF IP address.
Figure 1-1090 Forwarding entries on the DC gateway and L2GW/L3GW
In symmetric mode, the L2GW/L3GW needs to learn UE routes. Therefore, a route-policy needs to be configured on the DC gateway to enable the DC gateway to advertise UE routes to the L2GW/L3GW after setting the original next hops of these routes as the gateway address. Except UE routes, the DCN does not need to be aware of other external routes. Therefore, another route-policy needs to be configured on the DC gateway to ensure that the DC gateway advertises only loopback routes and default routes to the L2GW/L3GW.
Figure 1-1091 Forwarding entries on the DC gateway and L2GW/L3GW
As the border gateway of the DCN, the DC gateway can exchange Internet routes with external PEs, such as routes to server IP addresses on the Internet.
Figure 1-1092 Forwarding entries on the DC gateway and L2GW/L3GW
To implement load balancing during traffic transmission, load balancing and Add-Path can be configured on the DC gateway and L2GW/L3GW. This balances both north-south and east-west traffic.
North-south traffic balancing: Take DCGW1 in Figure 1-1086 as an example. DCGW1 can receive EVPN routes to VNF2 from L2GW/L3GW1 and L2GW/L3GW2. By default, after load balancing is configured, DCGW1 sends half of traffic destined for VNF2 to L2GW/L3GW1 and half of traffic destined for VNF2 to L2GW/L3GW2. However, L2GW/L3GW1 has only one link to VNF2, while L2GW/L3GW2 has two links to VNF2. As a result, the traffic is not evenly balanced. To address this issue, the Add-Path function must be configured on the L2GW/L3GWs. After Add-Path is configured, L2GW/L3GW2 advertises two routes with the same destination address to DCGW1 to implement load balancing.
East-west traffic balancing: Take L2GW/L3GW1 in Figure 1-1086 as an example. Because Add-Path is configured on L2GW/L3GW2, L2GW/L3GW1 receives two EVPN routes from L2GW/L3GW2. In addition, L2GW/L3GW1 has a static route or IGP route with the next hop being IPU3. The destination address of these three routes is the IP address of VNF2. To implement load balancing, hybrid load balancing among EVPN routes and routes of other routing protocols needs to be deployed.
Traffic Forwarding Process
Upon receipt of UE traffic, the base station encapsulates the packets and redirects them to a GPRS tunneling protocol (GTP) tunnel whose destination address is the VNF IP address. The encapsulated packets reach the DC gateway through IP forwarding.
After receiving these packets, the DC gateway searches the VRF table and finds that the next hop of the forwarding entry corresponding to the VNF address is an IPU address and the outbound interface is a VXLAN tunnel. The DC gateway then performs VXLAN encapsulation and forwards the packets to the L2GW/L3GW at Layer 3.
Upon receipt of these packets, the L2GW/L3GW finds the corresponding VPN instance based on the L3 VNI, searches for a matching route in the VPN instance's routing table based on the VNF address, and forwards the packets to the VNF.
After the packets reach the VNF, the VNF removes their GTP tunnel header, searches the routing table based on their destination IP addresses, and forwards them to the L2GW/L3GW through the VNF's default gateway.
After the packets reach the L2GW/L3GW, the L2GW/L3GW searches their VRF table for a matching forwarding entry. Over the default route advertised by the DC gateway to the L2GW/L3GW, the packets are encapsulated with the L3 VNI and then forwarded to the DC gateway through the VXLAN tunnel.
Upon receipt, the DC gateway searches the corresponding VRF table for a matching forwarding entry based on the L3 VNI and forwards these packets to the Internet.
A device on the Internet sends response traffic to a UE. The destination address of the response traffic is the destination address of the UE route. The route is advertised by the VNF to the DC gateway through the VPN BGP peer relationship, and the DC gateway in turn advertises the route to the Internet. Therefore, the response traffic must first be forwarded to the VNF.
After the response traffic reaches the DC gateway, the DC gateway searches the routing table for forwarding entries corresponding to UE routes. These routes are learned by the DC gateway from the VNF over the VPN BGP peer relationship and finally recurse to VXLAN tunnels. The response packets are then encapsulated into VXLAN packets and forwarded to the L2GW/L3GW at Layer 3.
After these packets reach the L2GW/L3GW, the L2GW/L3GW finds the corresponding VPN instance based on the L3 VNI, searches for a route corresponding to the UE address in the VPN instance's routing table, and forwards these packets to the VNF.
Upon receipt, the VNF processes them and finds the base station corresponding to the destination address of the UE. The VNF then encapsulates tunnel information into these packets (with the base station as the destination) and forwards these packets to the L2GW/L3GW through the default gateway.
Upon receipt, the L2GW/L3GW searches its VRF table for the default route advertised by the DC gateway to the L2GW/L3GW. Then, the L2GW/L3GW encapsulates these packets with the L3 VNI and forwards them to the DC gateway over a VXLAN tunnel.
Upon receipt, the DC gateway searches its VRF table for the default (or specific) route based on the L3 VNI and forwards these packets to the destination base station. The base station then decapsulates these packets and sends them to the target UE.
VNF1 needs to send a received packet to VNF2 for processing. VNF1 re-encapsulates the packet by using VNF2's address as the destination address and sends the packet to L2GW/L3GW1 over the default route.
Upon receipt, L2GW/L3GW1 searches its VRF table and finds that multiple load-balancing routes exist. Some routes use the IPU as the outbound interface, and some routes use L2GW/L3GW2 as the next hop.
- If these routes work in hybrid load-balancing mode (see the sketch after this process), L2GW/L3GW1 preferentially selects only the routes with the outbound interfaces being IPUs and steers packets to VNF2 to prevent loops. If these routes do not work in hybrid load-balancing mode, L2GW/L3GW1 forwards packets in load-balancing mode. Packets are encapsulated into VXLAN packets before they are sent to L2GW/L3GW2 at Layer 2. After these packets reach L2GW/L3GW2, L2GW/L3GW2 finds the corresponding BD based on the L2 VNI, then finds the destination MAC address, and finally forwards these packets to VNF2.
Upon receipt, VNF2 processes the packet and forwards it to the Internet server. The subsequent forwarding process is the same as the process for forwarding north-south traffic.
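The hybrid load-balancing preference described above can be sketched as a simple route filter. The route entries and interface names are hypothetical; the sketch only shows that, when hybrid load balancing applies, routes whose outbound interfaces face the IPUs are preferred over EVPN routes that would send the traffic back over VXLAN.

```python
# Hypothetical routes to VNF2 as seen by L2GW/L3GW1: one local route through an
# IPU-facing VBDIF interface and two EVPN routes whose next hop is L2GW/L3GW2.
routes_to_vnf2 = [
    {"source": "static", "out_if": "Vbdif30", "next_hop": "172.16.3.10"},
    {"source": "evpn",   "out_if": "vxlan",   "next_hop": "L2GW/L3GW2"},
    {"source": "evpn",   "out_if": "vxlan",   "next_hop": "L2GW/L3GW2"},
]

def select_routes(routes, hybrid_load_balancing):
    if hybrid_load_balancing:
        local = [r for r in routes if r["out_if"] != "vxlan"]
        if local:
            return local             # prefer access-side outbound interfaces
    return routes                    # otherwise, load-balance over all routes

print(select_routes(routes_to_vnf2, hybrid_load_balancing=True))
print(select_routes(routes_to_vnf2, hybrid_load_balancing=False))
```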
Application Scenarios for VXLAN
Application for Communication Between Terminal Users on a VXLAN
Service Description
Currently, data centers are expanding on a large scale for enterprises and carriers, with increasing deployment of virtualization and cloud computing. In addition, to accommodate more services while reducing maintenance costs, data centers are employing large Layer 2 and virtualization technologies.
As server virtualization is implemented in the physical network infrastructure for data centers, VXLAN, an NVO3 technology, has adapted to the trend by providing virtualization solutions for data centers.
Networking Description
On the network shown in Figure 1-1096, an enterprise has VMs deployed in different data centers. Different network segments run different services. The VMs running the same service or different services in different data centers need to communicate with each other. For example, VMs of the financial department residing on the same network segment need to communicate, and VMs of the financial and engineering departments residing on different network segments also need to communicate.
Feature Deployment
- Deploy Device 1 and Device 2 as Layer 2 VXLAN gateways and establish a VXLAN tunnel between Device 1 and Device 2 to allow communication between terminal users on the same network segment.
- Deploy Device 3 as a Layer 3 VXLAN gateway and establish a VXLAN tunnel between Device 1 and Device 3 and between Device 2 and Device 3 to allow communication between terminal users on different network segments.
Configure VXLAN on devices to trigger VXLAN tunnel establishment and dynamic learning of ARP and MAC address entries. After that, terminal users on the same network segment and on different network segments can communicate through the Layer 2 and Layer 3 VXLAN gateways based on ARP and routing entries.
Application for Communication Between Terminal Users on a VXLAN and Legacy Network
Service Description
Enterprise and carrier data centers are currently expanding on a large scale, with increasing deployment of virtualization and cloud computing. In addition, to accommodate more services while reducing maintenance costs, data centers are employing large Layer 2 and virtualization technologies.
As server virtualization is implemented in the physical network infrastructure for data centers, VXLAN, an NVO3 technology, has adapted to the trend by providing virtualization solutions for data centers, allowing intra-VXLAN communication and communication between VXLANs and legacy networks.
Networking Description
On the network shown in Figure 1-1097, an enterprise has VMs deployed for the finance and engineering departments and a legacy network for the human resource department. The finance and engineering departments need to communicate with the human resource department.
Feature Deployment
As shown in Figure 1-1097:
Deploy Device 2 as a Layer 2 VXLAN gateway and Device 3 as a Layer 3 VXLAN gateway. The VXLAN gateways are the VXLAN's edge devices connecting to legacy networks and are responsible for VXLAN encapsulation and decapsulation. Establish a VXLAN tunnel between Device 2 and Device 3 for VXLAN packet transmission.
- Device 1 receives a packet destined for VM1 from the legacy network and sends it to Device 3 through the IP network.
- Upon receipt, Device 3 parses the destination IP address, and searches the routing table for a next hop address. Then, Device 3 searches the ARP or ND table based on the next hop address to determine the destination MAC address, VXLAN tunnel's outbound interface, and VNI.
- Device 3 encapsulates the VXLAN tunnel's outbound interface and VNI into the packet and sends the VXLAN packet to Device 2.
- Upon receipt, Device 2 decapsulates the VXLAN packet, finds the outbound interface based on the destination MAC address, and forwards the packet to VM1.
Application in VM Migration Scenarios
Service Description
Enterprises configure server virtualization on DCNs to consolidate IT resources, improve resource use efficiency, and reduce network costs. With the wide deployment of server virtualization, an increasing number of VMs are running on physical servers, and many applications are running in virtual environments, which bring great challenges to virtual networks.
Network Description
On the network shown in Figure 1-1098, an enterprise has two servers in the DC: the engineering and finance departments are deployed on Server1, and the marketing department on Server2.
The computing space on Server1 is insufficient, but Server2 is not fully used. The network administrator wants to migrate the engineering department to Server2 without affecting services.
This scenario applies to IPv4 over IPv4, IPv6 over IPv4, IPv4 over IPv6, and IPv6 over IPv6 networks. Figure 1-1098 shows an IPv4 over IPv4 network.
Feature Deployment
To ensure uninterrupted services during the migration of the engineering department, the IP and MAC addresses of the engineering department must remain unchanged. This requires that the two servers belong to the same Layer 2 network. If conventional migration methods are used, the administrator may have to purchase additional physical devices to distribute traffic and reconfigure VLANs. These methods may also result in network loops and additional system and management costs.
VXLAN can be used to migrate the engineering department to Server2. VXLAN is a network virtualization technology that uses MAC-in-UDP encapsulation. This technology can establish a large Layer 2 network connecting all terminals with reachable IP routes, as long as the physical network supports IP forwarding.
The engineering department is migrated to Server2 through the VXLAN tunnel, and online users are unaware of the migration. After the engineering department is migrated from Server1 to Server2, terminals send gratuitous ARP or RARP packets so that all gateways update the MAC address and ARP entries of the original VMs to those of the migrated VMs.
Terminology for VXLAN
Terms
Term | Description
---|---
NVO3 | Network Virtualization over L3. A network virtualization technology implemented at Layer 3 for traffic isolation and IP independence between multi-tenants of data centers so independent Layer 2 subnets can be provided for tenants. In addition, NVO3 supports VM deployment and migration on Layer 2 subnets of tenants.
VXLAN | Virtual extensible local area network. An NVO3 network virtualization technology that encapsulates data packets sent from VMs into UDP packets and encapsulates IP and MAC addresses used on the physical network in the outer headers before sending the packets over an IP network. The egress tunnel endpoint then decapsulates the packets and sends the packets to the destination VM.
Acronyms and Abbreviations
Acronym and Abbreviation | Full Name
---|---
BD | bridge domain
BUM | broadcast, unknown unicast, and multicast
VNI | VXLAN network identifier
VTEP | VXLAN tunnel endpoint
VXLAN Configuration
This section describes how to configure VXLAN on devices, without any controller.
Overview of VXLAN
VXLAN allows a virtual network to provide access services to a large number of tenants. In addition, tenants are able to plan their own virtual networks, not limited by the physical network IP addresses or broadcast domains. This greatly simplifies network management.
Background
VM scale is limited by network specifications.
On a large Layer 2 network, data packets are forwarded at Layer 2 based on MAC entries. However, the MAC table capacity is limited, which subsequently limits the number of VMs.
Network isolation capabilities are limited.
Most networks currently use VLANs to implement network isolation. However, the deployment of VLANs on large-scale virtualized networks has the following limitations:
- The VLAN tag field defined in IEEE 802.1Q has only 12 bits and can support only a maximum of 4094 VLANs, which cannot meet user identification requirements of large Layer 2 networks.
- VLANs on legacy Layer 2 networks cannot adapt to dynamic network adjustment.
VM migration scope is limited by the network architecture.
A running VM may need to be migrated to a new server due to resource issues on the original server (for example, migration may be required if the CPU usage is too high, or memory resources are inadequate). To ensure service continuity during VM migration, the IP address of the VM must remain unchanged. Therefore, the service network must be a Layer 2 network and provide multipathing redundancy backup and reliability.
VXLAN addresses the preceding problems on large Layer 2 networks.
Eliminates VM scale limitations imposed by network specifications.
VXLAN encapsulates data packets sent from VMs into UDP packets and encapsulates IP and MAC addresses used on the physical network into the outer headers. As a result, the network is aware of only the encapsulated parameters and not the inner data. This implementation greatly reduces the MAC address specification requirements of large Layer 2 networks.
Provides greater network isolation capabilities.
VXLAN uses a 24-bit network segment ID, called a VXLAN network identifier (VNI), to identify users. This VNI is similar to a VLAN ID, but supports a maximum of 16M VXLAN segments.
Eliminates VM migration scope limitations imposed by network architecture.
VXLAN uses MAC-in-UDP encapsulation to extend Layer 2 networks. It encapsulates Ethernet packets into IP packets so that they can be transmitted over routed networks, without the underlay network having to be aware of VMs' MAC addresses. Because there is no limitation on the Layer 3 network architecture, Layer 3 networks are scalable and have strong automatic fault rectification and load balancing capabilities. This allows for VM migration irrespective of the network architecture.
Related Concepts
VXLAN allows a virtual network to provide access services to a large number of tenants. In addition, tenants are able to plan their own virtual networks, not limited by the physical network IP addresses or broadcast domains. This greatly simplifies network management. Table 1-474 describes VXLAN concepts.
Concept | Description
---|---
Underlay and overlay networks | VXLAN allows virtual Layer 2 or Layer 3 networks (overlay networks) to be built over existing physical networks (underlay networks). Overlay networks use encapsulation technologies to transmit tenant packets between sites over Layer 3 forwarding paths provided by underlay networks. Tenants are aware of only overlay networks.
Network virtualization edge (NVE) | A network entity that is deployed at the network edge and implements network virtualization functions. NOTE: vSwitches on devices and servers can function as NVEs.
VXLAN tunnel endpoint (VTEP) | A VXLAN tunnel endpoint that encapsulates and decapsulates VXLAN packets. It is represented by an NVE. A VTEP connects to a physical network and is assigned a physical network IP address. This IP address is irrelevant to virtual networks. In VXLAN packets, the source IP address is the local node's VTEP address, and the destination IP address is the remote node's VTEP address. This pair of VTEP addresses corresponds to a VXLAN tunnel.
VXLAN network identifier (VNI) | A VXLAN segment identifier similar to a VLAN ID. VMs on different VXLAN segments cannot communicate directly at Layer 2. A VNI identifies only one tenant. Even if multiple terminal users belong to the same VNI, they are considered one tenant. A VNI consists of 24 bits and supports a maximum of 16M tenants. A VNI can be a Layer 2 or Layer 3 VNI.
Bridge domain (BD) | A Layer 2 broadcast domain through which VXLAN data packets are forwarded. On a VXLAN network, a VNI can be mapped to a BD so that the BD can function as a VXLAN network entity to forward data packets.
VBDIF interface | A Layer 3 logical interface created for a BD. Configuring IP addresses for VBDIF interfaces allows communication between VXLANs on different network segments and between VXLANs and non-VXLANs and implements Layer 2 network access to a Layer 3 network.
Gateway | A device that ensures communication between VXLANs identified by different VNIs and between VXLANs and non-VXLANs (similar to a VLAN). A VXLAN gateway can be a Layer 2 or Layer 3 gateway.
NVE Deployment Mode
On VXLANs, VTEPs are represented by NVEs, and therefore VXLAN tunnels can be established after NVEs are deployed. The following NVE deployment modes are available, depending on where NVEs are deployed.
Hardware mode: On the network shown in Figure 1-1100, all NVEs are deployed on NVE-capable devices, which perform VXLAN encapsulation and decapsulation.
Software mode: On the network shown in Figure 1-1101, all NVEs are deployed on vSwitches, which perform VXLAN encapsulation and decapsulation.
Hybrid mode: On the network shown in Figure 1-1102, some NVEs are deployed on vSwitches, and others on NVE-capable devices. Both vSwitches and NVE-capable devices may perform VXLAN encapsulation and decapsulation.
This document describes how to configure VXLAN when NVEs are deployed on NVE-capable devices. If software mode is used, devices only need to transparently transmit VXLAN packets.
Configuration Precautions for VXLAN
Feature Requirements
Feature Requirements | Series | Models
---|---|---
A VXLAN tunnel does not support MTU configuration, and packets cannot be fragmented before entering the VXLAN tunnel. Although packets entering a VXLAN tunnel can be fragmented based on the MTU of the outbound interface, the outbound VXLAN tunnel node can reassemble only a few packets. Therefore, you need to properly plan the MTU of the network-side interface to prevent packets from being fragmented after entering the VXLAN tunnel. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
Restrictions on the EVPN control plane of VXLAN networks are as follows: 1. BDs, VNIs, and EVPNs support only 1:1 binding. 2. A BD must be bound to a VNI before being bound to an EVPN. 3. VNI peer statistics collection and VNI statistics collection use the same statistical resource and cannot be configured together. Traffic statistics by VNI+peer support only the split-horizon-mode of VNIs, and do not support common VNIs. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
Restrictions for VXLAN dual-active access are as follows: 1. Currently, only Eth-Trunk interfaces are supported for active-active reliability. 2. Active-active reliability does not support shutdown of sub-interfaces (upstream Eth-Trunk traffic is not switched, which interrupts traffic; downstream local bias pruning is based on the main interface, and the process is not switched). 3. The shutdown bd scenario is not supported. 4. The configurations of active-active interfaces must be the same. 5. Dynamic ESIs are not supported. 6. After MAC FRR is enabled, MAC addresses are deleted because MAC addresses need to be learned in FRR format. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
When a VXLAN tunnel is bound to a VNI, the VNI is bound to a BD, and a VBDIF interface is created to function as a Layer 3 gateway, the VXLAN tunnel does not support the multicast function on the VBDIF interface. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
VNI-based HQoS supports only level-3 scheduling (GQ, SQ, and FQ), and does not support DP and VI level scheduling. Configuring interface-based HQoS is recommended. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
Only two VXLANv4 fragments can be reassembled on the same board. Inter-board reassembly is not supported. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
VXLAN tunnels with the same VNI do not support both IPv4 and IPv6. If IPv4 and IPv6 VXLAN tunnels coexist between two devices: 1. Packets are preferentially transmitted over the IPv4 VXLAN tunnel. 2. Packet loss or excess packets may occur during tunnel switching. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
IPv6 VXLAN tunnels do not support packet redundancy avoidance during BUM traffic switchback. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
After a BD accesses a VSI or VXLAN, Layer 2 sub-interfaces cannot be bound to the BD. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
VXLAN does not meet DHCP snooping requirements. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
After a BD accesses a VSI or VXLAN, a VBDIF interface cannot be created. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
If an EVPN instance has been bound to a BD, the binding relationship between the EVPN instance and VNI cannot be modified or deleted. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
If the number of VXLAN tunnels exceeds the upper limit, new tunnels cannot be created, and an alarm is generated to notify the user of the tunnel creation failure cause. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
After a distributed gateway is configured, you need to specify the non-gateway IP address of the local device as the source IP address of ICMP Echo Request packets to be sent when pinging the host address from the gateway. | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
A VNI can be bound to only one service instance (BD/VRF/EVPL). | NetEngine 8000 M | NetEngine 8000 M14/NetEngine 8000 M14K/NetEngine 8000 M4/NetEngine 8000 M8/NetEngine 8000 M8K/NetEngine 8000E M14/NetEngine 8000E M8/NetEngine 8100 M14/NetEngine 8100 M8
Configuring IPv6 VXLAN in Centralized Gateway Mode for Static Tunnel Establishment
IPv6 VXLAN can be deployed in centralized gateway mode so that all inter-subnet traffic is forwarded through Layer 3 gateways, thereby implementing centralized traffic management.
Usage Scenario
To allow intra- and inter-subnet communication between a tenant's VMs located in different geographical locations on an IPv6 network, properly deploy Layer 2 and Layer 3 gateways on the network and establish IPv6 VXLAN tunnels.
- To allow VM1 on Server2 and VM1 on Server3 to communicate, deploy Layer 2 gateways on Device1 and Device2 and establish an IPv6 VXLAN tunnel between Device1 and Device2. This ensures that the VMs on the same network segment can communicate.
- To allow VM1 on Server1 and VM1 on Server3 to communicate, deploy a Layer 3 gateway on Device3 and establish one IPv6 VXLAN tunnel between Device1 and Device3 and another one between Device2 and Device3. This ensures that the VMs on different network segments can communicate.
The VMs and Layer 3 VXLAN gateway can be allocated either IPv4 or IPv6 addresses. This means that either an IPv4 or IPv6 overlay network can be used with IPv6 VXLAN. Figure 1-1103 shows an IPv4 overlay network.
Layer 3 gateways must be deployed on the IPv6 VXLAN if VMs must communicate with VMs on other network segments or with external networks. Layer 3 gateways do not need to be deployed for VMs communicating on the same network segment.
Configuration Task | IPv4 Overlay Network | IPv6 Overlay Network
---|---|---
Configure a Layer 3 gateway on an IPv6 VXLAN. | Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway. | Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.
Configuring a VXLAN Service Access Point
On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. A Layer 2 sub-interface can transmit data packets through a BD after being associated with it.
Context
Traffic Encapsulation Type | Description
---|---
dot1q | This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
untag | This type of sub-interface accepts only packets that do not carry VLAN tags. When setting the encapsulation type to untag for a Layer 2 sub-interface, note the following:
default | This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
qinq | This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.
A service access point needs to be configured on a Layer 2 gateway.
Procedure
- Run system-view
The system view is displayed.
- Run bridge-domain bd-id
A BD is created, and the BD view is displayed.
- (Optional) Run description description
A BD description is configured.
- Run quit
Return to the system view.
- Run interface interface-type interface-number.subnum mode l2
A Layer 2 sub-interface is created, and the sub-interface view is displayed.
Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.
- Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }
A traffic encapsulation type is configured for the Layer 2 sub-interface.
- Run rewrite pop { single | double }
The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.
If the received packets each carry a single VLAN tag, specify single.
If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.
- Run bridge-domain bd-id
The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.
If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.
- Run commit
The configuration is committed.
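The following is a minimal configuration sketch of the preceding procedure. The BD ID (20), sub-interface (GigabitEthernet0/1/1.1), and VLAN ID (10) are hypothetical values used for illustration only; replace them with values from your own network plan.
system-view
bridge-domain 20
quit
interface GigabitEthernet0/1/1.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 20
commit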
Configuring an IPv6 VXLAN Tunnel
VXLAN is a tunneling technology that uses MAC-in-UDP encapsulation to extend large Layer 2 networks. If an underlay network is an IPv6 network, you can configure an IPv6 VXLAN tunnel for a virtual network to access a large number of tenants.
Context
After you configure local and remote VNIs and VTEP IPv6 addresses, an IPv6 VXLAN tunnel is statically created. This configuration is simple, and no protocol configurations are involved. To ensure the proper forwarding of IPv6 VXLAN packets, IPv6 VXLAN tunnels must be configured on VXLAN gateways.
Procedure
- Run system-view
The system view is displayed.
- Run bridge-domain bd-id
The BD view is displayed.
bd-id specified in this command must be the same as bd-id specified in Step 2 in the service access point configuration.
- Run vxlan vni vni-id
A VNI is created and associated with the BD.
To interconnect a VXLAN with a VPLS network, run the vxlan vni vni-id split-horizon-mode command on the edge device belonging to both networks to create a VNI, associate the VNI with a BD, and implement split horizon forwarding.
- Run quit
Return to the system view.
- Run interface nve nve-number
An NVE interface is created, and the NVE interface view is displayed.
- Run source ipv6-address
Configure an IPv6 address for the source VTEP.
Either a physical or loopback interface's address can be specified as a source VTEP's IPv6 address. Using the loopback interface's address is recommended.
- Run vni vni-id head-end peer-list { ipv6-address } &<1-10>
Configure an IPv6 ingress replication list.
With this function, the ingress of an IPv6 VXLAN tunnel replicates and sends a copy of any received BUM packets to each VTEP in the ingress replication list (a collection of remote VTEP IPv6 addresses).
Currently, BUM packets can be forwarded only through ingress replication. This means that non-Huawei devices must have ingress replication configured to establish IPv6 VXLAN tunnels with Huawei devices. If ingress replication is not configured, the tunnels fail to be established.
- Run commit
The configuration is committed.
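A minimal sketch of a statically established IPv6 VXLAN tunnel on one endpoint, assuming the BD (20) from the previous sketch, a hypothetical VNI 5020, a local VTEP IPv6 address of 2001:db8:1::1 (for example, configured on a loopback interface), and a remote VTEP at 2001:db8:2::1. The remote endpoint requires the mirror-image configuration.
system-view
bridge-domain 20
vxlan vni 5020
quit
interface nve 1
source 2001:db8:1::1
vni 5020 head-end peer-list 2001:db8:2::1
commit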
(Optional) Configuring a Layer 3 Gateway on an IPv6 VXLAN
To allow users on different network segments to communicate, deploy a Layer 3 gateway and specify the IP address of its VBDIF interface as the default gateway address of the users.
Context
On an IPv6 VXLAN, a BD can be mapped to a VNI (identifying a tenant) in 1:1 mode to transmit VXLAN data packets. You can create a VBDIF interface (logical Layer 3 interface) for each BD to implement communication between VXLAN segments, between VXLAN segments and non-VXLAN segments, and between Layer 2 and Layer 3 networks. After you configure an IP address for a VBDIF interface, the interface functions as the gateway for tenants in the BD to forward packets at Layer 3 based on the IP address.
A VBDIF interface needs to be configured on the Layer 3 gateway of an IPv6 VXLAN for communication between different network segments only; it is not needed for communication on the same network segment.
The DHCP relay function can be configured on a VBDIF interface so that hosts can request IP addresses from an external DHCP server.
Procedure
- Run system-view
The system view is displayed.
- Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
- Configure an IP address for the VBDIF interface to implement Layer 3 communication.
- For an IPv4 overlay network, run ip address ip-address { mask | mask-length } [ sub ]
An IPv4 address is configured for the VBDIF interface.
- For an IPv6 overlay network, perform the following operations:
Run ipv6 enable
The IPv6 function is enabled for the interface.
Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64
A global unicast address is configured for the interface.
- (Optional) Run mac-address mac-address
A MAC address is configured for the VBDIF interface.
- (Optional) Run bandwidth bandwidth
Bandwidth is configured for the VBDIF interface.
- Run commit
The configuration is committed.
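A minimal sketch for an IPv6 overlay network, assuming BD 20 and a hypothetical gateway address of 2001:db8:10::1/64 for the VBDIF interface. On an IPv4 overlay network, configure an IPv4 address with the ip address command instead.
system-view
interface vbdif 20
ipv6 enable
ipv6 address 2001:db8:10::1 64
commit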
(Optional) Configuring a Static MAC Address Entry
Using static MAC address entries to forward user packets helps reduce BUM traffic on the network and prevent bogus attacks.
(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
MAC address learning limiting helps improve VXLAN network security.
Context
Configure the maximum number of MAC addresses that a device can learn to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum, no more addresses can be learned. However, you can also configure the device to discard packets after learning the maximum allowed number of MAC addresses, improving network security.
Disable MAC address learning for a BD if a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in the BD, reducing the number of MAC address entries. You can also disable MAC address learning on Layer 2 gateways after the VXLAN network topology becomes stable and MAC address learning is complete.
MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.
Procedure
- Limit MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *
A MAC address learning limit rule is configured.
(Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold
The threshold percentages for MAC address limit alarm generation and clearing are configured.
- Run commit
The configuration is committed.
- Disable MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-address learning disable
MAC address learning is disabled.
Run commit
The configuration is committed.
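A minimal sketch of MAC address learning limiting, assuming BD 20, a hypothetical limit of 1000 MAC addresses with excess traffic discarded, and alarm thresholds of 80% (generation) and 70% (clearing).
system-view
bridge-domain 20
mac-limit maximum 1000 action discard
mac-limit up-threshold 80 down-threshold 70
commit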
Verifying the Configuration
After configuring IPv6 VXLAN in centralized gateway mode for static tunnel establishment, check IPv6 VXLAN tunnel, VNI, and VBDIF interface information.
Prerequisites
IPv6 VXLAN in centralized gateway mode has been configured for static tunnel establishment.
Procedure
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display vxlan peer [ vni vni-id ] command to check IPv6 ingress replication lists of a VNI or all VNIs.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.
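For example, if the hypothetical VNI 5020 from the preceding sketches is used, the following commands can be run to confirm that the VNI is up, the tunnel has been established, and the remote VTEP appears in the ingress replication list (command output is omitted here):
display vxlan vni 5020 verbose
display vxlan tunnel
display vxlan peer vni 5020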
Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN
VXLAN can be deployed in centralized gateway mode so that all inter-subnet traffic is forwarded through Layer 3 gateways, thereby implementing centralized traffic management.
Usage Scenario
To allow intra- and inter-subnet communication between a tenant's VMs located in different geographical locations, properly deploy Layer 2 and Layer 3 gateways on the network and establish VXLAN tunnels.
- To allow VM1 on Server2 and VM1 on Server3 to communicate, deploy Layer 2 gateways on Device1 and Device2 and establish a VXLAN tunnel between Device1 and Device2. This ensures that the VMs on the same network segment can communicate.
- To allow VM1 on Server1 and VM1 on Server3 to communicate, deploy a Layer 3 gateway on Device3 and establish one VXLAN tunnel between Device1 and Device3 and another one between Device2 and Device3. This ensures that the VMs on different network segments can communicate.
The VMs and Layer 3 VXLAN gateway can be allocated either IPv4 or IPv6 addresses. This means that either an IPv4 or IPv6 overlay network can be used with VXLAN. Figure 1-1104 shows an IPv4 overlay network.
Layer 3 gateways must be deployed on the VXLAN if VMs must communicate with VMs on other network segments or with external networks. Layer 3 gateways do not need to be deployed for VMs communicating on the same network segment.
Configuration Task | IPv4 Overlay Network | IPv6 Overlay Network
---|---|---
Configure a Layer 3 gateway on a VXLAN. | Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway. | Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.
Configuring a VXLAN Service Access Point
On a VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. A Layer 2 sub-interface can transmit data packets through a BD after being associated with it.
Context
Traffic Encapsulation Type | Description
---|---
dot1q | This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
untag | This type of sub-interface accepts only packets that do not carry VLAN tags. When setting the encapsulation type to untag for a Layer 2 sub-interface, note the following:
default | This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
qinq | This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.
A service access point needs to be configured on a Layer 2 gateway.
Procedure
- Run system-view
The system view is displayed.
- Run bridge-domain bd-id
A BD is created, and the BD view is displayed.
- (Optional) Run description description
A BD description is configured.
- Run quit
Return to the system view.
- Run interface interface-type interface-number.subnum mode l2
A Layer 2 sub-interface is created, and the sub-interface view is displayed.
Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.
- Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }
A traffic encapsulation type is configured for the Layer 2 sub-interface.
- Run rewrite pop { single | double }
The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.
If the received packets each carry a single VLAN tag, specify single.
If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.
- Run bridge-domain bd-id
The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.
If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.
- Run commit
The configuration is committed.
Configuring a VXLAN Tunnel
To allow VXLAN tunnel establishment using EVPN, establish a BGP EVPN peer relationship, configure an EVPN instance, and configure ingress replication.
Context
Configure a BGP EVPN peer relationship. Configure VXLAN gateways to establish BGP EVPN peer relationships so that they can exchange EVPN routes. If an RR has been deployed, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR.
(Optional) Configure an RR. The deployment of RRs reduces the number of BGP EVPN peer relationships to be established, simplifying configuration. A live-network device can be used as an RR, or a standalone RR can be deployed. Layer 3 VXLAN gateways are generally used as RRs, and Layer 2 VXLAN gateways as RR clients.
Configure an EVPN instance. EVPN instances are used to receive and advertise EVPN routes.
Configure ingress replication. After ingress replication is configured for a VNI, the system uses BGP EVPN to construct a list of remote VTEPs. After a VXLAN gateway receives BUM packets, it sends a copy of the BUM packets to every VXLAN gateway in the list.
BUM packet forwarding is implemented only using ingress replication. To establish a VXLAN tunnel between a Huawei device and a non-Huawei device, ensure that the non-Huawei device also has ingress replication configured. Otherwise, communication fails.
Procedure
- Configure a BGP EVPN peer relationship.
- (Optional) Configure a Layer 3 VXLAN gateway as an RR. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR, reducing the number of BGP EVPN peer relationships to be established and simplifying configuration.
- Configure an EVPN instance.
- Configure an ingress replication list.
- Run commit
The configuration is committed.
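The following is a minimal sketch of these steps on a Layer 2 gateway, using hypothetical values only: BGP AS 100, an RR at 3.3.3.3, a local VTEP address of 1.1.1.1 on LoopBack0, BD 20 bound to VNI 5020, an EVPN instance named evrf1, and RD/RT values 100:20 and 20:20. The exact commands for creating and binding the EVPN instance may vary with the device model and software version; treat this as an illustration rather than a complete configuration.
system-view
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:20
vpn-target 20:20 export-extcommunity
vpn-target 20:20 import-extcommunity
quit
bridge-domain 20
vxlan vni 5020
evpn binding vpn-instance evrf1
quit
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
l2vpn-family evpn
peer 3.3.3.3 enable
quit
quit
interface nve 1
source 1.1.1.1
vni 5020 head-end peer-list protocol bgp
commit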
Configuring a Layer 3 VXLAN Gateway
To allow users on different network segments to communicate, a Layer 3 VXLAN gateway must be deployed, and the default gateway address of the users must be the IP address of the VBDIF interface of the Layer 3 gateway.
Context
A tenant is identified by a VNI. VNIs can be mapped to BDs in 1:1 mode so that a BD can function as a VXLAN network entity to transmit VXLAN data packets. A VBDIF interface is a Layer 3 logical interface created for a BD. After an IP address is configured for a VBDIF interface of a BD, the VBDIF interface can function as the gateway for tenants in the BD for Layer 3 forwarding. VBDIF interfaces allow Layer 3 communication between VXLANs on different network segments and between VXLANs and non-VXLANs, and implement Layer 2 network access to a Layer 3 network.
VBDIF interfaces are configured on Layer 3 VXLAN gateways for inter-segment communication, and are not needed in the case of intra-segment communication.
The DHCP relay function can be configured on the VBDIF interface so that hosts can request IP addresses from the external DHCP server.
Procedure
- Run system-view
The system view is displayed.
- Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
- Configure an IP address for the VBDIF interface to implement Layer 3 interworking.
- On IPv4 overlay networks, run ip address ip-address { mask | mask-length } [ sub ].
An IPv4 address is configured for the VBDIF interface.
- On IPv6 overlay networks, perform the following operations:
Run ipv6 enable
IPv6 is enabled for the VBDIF interface.
Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length }
Or, ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64
A global unicast address is configured for the VBDIF interface.
- (Optional) Run mac-address mac-address
A MAC address is configured for the VBDIF interface.
- (Optional) Run bandwidth bandwidth
The bandwidth is configured for the VBDIF interface.
- Run commit
The configuration is committed.
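A minimal sketch for an IPv4 overlay network, assuming BD 20 and a hypothetical gateway address of 10.1.20.1/24 for the VBDIF interface. On an IPv6 overlay network, enable IPv6 and configure an IPv6 address instead.
system-view
interface vbdif 20
ip address 10.1.20.1 255.255.255.0
commit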
(Optional) Configuring a Static MAC Address Entry
Using static MAC address entries to forward user packets helps reduce BUM traffic on the network and prevent bogus attacks.
(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
MAC address learning limiting helps improve VXLAN network security.
Context
Configure the maximum number of MAC addresses that a device can learn to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum, no more addresses can be learned. However, you can also configure the device to discard packets after learning the maximum allowed number of MAC addresses, improving network security.
Disable MAC address learning for a BD if a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in the BD, reducing the number of MAC address entries. You can also disable MAC address learning on Layer 2 gateways after the VXLAN network topology becomes stable and MAC address learning is complete.
MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.
Procedure
- Limit MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *
A MAC address learning limit rule is configured.
(Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold
The threshold percentages for MAC address limit alarm generation and clearing are configured.
- Run commit
The configuration is committed.
- Disable MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-address learning disable
MAC address learning is disabled.
Run commit
The configuration is committed.
Verifying the Configuration of VXLAN in Centralized Gateway Mode Using BGP EVPN
After configuring VXLAN in centralized gateway mode for dynamic tunnel establishment, check VXLAN tunnel, VNI, and VBDIF interface information.
Prerequisites
VXLAN in centralized gateway mode has been configured for dynamic tunnel establishment.
Procedure
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to check BGP EVPN peer information.
- Run the display vxlan peer [ vni vni-id ] command to check ingress replication lists of a VNI or all VNIs.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check VNI information.
- Run the display interface vbdif [ bd-id ] command to check VBDIF interface information and statistics.
- Run the display mac-limit bridge-domain bd-id command to check MAC address limiting configurations of a BD.
- Run the display bgp evpn all routing-table command to check EVPN route information.
Configuring IPv6 VXLAN in Centralized Gateway Mode Using BGP EVPN
IPv6 VXLAN can be deployed in centralized gateway mode so that all inter-subnet traffic is forwarded through Layer 3 gateways, thereby implementing centralized traffic management.
Usage Scenario
To allow intra- and inter-subnet communication between a tenant's VMs located in different geographical locations on an IPv6 network, properly deploy Layer 2 and Layer 3 gateways on the network and establish IPv6 VXLAN tunnels.
- To allow VM1 on Server2 and VM1 on Server3 to communicate, deploy Layer 2 gateways on Device1 and Device2 and establish an IPv6 VXLAN tunnel between Device1 and Device2. This ensures that the VMs on the same network segment can communicate.
- To allow VM1 on Server1 and VM1 on Server3 to communicate, deploy a Layer 3 gateway on Device3 and establish one IPv6 VXLAN tunnel between Device1 and Device3 and another one between Device2 and Device3. This ensures that the VMs on different network segments can communicate.
The VMs and Layer 3 VXLAN gateway can be allocated either IPv4 or IPv6 addresses. This means that either an IPv4 or IPv6 overlay network can be used with IPv6 VXLAN. Figure 1-1105 shows an IPv4 overlay network.
Layer 3 gateways must be deployed on the IPv6 VXLAN if VMs must communicate with VMs on other network segments or with external networks. Layer 3 gateways do not need to be deployed for VMs communicating on the same network segment.
Configuration Task | IPv4 Overlay Network | IPv6 Overlay Network
---|---|---
Configure a Layer 3 gateway on an IPv6 VXLAN. | Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway. | Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.
Configuring a VXLAN Service Access Point
On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. After a Layer 2 sub-interface is associated with a bridge domain (BD), which is used as a broadcast domain on the IPv6 VXLAN, the sub-interface can transmit data packets through this BD.
Context
Traffic Encapsulation Type | Description
---|---
dot1q | This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
untag | This type of sub-interface accepts only packets that do not carry any VLAN tag. The untag traffic encapsulation type has the following restrictions:
default | This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
qinq | This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.
Configure a service access point on a Layer 2 gateway:
Procedure
- Run system-view
The system view is displayed.
- Run bridge-domain bd-id
A BD is created, and the BD view is displayed.
- (Optional) Run description description
A BD description is configured.
- Run quit
Return to the system view.
- Run interface interface-type interface-number.subnum mode l2
A Layer 2 sub-interface is created, and the sub-interface view is displayed.
Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.
- Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }
A traffic encapsulation type is configured for the Layer 2 sub-interface.
- Run rewrite pop { single | double }
The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.
If the received packets each carry a single VLAN tag, specify single.
If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.
- Run bridge-domain bd-id
The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.
If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.
- Run commit
The configuration is committed.
Configuring an IPv6 VXLAN Tunnel
Using BGP EVPN to establish an IPv6 VXLAN tunnel between VTEPs involves a series of operations. These include establishing a BGP EVPN peer relationship, configuring an EVPN instance, and configuring ingress replication.
Context
Configure a BGP EVPN peer relationship. After the gateways on an IPv6 VXLAN establish such a relationship, they can exchange EVPN routes. If an RR is deployed on the network, each gateway only needs to establish a BGP EVPN peer relationship with the RR.
(Optional) Configure an RR. The deployment of RRs simplifies configurations because fewer BGP EVPN peer relationships need to be established. An existing device can be configured to also function as an RR, or a new device can be deployed for this specific purpose. Layer 3 gateways on an IPv6 VXLAN are generally used as RRs, and Layer 2 gateways used as RR clients.
Configure an EVPN instance. EVPN instances are used to receive and advertise EVPN routes.
Configure ingress replication. After ingress replication is configured on a VXLAN gateway, the gateway uses BGP EVPN to construct a list of remote VTEP peers that share the same VNI with itself. After the gateway receives BUM packets, it sends a copy of the BUM packets to each gateway in the list.
Currently, BUM packets can be forwarded only through ingress replication. This means that non-Huawei devices must have ingress replication configured to establish IPv6 VXLAN tunnels with Huawei devices. If ingress replication is not configured, the tunnels fail to be established.
Procedure
- Configure a BGP EVPN peer relationship.
- (Optional) Configure the Layer 3 gateway as an RR on the IPv6 VXLAN. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. This simplifies configurations because fewer BGP EVPN peer relationships need to be established.
- Configure an EVPN instance.
- Configure ingress replication.
- (Optional) Enable the device to add its router ID (a private extended community attribute) to locally generated EVPN routes.
- Run commit
The configuration is committed.
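The configuration mirrors the IPv4 case, except that the BGP EVPN peer and the VTEP source address are IPv6 addresses. A minimal sketch, assuming hypothetical values of AS 100, an RR at 2001:db8:100::3, a local VTEP address of 2001:db8:100::1, and VNI 5020 already bound to a BD and an EVPN instance; command availability may vary with the software version.
system-view
bgp 100
peer 2001:db8:100::3 as-number 100
peer 2001:db8:100::3 connect-interface LoopBack0
l2vpn-family evpn
peer 2001:db8:100::3 enable
quit
quit
interface nve 1
source 2001:db8:100::1
vni 5020 head-end peer-list protocol bgp
commit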
(Optional) Configuring a Layer 3 Gateway on an IPv6 VXLAN
To allow users on different network segments to communicate, deploy a Layer 3 gateway and specify the IP address of its VBDIF interface as the default gateway address of the users.
Context
On an IPv6 VXLAN, a BD can be mapped to a VNI (identifying a tenant) in 1:1 mode to transmit VXLAN data packets. VBDIF interfaces, which are Layer 3 logical interfaces created for a BD, can be used to implement communication between VXLANs on different network segments or between VXLANs and non-VXLANs, or they can be used for Layer 2 network access to a Layer 3 network. When configured with an IP address, the VBDIF interface of a BD functions as a tenant's gateway within the BD to transmit Layer 3 packets.
A VBDIF interface needs to be configured on the Layer 3 gateway of an IPv6 VXLAN for communication between different network segments only; it is not needed for communication on the same network segment.
The DHCP relay function can be configured on a VBDIF interface so that hosts can request IP addresses from an external DHCP server.
Procedure
- Run system-view
The system view is displayed.
- Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
- Configure an IP address for the VBDIF interface to implement Layer 3 interworking.
- For an IPv4 overlay network, run the ip address ip-address { mask | mask-length } [ sub ] command to configure an IPv4 address for the VBDIF interface.
- For an IPv6 overlay network, perform the following operations:
Run the ipv6 enable command to enable the IPv6 function for the interface.
Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64 command to configure a global unicast address for the interface.
- (Optional) Run mac-address mac-address
A MAC address is configured for the VBDIF interface.
- (Optional) Run bandwidth bandwidth
Bandwidth is configured for the VBDIF interface.
- Run commit
The configuration is committed.
(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
MAC address learning limiting helps improve VXLAN network security.
Context
The maximum number of MAC addresses that a device can learn can be configured to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum number of MAC addresses allowed, no more addresses can be learned. The device can also be configured to discard packets after learning the maximum allowed number of MAC addresses, improving network security.
If a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in a BD, MAC address learning for the BD can be disabled to conserve MAC address table space. After the network topology of a VXLAN becomes stable and MAC address learning is complete, MAC address learning can also be disabled.
MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.
Procedure
- Configure MAC address learning limiting.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-limit { action { discard | forward } | maximum max [ rate interval ] }*
A MAC address learning limit rule is configured.
(Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold
The threshold percentages for MAC address limit alarm generation and clearing are configured.
Run commit
The configuration is committed.
- Disable MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-address learning disable
MAC address learning is disabled.
Run commit
The configuration is committed.
Verifying the Configuration
After configuring IPv6 VXLAN in centralized gateway mode using BGP EVPN, verify information about the IPv6 VXLAN tunnels, VNI status, and VBDIF interfaces.
Procedure
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display evpn vpn-instance command to check EVPN instance information.
- Run the display bgp evpn peer [ [ ipv6-address ] verbose ] command to check information about BGP EVPN peers.
- Run the display vxlan peer [ vni vni-id ] command to check the ingress replication lists of all VNIs or a specified one.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.
Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN
Distributed VXLAN gateways can be configured to address problems that occur in centralized gateway networking. Such problems include sub-optimal forwarding paths and bottlenecks on Layer 3 gateways in terms of ARP or ND entry specifications.
Usage Scenario
In centralized gateway networking, the following problems exist:
- Forwarding paths are not optimal. All Layer 3 traffic must be transmitted to the centralized Layer 3 gateway for forwarding.
- The ARP or ND entry specification is a bottleneck. ARP or ND entries for tenants must be generated on the Layer 3 gateway, but only a limited number of ARP or ND entries are allowed by the Layer 3 gateway, impeding data center network expansion.
To address these problems, distributed VXLAN gateways can be configured. On the network shown in Figure 1-1107, Server1 and Server2 on different subnets both connect to Leaf1. When Server1 and Server2 communicate, traffic is forwarded only through Leaf1, not through any spine node.
Distributed VXLAN gateways offer the following advantages:
- Flexible deployment. A leaf node can function as both a Layer 2 and a Layer 3 VXLAN gateway.
- Improved network expansion capabilities. Unlike a centralized Layer 3 gateway, which has to learn the ARP or ND entries of all servers on a network, a distributed gateway needs to learn the ARP or ND entries of only the servers attached to it. This addresses the problem of the ARP or ND entry specifications being a bottleneck for packet forwarding.
Either IPv4 or IPv6 addresses can be configured for the VMs and Layer 3 VXLAN gateway. This means that a VXLAN overlay network can be an IPv4 or IPv6 network. Figure 1-1107 shows an IPv4 overlay network.
If only VMs on the same subnet need to communicate with each other, Layer 3 VXLAN gateways do not need to be deployed. If VMs on different subnets need to communicate with each other or VMs on the same subnet need to communicate with external networks, Layer 3 VXLAN gateways must be deployed.
The following table lists the differences in distributed gateway configuration between IPv4 and IPv6 overlay networks.
Configuration Task | IPv4 Overlay Network | IPv6 Overlay Network
---|---|---
Configure a VPN instance for route leaking with an EVPN instance. | Enable the IPv4 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv4 address family view. | Enable the IPv6 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv6 address family view.
Configure a Layer 3 VXLAN gateway. | Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway. | Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway.
Configure a VXLAN gateway to advertise a specific type of route. | |
Configuring a VXLAN Service Access Point
On a VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. A Layer 2 sub-interface can transmit data packets through a BD after being associated with it.
Context
Traffic Encapsulation Type | Description
---|---
dot1q | This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type has the following restrictions:
untag | This type of sub-interface accepts only packets that do not carry VLAN tags. When setting the encapsulation type to untag for a Layer 2 sub-interface, note the following:
default | This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type has the following restrictions:
qinq | This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags.
A service access point needs to be configured on a Layer 2 gateway.
Procedure
- Run system-view
The system view is displayed.
- Run bridge-domain bd-id
A BD is created, and the BD view is displayed.
- (Optional) Run description description
A BD description is configured.
- Run quit
Return to the system view.
- Run interface interface-type interface-number.subnum mode l2
A Layer 2 sub-interface is created, and the sub-interface view is displayed.
Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.
- Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }
A traffic encapsulation type is configured for the Layer 2 sub-interface.
- Run rewrite pop { single | double }
The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.
If the received packets each carry a single VLAN tag, specify single.
If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.
- Run bridge-domain bd-id
The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.
If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.
- Run commit
The configuration is committed.
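For example, a minimal sketch of this procedure for a dot1q access point might look as follows. All values are hypothetical: BD 20, sub-interface GigabitEthernet0/1/1.1, and VLAN 20; substitute the interface type and numbers used on your device.

system-view
bridge-domain 20
 quit
interface GigabitEthernet0/1/1.1 mode l2
 encapsulation dot1q vid 20
 rewrite pop single
 bridge-domain 20
commit

After this configuration, single-tagged frames with VLAN 20 received on the sub-interface have their tag removed and are forwarded within BD 20.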
Configuring a VXLAN Tunnel
To allow VXLAN tunnel establishment using EVPN, configure an EVPN instance, establish a BGP EVPN peer relationship, and configure ingress replication.
Context
Configure a BGP EVPN peer relationship. Configure VXLAN gateways to establish BGP EVPN peer relationships so that they can exchange EVPN routes. If an RR has been deployed, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR.
(Optional) Configure an RR. If you configure an RR, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. The deployment of RRs reduces the number of BGP EVPN peer relationships to be established, simplifying configuration. An existing device can be configured to also function as an RR, or a new device can be deployed for this specific purpose. Spine nodes are generally used as RRs, and leaf nodes used as RR clients.
Configure an EVPN instance. EVPN instances are used to receive and advertise EVPN routes.
Configure ingress replication. After ingress replication is configured for a VNI, the system uses BGP EVPN to construct a list of remote VTEPs. After a VXLAN gateway receives BUM packets, it sends a copy of the BUM packets to every VXLAN gateway in the list.
BUM packet forwarding is implemented only using ingress replication. To establish a VXLAN tunnel between a Huawei device and a non-Huawei device, ensure that the non-Huawei device also has ingress replication configured. Otherwise, communication fails.
Procedure
- Configure a BGP EVPN peer relationship. If an RR has been deployed, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. If the spine node and gateway reside in different ASs, the gateway must establish an EBGP EVPN peer relationship with the spine node.
- (Optional) Configure an RR. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR, reducing the number of BGP EVPN peer relationships to be established and simplifying configuration.
- Configure an EVPN instance.
- Configure an ingress replication list.
- (Optional) Configure MAC addresses for NVE interfaces.
In distributed VXLAN gateway (EVPN BGP) scenarios, if you want to use active-active VXLAN gateways to load-balance traffic, configure the same VTEP MAC address on the two VXLAN gateways. Otherwise, the two gateways cannot forward traffic properly on the VXLAN network.
- Run commit
The configuration is committed.
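For reference, a minimal sketch of these steps on one gateway follows. All values are hypothetical (AS 100, peer 192.168.1.2, VTEP source address 1.1.1.1, BD 20, Layer 2 VNI 5020, EVPN instance evrf1, RD 100:20, VPN target 100:20), and the exact syntax may differ by version, so verify each command against the command reference:

evpn vpn-instance evrf1 bd-mode
 route-distinguisher 100:20
 vpn-target 100:20 export-extcommunity
 vpn-target 100:20 import-extcommunity
 quit
bridge-domain 20
 vxlan vni 5020
 evpn binding vpn-instance evrf1
 quit
bgp 100
 peer 192.168.1.2 as-number 100
 l2vpn-family evpn
  peer 192.168.1.2 enable
  quit
 quit
interface nve 1
 source 1.1.1.1
 vni 5020 head-end peer-list protocol bgp
 quit
commit

In this sketch, the EVPN instance carries the routes for BD 20, the BGP EVPN peer relationship distributes them, and the head-end peer list built from BGP EVPN provides ingress replication for BUM traffic.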
(Optional) Configuring a VPN Instance for Route Leaking with an EVPN Instance
To enable communication between VMs on different subnets, configure a VPN instance for route leaking with an EVPN instance. This configuration enables Layer 3 connectivity. To isolate multiple tenants, you can use different VPN instances.
(Optional) Configuring a Layer 3 Gateway on the VXLAN
To enable communication between VMs on different subnets, configure Layer 3 gateways on the VXLAN, enable the distributed gateway function, and configure host route advertisement.
Procedure
- Run system-view
The system view is displayed.
- Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
- Run ip binding vpn-instance vpn-instance-name
The VBDIF interface is bound to a VPN instance.
- Configure an IP address for the VBDIF interface to implement Layer 3 communication.
- For an IPv4 overlay network, run the ip address ip-address { mask | mask-length } [ sub ] command to configure an IPv4 address for the VBDIF interface.
- For an IPv6 overlay network, perform the following operations:
Run the ipv6 enable command to enable IPv6 for the VBDIF interface.
Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64 command to configure a global unicast address for the VBDIF interface.
- (Optional) Run bandwidth bandwidth
Bandwidth is configured for the VBDIF interface.
- (Optional) Run mac-address mac-address
A MAC address is configured for the VBDIF interface.
By default, the MAC address of a VBDIF interface is the system MAC address. On a network where distributed or active-active Layer 3 gateways need to be simulated into one, you need to run the mac-address command to configure the same MAC address for the VBDIF interfaces of these Layer 3 gateways.
If VMs on the same subnet connect to different Layer 3 gateways on a VXLAN, the VBDIF interfaces of the Layer 3 gateways must have the same IP address and same MAC address configured. In this way, the configurations of the Layer 3 gateways do not need to be changed when the VMs' locations are changed, reducing the maintenance workload.
- Run vxlan anycast-gateway enable
The distributed gateway function is enabled.
After the distributed gateway function is enabled on a Layer 3 gateway, this gateway discards network-side ARP or NS messages and learns those only from the user side.
- Perform one of the following steps to configure host route advertisement (Table 1-480 Configuring host route advertisement):
  - IPv4 overlay network:
    - To advertise IRB routes between gateways, run the arp collect host enable command in the VBDIF interface view.
    - To advertise IP prefix routes between gateways, run the arp vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv4 address family view of the VPN instance to which the VBDIF interface is bound.
  - IPv6 overlay network:
    - To advertise IRB routes between gateways, run the ipv6 nd collect host enable command in the VBDIF interface view.
    - To advertise IP prefix routes between gateways, run the nd vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv6 address family view of the VPN instance to which the VBDIF interface is bound.
- Run commit
The configuration is committed.
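A minimal sketch of this procedure for an IPv4 overlay follows. Hypothetical values: BD 20, an existing VPN instance vpn1 already configured for route leaking with the EVPN instance, and anycast gateway address 10.1.20.1/24:

system-view
interface vbdif 20
 ip binding vpn-instance vpn1
 ip address 10.1.20.1 255.255.255.0
 vxlan anycast-gateway enable
 arp collect host enable
 quit
commit

Here, arp collect host enable corresponds to the IRB-route option in Table 1-480; for the IP prefix route option, the arp vlink-direct-route advertise command would be run in the VPN instance IPv4 address family view instead.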
(Optional) Configuring VXLAN Gateways to Advertise Specific Types of Routes
To enable communication between VMs on different subnets, configure VXLAN gateways to exchange IRB or IP prefix routes. This configuration enables the gateways to learn the IP routes of the related hosts or the subnets where the hosts reside.
Context
By default, VXLAN gateways can exchange MAC routes, but must be configured to exchange IRB or IP prefix routes if VMs need to communicate across subnets. If an RR is deployed on the network, IRB or IP prefix routes must be exchanged only between the VXLAN gateways and RR.
Host routes can be advertised through IRB routes (recommended), IP prefix routes, or both. In contrast, subnet routes of hosts can be advertised through IP prefix routes only.
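As an illustration, one common way to enable IRB route advertisement is per BGP EVPN peer. The following hedged sketch uses hypothetical values (AS 100, peer 192.168.1.2); verify the exact command against your device's command reference:

bgp 100
 l2vpn-family evpn
  peer 192.168.1.2 advertise irb
  quit
 quit
commit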
(Optional) Configuring a Static MAC Address Entry
Using static MAC address entries to forward user packets helps reduce BUM traffic on the network and prevent bogus attacks.
(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
MAC address learning limiting helps improve VXLAN network security.
Context
Configure the maximum number of MAC addresses that a device can learn to limit the number of access users and defend against attacks on MAC address tables. After the device has learned the maximum allowed number of MAC addresses, no more addresses can be learned. You can also configure the device to discard packets once this limit is reached, improving network security.
Disable MAC address learning for a BD if a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in the BD, reducing the number of MAC address entries. You can also disable MAC address learning on Layer 2 gateways after the VXLAN network topology becomes stable and MAC address learning is complete.
MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.
Procedure
- Limit MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *
A MAC address learning limit rule is configured.
(Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold
The threshold percentages for MAC address limit alarm generation and clearing are configured.
- Run commit
The configuration is committed.
- Disable MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-address learning disable
MAC address learning is disabled.
Run commit
The configuration is committed.
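For example, the following sketch (hypothetical values: BD 20, a limit of 1000 MAC addresses, alarm thresholds of 90% and 70%) limits MAC address learning in a BD and discards packets once the limit is reached:

system-view
bridge-domain 20
 mac-limit action discard maximum 1000
 mac-limit up-threshold 90 down-threshold 70
 quit
commit

To disable MAC address learning in the BD instead, the mac-address learning disable command shown above would replace the mac-limit commands.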
Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN
After configuring VXLAN in distributed gateway mode using BGP EVPN, verify the configuration. You should find that VXLAN tunnels have been dynamically established and are in the Up state.
Procedure
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to check BGP EVPN peer information.
- Run the display vxlan peer [ vni vni-id ] command to check ingress replication lists of a VNI or all VNIs.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check VNI information.
- Run the display interface vbdif [ bd-id ] command to check VBDIF interface information and statistics.
- Run the display mac-limit bridge-domain bd-id command to check MAC address limiting configurations of a BD.
- Run the display bgp evpn all routing-table command to check EVPN route information.
Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN
Distributed IPv6 VXLAN gateways can be configured to address problems that occur in centralized gateway networking. Such problems include sub-optimal forwarding paths and bottlenecks on Layer 3 gateways in terms of ARP or ND entry specifications.
Usage Scenario
On the network shown in Figure 1-1108, Server1 and Server2 on different subnets both connect to Leaf1. When Server1 and Server2 communicate, traffic is forwarded only through Leaf1, not through any spine node.
- Flexible deployment: A leaf node can function as both Layer 2 and Layer 3 IPv6 VXLAN gateways.
- Improved network expansion capabilities: Unlike a centralized Layer 3 gateway, which has to learn the ARP or ND entries of all servers on a network, a distributed gateway needs to learn the ARP or ND entries of only the servers attached to it. This addresses the problem of the ARP or ND entry specifications being a bottleneck for packet forwarding.
Either IPv4 or IPv6 addresses can be configured for the VMs and Layer 3 VXLAN gateway. This means that a VXLAN overlay network can be an IPv4 or IPv6 network. Figure 1-1108 shows an IPv4 overlay network.
If only VMs on the same subnet need to communicate with each other, Layer 3 IPv6 VXLAN gateways do not need to be deployed. If VMs on different subnets need to communicate with each other or VMs on the same subnet need to communicate with external networks, Layer 3 IPv6 VXLAN gateways must be deployed.
The following table lists the differences in distributed gateway configuration between IPv4 and IPv6 overlay networks.

| Configuration Task | IPv4 Overlay Network | IPv6 Overlay Network |
| --- | --- | --- |
| Configure a VPN instance for route leaking with an EVPN instance. | Enable the IPv4 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv4 address family view. | Enable the IPv6 address family of the involved VPN instance and then complete other configurations in the VPN instance IPv6 address family view. |
| Configure a Layer 3 gateway on an IPv6 VXLAN. | Configure an IPv4 address for the VBDIF interface of the Layer 3 gateway. | Configure an IPv6 address for the VBDIF interface of the Layer 3 gateway. |
| Configure IPv6 VXLAN gateways to exchange specific types of routes. | | |
Configuring a VXLAN Service Access Point
On an IPv6 VXLAN, Layer 2 sub-interfaces are used for service access and can have different encapsulation types configured to transmit various types of data packets. After a Layer 2 sub-interface is associated with a BD, which is used as a broadcast domain on the IPv6 VXLAN, the sub-interface can transmit data packets through this BD.
Context
| Traffic Encapsulation Type | Description |
| --- | --- |
| dot1q | This type of sub-interface accepts only packets with a specified VLAN tag. The dot1q traffic encapsulation type is subject to additional restrictions. |
| untag | This type of sub-interface accepts only packets that do not carry any VLAN tag. The untag traffic encapsulation type is subject to additional restrictions. |
| default | This type of sub-interface accepts all packets, regardless of whether they carry VLAN tags. The default traffic encapsulation type is subject to additional restrictions. |
| qinq | This type of sub-interface receives packets that carry two or more VLAN tags and determines whether to accept the packets based on the innermost two VLAN tags. |
Procedure
- Run system-view
The system view is displayed.
- Run bridge-domain bd-id
A BD is created, and the BD view is displayed.
- (Optional) Run description description
A BD description is configured.
- Run quit
Return to the system view.
- Run interface interface-type interface-number.subnum mode l2
A Layer 2 sub-interface is created, and the sub-interface view is displayed.
Before running this command, ensure that the involved Layer 2 main interface does not have the port link-type dot1q-tunnel command configuration. If the configuration exists, run the undo port link-type command to delete it.
- Run encapsulation { dot1q [ vid vid ] | default | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] } ] }
A traffic encapsulation type is configured for the Layer 2 sub-interface.
- Run rewrite pop { single | double }
The Layer 2 sub-interface is enabled to remove single or double VLAN tags from received packets.
If the received packets each carry a single VLAN tag, specify single.
If the traffic encapsulation type has been specified as qinq using the encapsulation qinq vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } command in the preceding step, specify double.
- Run bridge-domain bd-id
The Layer 2 sub-interface is added to the BD so that it can transmit data packets through this BD.
If a default Layer 2 sub-interface is added to a BD, no VBDIF interface can be created for the BD.
- Run commit
The configuration is committed.
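As a variant of the dot1q example given earlier, the following sketch uses qinq encapsulation with double-tag removal. All values are hypothetical: BD 30, sub-interface GigabitEthernet0/1/2.1, outer VLAN 100, and inner VLANs 10 to 20:

system-view
bridge-domain 30
 quit
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation qinq vid 100 ce-vid 10 to 20
 rewrite pop double
 bridge-domain 30
commit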
Configuring an IPv6 VXLAN Tunnel
Using BGP EVPN to establish an IPv6 VXLAN tunnel between VTEPs involves a series of operations. These include establishing a BGP EVPN peer relationship, configuring an EVPN instance, and configuring ingress replication.
Context
Configure a BGP EVPN peer relationship. After the gateways on an IPv6 VXLAN establish such a relationship, they can exchange EVPN routes. If an RR is deployed on the network, each gateway only needs to establish a BGP EVPN peer relationship with the RR.
(Optional) Configure an RR. The deployment of RRs simplifies configurations because fewer BGP EVPN peer relationships need to be established. An existing device can be configured to also function as an RR, or a new device can be deployed for this specific purpose. Spine nodes on an IPv6 VXLAN are generally used as RRs, and leaf nodes used as RR clients.
Configure an EVPN instance. EVPN instances can be used to manage EVPN routes received from and advertised to BGP EVPN peers.
Configure ingress replication. After ingress replication is configured on an IPv6 VXLAN gateway, the gateway uses BGP EVPN to construct a list of remote VTEP peers that share the same VNI with itself. After the gateway receives BUM packets, it sends a copy of the BUM packets to each gateway in the list.
Currently, BUM packets can be forwarded only through ingress replication. This means that non-Huawei devices must have ingress replication configured to establish IPv6 VXLAN tunnels with Huawei devices. If ingress replication is not configured, the tunnels fail to be established.
Procedure
- Configure a BGP EVPN peer relationship.
- (Optional) Configure the spine node as an RR. If an RR is configured, each VXLAN gateway only needs to establish a BGP EVPN peer relationship with the RR. This simplifies configurations because fewer BGP EVPN peer relationships need to be established.
- Configure an EVPN instance.
- Configure ingress replication.
- (Optional) Configure a MAC address for the NVE interface.
To use active-active VXLAN gateways in distributed VXLAN gateway (EVPN BGP) scenarios, configure the same VTEP MAC address on the two gateways.
- (Optional) Enable the device to add its router ID (a private extended community attribute) to locally generated EVPN routes.
- Run commit
The configuration is committed.
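Compared with the IPv4 underlay case, the main difference is that the BGP EVPN peer and the NVE source use IPv6 addresses. A hedged sketch with hypothetical values (AS 100, peer 2001:db8::2, VTEP source 2001:db8:100::1, VNI 5020) follows; verify IPv6 VTEP support and the exact syntax against the command reference for your version:

bgp 100
 peer 2001:db8::2 as-number 100
 l2vpn-family evpn
  peer 2001:db8::2 enable
  quit
 quit
interface nve 1
 source 2001:db8:100::1
 vni 5020 head-end peer-list protocol bgp
 quit
commit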
(Optional) Configuring a VPN Instance for Route Leaking with an EVPN Instance
To enable communication between VMs on different subnets, configure a VPN instance for route leaking with an EVPN instance. This configuration enables Layer 3 connectivity. To isolate multiple tenants, you can use different VPN instances.
(Optional) Configuring a Layer 3 Gateway on the IPv6 VXLAN
To enable communication between VMs on different subnets, configure Layer 3 gateways on the IPv6 VXLAN, enable the distributed gateway function, and configure host route advertisement.
Procedure
- Run system-view
The system view is displayed.
- Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
- Run ip binding vpn-instance vpn-instance-name
The VBDIF interface is bound to a VPN instance.
- Configure an IP address for the VBDIF interface to implement Layer 3 interworking.
- For an IPv4 overlay network, run the ip address ip-address { mask | mask-length } [ sub ] command to configure an IPv4 address for the VBDIF interface.
- For an IPv6 overlay network, perform the following operations:
Run the ipv6 enable command to enable the IPv6 function for the interface.
Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } or ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length } eui-64 command to configure a global unicast address for the interface.
- (Optional) Run mac-address mac-address
A MAC address is configured for the VBDIF interface.
By default, the MAC address of a VBDIF interface is the system MAC address. On a network where distributed or active-active gateways need to be simulated into one, you need to run the mac-address command to configure the same MAC address for the VBDIF interfaces of these Layer 3 gateways.
If VMs on the same subnet connect to different Layer 3 gateways on an IPv6 VXLAN, the VBDIF interfaces of the Layer 3 gateways must have the same IP address and MAC address configured. In this way, the configurations of the Layer 3 gateways do not need to be changed when the VMs' locations are changed. This reduces the maintenance workload.
- (Optional) Run bandwidth bandwidth
Bandwidth is configured for the VBDIF interface.
- Run vxlan anycast-gateway enable
The distributed gateway function is enabled.
After the distributed gateway function is enabled on a Layer 3 gateway, this gateway discards network-side ARP or NS messages and learns those only from the user side.
- Perform one of the following steps to configure host route advertisement (Table 1-482 Configuring host route advertisement):
  - IPv4 overlay network:
    - To advertise IRB routes between gateways, run the arp collect host enable command in the VBDIF interface view.
    - To advertise IP prefix routes between gateways, run the arp vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv4 address family view of the VPN instance to which the VBDIF interface is bound.
  - IPv6 overlay network:
    - To advertise IRB routes between gateways, run the ipv6 nd collect host enable command in the VBDIF interface view.
    - To advertise IP prefix routes between gateways, run the nd vlink-direct-route advertise [ route-policy route-policy-name | route-filter route-filter-name ] command in the IPv6 address family view of the VPN instance to which the VBDIF interface is bound.
- Run commit
The configuration is committed.
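A minimal sketch of this procedure for an IPv6 overlay follows. Hypothetical values: BD 30, an existing VPN instance vpn1 with its IPv6 address family enabled, and anycast gateway address 2001:db8:30::1/64:

system-view
interface vbdif 30
 ip binding vpn-instance vpn1
 ipv6 enable
 ipv6 address 2001:db8:30::1 64
 vxlan anycast-gateway enable
 ipv6 nd collect host enable
 quit
commit

Here, ipv6 nd collect host enable corresponds to the IRB-route option in Table 1-482; for the IP prefix route option, the nd vlink-direct-route advertise command would be run in the VPN instance IPv6 address family view instead.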
(Optional) Configuring IPv6 VXLAN Gateways to Advertise Specific Types of Routes
To enable communication between VMs on different subnets, configure IPv6 VXLAN gateways to exchange IRB or IP prefix routes. This configuration enables the gateways to learn the IP routes of the related hosts or the subnets where the hosts reside.
Context
By default, IPv6 VXLAN gateways can exchange MAC routes, but must be configured to exchange IRB or IP prefix routes if VMs need to communicate across different subnets. If an RR is deployed on the network, IRB or IP prefix routes must be exchanged only between the VXLAN gateways and RR.
Host routes can be advertised through IRB routes (recommended), IP prefix routes, or both. In contrast, subnet routes of hosts can be advertised only through IP prefix routes.
(Optional) Configuring a Limit on the Number of MAC Addresses Learned by an Interface
MAC address learning limiting helps improve VXLAN network security.
Context
The maximum number of MAC addresses that a device can learn can be configured to limit the number of access users and defend against attacks on MAC address tables. If the device has learned the maximum number of MAC addresses allowed, no more addresses can be learned. The device can also be configured to discard packets after learning the maximum allowed number of MAC addresses, improving network security.
If a Layer 3 VXLAN gateway does not need to learn MAC addresses of packets in a BD, MAC address learning for the BD can be disabled to conserve MAC address table space. After the network topology of a VXLAN becomes stable and MAC address learning is complete, MAC address learning can also be disabled.
MAC address learning can be limited only on Layer 2 VXLAN gateways and can be disabled on both Layer 2 and Layer 3 VXLAN gateways.
Procedure
- Configure MAC address learning limiting.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-limit { action { discard | forward } | maximum max [ rate interval ] } *
A MAC address learning limit rule is configured.
(Optional) Run mac-limit up-threshold up-threshold down-threshold down-threshold
The threshold percentages for MAC address limit alarm generation and clearing are configured.
Run commit
The configuration is committed.
- Disable MAC address learning.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
The BD view is displayed.
Run mac-address learning disable
MAC address learning is disabled.
Run commit
The configuration is committed.
Verifying the Configuration
After configuring IPv6 VXLAN in distributed gateway mode using BGP EVPN, verify information about the IPv6 VXLAN tunnels, VNI status, and VBDIF interfaces.
Procedure
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display evpn vpn-instance command to check EVPN instance information.
- Run the display bgp evpn peer [ [ ipv6-address ] verbose ] command to check information about BGP EVPN peers.
- Run the display vxlan peer [ vni vni-id ] command to check the ingress replication lists of all VNIs or a specified one.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.
Configuring Three-Segment VXLAN to Implement DCI
Three-segment VXLAN can be configured to enable communication between VMs in different DCs.
Usage Scenario
To meet the requirements of geographical redundancy, inter-regional operations, and user access, an increasing number of enterprises are deploying data centers (DCs) across multiple regions. Data Center Interconnect (DCI) is a solution that enables intercommunication between the VMs of multiple DCs. Using technologies such as VXLAN and BGP EVPN, DCI securely and reliably transmits DC packets over carrier networks. Three-segment VXLAN can be configured to enable communications between VMs in different DCs.
Configuring Three-Segment VXLAN to Implement Layer 3 Interworking
Three-segment VXLAN can be configured to enable communication between inter-subnet VMs in DCs that belong to different ASs.
Context
As shown in Figure 1-1109, BGP EVPN must be configured to create VXLAN tunnels between distributed gateways in each DC and to create VXLAN tunnels between leaf nodes so that the inter-subnet VMs in DC A and DC B can communicate with each other.
When DC A and DC B belong to the same BGP AS, neither Leaf2 nor Leaf3 forwards EVPN routes received from an IBGP EVPN peer to other IBGP EVPN peers. Therefore, Leaf2 and Leaf3 must be configured as route reflectors (RRs).
Procedure
- Configure BGP EVPN within DC A and DC B to establish VXLAN tunnels. For details, see Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN.
- Configure BGP EVPN on Leaf2 and Leaf3 to establish a VXLAN tunnel between them. For details, see Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN.
- (Optional) Configure Leaf2 and Leaf3 as RRs. For details, see Configuring a BGP Route Reflector.
- Configure Leaf2 and Leaf3 to advertise routes that are re-originated by the EVPN address family to BGP EVPN peers.
- (Optional) Configure local EVPN route leaking on Leaf2 and Leaf3. To use different VPN instances for different service access in a data center, and to shield the VPN instance allocation within the data center from outside by using an external VPN instance for communication with other data centers, perform the following steps on each edge leaf node:
- Run the commit command to commit the configuration.
Configuring Three-Segment VXLAN to Implement Layer 2 Interworking
Three-segment VXLAN tunnels can be configured to enable communication between VMs that belong to the same subnet but different DCs.
Context
On the network shown in Figure 1-1110, VXLAN tunnels are configured both within DC A and DC B and between the transit leaf nodes of the two DCs. To enable communication between VM 1 and VM 2, Layer 2 interworking must be implemented between DC A and DC B. If the VXLAN tunnels within DC A and DC B use the same VXLAN Network Identifier (VNI), this VNI can also be used to establish a VXLAN tunnel between Transit Leaf 1 and Transit Leaf 2. In practice, however, different DCs have their own VNI spaces, and therefore the VXLAN tunnels within DC A and DC B most likely use different VNIs. To configure a VXLAN tunnel between Transit Leaf 1 and Transit Leaf 2 in such cases, perform a VNI conversion.
For example, in Figure 1-1110, the VXLAN tunnel in DC A uses the VNI 10, and that in DC B uses the VNI 20. Transit Leaf 2's VNI (20) must be configured as the outbound VNI on Transit Leaf 1, and Transit Leaf 1's VNI (10) as the outbound VNI on Transit Leaf 2. After the configuration is complete, Layer 2 packets can be forwarded properly. Take DC A sending packets to DC B as an example. After receiving VXLAN packets within DC A, Transit Leaf 1 decapsulates the packets and then uses the outbound VNI 20 to re-encapsulate the packets before sending them to Transit Leaf 2. Upon receipt, Transit Leaf 2 forwards them as normal VXLAN packets.
In this way, Layer 2 communication between VMs in different DCs is implemented, thereby avoiding the need to configure a Layer 3 gateway.
If DC A and DC B belong to the same AS, configure an RR on the edge device. If DC A and DC B do not belong to the same AS, establish an EBGP EVPN peer relationship between edge devices.
Procedure
- Configure BGP EVPN within DC A and DC B to establish VXLAN tunnels. For details, see Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN. There is no need to configure a Layer 3 VXLAN gateway.
- Configure BGP EVPN on Transit Leaf 1 and Transit Leaf 2 to establish a VXLAN tunnel between them. For details, see Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN. There is no need to configure a Layer 3 VXLAN gateway.
- Configure Transit Leaf 1 and Transit Leaf 2 to advertise routes that are re-originated by the EVPN address family to BGP EVPN peers.
Run bgp as-number
The BGP view is displayed.
Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Run peer { group-name | ipv4-address } split-group split-group-name
A split horizon group (SHG) to which BGP EVPN peers (or peer groups) belong is configured.
In Layer 2 interworking scenarios, an SHG must be configured to prevent BUM traffic forwarding from causing loops. On both Transit Leaf 1 and Transit Leaf 2, specify the SHG name for the peer relationship between them, so that the devices within DC A and DC B belong to the default SHG while Transit Leaf 1 and Transit Leaf 2 belong to the specified SHG. In this manner, when a transit leaf node receives BUM traffic, it does not forward the traffic to a device belonging to the same SHG, thereby preventing loops.
Run peer { ipv4-address | group-name } import reoriginate
The function to re-originate routes received from BGP EVPN peers is enabled.
Enable on transit leaf nodes the function to re-originate routes received from BGP EVPN peers within DCs and between the DCs (between transit leaf nodes).
Run peer { ipv4-address | group-name } advertise route-reoriginated evpn mac
The function to advertise re-originated EVPN routes to BGP EVPN peers is enabled.
In Layer 2 interworking scenarios, configure the function to advertise only re-originated MAC routes to BGP EVPN peers. Enable on transit leaf nodes the function to advertise re-originated MAC routes to BGP EVPN peers within DCs and between the DCs (between transit leaf nodes).
Run commit
The configuration is committed.
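On Transit Leaf 1, for example, the configuration toward Transit Leaf 2 might look as follows. All values are hypothetical: AS 100, Transit Leaf 2 peer address 10.2.2.2, and SHG name dci:

bgp 100
 l2vpn-family evpn
  peer 10.2.2.2 split-group dci
  peer 10.2.2.2 import reoriginate
  peer 10.2.2.2 advertise route-reoriginated evpn mac
  quit
 quit
commit

A mirror configuration is performed on Transit Leaf 2, and, as described above, the import reoriginate and advertise route-reoriginated evpn mac settings are also applied to the BGP EVPN peers within each DC.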
Verifying the Configuration of Using Three-Segment VXLAN to Implement DCI
After configuring three-segment VXLAN to implement DCI, verify the configuration, such as EVPN instances and VXLAN tunnel information.
Procedure
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to check BGP EVPN peer information.
- Run the display vxlan peer [ vni vni-id ] command to check ingress replication lists of a VNI or all VNIs.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check VNI information.
- Run the display interface vbdif [ bd-id ] command to check VBDIF interface information and statistics.
- Run the display mac-limit bridge-domain bd-id command to check MAC address limiting configurations of a BD.
- Run the display bgp evpn all routing-table command to check EVPN route information.
Configuring the Static VXLAN Active-Active Scenario
In the scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. This allows carriers to enhance VXLAN access reliability and improve user service stability, so that rapid convergence can be implemented if a fault occurs.
Context
On the network shown in Figure 1-1111, CE1 is dual-homed to PE1 and PE2. PE1 and PE2 use a virtual address as an NVE interface address at the network side, namely, an Anycast VTEP address. In this way, the CPE is aware of only one remote NVE interface. A VTEP address is configured on the CPE to establish a VXLAN tunnel with the Anycast VTEP address so that PE1, PE2, and the CPE can communicate.
The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.
Before an IPv6 network is used to transmit traffic between a CPE and PE, an IPv4 over IPv6 tunnel must be configured between them. To enable a VXLAN tunnel to recurse routes to the IPv4 over IPv6 tunnel, static routes must be configured on the CPE and PE, and the outbound interface of the route destined for the VXLAN tunnel's destination IP address must be set to the IPv4 over IPv6 tunnel interface.
Procedure
- Configure AC-side service access.
- Configure static VXLAN tunnels between the CPE and PEs. For configuration details, see Configuring a VXLAN Tunnel.
- Configure a bypass VXLAN tunnel between PE1 and PE2.
- Configure FRR on the PEs.
Layer 2 communication
- Run the evpn command to enter the EVPN view.
- Run the vlan-extend private enable command to enable routes advertised to a peer to carry the VLAN private extended community attribute.
- Run the vlan-extend redirect enable command to enable the function of redirecting the received routes that carry the VLAN private extended community attribute.
- Run the local-remote frr enable command to enable FRR for MAC routes between the local and remote ends.
- Run the quit command to exit from the EVPN view.
- Run the commit command to commit the configuration.
Layer 3 communication
- Run the bgp as-number command to enter the BGP view.
- Run the ipv4-family vpn-instance vpn-instance-name command to enable the BGP-VPN instance IPv4 address family and enter its view.
- Run the auto-frr command to enable BGP auto FRR.
- Run the peer { ipv4-address | group-name } enable command to enable the function of exchanging EVPN routes with a specified peer or peer group. The IP address is a CE address.
- Run the advertise l2vpn evpn command to enable the VPN instance to advertise EVPN IP prefix routes.
- Run the quit command to exit from the BGP-VPN instance IPv4 address family view.
- Run the quit command to exit from the BGP view.
- Run the commit command to commit the configuration.
- (Optional) Configure a UDP port on the PEs to prevent replicated packets from being received.
- (Optional) Configure a VXLAN over IPsec tunnel between the CPE and PE to enhance the security for packets traversing an insecure network.
For configuration details, see the section Example for Configuring VXLAN over IPsec.
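To illustrate the FRR configuration described in the preceding procedure, a combined sketch on one PE follows. Hypothetical values: AS 100, VPN instance vpn1, and CE peer address 10.1.1.2 (if the CE peer has not yet been defined in the address family, it must first be specified with its AS number, as in the dynamic scenario):

evpn
 vlan-extend private enable
 vlan-extend redirect enable
 local-remote frr enable
 quit
bgp 100
 ipv4-family vpn-instance vpn1
  auto-frr
  peer 10.1.1.2 enable
  advertise l2vpn evpn
  quit
 quit
commit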
Checking the Configuration
After configuring the VXLAN active-active scenario, check information on the VXLAN tunnel, VNI status, and VBDIF. For details, see the section Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN.
Configuring the Dynamic VXLAN Active-Active Scenario
In the scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. This allows carriers to enhance VXLAN access reliability and improve user service stability, so that rapid convergence can be implemented if a fault occurs.
Context
On the network shown in Figure 1-1112, CE1 is dual-homed to PE1 and PE2. PE1 and PE2 use a virtual address as an NVE interface address at the network side, namely, an Anycast VTEP address. In this way, the CPE is aware of only one remote VTEP IP. A VTEP address is configured on the CPE to establish a dynamic VXLAN tunnel with the Anycast VTEP address so that PE1, PE2, and the CPE can communicate.
The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.
Procedure
- Configure AC-side service access.
- Configure dynamic VXLAN tunnels between the CPE and PEs. For configuration details, see the section Configuring a VXLAN Tunnel.
- Configure a bypass VXLAN tunnel between PE1 and PE2.
- Configure FRR on the PEs.
Layer 2 communication
- Run the evpn command to enter the EVPN view.
- Run the vlan-extend private enable command to enable routes advertised to a peer to carry the VLAN private extended community attribute.
- Run the vlan-extend redirect enable command to enable the function of redirecting received routes that carry the VLAN private extended community attribute.
- Run the local-remote frr enable command to enable FRR for MAC routes between the local and remote ends.
- Run the quit command to exit the EVPN view.
- Run the commit command to commit the configuration.
Layer 3 communication
- Run the bgp as-number command to enter the BGP view.
- Run the ipv4-family vpn-instance vpn-instance-name command to enable the BGP-VPN instance IPv4 address family and enter its view.
- Run the auto-frr command to enable BGP auto FRR.
- Run the peer { ipv4-address | group-name } as-number as-number command to specify a peer IP address and the number of the AS where the peer resides.
- Run the advertise l2vpn evpn command to enable the VPN instance to advertise EVPN IP prefix routes.
- Run the quit command to exit from the BGP-VPN instance IPv4 address family view.
- Run the quit command to exit the BGP view.
- Run the commit command to commit the configuration.
- (Optional) Configure a UDP port on the PEs to prevent replicated packets from being received.
- (Optional) Configure a VXLAN over IPsec tunnel between the CPE and PE to enhance the security for packets traversing an insecure network.
For configuration details, see the section Example for Configuring VXLAN over IPsec.
Checking the Configuration
After configuring the VXLAN active-active scenario, check information on the VXLAN tunnel, VNI status, and VBDIF. For details, see the section Verifying the Configuration of VXLAN in Distributed Gateway Mode Using BGP EVPN.
Configuring the Dynamic IPv6 VXLAN Active-Active Scenario
In scenarios where an IPv6-based data center is interconnected with an enterprise site, a CE can be dual-homed to an IPv6 VXLAN to implement rapid convergence if a fault occurs, thereby enhancing access reliability and improving service stability.
Context
On the network shown in Figure 1-1113, CE1 is dual-homed to PE1 and PE2. Both PEs use the same virtual address as an NVE interface address (namely, an Anycast VTEP address) at the network side. In this way, the CPE is aware of only one remote VTEP address. To allow the CPE to communicate with PE1 and PE2, a VTEP address must be configured on the CPE to establish an IPv6 VXLAN tunnel with the Anycast VTEP address.
The packets from the CPE can reach CE1 through either PE1 or PE2. However, when a single-homed CE (CE2 and CE3 in this example) exists on the network, the packets from the CPE to the single-homed CE may need to detour to the other PE after reaching one PE. To ensure PE1-PE2 reachability, a bypass VXLAN tunnel must be established between PE1 and PE2.
Procedure
- Configure AC-side service access.
- Configure an IPv6 VXLAN tunnel between the CPE and each PE using BGP EVPN. For details, see Configuring an IPv6 VXLAN Tunnel.
- Configure a bypass VXLAN tunnel between PE1 and PE2.
- Configure FRR on each PE.
For Layer 2 communication:
- Run the evpn command to enter the EVPN view.
- Run the vlan-extend private enable command to enable routes advertised to a peer to carry the VLAN private extended community attribute.
- Run the vlan-extend redirect enable command to enable the function of redirecting the received routes that carry the VLAN private extended community attribute.
- Run the local-remote frr enable command to enable FRR for MAC routes between the local and remote ends.
- Run the quit command to exit the EVPN view.
- Run the commit command to commit the configuration.
For Layer 3 communication:
- Run the bgp as-number command to enter the BGP view.
- Run the ipv6-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv6 address family view.
- Run the auto-frr command to enable BGP auto FRR.
- Run the peer { ipv6-address | group-name } as-number as-number command to specify a peer IP address and the number of the AS where the peer resides.
- Run the advertise l2vpn evpn command to enable a VPN instance to advertise IP routes to an EVPN instance.
- Run the quit command to exit the BGP-VPN instance IPv6 address family view.
- Run the quit command to exit the BGP view.
- Run the commit command to commit the configuration.
Verifying the Configuration
After configuring a dynamic IPv6 VXLAN active-active scenario, verify the configuration.
- Run the display bridge-domain [ binding-info | [ bd-id [ brief | verbose | binding-info ] ] ] command to check BD configurations.
- Run the display interface nve [ nve-number | main ] command to check NVE interface information.
- Run the display evpn vpn-instance command to check EVPN instance information.
- Run the display bgp evpn peer [ [ ipv6-address ] verbose ] command to check information about BGP EVPN peers.
- Run the display vxlan peer [ vni vni-id ] command to check the ingress replication lists of all VNIs or a specified one.
- Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check IPv6 VXLAN tunnel information.
- Run the display vxlan vni [ vni-id [ verbose ] ] command to check IPv6 VXLAN configurations and the VNI status.
Configuring VXLAN Accessing BRAS
When telco cloud gateways use VXLAN for user access, you need to configure VXLAN accessing BRAS on the device responsible for user access.
Context
On the network shown in Figure 1-1114, telco cloud gateways (DCGW1 and DCGW2) connect to the aggregation device CPE through VXLAN tunnels. VXLAN is also deployed between the CPE and physical UP (pUP). To enable users to access the external network, configure VXLAN accessing BRAS on the pUP.
Creating an L2VE Interface
Configure an L2VE interface on the pUP to terminate VXLAN services and bind the interface to a VE group.
Procedure
- Run system-view
The system view is displayed.
- Run interface virtual-ethernet ve-number or interface global-ve ve-number
A VE or global VE interface is created, and its view is displayed.
- Run ve-group ve-group-id l2-terminate
The VE or global VE interface is configured as an L2VE interface that terminates VXLAN services and bound to a VE group.
A VE group can contain only one L2VE interface and one L3VE interface. The two VE interfaces in a VE group must reside on the same board.
The two global VE interfaces in a VE group can reside on different boards.
- Run commit
The configuration is committed.
Creating an L3VE Interface
Configure an L3VE interface used for BRAS access on the pUP and bind the L3VE interface to a VE group.
Procedure
- Run system-view
The system view is displayed.
- Run interface virtual-ethernet ve-number or interface global-ve ve-number
A VE or global VE interface is created, and its view is displayed.
- Run ve-group ve-group-id l3-access
The VE or global VE interface is configured as an L3VE interface used for BRAS access and bound to a VE group.
A VE group can contain only one L2VE interface and one L3VE interface. The two VE interfaces in a VE group must reside on the same board.
The two global VE interfaces in a VE group can reside on different boards.
- Run commit
The configuration is committed.
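For example (hypothetical values: VE interfaces numbered 1 and 2 on the same board, VE group 1), the paired L2VE and L3VE interfaces described in the preceding two procedures could be created as follows:

system-view
interface virtual-ethernet 1
 ve-group 1 l2-terminate
 quit
interface virtual-ethernet 2
 ve-group 1 l3-access
 quit
commit

The interface numbers follow the device's own numbering scheme; global VE interfaces can be used instead if the two interfaces need to reside on different boards.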
Associating the L2VE Interface with a BD
Associate the L2VE interface with a BD on the pUP, so that VXLAN services can be terminated on the L2VE interface.
Procedure
- Run system-view
The system view is displayed.
- Run interface virtual-ethernet ve-number.subinterface-number or interface global-ve ve-number.subinterface-number
The VE or global VE Layer 2 sub-interface view is displayed.
- Run encapsulation { dot1q [ vid low-pe-vid [ to high-pe-vid ] ] | untag | qinq [ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } ] }
A packet encapsulation type is configured, so that a specific type of interface can transmit data packets of the specified encapsulation type.
- Run rewrite pop { single | double }
The function to remove VLAN tags from received packets is enabled.
For single-tagged packets received by the Layer 2 sub-interface, specify single to enable the sub-interface to remove the VLAN tag from each packet.
If the packet encapsulation type is set to QinQ in the previous step, specify double to enable the sub-interface to remove double VLAN tags from each double-tagged packet received.
- Run bridge-domain bd-id
The Layer 2 sub-interface is associated with a BD, so that the sub-interface can forward packets through the BD.
The BD must have been associated with a VNI in the VXLAN configuration.
- Run commit
The configuration is committed.
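For example, the following sketch (hypothetical values: L2VE sub-interface virtual-ethernet 1.100, VLAN 100, and BD 40, which is assumed to be already associated with a VNI) associates the L2VE sub-interface with the BD:

system-view
interface virtual-ethernet 1.100
 encapsulation dot1q vid 100
 rewrite pop single
 bridge-domain 40
 quit
commit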
Configuring the L3VE Interface as a BAS Interface
Configure the L3VE interface on the pUP as a BAS interface for BRAS access.
Context
- The BAS interface is directly configured on the pUP. This mode applies to scenarios where CU separation is not deployed.
- The BAS interface configurations are delivered by a CP to the pUP. This mode applies to CU separation scenarios.
Procedure
- Directly perform configurations on the pUP. For configuration details, see Configuring PPPoE Access.
- Use a CP to deliver configurations to the pUP. In this case, the UP plane configurations need to be completed on the CP and delivered to the pUP through a southbound channel. For details, see VNE 9000 (vBRAS-CP) Product Documentation > CU Separation Configuration > User Access Configuration > PPPoE Access Configuration.
Configuring NFVI Distributed Gateways (Asymmetric Mode)
In the Network Function Virtualization Infrastructure (NFVI) telco cloud solution, the NFVI distributed gateway function enables mobile phone traffic to pass through the data center network (DCN) and to be processed by the virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE). In addition, traffic can be balanced during internal transmission over the DCN.
Usage Scenario
Huawei's NFVI telco cloud solution incorporates Data Center Interconnect (DCI) and DCN solutions. A large volume of mobile phone traffic enters the DCN and accesses its vUGW and vMSE. After being processed by the vUGW and vMSE, the mobile phone traffic is forwarded over the Internet through the DCN to the destination devices. Equally, response traffic sent over the Internet from the destination devices to the mobile phones also undergoes this process. For this to take place and to ensure that the traffic is balanced within the DCN, you need to deploy the NFVI distributed gateway function on the DCN.
Figure 1-1115 or Figure 1-1116 shows the network of NFVI distributed gateways. DC gateways are the boundary gateways of the DCN and can be used to exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 are connected to virtualized network function (VNF) devices. VNF1 and VNF2 can be deployed as virtualized NEs to implement the vUGW and vMSE functions and are connected to L2GW/L3GW1 and L2GW/L3GW2 through the interface processing unit (IPU).
The VXLAN active-active/quad-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. All DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between L2GW/L3GW1 and L2GW/L3GW2.
The deployment method of the VXLAN quad-active gateway function is similar to that of the VXLAN active-active gateway function. If you want to deploy the VXLAN quad-active gateway function on DC gateways, see Configuring the Dynamic VXLAN Active-Active Scenario or Configuring the Dynamic IPv6 VXLAN Active-Active Scenario.
A VPN BGP peer relationship is set up between a VNF and DCGW so that the VNF can advertise user equipment (UE) routes to the DCGW.
Static VPN routes are configured on L2GW/L3GW1 and L2GW/L3GW2 to connect to the VNFs. The routes' destination IP addresses are the VNFs' IP addresses, and the next hops are the IP addresses of the IPUs.
A BGP EVPN peer relationship is established (full-mesh) between any two of the DCGWs and L2GW/L3GWs. An L2GW/L3GW can flood static routes to the VNFs to other devices through BGP EVPN peer relationships. A DCGW can advertise local loopback routes and default routes to the L2GW/L3GWs through the BGP EVPN peer relationships.
Traffic between a mobile phone and the Internet that is forwarded through a VNF is called north-south traffic, whereas the traffic between VNF1 and VNF2 is called east-west traffic. To balance both of these, you need to configure load balancing on the DCGWs and L2GW/L3GWs.
The NFVI distributed gateway function is supported for both IPv4 and IPv6 services. If a configuration step is not differentiated in terms of IPv4 and IPv6, this step applies to both IPv4 and IPv6 services.
When the NFVI distributed gateway is used, the NetEngine 8100 M, NetEngine 8000E M, or NetEngine 8000 M functions as either a DCGW or an L2GW/L3GW. However, if the NetEngine 8100 M, NetEngine 8000E M, or NetEngine 8000 M is used as an L2GW/L3GW, east-west traffic cannot be balanced.
Pre-configuration Tasks
Before configuring NFVI distributed gateways (asymmetric mode), complete the following tasks:
Configure the static VXLAN active-active scenario, the dynamic VXLAN active-active scenario, or the dynamic IPv6 VXLAN active-active scenario on each DC gateway and L2GW/L3GW.
Complete the configurations described in Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN, or Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN on each L2GW/L3GW.
Configure static routes to VNF1 and VNF2 on each L2GW/L3GW. For configuration details, see Creating IPv4 Static Routes or Creating IPv6 Static Routes.
Configuring an L3VPN Instance on a DCGW
You can configure an L3VPN instance to store and manage received mobile phone routes and VPN routes reachable to VNFs.
Procedure
- Run system-view
The system view is displayed.
- Run ip vpn-instance vpn-instance-name
A VPN instance is created, and the VPN instance view is displayed.
- Run vxlan vni vni-id
A VNI is created and associated with the VPN instance.
- Enter the VPN instance IPv4/IPv6 address family view.
Run ipv4-family
The VPN instance IPv4 address family view is displayed.
Run ipv6-family
The VPN instance IPv6 address family view is displayed.
- Configure an RD for the VPN instance.
Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv6 address family.
- Configure VPN targets for the VPN instance.
Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn
VPN targets used to import routes into and from the remote device's L3VPN instance are configured for the VPN instance IPv4 address family.
Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn
VPN targets used to import routes into and from the remote device's L3VPN instance are configured for the VPN instance IPv6 address family.
When the local device advertises EVPN routes to the remote device, the EVPN routes carry the export VPN target configured using this command. When the local device receives an EVPN route from the remote end, the route can be imported into the routing table of the VPN instance IPv4/IPv6 address family only if the VPN target carried in the EVPN route is included in the import VPN target list of the VPN instance IPv4/IPv6 address family.
- Run quit
Exit from the VPN instance IPv4/IPv6 address family view.
- Run quit
Exit from the VPN instance view.
- Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
The number of VBDIF interfaces to be created is the same as the number of planned BDs.
- Run ip binding vpn-instance vpn-instance-name
The VBDIF interface is bound to the VPN instance.
- (Optional) Run ipv6 enable
IPv6 is enabled on the interface. This step is mandatory if an IPv6 address is planned for the VBDIF interface.
- Configure an IPv4/IPv6 address for the VBDIF interface.
Run ip address ip-address { mask | mask-length }
An IPv4 address is configured for the interface.
Run ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-length }
An IPv6 address is configured for the interface.
- Run vxlan anycast-gateway enable
The distributed gateway function is enabled.
- Configure a DCGW to generate ARP (ND) entries for Layer 2 forwarding based on ARP/ND information in EVPN routes.
Run arp generate-rd-table enable
The DCGW is enabled to generate ARP entries used for Layer 2 forwarding based on ARP information.
Run ipv6 nd generate-rd-table enable
The DCGW is enabled to generate ND entries used for Layer 2 forwarding based on ND information.
- Run commit
The configuration is committed.
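A minimal sketch of this procedure follows. All values are hypothetical: VPN instance vpn1, L3VNI 5000, RD 100:1, VPN target 1:1, BD 20, and gateway address 10.1.20.1/24 for an IPv4 address family:

system-view
ip vpn-instance vpn1
 vxlan vni 5000
 ipv4-family
  route-distinguisher 100:1
  vpn-target 1:1 both evpn
  quit
 quit
interface vbdif 20
 ip binding vpn-instance vpn1
 ip address 10.1.20.1 255.255.255.0
 vxlan anycast-gateway enable
 arp generate-rd-table enable
 quit
commit

For an IPv6 overlay, the ipv6-family view would be used instead, with ipv6 enable, an IPv6 address, and ipv6 nd generate-rd-table enable configured on the VBDIF interface.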
Configuring Route Advertisement on a DC-GW
After route advertisement is configured on a DC-GW, the DC-GW can construct its own forwarding entries based on received EVPN or BGP routes.
Procedure
- Configure EVPN on the DC-GW to advertise default static routes and loopback routes in a VPN instance.
- Configure the DC-GW to establish a VPN BGP peer relationship with a VNF.
- (Optional) Configure the asymmetric mode for IRB routes. If an L2GW/L3GW is configured to advertise IRB(IRBv6) routes to the DC-GW, you need to configure the IRB asymmetric function on the DC-GW.
- Run commit
The configuration is committed.
Configuring Route Advertisement on an L2GW/L3GW
After route advertisement is configured on an L2GW/L3GW, the L2GW/L3GW can construct its own forwarding entries based on received EVPN or BGP routes.
Procedure
- Configure an L2GW/L3GW to generate ARP/ND entries for Layer 2 forwarding based on ARP(ND) information in EVPN routes.
- Configure an L3VPN instance on the L2GW/L3GW to advertise static VPN routes reachable to a VNF to EVPN.
- Configure the L2GW/L3GW to advertise IRB(IRBv6) or ARP(ND) routes to a DC gateway.
- Run commit
The configuration is committed.
Verifying the NFVI Distributed Gateway Configuration
After configuring the NFVI distributed gateway function, verify the configuration.
Procedure
- Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer command on each DCGW to check whether the VPN BGP peer relationships between the DCGW and VNFs are Established.
- Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on each DCGW to check whether the DCGW has received mobile phone routes from the VNF and whether the next hop of the routes is the VNF IP address.
- Run the display ip routing-table vpn-instance vpn-instance-name or display ipv6 routing-table vpn-instance vpn-instance-name command on each DCGW to check the DCGW's VPN routing table. The command output shows the mobile phone routes, with VBDIF interfaces as the outbound interfaces.
Configuring NFVI Distributed Gateways (Symmetric Mode)
In the Network Function Virtualization Infrastructure (NFVI) telco cloud solution, the NFVI distributed gateway function enables mobile phone traffic to traverse the DCN in load-balancing mode and to be processed by the virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE) on the DCN.
Usage Scenario
Huawei's NFVI telco cloud solution incorporates DCI and DCN solutions. A large volume of UE traffic enters the DCN and accesses the vUGW and vMSE on the DCN. After being processed by the vUGW and vMSE, the UE traffic is forwarded to destination devices on the Internet. Similarly, response traffic sent over the Internet from the destination devices to UEs also undergoes this process. To meet the preceding requirements and ensure that the UE traffic is load-balanced within the DCN, you need to deploy the NFVI distributed gateway function on DCN devices.
Figure 1-1117 shows the networking diagram of NFVI distributed gateways. DC gateways are the DCN's border gateways and can be used to exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 connect to virtualized network functions (VNFs). VNF1 and VNF2 can be deployed as virtualized NEs to respectively provide vUGW and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through the interface processing unit (IPU).
This networking combines the distributed gateway function and the VXLAN active-active gateway function:
- The VXLAN active-active gateway function is deployed on DC gateways. Specifically, a bypass VXLAN tunnel is established between DC gateways. Both DC gateways use the same virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
- The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between L2GW/L3GW1 and L2GW/L3GW2.
On the NFVI distributed gateway network, the number of bridge domains (BDs) must be planned according to the number of network segments that the IPUs belong to. For example, if five IPU interfaces correspond to four network segments, four different BDs must be planned. In symmetric mode, you need to configure all BDs and VBDIF interfaces only on L2GWs/L3GWs and bind all VBDIF interfaces to the same L3VPN instance. In symmetric mode, you also need to perform the following configurations for NFVI distributed gateways:
- Establish VPN BGP peer relationships between VNFs and DC gateways, so that VNFs can advertise UE routes to DC gateways.
- Configure VPN static routes on L2GW/L3GW1 and L2GW/L3GW2, or configure L2GWs/L3GWs to establish VPN IGP neighbor relationships with VNFs to obtain VNF routes with next hop addresses being IPU addresses.
- Establish BGP EVPN peer relationships between any two of the DC gateways and L2GWs/L3GWs. L2GWs/L3GWs can then advertise VNF routes to DC gateways and other L2GWs/L3GWs through BGP EVPN peer relationships. DC gateways can advertise the local loopback route and default route as well as obtained UE routes to L2GWs/L3GWs through BGP EVPN peer relationships.
- Traffic forwarded between the UE and Internet through VNFs is called north-south traffic, and traffic forwarded between VNF1 and VNF2 is called east-west traffic. To balance both types of traffic, you need to configure load balancing on DC gateways and L2GWs/L3GWs.
In the NFVI distributed gateway scenario, the NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can function as either a DC gateway or an L2GW/L3GW. However, if one of these devices is used as an L2GW/L3GW, east-west traffic cannot be balanced.
Prerequisites
Before configuring NFVI distributed gateways (symmetric mode), complete the following tasks:
- Configure the static VXLAN active-active gateway function or dynamic VXLAN active-active gateway function, or dynamic IPv6 VXLAN active-active gateway function on each DC gateway and L2GW/L3GW.
- Configure VXLAN in distributed gateway mode using BGP EVPN, or configure IPv6 VXLAN in distributed gateway mode using BGP EVPN on each L2GW/L3GW.
Configuring an L3VPN Instance on a DC Gateway
An L3VPN instance can be configured on a DC gateway to store and manage received UE routes and VPN routes destined for VNFs.
Procedure
- Run system-view
The system view is displayed.
- Run ip vpn-instance vpn-instance-name
A VPN instance is created, and the VPN instance view is displayed.
- Enter the VPN instance IPv4/IPv6 address family view.
Run the ipv4-family command to enter the VPN instance IPv4 address family view.
Run the ipv6-family command to enter the VPN instance IPv6 address family view.
- Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4/IPv6 address family.
- Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-extcommunity ] evpn
VPN targets for route exchange with L3VPN instances on remote devices are configured for the VPN instance IPv4/IPv6 address family.
When the local device advertises EVPN routes to a remote device, the EVPN routes carry the export VPN targets configured using this command. The local device allows the EVPN routes received from remote devices to be imported into the local VPN instance IPv4/IPv6 address family routing table only when the VPN targets carried in these routes match the import VPN targets configured using this command.
- Run commit
The configuration is committed.
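A minimal sketch of this procedure, assuming a VPN instance named vpn1 with a placeholder RD of 100:1 and a placeholder VPN target of 1:1, is as follows:
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit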
Configuring Route Advertisement on a DC Gateway
After route advertisement is configured on a DC gateway, other devices can obtain routes to the DC gateway, and the DC gateway can generate its own forwarding entries based on the received EVPN or BGP routes.
Configuring Route Advertisement on L2GWs/L3GWs
After route advertisement is configured on L2GWs/L3GWs, other devices can obtain routes to L2GWs/L3GWs and L2GWs/L3GWs can generate their own forwarding entries based on the received EVPN or BGP routes.
Procedure
- Use either of the following methods to configure a VPN route destined for a VNF on an L2GW/L3GW:
- Configure a VPN static route destined for the VNF. For details, see IPv4 VPN Static Routes or IPv6 VPN Static Routes.
- Configure L2GWs/L3GWs to establish VPN IGP neighbor relationships with VNFs. For details, see Configuring Basic IPv4 IS-IS Functions, Configuring Basic OSPF Functions, Configuring Basic IPv6 IS-IS Functions, or Configuring Basic OSPFv3 Functions.
In an active-active L2GW/L3GW scenario, a secondary IP address (ip address ip-address sub) needs to be configured for the VBDIF interface on each L2GW/L3GW for the establishment of VPN IGP neighbor relationships with VNFs.
- Configure an L3VPN instance on each L2GW/L3GW to advertise VPN routes destined for VNFs to the EVPN instance.
- Configure L2GWs/L3GWs to advertise IRB or IRBv6 routes to DC gateways.
- Run the l2vpn-family evpn command to enter the BGP EVPN address family view.
- Run the peer { ipv4-address | group-name | ipv6-address } advertise { irb | irbv6 } command to configure the device to advertise IRB or IRBv6 routes to DC gateways.
- Run the quit command to return to the BGP view.
- Run the quit command to return to the system view.
- Run the commit command to commit the configuration.
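A minimal sketch of the IRB advertisement steps, assuming AS number 100 and a DC gateway peer address of 3.3.3.3 (both placeholders), is as follows:
[~L2GW1] bgp 100
[*L2GW1-bgp] l2vpn-family evpn
[*L2GW1-bgp-af-evpn] peer 3.3.3.3 advertise irb
[*L2GW1-bgp-af-evpn] quit
[*L2GW1-bgp] quit
[*L2GW1] commit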
Verifying the Configuration
After configuring NFVI distributed gateways, verify the configuration.
Procedure
- Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer command on each DC gateway to check whether the VPN peer relationships between DC gateways and VNFs are in the Established state.
- Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on each DC gateway to check whether the DC gateway has received UE routes from VNFs and whether the next hop addresses of these routes are VNF addresses.
- Run the display ip routing-table vpn-instance vpn-instance-name or display ipv6 routing-table vpn-instance vpn-instance-name command on each DC gateway to check whether UE routes exist in the VPN routing table and, if so, whether the outbound interfaces of these routes are VXLAN or VXLAN6 tunnel interfaces.
Maintaining VXLAN
This section describes how to clear VXLAN statistics and monitor the VXLAN running status.
Configuring the VXLAN Alarm Function
To learn the VXLAN operating status in time, configure the VXLAN alarm function so that the NMS will be notified of the VXLAN status changes. This facilitates O&M.
Collecting and Checking VXLAN Packet Statistics
To check the network status or locate network faults, you can enable the traffic statistics function to view VXLAN packet statistics.
Procedure
- Enable VXLAN packet statistics collection for a BD.
Run system-view
The system view is displayed.
Run bridge-domain bd-id
A BD is created, and the BD view is displayed.
Run statistic enable
VXLAN packet statistics collection is enabled for the BD.
Run commit
The configuration is committed.
- Enable VXLAN packet statistics collection for a specific VNI.
Run system-view
The system view is displayed.
Run vni vni-id
A VNI is created, and the VNI view is displayed.
Run statistic enable
VXLAN packet statistics collection is enabled.
Run commit
The configuration is committed.
- Enable VNI- and IPv4 VXLAN tunnel-based packet statistics collection.
Run system-view
The system view is displayed.
Run interface nve nve-number
An NVE interface is created, and the NVE interface view is displayed.
Run source ip-address
The IP address of the source VTEP is configured.
Run vni vni-id head-end peer-list ip-address &<1-10>
An ingress replication list is configured for the VNI.
Run vxlan statistics peer peer-ip vni vni-id [ inbound | outbound ] enable
VNI- and VXLAN tunnel-based packet statistics collection is enabled.
Run vxlan statistic l3-mode peer peer-ip vni vni-id inbound enable
Upstream Layer 3 traffic statistics collection by VNI and VXLAN tunnel is enabled.
Run vxlan statistics l3-mode peer peer-ip [ vni vni-id ] outbound enable
VNI- and VXLAN tunnel-based downlink Layer 3 traffic statistics collection is enabled.
Run commit
The configuration is committed.
- Enable VNI- and IPv6 VXLAN tunnel-based packet statistics collection.
- Run the system-view command to enter the system view.
- Run the interface nve nve-number command to create an NVE interface and enter its view.
- Run the vxlan statistics peer destIpv6Addr vni vni-val [ inbound | outbound ] enable command to enable VNI- and IPv6 VXLAN tunnel-based packet statistics collection.
- Run the vxlan statistic l3-mode peer destIpv6Addr vni vni-val inbound enable command to enable VNI- and IPv6 VXLAN tunnel-based Layer 3 uplink traffic statistics collection.
- Run the vxlan statistics l3-mode peer destIpv6Addr [ vni vni-val ] outbound enable command to enable VNI- and VXLAN tunnel-based Layer 3 downlink traffic statistics collection.
- Run the commit command to commit the configuration.
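For example, assuming BD 10 is bound to VNI 5010 and the Nve1 interface's source address and ingress replication list have already been configured as described above, BD-based and IPv4 VXLAN tunnel-based statistics collection might be enabled as follows (the BD ID, VNI, and peer address are placeholders):
[~HUAWEI] bridge-domain 10
[*HUAWEI-bd10] statistic enable
[*HUAWEI-bd10] quit
[*HUAWEI] interface nve 1
[*HUAWEI-Nve1] vxlan statistics peer 4.4.4.4 vni 5010 inbound enable
[*HUAWEI-Nve1] quit
[*HUAWEI] commit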
Follow-up Procedure
- Run the display bridge-domain bd-id statistics command to view VXLAN packet statistics in the BD.
- Run the display vxlan statistics vni vni-id command to view VXLAN packet statistics collected by VNI.
- Run the display vxlan statistics source source-ip peer peer-ip vni vni-id command to check VNI- and VXLAN tunnel-based packet statistics.
- Run the display vxlan statistics source source-ipv6 peer peer-ipv6 vni vni-val command to check VNI- and IPv6 VXLAN tunnel-based packet statistics.
- Run the display vxlan statistics l3-mode source source-ip peer peer-ip local-vni vni-id command to check VNI- and VXLAN tunnel-based Layer 3 uplink traffic statistics.
- Run the display vxlan statistics l3-mode source source-ip peer peer-ip remote-vni vni-id command to check VNI- and VXLAN tunnel-based Layer 3 downlink traffic statistics.
- Run the display vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 local-vni vni-val command to check VNI- and IPv6 VXLAN tunnel-based Layer 3 uplink traffic statistics.
- Run the display vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 remote-vni vni-val command to check VNI- and IPv6 VXLAN tunnel-based Layer 3 downlink traffic statistics.
Clearing VXLAN Packet Statistics
This section describes how to clear VXLAN packet statistics in a BD, VXLAN packet statistics collected per VNI, or per VNI and VXLAN tunnel.
Context
Packet statistics cannot be restored after they are cleared. Exercise caution when running the reset commands.
Procedure
- Run the reset bridge-domain bd-id statistics command in the user view to delete packet statistics in a specified BD.
- Run the reset vxlan statistics vni vni-id command in the user view to delete VXLAN packet statistics collected per VNI.
- Run the reset vxlan statistics source source-ip peer peer-ip vni vni-id command in the user view to delete packet statistics collected per VNI and VXLAN tunnel.
- Run the reset vxlan statistics source source-ipv6 peer peer-ipv6 vni vni-val command in the user view to delete VNI- and IPv6 VXLAN tunnel-based packet statistics.
- Run the reset vxlan statistics source source-ip peer peer-ip local-vni local-vni-id command in the user view to delete uplink VXLAN packet statistics collected based on the local VNI ID.
- Run the reset vxlan statistics source source-ipv6 peer peer-ipv6 local-vni vni-val command in the user view to delete uplink IPv6 VXLAN packet statistics collected based on the local VNI ID.
- Run the reset vxlan statistics source source-ip peer peer-ip remote-vni remote-vni-id command in the user view to delete downstream VXLAN packet statistics collected based on the remote VNI ID.
- Run the reset vxlan statistics source source-ipv6 peer peer-ipv6 remote-vni vni-val command in the user view to delete downlink IPv6 VXLAN packet statistics collected based on the peer VNI ID.
- Run the reset vxlan statistics l3-mode source source-ip peer peer-ip local-vni vni-id command in the user view to delete Layer 3 upstream packet statistics collected per VNI and VXLAN tunnel.
- Run the reset vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 local-vni vni-val command in the user view to delete VNI- and IPv6 VXLAN tunnel-based Layer 3 uplink traffic statistics.
- Run the reset vxlan statistics l3-mode source source-ip peer peer-ip remote-vni vni-id command in the user view to delete Layer 3 downstream packet statistics collected per VNI and VXLAN tunnel.
- Run the reset vxlan statistics l3-mode source source-ipv6 peer peer-ipv6 remote-vni vni-val command in the user view to delete VNI- and IPv6 VXLAN tunnel-based Layer 3 downlink traffic statistics.
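For example, to clear the packet statistics of BD 10 (a placeholder BD ID), run the following command in the user view:
<HUAWEI> reset bridge-domain 10 statistics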
Checking Statistics about MAC Address Entries in a BD
Clearing Statistics about Dynamic MAC Address Entries in a BD
To view dynamic MAC address entries in a BD within a specified period of time, clear existing dynamic MAC address entry information before starting statistics collection to ensure information accuracy.
Configuration Examples for VXLAN
This section describes the typical application scenarios of VXLANs, including networking requirements, configuration roadmap, and data preparation, and provides related configuration files.
Example for Configuring Users on the Same Network Segment to Communicate Through a VXLAN Tunnel
This section provides an example for configuring users on the same network segment to communicate through a VXLAN tunnel.
Networking Requirements
On the network shown in Figure 1-1118, an enterprise has VMs deployed in different data centers. VM1 on Server1 belongs to VLAN10, and VM1 on Server2 belongs to VLAN20. VM1 on Server1 and VM1 on Server2 reside on the same network segment. To allow VM1s in different data centers to communicate with each other, configure a VXLAN tunnel between Device1 and Device3.
Configuration Roadmap
- Configure a routing protocol on Device1, Device2, and Device3 to allow them to communicate at Layer 3.
- Configure a service access point on Device1 and Device3 to differentiate service traffic.
- Configure a VXLAN tunnel on Device1 and Device3 to forward service traffic.
Data Preparation
To complete the configuration, you need the following data:
- VMs' VLAN IDs (10 and 20)
- IP addresses of interfaces connecting devices
- Interior Gateway Protocol (IGP) running between devices (OSPF in this example)
- BD ID (10)
- VNI ID (5010)
Procedure
- Configure a routing protocol.
Assign an IP address to each interface on Device1, Device2, and Device3 according to Figure 1-1118.
# Configure Device1.
<HUAWEI> system-view
[~HUAWEI] sysname Device1
[*HUAWEI] commit
[~Device1] interface loopback 1
[*Device1-LoopBack1] ip address 2.2.2.2 32
[*Device1-LoopBack1] quit
[*Device1] interface gigabitethernet 0/1/1
[*Device1-GigabitEthernet0/1/1] ip address 192.168.1.1 24
[*Device1-GigabitEthernet0/1/1] quit
[*Device1] ospf
[*Device1-ospf-1] area 0
[*Device1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*Device1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
[*Device1-ospf-1-area-0.0.0.0] quit
[*Device1-ospf-1] quit
[*Device1] commit
Repeat these steps for Device2 and Device3. For configuration details, see Configuration Files in this section.
After OSPF is configured, the devices can use OSPF to learn the IP addresses of each other's loopback interfaces and successfully ping each other. The following example shows the command output on Device1 after it pings Device3:
[~Device1] ping 4.4.4.4
PING 4.4.4.4: 56 data bytes, press CTRL_C to break
 Reply from 4.4.4.4: bytes=56 Sequence=1 ttl=254 time=5 ms
 Reply from 4.4.4.4: bytes=56 Sequence=2 ttl=254 time=2 ms
 Reply from 4.4.4.4: bytes=56 Sequence=3 ttl=254 time=2 ms
 Reply from 4.4.4.4: bytes=56 Sequence=4 ttl=254 time=3 ms
 Reply from 4.4.4.4: bytes=56 Sequence=5 ttl=254 time=3 ms
--- 4.4.4.4 ping statistics ---
 5 packet(s) transmitted
 5 packet(s) received
 0.00% packet loss
 round-trip min/avg/max = 2/3/5 ms
- Configure a service access point on Device1 and Device3.
# Configure Device1.
[~Device1] bridge-domain 10
[*Device1-bd10] quit
[*Device1] interface gigabitethernet0/1/2.1 mode l2
[*Device1-GigabitEthernet0/1/2.1] encapsulation dot1q vid 10
[*Device1-GigabitEthernet0/1/2.1] rewrite pop single
[*Device1-GigabitEthernet0/1/2.1] bridge-domain 10
[*Device1-GigabitEthernet0/1/2.1] quit
[*Device1] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
- Configure a VXLAN tunnel on Device1 and Device3.
# Configure Device1.
[~Device1] bridge-domain 10
[~Device1-bd10] vxlan vni 5010
[*Device1-bd10] quit
[*Device1] interface nve 1
[*Device1-Nve1] source 2.2.2.2
[*Device1-Nve1] vni 5010 head-end peer-list 4.4.4.4
[*Device1-Nve1] quit
[*Device1] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
- Verify the configuration.
After completing the configurations, run the display vxlan vni and display vxlan tunnel commands on Device1 and Device3 to check the VNI status and VXLAN tunnel information, respectively. The VNIs are Up on Device1 and Device3. The following example shows the command output on Device1.
[~Device1] display vxlan vni
Number of vxlan vni: 1
VNI            BD-ID            State
---------------------------------------
5010           10               up
[~Device1] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID   Source          Destination     State  Type     Uptime
-------------------------------------------------------------------
4026531842  2.2.2.2         4.4.4.4         up     static   0028h16m
By now, users on the same network segment can communicate with each other through the VXLAN tunnel.
Configuration Files
Device1 configuration file
#
sysname Device1
#
bridge-domain 10
 vxlan vni 5010
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
#
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation dot1q vid 10
 rewrite pop single
 bridge-domain 10
#
interface LoopBack1
 ip address 2.2.2.2 255.255.255.255
#
interface Nve1
 source 2.2.2.2
 vni 5010 head-end peer-list 4.4.4.4
#
ospf 1
 area 0.0.0.0
  network 2.2.2.2 0.0.0.0
  network 192.168.1.0 0.0.0.255
#
return
Device2 configuration file
#
sysname Device2
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.1.2 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
 ip address 192.168.2.1 255.255.255.0
#
interface LoopBack1
 ip address 3.3.3.3 255.255.255.255
#
ospf 1
 area 0.0.0.0
  network 3.3.3.3 0.0.0.0
  network 192.168.1.0 0.0.0.255
  network 192.168.2.0 0.0.0.255
#
return
Device3 configuration file
#
sysname Device3
#
bridge-domain 10
 vxlan vni 5010
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.2.2 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
#
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation dot1q vid 20
 rewrite pop single
 bridge-domain 10
#
interface LoopBack1
 ip address 4.4.4.4 255.255.255.255
#
interface Nve1
 source 4.4.4.4
 vni 5010 head-end peer-list 2.2.2.2
#
ospf 1
 area 0.0.0.0
  network 4.4.4.4 0.0.0.0
  network 192.168.2.0 0.0.0.255
#
return
Example for Configuring Users on Different Network Segments to Communicate Through a VXLAN Layer 3 Gateway
This section provides an example for configuring users on different network segments to communicate through a VXLAN Layer 3 gateway. To achieve this, the default gateway address of the users must be the IP address of the VBDIF interface of the Layer 3 gateway.
Networking Requirements
On the network shown in Figure 1-1119, an enterprise has VMs deployed in different data centers. VM1 on Server1 belongs to VLAN10, and VM1 on Server2 belongs to VLAN20. VM1 on Server1 and VM1 on Server2 reside on different network segments. To allow VM1s in different data centers to communicate with each other, configure a VXLAN tunnel between Device1 and Device2 and one between Device2 and Device3.
Configuration Roadmap
- Configure a routing protocol on Device1, Device2, and Device3 to allow them to communicate at Layer 3.
- Configure a service access point on Device1 and Device3 to differentiate service traffic.
- Configure a VXLAN tunnel on Device1, Device2, and Device3 to forward service traffic.
- Configure Device2 as a VXLAN Layer 3 gateway to allow users on different network segments to communicate.
Data Preparation
To complete the configuration, you need the following data:
- VMs' VLAN IDs (10 and 20)
- IP addresses of interfaces connecting devices
- Interior Gateway Protocol (IGP) running between devices (OSPF in this example)
- BD IDs (10 and 20)
- VNI IDs (5010 and 5020)
Procedure
- Configure a routing protocol.
Assign an IP address to each interface on Device 1, Device 2, and Device 3 according to Figure 1-1119.
# Configure Device1.
<HUAWEI> system-view
[~HUAWEI] sysname Device1
[*HUAWEI] commit
[~Device1] interface loopback 1
[*Device1-LoopBack1] ip address 2.2.2.2 32
[*Device1-LoopBack1] quit
[*Device1] interface gigabitethernet 0/1/1
[*Device1-GigabitEthernet0/1/1] ip address 192.168.1.1 24
[*Device1-GigabitEthernet0/1/1] quit
[*Device1] ospf
[*Device1-ospf-1] area 0
[*Device1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*Device1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
[*Device1-ospf-1-area-0.0.0.0] quit
[*Device1-ospf-1] quit
[*Device1] commit
The configurations of Device2 and Device3 are similar to the configuration of Device1. For configuration details, see Configuration Files in this section.
After OSPF is configured, the devices can use OSPF to learn the IP addresses of each other's loopback interfaces and successfully ping each other. The following example shows the command output on Device1 after it pings Device3:
[~Device1] ping 4.4.4.4
PING 4.4.4.4: 56 data bytes, press CTRL_C to break
 Reply from 4.4.4.4: bytes=56 Sequence=1 ttl=254 time=5 ms
 Reply from 4.4.4.4: bytes=56 Sequence=2 ttl=254 time=2 ms
 Reply from 4.4.4.4: bytes=56 Sequence=3 ttl=254 time=2 ms
 Reply from 4.4.4.4: bytes=56 Sequence=4 ttl=254 time=3 ms
 Reply from 4.4.4.4: bytes=56 Sequence=5 ttl=254 time=3 ms
--- 4.4.4.4 ping statistics ---
 5 packet(s) transmitted
 5 packet(s) received
 0.00% packet loss
 round-trip min/avg/max = 2/3/5 ms
- Configure a service access point on Device1 and Device3.
# Configure Device1.
[~Device1] bridge-domain 10
[*Device1-bd10] quit
[*Device1] interface gigabitethernet0/1/2.1 mode l2
[*Device1-GigabitEthernet0/1/2.1] encapsulation dot1q vid 10
[*Device1-GigabitEthernet0/1/2.1] rewrite pop single
[*Device1-GigabitEthernet0/1/2.1] bridge-domain 10
[*Device1-GigabitEthernet0/1/2.1] quit
[*Device1] commit
The configuration of Device3 is similar to the configuration of Device1. For configuration details, see Configuration Files in this section.
- Configure VXLAN tunnels on Device1, Device2, and Device3.
# Configure Device1.
[~Device1] bridge-domain 10
[*Device1-bd10] vxlan vni 5010
[*Device1-bd10] quit
[*Device1] interface nve 1
[*Device1-Nve1] source 2.2.2.2
[*Device1-Nve1] vni 5010 head-end peer-list 3.3.3.3
[*Device1-Nve1] quit
[*Device1] commit
# Configure Device2.
[~Device2] bridge-domain 10
[*Device2-bd10] vxlan vni 5010
[*Device2-bd10] quit
[*Device2] interface nve 1
[*Device2-Nve1] source 3.3.3.3
[*Device2-Nve1] vni 5010 head-end peer-list 2.2.2.2
[*Device2-Nve1] quit
[~Device2] bridge-domain 20
[*Device2-bd20] vxlan vni 5020
[*Device2-bd20] quit
[*Device2] interface nve 1
[*Device2-Nve1] vni 5020 head-end peer-list 4.4.4.4
[*Device2-Nve1] quit
[*Device2] commit
# Configure Device3.
[~Device3] bridge-domain 20
[*Device3-bd20] vxlan vni 5020
[*Device3-bd20] quit
[*Device3] interface nve 1
[*Device3-Nve1] source 4.4.4.4
[*Device3-Nve1] vni 5020 head-end peer-list 3.3.3.3
[*Device3-Nve1] quit
[*Device3] commit
- Configure Device2 as a VXLAN Layer 3 gateway.
[~Device2] interface vbdif 10
[*Device2-Vbdif10] ip address 192.168.10.10 24
[*Device2-Vbdif10] quit
[*Device2] interface vbdif 20
[*Device2-Vbdif20] ip address 192.168.20.10 24
[*Device2-Vbdif20] quit
[*Device2] commit
- Verify the configuration.
After completing the configurations, run the display vxlan vni and display vxlan tunnel commands on Device1, Device2, and Device3 to check the VNI status and VXLAN tunnel information, respectively. The VNIs are Up on Device1, Device2, and Device3. The following example shows the command output on Device2.
[~Device2] display vxlan vni
Number of vxlan vni: 2
VNI            BD-ID            State
---------------------------------------
5010           10               up
5020           20               up
[~Device2] display vxlan tunnel
Number of Vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
---------------------------------------------------------------------
4026531841  3.3.3.3         2.2.2.2         up     static   0029h30m
4026531842  3.3.3.3         4.4.4.4         up     static   0029h44m
Configure 192.168.10.10/24 as the default gateway IP address of VM1 in VLAN 10 on Server1.
Configure 192.168.20.10/24 as the default gateway IP address of VM1 in VLAN 20 on Server2.
After the configuration is complete, VM1 on different network segments can communicate with each other. In addition, to enable Device1 and Device3 to communicate on the overlay network, configure static routes or an IGP to advertise routes to 192.168.10.0/24 and 192.168.20.0/24 to each other. The next hop is the VBDIF interface address on Device2.
Configuration Files
Device1 configuration file
#
sysname Device1
#
bridge-domain 10
 vxlan vni 5010
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
#
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation dot1q vid 10
 rewrite pop single
 bridge-domain 10
#
interface LoopBack1
 ip address 2.2.2.2 255.255.255.255
#
interface Nve1
 source 2.2.2.2
 vni 5010 head-end peer-list 3.3.3.3
#
ospf 1
 area 0.0.0.0
  network 2.2.2.2 0.0.0.0
  network 192.168.1.0 0.0.0.255
#
return
Device2 configuration file
#
sysname Device2
#
bridge-domain 10
 vxlan vni 5010
#
bridge-domain 20
 vxlan vni 5020
#
interface Vbdif10
 ip address 192.168.10.10 255.255.255.0
#
interface Vbdif20
 ip address 192.168.20.10 255.255.255.0
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.1.2 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
 ip address 192.168.2.1 255.255.255.0
#
interface LoopBack1
 ip address 3.3.3.3 255.255.255.255
#
interface Nve1
 source 3.3.3.3
 vni 5010 head-end peer-list 2.2.2.2
 vni 5020 head-end peer-list 4.4.4.4
#
ospf 1
 area 0.0.0.0
  network 3.3.3.3 0.0.0.0
  network 192.168.1.0 0.0.0.255
  network 192.168.2.0 0.0.0.255
#
return
Device3 configuration file
#
sysname Device3
#
bridge-domain 20
 vxlan vni 5020
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.2.2 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
#
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation dot1q vid 20
 rewrite pop single
 bridge-domain 20
#
interface LoopBack1
 ip address 4.4.4.4 255.255.255.255
#
interface Nve1
 source 4.4.4.4
 vni 5020 head-end peer-list 3.3.3.3
#
ospf 1
 area 0.0.0.0
  network 4.4.4.4 0.0.0.0
  network 192.168.2.0 0.0.0.255
#
return
Example for Configuring VXLAN in Centralized Gateway Mode Using BGP EVPN
This section provides an example for configuring VXLAN in centralized gateway mode for dynamic tunnel establishment so that users on the same network segment or different network segments can communicate.
Networking Requirements
On the network shown in Figure 1-1120, an enterprise has VMs deployed in different areas of a data center. VM 1 on Server 1 belongs to VLAN 10, VM 1 on Server 2 belongs to VLAN 20, and VM 1 on Server 3 belongs to VLAN 30. Server 1 and Server 2 reside on different network segments, whereas Server 2 and Server 3 reside on the same network segment. To allow VM 1s on different servers to communicate with each other, configure VXLAN in centralized gateway mode using BGP EVPN.
Configuration Roadmap
- Configure a routing protocol on Device 1, Device 2, and Device 3 to allow them to communicate at Layer 3.
- Configure a service access point on Device 1 and Device 3 to differentiate service traffic.
- Configure a BGP EVPN peer relationship.
- Configure EVPN instances.
- Configure an ingress replication list.
- Configure Device 2 as a Layer 3 VXLAN gateway.
Data Preparation
To complete the configuration, you need the following data:
- VMs' VLAN IDs (10, 20, and 30)
- IP addresses of interfaces connecting devices
- Interior Gateway Protocol (IGP) running between devices (OSPF in this example)
- BD IDs (10 and 20)
- VNI IDs (5010 and 5020)
- EVPN instances' RDs (11:1, 12:1, 21:1, 23:1, and 31:2) and RTs (1:1 and 2:2)
Procedure
- Configure a routing protocol.
Assign an IP address to each interface on Device 1, Device 2, and Device 3 according to Figure 1-1120.
# Configure Device 1.
<HUAWEI> system-view
[~HUAWEI] sysname Device1
[*HUAWEI] commit
[~Device1] interface loopback 1
[*Device1-LoopBack1] ip address 2.2.2.2 32
[*Device1-LoopBack1] quit
[*Device1] interface gigabitethernet 0/1/1
[*Device1-GigabitEthernet0/1/1] ip address 192.168.1.1 24
[*Device1-GigabitEthernet0/1/1] quit
[*Device1] ospf
[*Device1-ospf-1] area 0
[*Device1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*Device1-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
[*Device1-ospf-1-area-0.0.0.0] quit
[*Device1-ospf-1] quit
[*Device1] commit
The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.
After OSPF is configured, the devices can use OSPF to learn the IP addresses of each other's loopback interfaces and successfully ping each other. The following example shows the command output on Device 1 after it pings Device 3:
[~Device1] ping 4.4.4.4
PING 4.4.4.4: 56 data bytes, press CTRL_C to break
 Reply from 4.4.4.4: bytes=56 Sequence=1 ttl=254 time=5 ms
 Reply from 4.4.4.4: bytes=56 Sequence=2 ttl=254 time=2 ms
 Reply from 4.4.4.4: bytes=56 Sequence=3 ttl=254 time=2 ms
 Reply from 4.4.4.4: bytes=56 Sequence=4 ttl=254 time=3 ms
 Reply from 4.4.4.4: bytes=56 Sequence=5 ttl=254 time=3 ms
--- 4.4.4.4 ping statistics ---
 5 packet(s) transmitted
 5 packet(s) received
 0.00% packet loss
 round-trip min/avg/max = 2/3/5 ms
- Configure a service access point on Device 1 and Device 3.
# Configure Device 1.
[~Device1] bridge-domain 10
[*Device1-bd10] quit
[*Device1] interface gigabitethernet0/1/2.1 mode l2
[*Device1-GigabitEthernet0/1/2.1] encapsulation dot1q vid 10
[*Device1-GigabitEthernet0/1/2.1] rewrite pop single
[*Device1-GigabitEthernet0/1/2.1] bridge-domain 10
[*Device1-GigabitEthernet0/1/2.1] quit
[*Device1] bridge-domain 20
[*Device1-bd20] quit
[*Device1] interface gigabitethernet0/1/3.1 mode l2
[*Device1-GigabitEthernet0/1/3.1] encapsulation dot1q vid 30
[*Device1-GigabitEthernet0/1/3.1] rewrite pop single
[*Device1-GigabitEthernet0/1/3.1] bridge-domain 20
[*Device1-GigabitEthernet0/1/3.1] quit
[*Device1] commit
The configuration of Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.
- Configure a BGP EVPN peer relationship.
# Configure Device 1.
[~Device1] bgp 100
[*Device1-bgp] peer 3.3.3.3 as-number 100
[*Device1-bgp] peer 3.3.3.3 connect-interface LoopBack1
[*Device1-bgp] peer 4.4.4.4 as-number 100
[*Device1-bgp] peer 4.4.4.4 connect-interface LoopBack1
[*Device1-bgp] l2vpn-family evpn
[*Device1-bgp-af-evpn] peer 3.3.3.3 enable
[*Device1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*Device1-bgp-af-evpn] peer 4.4.4.4 enable
[*Device1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*Device1-bgp-af-evpn] quit
[*Device1-bgp] quit
[*Device1] commit
The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.
- Configure an EVPN instance on Device 1, Device 2, and Device 3.
# Configure Device 1.
[~Device1] evpn vpn-instance evrf3 bd-mode
[*Device1-evpn-instance-evrf3] route-distinguisher 11:1
[*Device1-evpn-instance-evrf3] vpn-target 1:1
[*Device1-evpn-instance-evrf3] quit
[*Device1] bridge-domain 10
[*Device1-bd10] vxlan vni 5010 split-horizon-mode
[*Device1-bd10] evpn binding vpn-instance evrf3
[*Device1-bd10] quit
[*Device1] evpn vpn-instance evrf4 bd-mode
[*Device1-evpn-instance-evrf4] route-distinguisher 12:1
[*Device1-evpn-instance-evrf4] vpn-target 2:2
[*Device1-evpn-instance-evrf4] quit
[*Device1] bridge-domain 20
[*Device1-bd20] vxlan vni 5020 split-horizon-mode
[*Device1-bd20] evpn binding vpn-instance evrf4
[*Device1-bd20] quit
[*Device1] commit
The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.
- Configure an ingress replication list.
# Configure Device 1.
[~Device1] interface nve 1
[*Device1-Nve1] source 2.2.2.2
[*Device1-Nve1] vni 5010 head-end peer-list protocol bgp
[*Device1-Nve1] vni 5020 head-end peer-list protocol bgp
[*Device1-Nve1] quit
[*Device1] commit
The configuration of Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.
- Configure Device 2 as a Layer 3 VXLAN gateway.
[~Device2] interface vbdif 10
[*Device2-Vbdif10] ip address 192.168.10.10 24
[*Device2-Vbdif10] quit
[*Device2] interface vbdif 20
[*Device2-Vbdif20] ip address 192.168.20.10 24
[*Device2-Vbdif20] quit
[*Device2] commit
- Verify the configuration.
After completing the configurations, run the display vxlan tunnel and display vxlan vni commands on Device 1, Device 2, and Device 3 to check the VXLAN tunnel and VNI information, respectively. The VNIs are Up. The following example shows the command output on Device 1.
[~Device1] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
-------------------------------------------------------------------
4026531843  2.2.2.2         4.4.4.4         up     dynamic  0035h21m
4026531844  2.2.2.2         3.3.3.3         up     dynamic  0036h10m
[~Device1] display vxlan vni
Number of vxlan vni : 2
VNI            BD-ID            State
---------------------------------------
5010           10               up
5020           20               up
Run the display bgp evpn all routing-table command to check EVPN route information.
[~Device1] display bgp evpn all routing-table
Local AS number : 100
 BGP Local router ID is 192.168.1.1
 Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
               h - history, i - internal, s - suppressed, S - Stale
 Origin : i - IGP, e - EGP, ? - incomplete

 EVPN address family:
 Number of Inclusive Multicast Routes: 5
 Route Distinguisher: 11:1
       Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
 *>    0:32:2.2.2.2                                           0.0.0.0
 Route Distinguisher: 12:1
       Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
 *>    0:32:2.2.2.2                                           0.0.0.0
 Route Distinguisher: 21:1
       Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
 *>i   0:32:3.3.3.3                                           3.3.3.3
 Route Distinguisher: 23:1
       Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
 *>i   0:32:3.3.3.3                                           3.3.3.3
 Route Distinguisher: 31:2
       Network(EthTagId/IpAddrLen/OriginalIp)                 NextHop
 *>i   0:32:4.4.4.4                                           4.4.4.4
VM1s on different servers can communicate. For example, you can ping VM1 of Server 1 from the Layer 3 gateway Device 2.
[~Device2] ping 192.168.10.1
PING 192.168.10.1: 56 data bytes, press CTRL_C to break
 Reply from 192.168.10.1: bytes=56 Sequence=1 ttl=254 time=15 ms
 Reply from 192.168.10.1: bytes=56 Sequence=2 ttl=254 time=5 ms
 Reply from 192.168.10.1: bytes=56 Sequence=3 ttl=254 time=5 ms
 Reply from 192.168.10.1: bytes=56 Sequence=4 ttl=254 time=10 ms
 Reply from 192.168.10.1: bytes=56 Sequence=5 ttl=254 time=10 ms
--- 192.168.10.1 ping statistics ---
 5 packet(s) transmitted
 5 packet(s) received
 0.00% packet loss
 round-trip min/avg/max = 5/10/15 ms
Configuration Files
Device 1 configuration file
#
sysname Device1
#
evpn vpn-instance evrf3 bd-mode
 route-distinguisher 11:1
 vpn-target 1:1 export-extcommunity
 vpn-target 1:1 import-extcommunity
#
bridge-domain 10
 vxlan vni 5010 split-horizon-mode
 evpn binding vpn-instance evrf3
#
evpn vpn-instance evrf4 bd-mode
 route-distinguisher 12:1
 vpn-target 2:2 export-extcommunity
 vpn-target 2:2 import-extcommunity
#
bridge-domain 20
 vxlan vni 5020 split-horizon-mode
 evpn binding vpn-instance evrf4
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation dot1q vid 10
 rewrite pop single
 bridge-domain 10
#
interface GigabitEthernet0/1/3.1 mode l2
 encapsulation dot1q vid 30
 rewrite pop single
 bridge-domain 20
#
interface LoopBack1
 ip address 2.2.2.2 255.255.255.255
#
interface Nve1
 source 2.2.2.2
 vni 5010 head-end peer-list protocol bgp
 vni 5020 head-end peer-list protocol bgp
#
bgp 100
 peer 3.3.3.3 as-number 100
 peer 3.3.3.3 connect-interface LoopBack1
 peer 4.4.4.4 as-number 100
 peer 4.4.4.4 connect-interface LoopBack1
 #
 ipv4-family unicast
  peer 3.3.3.3 enable
  peer 4.4.4.4 enable
 #
 l2vpn-family evpn
  undo policy vpn-target
  peer 3.3.3.3 enable
  peer 3.3.3.3 advertise encap-type vxlan
  peer 4.4.4.4 enable
  peer 4.4.4.4 advertise encap-type vxlan
#
ospf 1
 area 0.0.0.0
  network 2.2.2.2 0.0.0.0
  network 192.168.1.0 0.0.0.255
#
return
Device 2 configuration file
#
sysname Device2
#
evpn vpn-instance evrf3 bd-mode
 route-distinguisher 21:1
 vpn-target 1:1 export-extcommunity
 vpn-target 1:1 import-extcommunity
#
bridge-domain 10
 vxlan vni 5010 split-horizon-mode
 evpn binding vpn-instance evrf3
#
evpn vpn-instance evrf4 bd-mode
 route-distinguisher 23:1
 vpn-target 2:2 export-extcommunity
 vpn-target 2:2 import-extcommunity
#
bridge-domain 20
 vxlan vni 5020 split-horizon-mode
 evpn binding vpn-instance evrf4
#
interface Vbdif10
 ip address 192.168.10.10 255.255.255.0
#
interface Vbdif20
 ip address 192.168.20.10 255.255.255.0
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.1.2 255.255.255.0
#
interface GigabitEthernet0/1/2
 undo shutdown
 ip address 192.168.2.1 255.255.255.0
#
interface LoopBack1
 ip address 3.3.3.3 255.255.255.255
#
interface Nve1
 source 3.3.3.3
 vni 5010 head-end peer-list protocol bgp
 vni 5020 head-end peer-list protocol bgp
#
bgp 100
 peer 2.2.2.2 as-number 100
 peer 2.2.2.2 connect-interface LoopBack1
 peer 4.4.4.4 as-number 100
 peer 4.4.4.4 connect-interface LoopBack1
 #
 ipv4-family unicast
  peer 2.2.2.2 enable
  peer 4.4.4.4 enable
 #
 l2vpn-family evpn
  peer 2.2.2.2 enable
  peer 2.2.2.2 advertise encap-type vxlan
  peer 4.4.4.4 enable
  peer 4.4.4.4 advertise encap-type vxlan
#
ospf 1
 area 0.0.0.0
  network 3.3.3.3 0.0.0.0
  network 192.168.1.0 0.0.0.255
  network 192.168.2.0 0.0.0.255
#
return
Device 3 configuration file
#
sysname Device3
#
evpn vpn-instance evrf4 bd-mode
 route-distinguisher 31:2
 vpn-target 2:2 export-extcommunity
 vpn-target 2:2 import-extcommunity
#
bridge-domain 20
 vxlan vni 5020 split-horizon-mode
 evpn binding vpn-instance evrf4
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.2.2 255.255.255.0
#
interface GigabitEthernet0/1/2.1 mode l2
 encapsulation dot1q vid 20
 rewrite pop single
 bridge-domain 20
#
interface LoopBack1
 ip address 4.4.4.4 255.255.255.255
#
interface Nve1
 source 4.4.4.4
 vni 5020 head-end peer-list protocol bgp
#
bgp 100
 peer 2.2.2.2 as-number 100
 peer 2.2.2.2 connect-interface LoopBack1
 peer 3.3.3.3 as-number 100
 peer 3.3.3.3 connect-interface LoopBack1
 #
 ipv4-family unicast
  peer 2.2.2.2 enable
  peer 3.3.3.3 enable
 #
 l2vpn-family evpn
  undo policy vpn-target
  peer 2.2.2.2 enable
  peer 2.2.2.2 advertise encap-type vxlan
  peer 3.3.3.3 enable
  peer 3.3.3.3 advertise encap-type vxlan
#
ospf 1
 area 0.0.0.0
  network 4.4.4.4 0.0.0.0
  network 192.168.2.0 0.0.0.255
#
return
Example for Configuring VXLAN in Distributed Gateway Mode Using BGP EVPN
This section provides an example for configuring VXLAN in distributed gateway mode using BGP EVPN.
Networking Requirements
Distributed VXLAN gateways can be configured to address problems that occur in legacy centralized VXLAN gateway networking, for example, sub-optimal forwarding paths and the bottleneck imposed by the ARP entry specifications of Layer 3 gateways.
On the network shown in Figure 1-1121, an enterprise has VMs deployed in different data centers. VM 1 on Server 1 belongs to VLAN 10, and VM 1 on Server 2 belongs to VLAN 20. VM 1 on Server 1 and VM 1 on Server 2 reside on different network segments. To allow VM1s in different data centers to communicate with each other, configure distributed VXLAN gateways.
In this example, Interface1 and Interface2 represent GE 0/1/0 and GE 0/1/1, respectively.
Device | Interface | IP Address
---|---|---
Device 1 | GE 0/1/0 | 192.168.3.2/24
Device 1 | GE 0/1/1 | 192.168.2.2/24
Device 1 | Loopback 0 | 1.1.1.1/32
Device 2 | GE 0/1/0 | 192.168.2.1/24
Device 2 | Loopback 0 | 2.2.2.2/32
Device 3 | GE 0/1/0 | 192.168.3.1/24
Device 3 | Loopback 0 | 3.3.3.3/32
Configuration Roadmap
- Configure IGP to run between Device 1 and Device 2 and between Device 1 and Device 3.
- Configure a service access point on Device 2 and Device 3 to differentiate service traffic.
- Specify Device 1 as a BGP EVPN peer for Device 2 and Device 3.
- Specify Device 2 and Device 3 as BGP EVPN peers for Device 1 and configure Device 2 and Device 3 as RR clients.
- Configure VPN and EVPN instances on Device 2 and Device 3.
- Configure an ingress replication list on Device 2 and Device 3.
- Configure Device 2 and Device 3 as Layer 3 VXLAN gateways.
- Configure IRB route advertisement on Device 1, Device 2, and Device 3.
Data Preparation
To complete the configuration, you need the following data.
- VMs' VLAN IDs (10 and 20)
- IP addresses of interfaces connecting devices
- BD IDs (10 and 20)
- VNI IDs (10 and 20)
- VNI ID in VPN instance (5010)
Procedure
- Configure an IGP.
Assign an IP address to each interface on Device 1, Device 2, and Device 3 according to Figure 1-1121.
# Configure Device 1.
<HUAWEI> system-view
[~HUAWEI] sysname Device1
[*HUAWEI] commit
[~Device1] isis 1
[*Device1-isis-1] network-entity 10.0000.0000.0001.00
[*Device1-isis-1] quit
[*Device1] commit
[~Device1] interface loopback 0
[*Device1-LoopBack0] ip address 1.1.1.1 32
[*Device1-LoopBack0] isis enable 1
[*Device1-LoopBack0] quit
[*Device1] interface GigabitEthernet0/1/0
[*Device1-GigabitEthernet0/1/0] ip address 192.168.3.2 24
[*Device1-GigabitEthernet0/1/0] isis enable 1
[*Device1-GigabitEthernet0/1/0] quit
[*Device1] interface GigabitEthernet0/1/1
[*Device1-GigabitEthernet0/1/1] ip address 192.168.2.2 24
[*Device1-GigabitEthernet0/1/1] isis enable 1
[*Device1-GigabitEthernet0/1/1] quit
[*Device1] commit
The configuration of Device 2 and Device 3 is similar to the configuration of Device 1. For configuration details, see Configuration Files in this section.
- Configure a service access point on Device 2 and Device 3.
# Configure Device 2.
[~Device2] bridge-domain 10
[*Device2-bd10] quit
[*Device2] interface GigabitEthernet0/1/1.1 mode l2
[*Device2-GigabitEthernet0/1/1.1] encapsulation dot1q vid 10
[*Device2-GigabitEthernet0/1/1.1] rewrite pop single
[*Device2-GigabitEthernet0/1/1.1] bridge-domain 10
[*Device2-GigabitEthernet0/1/1.1] quit
[*Device2] commit
The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
- Specify Device 1 as a BGP EVPN peer for Device 2 and Device 3.
# Specify Device 1 as a BGP EVPN peer for Device 2.
[~Device2] bgp 100
[*Device2-bgp] peer 1.1.1.1 as-number 100
[*Device2-bgp] peer 1.1.1.1 connect-interface LoopBack0
[*Device2-bgp] l2vpn-family evpn
[*Device2-bgp-af-evpn] policy vpn-target
[*Device2-bgp-af-evpn] peer 1.1.1.1 enable
[*Device2-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
[*Device2-bgp-af-evpn] quit
[*Device2-bgp] quit
[*Device2] commit
The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
- Specify Device 2 and Device 3 as BGP EVPN peers for Device 1 and configure them as RR clients.
# Specify BGP EVPN peers for Device 1.
[~Device1] bgp 100
[*Device1-bgp] peer 2.2.2.2 as-number 100
[*Device1-bgp] peer 2.2.2.2 connect-interface LoopBack0
[*Device1-bgp] peer 3.3.3.3 as-number 100
[*Device1-bgp] peer 3.3.3.3 connect-interface LoopBack0
[*Device1-bgp] l2vpn-family evpn
[*Device1-bgp-af-evpn] peer 2.2.2.2 enable
[*Device1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*Device1-bgp-af-evpn] peer 2.2.2.2 reflect-client
[*Device1-bgp-af-evpn] peer 3.3.3.3 enable
[*Device1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*Device1-bgp-af-evpn] peer 3.3.3.3 reflect-client
[*Device1-bgp-af-evpn] undo policy vpn-target
[*Device1-bgp-af-evpn] quit
[*Device1-bgp] quit
[*Device1] commit
- Configure VPN and EVPN instances on Device 2 and Device 3.
# Configure Device 2.
[~Device2] ip vpn-instance vpn1
[*Device2-vpn-instance-vpn1] vxlan vni 5010
[*Device2-vpn-instance-vpn1] ipv4-family
[*Device2-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*Device2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*Device2-vpn-instance-vpn1-af-ipv4] quit
[*Device2-vpn-instance-vpn1] quit
[*Device2] evpn vpn-instance evrf3 bd-mode
[*Device2-evpn-instance-evrf3] route-distinguisher 10:1
[*Device2-evpn-instance-evrf3] vpn-target 11:1
[*Device2-evpn-instance-evrf3] quit
[*Device2] bridge-domain 10
[*Device2-bd10] vxlan vni 10 split-horizon-mode
[*Device2-bd10] evpn binding vpn-instance evrf3
[*Device2-bd10] quit
[*Device2] commit
The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
- Configure an ingress replication list on Device 2 and Device 3.
# Configure an ingress replication list on Device 2.
[~Device2] interface nve 1
[*Device2-Nve1] source 2.2.2.2
[*Device2-Nve1] vni 10 head-end peer-list protocol bgp
[*Device2-Nve1] quit
[*Device2] commit
The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
- Configure Device 2 and Device 3 as Layer 3 VXLAN gateways.
# Configure Device 2.
[~Device2] interface Vbdif10
[*Device2-Vbdif10] ip binding vpn-instance vpn1
[*Device2-Vbdif10] ip address 10.1.1.1 255.255.255.0
[*Device2-Vbdif10] vxlan anycast-gateway enable
[*Device2-Vbdif10] arp collect host enable
[*Device2-Vbdif10] quit
[*Device2] commit
The configuration of Device 3 is similar to the configuration of Device 2. Note that the IP addresses of VBDIF interfaces on Device 2 and Device 3 must belong to different network segments. For configuration details, see Configuration Files in this section.
- Configure IRB route advertisement on Device 1, Device 2, and Device 3.
# Configure Device 1.
[~Device1] bgp 100
[~Device1-bgp] l2vpn-family evpn
[~Device1-bgp-af-evpn] peer 2.2.2.2 advertise irb
[*Device1-bgp-af-evpn] peer 3.3.3.3 advertise irb
[*Device1-bgp-af-evpn] quit
[*Device1-bgp] quit
[*Device1] commit
# Configure Device 2.
[~Device2] bgp 100
[~Device2-bgp] l2vpn-family evpn
[~Device2-bgp-af-evpn] peer 1.1.1.1 advertise irb
[*Device2-bgp-af-evpn] quit
[*Device2-bgp] quit
[*Device2] commit
The configuration of Device 3 is similar to the configuration of Device 2. For configuration details, see Configuration Files in this section.
- Verify the configuration.
After completing the configurations, run the display vxlan tunnel command on Device 2 and Device 3 to check VXLAN tunnel information. The following example uses the command output on Device 2.
[*Device2] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID   Source          Destination     State  Type     Uptime
--------------------------------------------------------------------
4026531841  2.2.2.2         3.3.3.3         up     dynamic  0026h29m
Run the display bgp evpn all routing-table command to check EVPN route information.
[*Device2] display bgp evpn all routing-table
 Local AS number : 100
 BGP Local router ID is 2.2.2.2
 Status codes: * - valid, > - best, d - damped, x - best external, a - add path,
               h - history, i - internal, s - suppressed, S - Stale
 Origin : i - IGP, e - EGP, ? - incomplete

 EVPN address family:
 Number of Mac Routes: 2
 Route Distinguisher: 10:1
       Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)        NextHop
 *>    0:48:00e0-fc00-0002:0:0.0.0.0                                0.0.0.0
 Route Distinguisher: 20:1
       Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)        NextHop
 *>i   0:48:00e0-fc00-0003:0:0.0.0.0                                3.3.3.3

 EVPN address family:
 Number of Inclusive Multicast Routes: 2
 Route Distinguisher: 10:1
       Network(EthTagId/IpAddrLen/OriginalIp)                       NextHop
 *>    0:32:2.2.2.2                                                 0.0.0.0
 Route Distinguisher: 20:1
       Network(EthTagId/IpAddrLen/OriginalIp)                       NextHop
 *>i   0:32:3.3.3.3                                                 3.3.3.3
VM1s on different servers can communicate. You can ping VM1 of Server 2 from the distributed gateway Device 2.
[~Device2] ping -vpn-instance vpn1 10.2.1.10
PING 10.2.1.10: 300 data bytes, press CTRL_C to break
 Reply from 10.2.1.10: bytes=300 Sequence=1 ttl=254 time=30 ms
 Reply from 10.2.1.10: bytes=300 Sequence=2 ttl=254 time=30 ms
 Reply from 10.2.1.10: bytes=300 Sequence=3 ttl=254 time=30 ms
 Reply from 10.2.1.10: bytes=300 Sequence=4 ttl=254 time=30 ms
 Reply from 10.2.1.10: bytes=300 Sequence=5 ttl=254 time=30 ms
--- 10.2.1.10 ping statistics ---
 5 packet(s) transmitted
 5 packet(s) received
 0.00% packet loss
 round-trip min/avg/max = 30/30/30 ms
Configuration Files
Device 1 configuration file
#
sysname Device1
#
isis 1
 network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 192.168.3.2 255.255.255.0
 isis enable 1
#
interface GigabitEthernet0/1/1
 undo shutdown
 ip address 192.168.2.2 255.255.255.0
 isis enable 1
#
interface LoopBack0
 ip address 1.1.1.1 255.255.255.255
 isis enable 1
#
bgp 100
 peer 2.2.2.2 as-number 100
 peer 2.2.2.2 connect-interface LoopBack0
 peer 3.3.3.3 as-number 100
 peer 3.3.3.3 connect-interface LoopBack0
 #
 l2vpn-family evpn
  undo policy vpn-target
  peer 2.2.2.2 enable
  peer 2.2.2.2 advertise encap-type vxlan
  peer 2.2.2.2 advertise irb
  peer 2.2.2.2 reflect-client
  peer 3.3.3.3 enable
  peer 3.3.3.3 advertise encap-type vxlan
  peer 3.3.3.3 advertise irb
  peer 3.3.3.3 reflect-client
#
return
Device 2 configuration file
#
sysname Device2
#
isis 1
 network-entity 10.0000.0000.0002.00
#
ip vpn-instance vpn1
 ipv4-family
  route-distinguisher 11:11
  apply-label per-instance
  vpn-target 11:1 export-extcommunity evpn
  vpn-target 11:1 import-extcommunity evpn
 vxlan vni 5010
#
evpn vpn-instance evrf3 bd-mode
 route-distinguisher 10:1
 vpn-target 11:1 export-extcommunity
 vpn-target 11:1 import-extcommunity
#
bridge-domain 10
 vxlan vni 10 split-horizon-mode
 evpn binding vpn-instance evrf3
#
interface Vbdif10
 ip binding vpn-instance vpn1
 ip address 10.1.1.1 255.255.255.0
 arp collect host enable
 vxlan anycast-gateway enable
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 192.168.2.1 255.255.255.0
 isis enable 1
#
interface GigabitEthernet0/1/1.1 mode l2
 encapsulation dot1q vid 10
 rewrite pop single
 bridge-domain 10
#
interface LoopBack0
 ip address 2.2.2.2 255.255.255.255
 isis enable 1
#
interface Nve1
 source 2.2.2.2
 vni 10 head-end peer-list protocol bgp
#
bgp 100
 peer 1.1.1.1 as-number 100
 peer 1.1.1.1 connect-interface LoopBack0
 #
 l2vpn-family evpn
  policy vpn-target
  peer 1.1.1.1 enable
  peer 1.1.1.1 advertise encap-type vxlan
  peer 1.1.1.1 advertise irb
#
return
Device 3 configuration file
#
sysname Device3
#
isis 1
 network-entity 10.0000.0000.0003.00
#
ip vpn-instance vpn1
 ipv4-family
  route-distinguisher 22:22
  apply-label per-instance
  vpn-target 11:1 export-extcommunity evpn
  vpn-target 11:1 import-extcommunity evpn
 vxlan vni 5010
#
evpn vpn-instance evrf3 bd-mode
 route-distinguisher 20:1
 vpn-target 11:1 export-extcommunity
 vpn-target 11:1 import-extcommunity
#
bridge-domain 20
 vxlan vni 20 split-horizon-mode
 evpn binding vpn-instance evrf3
#
interface Vbdif20
 ip binding vpn-instance vpn1
 ip address 10.2.1.1 255.255.255.0
 arp collect host enable
 vxlan anycast-gateway enable
#
interface GigabitEthernet0/1/0
 undo shutdown
 ip address 192.168.3.1 255.255.255.0
 isis enable 1
#
interface GigabitEthernet0/1/1.1 mode l2
 encapsulation dot1q vid 20
 rewrite pop single
 bridge-domain 20
#
interface LoopBack0
 ip address 3.3.3.3 255.255.255.255
 isis enable 1
#
interface Nve1
 source 3.3.3.3
 vni 20 head-end peer-list protocol bgp
#
bgp 100
 peer 1.1.1.1 as-number 100
 peer 1.1.1.1 connect-interface LoopBack0
 #
 l2vpn-family evpn
  policy vpn-target
  peer 1.1.1.1 enable
  peer 1.1.1.1 advertise encap-type vxlan
  peer 1.1.1.1 advertise irb
#
return
Example for Configuring IPv6 VXLAN in Distributed Gateway Mode Using BGP EVPN
This section provides an example for deploying IPv6 VXLAN in distributed gateway mode using BGP EVPN.
Networking Requirements
In IPv6 VXLAN, distributed gateways can be configured to address problems that occur in centralized gateway networking. Such problems include sub-optimal forwarding paths and bottlenecks on Layer 3 gateways in terms of ARP or ND entry specifications.
As shown in Figure 1-1122, an enterprise deploys IPv4 VMs in different areas of an IPv6 DC. IPv4 VM1 on Server1 belongs to VLAN 10, and IPv4 VM1 on Server2 belongs to VLAN 20. The two VMs reside on different network segments. IPv6 VXLAN in distributed gateway mode is required for communication between IPv4 VM1s on different servers.
Interface1 and Interface2 in this example represent GigabitEthernet 0/1/0 and GigabitEthernet 0/1/1, respectively.
Device | Interface | IP Address and Mask
---|---|---
Device1 | GigabitEthernet 0/1/0 | 2001:DB8:3::2/64
Device1 | GigabitEthernet 0/1/1 | 2001:DB8:2::2/64
Device1 | LoopBack0 | 2001:DB8:11::1/128
Device2 | GigabitEthernet 0/1/0 | 2001:DB8:2::1/64
Device2 | LoopBack0 | 2001:DB8:22::2/128
Device3 | GigabitEthernet 0/1/0 | 2001:DB8:3::1/64
Device3 | LoopBack0 | 2001:DB8:33::3/128
Configuration Roadmap
- Configure OSPFv3 to run between Device1 and Device2 and between Device1 and Device3.
- Configure a service access point on Device2 and Device3 to differentiate service traffic.
- Configure Device2 and Device3 to establish BGP EVPN peer relationships with Device1.
- Configure Device1 to establish BGP EVPN peer relationships with Device2 and Device3. Then, configure Device1 as the RR.
- Configure a VPN instance and an EVPN instance on Device2 and Device3.
- Enable ingress replication on Device2 and Device3.
- Configure an IPv6 VXLAN Layer 3 gateway on Device2 and Device3, and configure an IPv4 address for the gateway interface.
- Configure BGP to advertise IRB routes between Device1 and Device2 and between Device1 and Device3.
Data Preparation
To complete the configuration, you need the following data:
- Router IDs of Device1, Device2, and Device3 used by OSPFv3 (1.1.1.1, 2.2.2.2, and 3.3.3.3)
- VM1 VLAN IDs (10 and 20)
- IPv6 addresses of interconnection interfaces between network devices and IPv4 address of the VBDIF interface that functions as the Layer 3 gateway interface
- BD IDs (10 and 20)
- VNI IDs (10 and 20)
- VNI ID in the VPN instance (100)
Procedure
- Assign an IPv6 address to each interface.
Assign an IPv6 address to each interface on Device1, Device2, and Device3 according to Figure 1-1122. For configuration details, see Configuration Files in this section.
- Configure OSPFv3.
# Configure Device1.
<HUAWEI> system-view
[~HUAWEI] sysname Device1
[*HUAWEI] commit
[~Device1] ospfv3 1
[*Device1-ospfv3-1] router-id 1.1.1.1
[*Device1-ospfv3-1] area 0.0.0.0
[*Device1-ospfv3-1-area-0.0.0.0] quit
[*Device1-ospfv3-1] quit
[*Device1] commit
[~Device1] interface loopback 0
[*Device1-LoopBack0] ospfv3 1 area 0.0.0.0
[*Device1-LoopBack0] quit
[*Device1] interface GigabitEthernet0/1/0
[*Device1-GigabitEthernet0/1/0] ospfv3 1 area 0.0.0.0
[*Device1-GigabitEthernet0/1/0] quit
[*Device1] interface GigabitEthernet0/1/1
[*Device1-GigabitEthernet0/1/1] ospfv3 1 area 0.0.0.0
[*Device1-GigabitEthernet0/1/1] quit
[*Device1] commit
The configuration of Device2 and Device3 is similar to the configuration of Device1. For configuration details, see Configuration Files in this section.
- Configure a service access point on Device2 and Device3.
# Configure Device2.
[~Device2] bridge-domain 10
[*Device2-bd10] quit
[*Device2] interface GigabitEthernet0/1/1.1 mode l2
[*Device2-GigabitEthernet0/1/1.1] encapsulation dot1q vid 10
[*Device2-GigabitEthernet0/1/1.1] rewrite pop single
[*Device2-GigabitEthernet0/1/1.1] bridge-domain 10
[*Device2-GigabitEthernet0/1/1.1] quit
[*Device2] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
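For reference, a minimal sketch of the equivalent service access point on Device3, derived from the Device3 configuration file in this section (Device3 uses BD 20 and VLAN 20):
[~Device3] bridge-domain 20
[*Device3-bd20] quit
[*Device3] interface GigabitEthernet0/1/1.1 mode l2
[*Device3-GigabitEthernet0/1/1.1] encapsulation dot1q vid 20
[*Device3-GigabitEthernet0/1/1.1] rewrite pop single
[*Device3-GigabitEthernet0/1/1.1] bridge-domain 20
[*Device3-GigabitEthernet0/1/1.1] quit
[*Device3] commit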
- Configure Device2 and Device3 to establish BGP EVPN peer relationships with Device1.
# Configure a BGP EVPN peer relationship on Device2.
[~Device2] bgp 100
[*Device2-bgp] peer 2001:DB8:11::1 as-number 100
[*Device2-bgp] peer 2001:DB8:11::1 connect-interface LoopBack0
[*Device2-bgp] l2vpn-family evpn
[*Device2-bgp-af-evpn] policy vpn-target
[*Device2-bgp-af-evpn] peer 2001:DB8:11::1 enable
[*Device2-bgp-af-evpn] peer 2001:DB8:11::1 advertise encap-type vxlan
[*Device2-bgp-af-evpn] quit
[*Device2-bgp] quit
[*Device2] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
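For reference, a minimal sketch of the corresponding peer relationship on Device3, derived from the Device3 configuration file in this section:
[~Device3] bgp 100
[*Device3-bgp] peer 2001:DB8:11::1 as-number 100
[*Device3-bgp] peer 2001:DB8:11::1 connect-interface LoopBack0
[*Device3-bgp] l2vpn-family evpn
[*Device3-bgp-af-evpn] policy vpn-target
[*Device3-bgp-af-evpn] peer 2001:DB8:11::1 enable
[*Device3-bgp-af-evpn] peer 2001:DB8:11::1 advertise encap-type vxlan
[*Device3-bgp-af-evpn] quit
[*Device3-bgp] quit
[*Device3] commit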
- Configure Device1 to establish BGP EVPN peer relationships with Device2 and Device3. Then configure Device1 as the RR and Device2 and Device3 as the RR clients.
# Configure Device1.
[~Device1] bgp 100
[*Device1-bgp] peer 2001:DB8:22::2 as-number 100
[*Device1-bgp] peer 2001:DB8:22::2 connect-interface LoopBack0
[*Device1-bgp] peer 2001:DB8:33::3 as-number 100
[*Device1-bgp] peer 2001:DB8:33::3 connect-interface LoopBack0
[*Device1-bgp] l2vpn-family evpn
[*Device1-bgp-af-evpn] peer 2001:DB8:22::2 enable
[*Device1-bgp-af-evpn] peer 2001:DB8:22::2 advertise encap-type vxlan
[*Device1-bgp-af-evpn] peer 2001:DB8:22::2 reflect-client
[*Device1-bgp-af-evpn] peer 2001:DB8:33::3 enable
[*Device1-bgp-af-evpn] peer 2001:DB8:33::3 advertise encap-type vxlan
[*Device1-bgp-af-evpn] peer 2001:DB8:33::3 reflect-client
[*Device1-bgp-af-evpn] undo policy vpn-target
[*Device1-bgp-af-evpn] quit
[*Device1-bgp] quit
[*Device1] commit
- Configure a VPN instance and an EVPN instance on Device2 and Device3.
# Configure Device2.
[~Device2] ip vpn-instance vpn1
[*Device2-vpn-instance-vpn1] vxlan vni 100
[*Device2-vpn-instance-vpn1] ipv4-family
[*Device2-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*Device2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*Device2-vpn-instance-vpn1-af-ipv4] quit
[*Device2-vpn-instance-vpn1] quit
[*Device2] evpn vpn-instance evrf1 bd-mode
[*Device2-evpn-instance-evrf1] route-distinguisher 10:1
[*Device2-evpn-instance-evrf1] vpn-target 11:1
[*Device2-evpn-instance-evrf1] quit
[*Device2] bridge-domain 10
[*Device2-bd10] vxlan vni 10 split-horizon-mode
[*Device2-bd10] evpn binding vpn-instance evrf1
[*Device2-bd10] quit
[*Device2] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
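For reference, a minimal sketch of the Device3 instances, derived from the Device3 configuration file in this section (Device3 uses RD 22:22 for the VPN instance, RD 20:1 for the EVPN instance, and BD 20 with VNI 20):
[~Device3] ip vpn-instance vpn1
[*Device3-vpn-instance-vpn1] vxlan vni 100
[*Device3-vpn-instance-vpn1] ipv4-family
[*Device3-vpn-instance-vpn1-af-ipv4] route-distinguisher 22:22
[*Device3-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*Device3-vpn-instance-vpn1-af-ipv4] quit
[*Device3-vpn-instance-vpn1] quit
[*Device3] evpn vpn-instance evrf1 bd-mode
[*Device3-evpn-instance-evrf1] route-distinguisher 20:1
[*Device3-evpn-instance-evrf1] vpn-target 11:1
[*Device3-evpn-instance-evrf1] quit
[*Device3] bridge-domain 20
[*Device3-bd20] vxlan vni 20 split-horizon-mode
[*Device3-bd20] evpn binding vpn-instance evrf1
[*Device3-bd20] quit
[*Device3] commit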
- Enable ingress replication on Device2 and Device3.
# Enable ingress replication on Device2.
[~Device2] interface nve 1
[*Device2-Nve1] source 2001:DB8:22::2
[*Device2-Nve1] vni 10 head-end peer-list protocol bgp
[*Device2-Nve1] quit
[*Device2] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
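For reference, a minimal sketch of the Device3 configuration, derived from the Device3 configuration file in this section (Device3 uses 2001:DB8:33::3 as the NVE source address and VNI 20):
[~Device3] interface nve 1
[*Device3-Nve1] source 2001:DB8:33::3
[*Device3-Nve1] vni 20 head-end peer-list protocol bgp
[*Device3-Nve1] quit
[*Device3] commit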
- Configure Device2 and Device3 as Layer 3 VXLAN gateways.
# Configure Device2.
[~Device2] interface Vbdif10
[*Device2-Vbdif10] ip binding vpn-instance vpn1
[*Device2-Vbdif10] ip address 10.1.1.1 255.255.255.0
[*Device2-Vbdif10] vxlan anycast-gateway enable
[*Device2-Vbdif10] arp collect host enable
[*Device2-Vbdif10] quit
[*Device2] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
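For reference, a minimal sketch of the Device3 gateway configuration, derived from the Device3 configuration file in this section (Device3 uses VBDIF20 with the IPv4 address 10.2.1.1/24):
[~Device3] interface Vbdif20
[*Device3-Vbdif20] ip binding vpn-instance vpn1
[*Device3-Vbdif20] ip address 10.2.1.1 255.255.255.0
[*Device3-Vbdif20] vxlan anycast-gateway enable
[*Device3-Vbdif20] arp collect host enable
[*Device3-Vbdif20] quit
[*Device3] commit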
- Configure BGP to advertise IRB routes between Device1 and Device2 and between Device1 and Device3.
# Configure Device1.
[~Device1] bgp 100
[~Device1-bgp] l2vpn-family evpn
[~Device1-bgp-af-evpn] peer 2001:DB8:22::2 advertise irb
[*Device1-bgp-af-evpn] peer 2001:DB8:33::3 advertise irb
[*Device1-bgp-af-evpn] quit
[*Device1-bgp] quit
[*Device1] commit
# Configure Device2.
[~Device2] bgp 100
[~Device2-bgp] l2vpn-family evpn
[~Device2-bgp-af-evpn] peer 2001:DB8:11::1 advertise irb
[*Device2-bgp-af-evpn] quit
[*Device2-bgp] quit
[*Device2] commit
Repeat these steps for Device3. For configuration details, see Configuration Files in this section.
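For reference, a minimal sketch of the corresponding configuration on Device3, derived from the Device3 configuration file in this section:
[~Device3] bgp 100
[~Device3-bgp] l2vpn-family evpn
[~Device3-bgp-af-evpn] peer 2001:DB8:11::1 advertise irb
[*Device3-bgp-af-evpn] quit
[*Device3-bgp] quit
[*Device3] commit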
- Verify the configuration.
After completing the configurations, run the display vxlan tunnel command on Device2 and Device3 to check VXLAN tunnel information. The following example uses the command output on Device2.
[*Device2] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID   Source            Destination       State  Type     Uptime
--------------------------------------------------------------------
4026531879  2001:DB8:22::2    2001:DB8:33::3    up     dynamic  00:44:18
VM1 on Server1 and VM1 on Server2 can now communicate. For example, VM1 on Server2 (10.2.1.10) can be pinged from the distributed gateway Device2.
[~Device2] ping -vpn-instance vpn1 10.2.1.10
  PING 10.2.1.10: 300 data bytes, press CTRL_C to break
    Reply from 10.2.1.10: bytes=300 Sequence=1 ttl=254 time=30 ms
    Reply from 10.2.1.10: bytes=300 Sequence=2 ttl=254 time=30 ms
    Reply from 10.2.1.10: bytes=300 Sequence=3 ttl=254 time=30 ms
    Reply from 10.2.1.10: bytes=300 Sequence=4 ttl=254 time=30 ms
    Reply from 10.2.1.10: bytes=300 Sequence=5 ttl=254 time=30 ms
  --- 10.2.1.10 ping statistics ---
    5 packet(s) transmitted
    5 packet(s) received
    0.00% packet loss
    round-trip min/avg/max = 30/30/30 ms
Configuration Files
Device1 configuration file
# sysname Device1 # ospfv3 1 router-id 1.1.1.1 area 0.0.0.0 # interface GigabitEthernet0/1/0 undo shutdown ipv6 enable ipv6 address 2001:DB8:3::2/64 ospfv3 1 area 0.0.0.0 # interface GigabitEthernet0/1/1 undo shutdown ipv6 enable ipv6 address 2001:DB8:2::2/64 ospfv3 1 area 0.0.0.0 # interface LoopBack0 ipv6 enable ipv6 address 2001:DB8:11::1/128 ospfv3 1 area 0.0.0.0 # bgp 100 peer 2001:DB8:22::2 as-number 100 peer 2001:DB8:22::2 connect-interface LoopBack0 peer 2001:DB8:33::3 as-number 100 peer 2001:DB8:33::3 connect-interface LoopBack0 # l2vpn-family evpn undo policy vpn-target peer 2001:DB8:22::2 enable peer 2001:DB8:22::2 advertise encap-type vxlan peer 2001:DB8:22::2 advertise irb peer 2001:DB8:22::2 reflect-client peer 2001:DB8:33::3 enable peer 2001:DB8:33::3 advertise encap-type vxlan peer 2001:DB8:33::3 advertise irb peer 2001:DB8:33::3 reflect-client # return
Device2 configuration file
# sysname Device2 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 11:11 apply-label per-instance vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 100 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # ospfv3 1 router-id 2.2.2.2 area 0.0.0.0 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.1.1 255.255.255.0 arp collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ipv6 enable ipv6 address 2001:DB8:2::1/64 ospfv3 1 area 0.0.0.0 # interface GigabitEthernet0/1/1.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack0 ipv6 enable ipv6 address 2001:DB8:22::2/128 ospfv3 1 area 0.0.0.0 # interface Nve1 source 2001:DB8:22::2 vni 10 head-end peer-list protocol bgp # bgp 100 peer 2001:DB8:11::1 as-number 100 peer 2001:DB8:11::1 connect-interface LoopBack0 # l2vpn-family evpn policy vpn-target peer 2001:DB8:11::1 enable peer 2001:DB8:11::1 advertise encap-type vxlan peer 2001:DB8:11::1 advertise irb # return
Device3 configuration file
# sysname Device3 # evpn vpn-instance evrf1 bd-mode route-distinguisher 20:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 22:22 apply-label per-instance vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 100 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # ospfv3 1 router-id 3.3.3.3 area 0.0.0.0 # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.2.1.1 255.255.255.0 arp collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ipv6 enable ipv6 address 2001:DB8:3::1/64 ospfv3 1 area 0.0.0.0 # interface GigabitEthernet0/1/1.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface LoopBack0 ipv6 enable ipv6 address 2001:DB8:33::3/128 ospfv3 1 area 0.0.0.0 # interface Nve1 source 2001:DB8:33::3 vni 20 head-end peer-list protocol bgp # bgp 100 peer 2001:DB8:11::1 as-number 100 peer 2001:DB8:11::1 connect-interface LoopBack0 # l2vpn-family evpn policy vpn-target peer 2001:DB8:11::1 enable peer 2001:DB8:11::1 advertise encap-type vxlan peer 2001:DB8:11::1 advertise irb # return
Example for Configuring Three-Segment VXLAN to Implement Layer 3 Interworking
This section provides an example for configuring three-segment VXLAN to enable Layer 3 communication between VMs that belong to different DCs.
Networking Requirements
In Figure 1-1123, DC-A and DC-B reside in different BGP ASs. To allow intra-DC VM communication (VMa1 and VMa2 in DC-A, and VMb1 and VMb2 in DC-B), configure BGP EVPN on the devices in the DCs to create VXLAN tunnels between distributed gateways. To allow VMs in different DCs (for example, VMa1 and VMb2) to communicate with each other, configure BGP EVPN on Leaf2 and Leaf3 to create another VXLAN tunnel. In this way, three-segment VXLAN tunnels are established to implement DC interconnection (DCI).
Interface1, interface2, and interface3 in this example stand for GE 0/1/0, GE 0/2/0, and GE 0/3/0, respectively.
Device Name | Interface Name | IP Address | Device Name | Interface Name | IP Address
---|---|---|---|---|---
Device1 | GE 0/1/0 | 192.168.50.1/24 | Device2 | GE 0/1/0 | 192.168.60.1/24
Device1 | GE 0/2/0 | 192.168.1.1/24 | Device2 | GE 0/2/0 | 192.168.1.2/24
Device1 | Loopback1 | 1.1.1.1/32 | Device2 | Loopback1 | 2.2.2.2/32
Spine1 | GE 0/1/0 | 192.168.10.1/24 | Spine2 | GE 0/1/0 | 192.168.30.1/24
Spine1 | GE 0/2/0 | 192.168.20.1/24 | Spine2 | GE 0/2/0 | 192.168.40.1/24
Spine1 | Loopback1 | 3.3.3.3/32 | Spine2 | Loopback1 | 4.4.4.4/32
Leaf1 | GE 0/1/0 | 192.168.10.2/24 | Leaf4 | GE 0/1/0 | 192.168.40.2/24
Leaf1 | GE 0/2/0 | - | Leaf4 | GE 0/2/0 | -
Leaf1 | Loopback1 | 5.5.5.5/32 | Leaf4 | Loopback1 | 8.8.8.8/32
Leaf2 | GE 0/1/0 | 192.168.20.2/24 | Leaf3 | GE 0/1/0 | 192.168.30.2/24
Leaf2 | GE 0/2/0 | - | Leaf3 | GE 0/2/0 | -
Leaf2 | GE 0/3/0 | 192.168.50.2/24 | Leaf3 | GE 0/3/0 | 192.168.60.2/24
Leaf2 | Loopback1 | 6.6.6.6/32 | Leaf3 | Loopback1 | 7.7.7.7/32
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface.
Configure an IGP to ensure route reachability between nodes.
Configure static routes to achieve interworking between DCs.
Configure BGP EVPN on Leaf1 and Leaf2 in DC-A and Leaf3 and Leaf4 in DC-B to create VXLAN tunnels between distributed gateways.
Configure BGP EVPN on DC edge nodes Leaf2 and Leaf3 to create a VXLAN tunnel between DCs.
Data Preparation
To complete the configuration, you need the following data:
VLAN IDs of the VMs
BD IDs
VXLAN network identifiers (VNIs) in BDs and VNIs in VPN instances
Procedure
- Assign an IP address to each interface (including each loopback interface) on each node.
For configuration details, see Configuration Files in this section.
- Configure an IGP. In this example, OSPF is used.
For configuration details, see Configuration Files in this section.
- Configure static routes to achieve interworking between DCs.
For configuration details, see Configuration Files in this section.
- Configure BGP EVPN on Leaf1 and Leaf2 in DC-A and Leaf3 and Leaf4 in DC-B to create VXLAN tunnels between distributed gateways.
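For reference, a minimal sketch of the intra-DC configuration on Leaf1, derived from the Leaf1 configuration file in this section (the VPN instance with VNI 5010 and the VBDIF gateway also shown in that file are required for Layer 3 forwarding; Leaf2, Leaf3, and Leaf4 are configured analogously with their own VNIs, RDs, and loopback addresses):
[~Leaf1] evpn vpn-instance evrf1 bd-mode
[*Leaf1-evpn-instance-evrf1] route-distinguisher 10:1
[*Leaf1-evpn-instance-evrf1] vpn-target 11:1
[*Leaf1-evpn-instance-evrf1] quit
[*Leaf1] bridge-domain 10
[*Leaf1-bd10] vxlan vni 10 split-horizon-mode
[*Leaf1-bd10] evpn binding vpn-instance evrf1
[*Leaf1-bd10] quit
[*Leaf1] interface nve 1
[*Leaf1-Nve1] source 5.5.5.5
[*Leaf1-Nve1] vni 10 head-end peer-list protocol bgp
[*Leaf1-Nve1] quit
[*Leaf1] bgp 100
[*Leaf1-bgp] peer 6.6.6.6 as-number 100
[*Leaf1-bgp] peer 6.6.6.6 connect-interface LoopBack1
[*Leaf1-bgp] l2vpn-family evpn
[*Leaf1-bgp-af-evpn] undo policy vpn-target
[*Leaf1-bgp-af-evpn] peer 6.6.6.6 enable
[*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise irb
[*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise encap-type vxlan
[*Leaf1-bgp-af-evpn] quit
[*Leaf1-bgp] quit
[*Leaf1] commit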
- Configure BGP EVPN on Leaf2 and Leaf3 to create a VXLAN tunnel.
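For reference, a minimal sketch of the DCI-side peering on Leaf2, derived from the Leaf2 configuration file in this section (Leaf3 mirrors it toward 6.6.6.6 in AS 100). The import reoriginate and advertise route-reoriginated evpn ip commands regenerate the IP routes received over one VXLAN segment before they are advertised over the next segment:
[~Leaf2] bgp 100
[*Leaf2-bgp] peer 7.7.7.7 as-number 200
[*Leaf2-bgp] peer 7.7.7.7 ebgp-max-hop 255
[*Leaf2-bgp] peer 7.7.7.7 connect-interface LoopBack1
[*Leaf2-bgp] l2vpn-family evpn
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 enable
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise irb
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise encap-type vxlan
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 import reoriginate
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise route-reoriginated evpn ip
[*Leaf2-bgp-af-evpn] quit
[*Leaf2-bgp] quit
[*Leaf2] commit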
- Verify the configuration.
Run the display vxlan tunnel command on leaf nodes to check VXLAN tunnel information. The following example uses the command output on Leaf2. The command output shows that the VXLAN tunnels are Up.
[~Leaf2] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
---------------------------------------------------------------------
4026531841  6.6.6.6         5.5.5.5         up     dynamic  00:11:01
4026531842  6.6.6.6         7.7.7.7         up     dynamic  00:12:11
Run the display ip routing-table vpn-instance vpn1 command on leaf nodes to check IP route information. The following example uses the command output on Leaf1.
[~Leaf1] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
         Destinations : 8        Routes : 8
Destination/Mask     Proto   Pre  Cost   Flags  NextHop     Interface
10.1.1.0/24          Direct  0    0      D      10.1.1.1    Vbdif10
10.1.1.1/32          Direct  0    0      D      127.0.0.1   Vbdif10
10.1.1.255/32        Direct  0    0      D      127.0.0.1   Vbdif10
10.20.1.0/24         IBGP    255  0      RD     6.6.6.6     VXLAN
10.30.1.0/24         IBGP    255  0      RD     6.6.6.6     VXLAN
10.40.1.0/24         IBGP    255  0      RD     6.6.6.6     VXLAN
127.0.0.0/8          Direct  0    0      D      127.0.0.1   InLoopBack0
255.255.255.255/32   Direct  0    0      D      127.0.0.1   InLoopBack0
After the configurations are complete, VMa1 and VMb2 can communicate with each other.
Configuration Files
Spine1 configuration file
# sysname Spine1 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.10.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.20.1 255.255.255.0 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # ospf 1 area 0.0.0.0 network 3.3.3.3 0.0.0.0 network 192.168.10.0 0.0.0.255 network 192.168.20.0 0.0.0.255 # return
Leaf1 configuration file
# sysname Leaf1 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 11:11 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 11:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.1.1 255.255.255.0 arp collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.10.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 5.5.5.5 255.255.255.255 # interface Nve1 source 5.5.5.5 vni 10 head-end peer-list protocol bgp # bgp 100 peer 6.6.6.6 as-number 100 peer 6.6.6.6 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 6.6.6.6 enable # ipv4-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 6.6.6.6 enable peer 6.6.6.6 advertise irb peer 6.6.6.6 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 5.5.5.5 0.0.0.0 network 192.168.10.0 0.0.0.255 # return
Leaf2 configuration file
# sysname Leaf2 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 11:11 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 11:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.20.1.1 255.255.255.0 arp collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.20.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface GigabitEthernet0/3/0 undo shutdown ip address 192.168.50.2 255.255.255.0 # interface LoopBack1 ip address 6.6.6.6 255.255.255.255 # interface Nve1 source 6.6.6.6 vni 20 head-end peer-list protocol bgp # bgp 100 peer 5.5.5.5 as-number 100 peer 5.5.5.5 connect-interface LoopBack1 peer 7.7.7.7 as-number 200 peer 7.7.7.7 ebgp-max-hop 255 peer 7.7.7.7 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 5.5.5.5 enable peer 7.7.7.7 enable # ipv4-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 5.5.5.5 enable peer 5.5.5.5 advertise irb peer 5.5.5.5 advertise encap-type vxlan peer 5.5.5.5 import reoriginate peer 5.5.5.5 advertise route-reoriginated evpn ip peer 7.7.7.7 enable peer 7.7.7.7 advertise irb peer 7.7.7.7 advertise encap-type vxlan peer 7.7.7.7 import reoriginate peer 7.7.7.7 advertise route-reoriginated evpn ip # ospf 1 area 0.0.0.0 network 6.6.6.6 0.0.0.0 network 192.168.20.0 0.0.0.255 # ip route-static 7.7.7.7 255.255.255.255 192.168.50.1 ip route-static 192.168.1.0 255.255.255.0 192.168.50.1 ip route-static 192.168.60.0 255.255.255.0 192.168.50.1 # return
Spine2 configuration file
# sysname Spine2 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.30.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.40.1 255.255.255.0 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # ospf 1 area 0.0.0.0 network 4.4.4.4 0.0.0.0 network 192.168.30.0 0.0.0.255 network 192.168.40.0 0.0.0.255 # return
Leaf3 configuration file
# sysname Leaf3 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 11:11 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 11:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.30.1.1 255.255.255.0 arp collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.30.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface GigabitEthernet0/3/0 undo shutdown ip address 192.168.60.2 255.255.255.0 # interface LoopBack1 ip address 7.7.7.7 255.255.255.255 # interface Nve1 source 7.7.7.7 vni 10 head-end peer-list protocol bgp # bgp 200 peer 6.6.6.6 as-number 100 peer 6.6.6.6 ebgp-max-hop 255 peer 6.6.6.6 connect-interface LoopBack1 peer 8.8.8.8 as-number 200 peer 8.8.8.8 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 6.6.6.6 enable peer 8.8.8.8 enable # ipv4-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 6.6.6.6 enable peer 6.6.6.6 advertise irb peer 6.6.6.6 advertise encap-type vxlan peer 6.6.6.6 import reoriginate peer 6.6.6.6 advertise route-reoriginated evpn ip peer 8.8.8.8 enable peer 8.8.8.8 advertise irb peer 8.8.8.8 advertise encap-type vxlan peer 8.8.8.8 import reoriginate peer 8.8.8.8 advertise route-reoriginated evpn ip # ospf 1 area 0.0.0.0 network 7.7.7.7 0.0.0.0 network 192.168.30.0 0.0.0.255 # ip route-static 6.6.6.6 255.255.255.255 192.168.60.1 ip route-static 192.168.1.0 255.255.255.0 192.168.60.1 ip route-static 192.168.50.0 255.255.255.0 192.168.60.1 # return
Leaf4 configuration file
# sysname Leaf4 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 11:11 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 11:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.40.1.1 255.255.255.0 arp collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.40.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface LoopBack1 ip address 8.8.8.8 255.255.255.255 # interface Nve1 source 8.8.8.8 vni 20 head-end peer-list protocol bgp # bgp 200 peer 7.7.7.7 as-number 200 peer 7.7.7.7 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 7.7.7.7 enable # ipv4-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 7.7.7.7 enable peer 7.7.7.7 advertise irb peer 7.7.7.7 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 8.8.8.8 0.0.0.0 network 192.168.40.0 0.0.0.255 # return
Device1 configuration file
# sysname Device1 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.50.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.1.1 255.255.255.0 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # ip route-static 6.6.6.6 255.255.255.255 192.168.50.2 ip route-static 7.7.7.7 255.255.255.255 192.168.1.2 ip route-static 192.168.60.0 255.255.255.0 192.168.1.2 # return
Device2 configuration file
# sysname Device2 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.60.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.1.2 255.255.255.0 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # ip route-static 6.6.6.6 255.255.255.255 192.168.1.1 ip route-static 7.7.7.7 255.255.255.255 192.168.60.2 ip route-static 192.168.50.0 255.255.255.0 192.168.1.1 # return
Example for Configuring Three-Segment VXLAN to Implement Layer 2 Interworking
This section provides an example for configuring three-segment VXLAN tunnels to enable Layer 2 communication between VMs that belong to different DCs.
Networking Requirements
On the network shown in Figure 1-1124, BGP EVPN is configured within DC A and DC B to establish VXLAN tunnels. BGP EVPN is also configured on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them. To enable communication between VM 1 and VM 2, implement Layer 2 communication between DC A and DC B. In this example, the VXLAN tunnel in DC A uses VNI 10, and the tunnel in DC B uses VNI 20. VNI conversion must be implemented before a VXLAN tunnel can be established between Leaf 2 and Leaf 3.
Interfaces 1 and 2 in this example represent GE 0/1/0 and GE 0/2/0, respectively.
Device | Interface | IP Address | Device | Interface | IP Address
---|---|---|---|---|---
Spine 1 | GE 0/1/0 | 192.168.10.1/24 | Spine 2 | GE 0/1/0 | 192.168.30.1/24
Spine 1 | GE 0/2/0 | 192.168.20.1/24 | Spine 2 | GE 0/2/0 | 192.168.40.1/24
Leaf 1 | GE 0/1/0 | 192.168.10.2/24 | Leaf 4 | GE 0/1/0 | 192.168.40.2/24
Leaf 1 | GE 0/2/0 | - | Leaf 4 | GE 0/2/0 | -
Leaf 1 | Loopback 1 | 1.1.1.1/32 | Leaf 4 | Loopback 1 | 4.4.4.4/32
Leaf 2 | GE 0/1/0 | 192.168.20.2/24 | Leaf 3 | GE 0/1/0 | 192.168.30.2/24
Leaf 2 | GE 0/2/0 | 192.168.50.1/24 | Leaf 3 | GE 0/2/0 | 192.168.50.2/24
Leaf 2 | Loopback 1 | 2.2.2.2/32 | Leaf 3 | Loopback 1 | 3.3.3.3/32
Configuration Roadmap
The configuration roadmap is as follows:
Assign an IP address to each interface.
Configure an IGP to allow devices to communicate with each other.
Configure static routes to achieve interworking between DCs.
Configure BGP EVPN within DC A and DC B to establish VXLAN tunnels.
Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.
Configure Leaf 2 and Leaf 3 to advertise routes that are re-originated by the EVPN address family to BGP EVPN peers.
Data Preparation
To complete the configuration, you need the following data:
VLAN IDs of the VMs
BD IDs
VNI IDs associated with BDs within DC A and DC B
Number of the AS to which DC A and DC B belong
Name of the SHG to which Leaf 2 and Leaf 3 belong
Procedure
- Assign an IP address to each interface (including the loopback interface) on each node.
For configuration details, see "Configuration Files" in this section.
- Configure an IGP. In this example, OSPF is used.
For configuration details, see "Configuration Files" in this section.
- Configure static routes to achieve interworking between DCs.
For configuration details, see "Configuration Files" in this section.
- Configure BGP EVPN within DC A and DC B to create VXLAN tunnels.
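For reference, a minimal sketch of the intra-DC configuration on Leaf 1, derived from the Leaf 1 configuration file in this section (the EVPN instance and the BD 10/VNI 10 binding shown in that file must already exist; Leaf 2, Leaf 3, and Leaf 4 are configured analogously with their own VNIs and loopback addresses):
[~Leaf1] interface nve 1
[*Leaf1-Nve1] source 1.1.1.1
[*Leaf1-Nve1] vni 10 head-end peer-list protocol bgp
[*Leaf1-Nve1] quit
[*Leaf1] bgp 100
[*Leaf1-bgp] peer 2.2.2.2 as-number 100
[*Leaf1-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*Leaf1-bgp] l2vpn-family evpn
[*Leaf1-bgp-af-evpn] undo policy vpn-target
[*Leaf1-bgp-af-evpn] peer 2.2.2.2 enable
[*Leaf1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*Leaf1-bgp-af-evpn] quit
[*Leaf1-bgp] quit
[*Leaf1] commit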
- Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.
- Configure Leaf 2 and Leaf 3 to advertise routes that are re-originated by the EVPN address family to BGP EVPN peers.
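For reference, a minimal sketch of the Leaf 2 configuration that covers the preceding two steps, derived from the Leaf 2 configuration file in this section (Leaf 3 mirrors it toward 2.2.2.2 in AS 100). The import reoriginate and advertise route-reoriginated evpn mac commands regenerate the received MAC routes so that VNI conversion can take place on the DC edge nodes, and split-group sg1 places Leaf 2 and Leaf 3 in the SHG named in Data Preparation:
[~Leaf2] bgp 100
[*Leaf2-bgp] peer 3.3.3.3 as-number 200
[*Leaf2-bgp] peer 3.3.3.3 ebgp-max-hop 255
[*Leaf2-bgp] peer 3.3.3.3 connect-interface LoopBack1
[*Leaf2-bgp] l2vpn-family evpn
[*Leaf2-bgp-af-evpn] peer 1.1.1.1 import reoriginate
[*Leaf2-bgp-af-evpn] peer 1.1.1.1 advertise route-reoriginated evpn mac
[*Leaf2-bgp-af-evpn] peer 3.3.3.3 enable
[*Leaf2-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*Leaf2-bgp-af-evpn] peer 3.3.3.3 import reoriginate
[*Leaf2-bgp-af-evpn] peer 3.3.3.3 advertise route-reoriginated evpn mac
[*Leaf2-bgp-af-evpn] peer 3.3.3.3 split-group sg1
[*Leaf2-bgp-af-evpn] quit
[*Leaf2-bgp] quit
[*Leaf2] commit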
- Verify the configuration.
Run the display vxlan tunnel command on each leaf node to view information about the VXLAN tunnels. The following example uses the command output on Leaf 2. The command output shows that the VXLAN tunnels are Up.
[~Leaf2] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
-----------------------------------------------------------------------------------
4026531924  2.2.2.2         1.1.1.1         up     dynamic  00:39:19
4026531925  2.2.2.2         3.3.3.3         up     dynamic  00:39:09
Run the display vxlan peer command on Leaf 2 to view information about the VXLAN peers.
[~Leaf2] display vxlan peer
Number of peers : 2
Vni ID    Source          Destination     Type      Out Vni ID
-------------------------------------------------------------------------------
10        2.2.2.2         1.1.1.1         dynamic   10
10        2.2.2.2         3.3.3.3         dynamic   20
After the preceding configurations are complete, Layer 2 communication can be implemented between VM 1 and VM 2.
Configuration Files
Spine 1 configuration file
# sysname Spine1 # interface GE0/1/0 undo shutdown ip address 192.168.10.1 255.255.255.0 # interface GE0/2/0 undo shutdown ip address 192.168.20.1 255.255.255.0 # ospf 1 area 0.0.0.0 network 192.168.10.0 0.0.0.255 network 192.168.20.0 0.0.0.255 # return
Leaf 1 configuration file
# sysname Leaf1 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # interface GE0/1/0 undo shutdown ip address 192.168.10.2 255.255.255.0 # interface GE0/2/0 undo shutdown # interface GE0/2/0.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Nve1 source 1.1.1.1 vni 10 head-end peer-list protocol bgp # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 # ipv4-family unicast peer 2.2.2.2 enable # l2vpn-family evpn undo policy vpn-target peer 2.2.2.2 enable peer 2.2.2.2 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 1.1.1.1 0.0.0.0 network 192.168.10.0 0.0.0.255 # return
Leaf 2 configuration file
# sysname Leaf2 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # interface GE0/1/0 undo shutdown ip address 192.168.20.2 255.255.255.0 # interface GE0/2/0 undo shutdown ip address 192.168.50.1 255.255.255.0 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # interface Nve1 source 2.2.2.2 vni 10 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 3.3.3.3 as-number 200 peer 3.3.3.3 ebgp-max-hop 255 peer 3.3.3.3 connect-interface LoopBack1 # ipv4-family unicast network 2.2.2.2 255.255.255.255 peer 1.1.1.1 enable peer 3.3.3.3 enable # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 advertise encap-type vxlan peer 1.1.1.1 import reoriginate peer 1.1.1.1 advertise route-reoriginated evpn mac peer 3.3.3.3 enable peer 3.3.3.3 advertise encap-type vxlan peer 3.3.3.3 import reoriginate peer 3.3.3.3 advertise route-reoriginated evpn mac peer 3.3.3.3 split-group sg1 # ospf 1 area 0.0.0.0 network 2.2.2.2 0.0.0.0 network 192.168.20.0 0.0.0.255 # ip route-static 3.3.3.3 255.255.255.255 192.168.50.2 # return
Spine 2 configuration file
# sysname Spine2 # interface GE0/1/0 undo shutdown ip address 192.168.30.1 255.255.255.0 # interface GE0/2/0 undo shutdown ip address 192.168.40.1 255.255.255.0 # ospf 1 area 0.0.0.0 network 192.168.30.0 0.0.0.255 network 192.168.40.0 0.0.0.255 # return
Leaf 3 configuration file
# sysname Leaf3 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # bridge-domain 10 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # interface GE0/1/0 undo shutdown ip address 192.168.30.2 255.255.255.0 # interface GE0/2/0 undo shutdown ip address 192.168.50.2 255.255.255.0 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface Nve1 source 3.3.3.3 vni 20 head-end peer-list protocol bgp # bgp 200 peer 2.2.2.2 as-number 100 peer 2.2.2.2 ebgp-max-hop 255 peer 2.2.2.2 connect-interface LoopBack1 peer 4.4.4.4 as-number 200 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast network 3.3.3.3 255.255.255.255 peer 2.2.2.2 enable peer 4.4.4.4 enable # l2vpn-family evpn undo policy vpn-target peer 2.2.2.2 enable peer 2.2.2.2 advertise encap-type vxlan peer 2.2.2.2 import reoriginate peer 2.2.2.2 advertise route-reoriginated evpn mac peer 2.2.2.2 split-group sg1 peer 4.4.4.4 enable peer 4.4.4.4 advertise encap-type vxlan peer 4.4.4.4 import reoriginate peer 4.4.4.4 advertise route-reoriginated evpn mac # ospf 1 area 0.0.0.0 network 3.3.3.3 0.0.0.0 network 192.168.30.0 0.0.0.255 # ip route-static 2.2.2.2 255.255.255.255 192.168.50.1 # return
Leaf 4 configuration file
# sysname Leaf4 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # bridge-domain 10 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # interface GE0/1/0 undo shutdown ip address 192.168.40.2 255.255.255.0 # interface GE0/2/0 undo shutdown # interface GE0/2/0.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # interface Nve1 source 4.4.4.4 vni 20 head-end peer-list protocol bgp # bgp 200 peer 3.3.3.3 as-number 200 peer 3.3.3.3 connect-interface LoopBack1 # ipv4-family unicast peer 3.3.3.3 enable # l2vpn-family evpn undo policy vpn-target peer 3.3.3.3 enable peer 3.3.3.3 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 4.4.4.4 0.0.0.0 network 192.168.40.0 0.0.0.255 # return
Example for Configuring Static VXLAN in an Active-Active Scenario (Layer 2 Communication)
In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. Carriers can enhance VXLAN access reliability to improve the stability of user services and achieve rapid convergence if a fault occurs.
Networking Requirements
On the network shown in Figure 1-1125, CE1 is dual-homed to PE1 and PE2 through an Eth-Trunk. PE1 and PE2 use the same virtual address as the source VTEP address of an NVE interface, namely, an anycast VTEP address. In this way, the CPE is aware of only one remote NVE interface and establishes a static VXLAN tunnel with the anycast VTEP address.
The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.
In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.
Device | Interface | IP Address
---|---|---
PE1 | GE 0/1/1 | 10.1.20.1/24
PE1 | GE 0/1/2 | -
PE1 | GE 0/1/3 | 10.1.1.1/24
PE1 | Loopback 1 | 1.1.1.1/32
PE1 | Loopback 2 | 3.3.3.3/32
PE2 | GE 0/1/1 | 10.1.20.2/24
PE2 | GE 0/1/2 | -
PE2 | GE 0/1/3 | 10.1.2.1/24
PE2 | Loopback 1 | 2.2.2.2/32
PE2 | Loopback 2 | 3.3.3.3/32
CE1 | GE 0/1/1 | -
CE1 | GE 0/1/2 | -
CPE | GE 0/1/1 | 10.1.1.2/24
CPE | GE 0/1/2 | 10.1.2.2/24
CPE | GE 0/1/3 | -
CPE | Loopback 1 | 4.4.4.4/32
Configuration Roadmap
The configuration roadmap is as follows:
- Configure an IGP on the PEs and CPE to implement network connectivity.
- On PE1 and PE2, configure service access points and set the same ESI for the access links of CE1 so that CE1 is dual-homed to PE1 and PE2.
- Configure the same anycast VTEP address (virtual address) on PE1 and PE2 as the NVE interface's source address, which is used to establish a VXLAN tunnel with the CPE. Establish static VXLAN tunnels between the PEs and CPE so that the PEs and CPE can communicate.
- Establish a BGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
- Configure EVPN instances in BD mode on PE1 and PE2 and bind the BD to the corresponding EVPN instances.
- Enable the inter-chassis VXLAN function on PE1 and PE2, configure different bypass addresses for PE1 and PE2, and establish a bypass VXLAN tunnel on PE1 and PE2 so that PE1 and PE2 can communicate.
- (Optional) Configure a UDP port on the PEs to prevent the receiving of replicated packets.
- Configure a BD on PE1 and PE2.
- On PE1 and PE2, enable sent routes to carry the extended VLAN community attribute, and enable redirection of received routes that carry this attribute.
- On PE1 and PE2, enable FRR for MAC routes between the local and remote ends so that, if a PE fails, the downstream traffic of the CPE can quickly switch to the other PE.
- (Optional) If PE1 and PE2 establish an EBGP peer relationship, configure them not to change the next-hop addresses of routes. This is not required if PE1 and PE2 establish an IBGP peer relationship.
Data Preparation
To complete the configuration, you need the following data:
Interfaces and their IP addresses
Names of VPN and EVPN instances
VPN targets of the received and sent routes in VPN and EVPN instances
Procedure
- Assign IP addresses to device interfaces, including loopback interfaces.
For configuration details, see Configuration Files in this section.
- Configure an IGP. In this example, IS-IS is used.
For configuration details, see Configuration Files in this section.
- Enable EVPN capabilities.
# Configure PE1.
<PE1> system-view
[~PE1] evpn
[*PE1-evpn] vlan-extend private enable
[*PE1-evpn] vlan-extend redirect enable
[*PE1-evpn] local-remote frr enable
[*PE1-evpn] bypass-vxlan enable
[*PE1-evpn] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Establish an IBGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*PE1-bgp] ipv4-family unicast
[*PE1-bgp-af-ipv4] undo synchronization
[*PE1-bgp-af-ipv4] peer 2.2.2.2 enable
[*PE1-bgp-af-ipv4] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] undo policy vpn-target
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Create a VXLAN tunnel.
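For reference, a minimal sketch of the NVE interface configuration on PE1, derived from the PE1 configuration file in this section: 3.3.3.3 is the shared anycast VTEP address, 1.1.1.1 is the PE1 bypass source address, and 4.4.4.4 is the statically specified CPE VTEP (PE2 is analogous with bypass source 2.2.2.2). The EVPN instance and the BD 10/VNI 10 binding shown in the configuration file must already exist:
[~PE1] interface nve 1
[*PE1-Nve1] source 3.3.3.3
[*PE1-Nve1] bypass source 1.1.1.1
[*PE1-Nve1] mac-address 00e0-fc12-7890
[*PE1-Nve1] vni 10 head-end peer-list protocol bgp
[*PE1-Nve1] vni 10 head-end peer-list 4.4.4.4
[*PE1-Nve1] quit
[*PE1] commit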
- Configure PEs to provide access for CEs.
# Configure PE1.
[*PE1] e-trunk 1
[*PE1-e-trunk-1] priority 10
[*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*PE1-e-trunk-1] quit
[*PE1] interface eth-trunk 1
[*PE1-Eth-Trunk1] mac-address 00e0-fc12-3456
[*PE1-Eth-Trunk1] mode lacp-static
[*PE1-Eth-Trunk1] e-trunk 1
[*PE1-Eth-Trunk1] e-trunk mode force-master
[*PE1-Eth-Trunk1] es track evpn-peer 2.2.2.2
[*PE1-Eth-Trunk1] esi 0000.0001.0001.0001.0001
[*PE1-Eth-Trunk1] quit
[*PE1] interface eth-trunk1.1 mode l2
[*PE1-Eth-Trunk1.1] encapsulation dot1q vid 1
[*PE1-Eth-Trunk1.1] rewrite pop single
[*PE1-Eth-Trunk1.1] bridge-domain 10
[*PE1-Eth-Trunk1.1] quit
[~PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
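On PE2, the main differences are the E-Trunk peer and source addresses and the tracked EVPN peer; the following is a brief sketch based on the PE2 configuration file in this section (the Eth-Trunk and its Layer 2 sub-interface are otherwise configured as on PE1):
[~PE2] e-trunk 1
[*PE2-e-trunk-1] priority 10
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 1
[*PE2-Eth-Trunk1] es track evpn-peer 1.1.1.1
[*PE2-Eth-Trunk1] esi 0000.0001.0001.0001.0001
[*PE2-Eth-Trunk1] quit
[*PE2] commit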
- Verify the configuration.
Run the display vxlan tunnel command on the PEs to check VXLAN tunnel information. The following example uses the command output on PE1.
[~PE1] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
-----------------------------------------------------------------------------------
4026531842  1.1.1.1         2.2.2.2         up     dynamic  00:43:14
4026531843  3.3.3.3         4.4.4.4         up     static   00:08:30
Configuration Files
PE1 configuration file
# sysname PE1 # evpn enhancement port 1345 # evpn vlan-extend private enable vlan-extend redirect enable local-remote frr enable bypass-vxlan enable # evpn vpn-instance evpn1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evpn1 # e-trunk 1 priority 10 peer-address 2.2.2.2 source-address 1.1.1.1 # isis 1 network-entity 10.0000.0000.0001.00 frr # interface Eth-Trunk1 mac-address 00e0-fc12-3456 mode lacp-static e-trunk 1 e-trunk mode force-master es track evpn-peer 2.2.2.2 esi 0000.0001.0001.0001.0001 # interface Eth-Trunk1.1 mode l2 encapsulation dot1q vid 1 rewrite pop single bridge-domain 10 # interface GigabitEthernet 0/1/1 undo shutdown ip address 10.1.20.1 255.255.255.0 isis enable 1 # interface GigabitEthernet 0/1/2 undo shutdown eth-trunk 1 # interface GigabitEthernet 0/1/3 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 1.1.1.1 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp vni 10 head-end peer-list 4.4.4.4 # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 2.2.2.2 enable # l2vpn-family evpn undo policy vpn-target peer 2.2.2.2 enable peer 2.2.2.2 advertise encap-type vxlan # return
PE2 configuration file
# sysname PE2 # evpn enhancement port 1345 # evpn vlan-extend redirect enable vlan-extend private enable local-remote frr enable bypass-vxlan enable # evpn vpn-instance evpn1 bd-mode route-distinguisher 22:22 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evpn1 # e-trunk 1 priority 10 peer-address 1.1.1.1 source-address 2.2.2.2 # isis 1 network-entity 10.0000.0000.0002.00 frr # interface Eth-Trunk1 mac-address 00e0-fc12-3456 mode lacp-static e-trunk 1 e-trunk mode force-master es track evpn-peer 1.1.1.1 esi 0000.0001.0001.0001.0001 # interface Eth-Trunk1.1 mode l2 encapsulation dot1q vid 1 rewrite pop single bridge-domain 10 # interface GigabitEthernet 0/1/1 undo shutdown ip address 10.1.20.2 255.255.255.0 isis enable 1 # interface GigabitEthernet 0/1/2 undo shutdown eth-trunk 1 # interface GigabitEthernet 0/1/3 undo shutdown ip address 10.1.2.1 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 2.2.2.2 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp vni 10 head-end peer-list 4.4.4.4 # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 advertise encap-type vxlan # return
CE configuration file
# sysname CE # vlan batch 1 to 4094 # interface Eth-Trunk1 port link-type trunk port trunk allow-pass vlan 1 # interface GigabitEthernet 0/1/1 undo shutdown eth-trunk 1 # interface GigabitEthernet 0/1/2 undo shutdown eth-trunk 1 # return
CPE configuration file
# sysname CPE # bridge-domain 10 vxlan vni 10 split-horizon-mode # isis 1 network-entity 20.0000.0000.0001.00 frr # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.2.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/3 undo shutdown esi 0000.0000.0000.0000.0017 # interface GigabitEthernet0/1/3.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # interface Nve1 source 4.4.4.4 vni 10 head-end peer-list 3.3.3.3 # return
Example for Configuring Dynamic VXLAN in an Active-Active Scenario (Layer 3 Communication)
In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. Carriers can enhance VXLAN access reliability to improve the stability of user services and achieve rapid convergence if a fault occurs.
Networking Requirements
On the network shown in Figure 1-1126, a VXLAN tunnel needs to be dynamically established using BGP EVPN between the CPE and the PE1-PE2 pair. An EVPN peer relationship needs to be established between PE1 and PE2 to deploy a bypass VXLAN tunnel. In addition, a CE needs to be dual-homed to PE1 and PE2, which are configured with the same anycast VTEP address, to implement the active-active function. If one of the PEs fails, traffic can then be quickly switched to the other PE.
In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.
Device | Interface | IP Address
---|---|---
PE1 | GE 0/1/1 | 10.1.20.1/24
PE1 | GE 0/1/2 | -
PE1 | GE 0/1/3 | 10.1.1.1/24
PE1 | Loopback 1 | 1.1.1.1/32
PE1 | Loopback 2 | 3.3.3.3/32
PE2 | GE 0/1/1 | 10.1.20.2/24
PE2 | GE 0/1/2 | -
PE2 | GE 0/1/3 | 10.1.2.1/24
PE2 | Loopback 1 | 2.2.2.2/32
PE2 | Loopback 2 | 3.3.3.3/32
CE | GE 0/1/1 | -
CE | GE 0/1/2 | -
CPE | GE 0/1/1 | 10.1.1.2/24
CPE | GE 0/1/2 | 10.1.2.2/24
CPE | Loopback 1 | 4.4.4.4/32
Configuration Roadmap
The configuration roadmap is as follows:
- Configure a routing protocol on the CPE and each PE for the devices to communicate at Layer 3. In this example, an IGP is configured.
- Configure PE1 and PE2 to provide dual-homing access for CE1 through Eth-Trunk Layer 2 sub-interfaces.
- Configure EVPN instances on PE1, PE2, and the CPE.
- Configure VPN instances on PE1, PE2, and the CPE.
- Configure VBDIF interfaces on PE1 and PE2 for Layer 3 access.
- Configure the same anycast VTEP address (virtual address) on PE1 and PE2 as the NVE interface's source address, which is used to establish a VXLAN tunnel with the CPE. Create a VXLAN tunnel between the CPE and the PE1-PE2 pair using BGP EVPN, allowing the CPE to communicate with the PEs.
- Enable the inter-chassis VXLAN function on PE1 and PE2, configure different bypass addresses for the PEs, establish a BGP EVPN peer relationship, and create a bypass VXLAN tunnel between the PEs, allowing the PEs to communicate with each other.
- Enable auto FRR in the BGP VPN address family view on PE1 and PE2. In this way, if a PE fails, the downstream traffic of the CPE can be quickly switched to the other PE and then forwarded to a CE.
Procedure
- Assign IP addresses to device interfaces, including loopback interfaces.
For configuration details, see Configuration Files in this section.
- Configure an IGP. In this example, IS-IS is used.
For configuration details, see Configuration Files in this section.
- Configure PE1 and PE2 to provide dual-homing access for CE1 through Eth-Trunk Layer 2 sub-interfaces.
# Configure PE1.
<PE1> system-view
[~PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] trunkport gigabitethernet 0/1/2
[*PE1-Eth-Trunk10] esi 0000.0000.0000.0000.1111
[*PE1-Eth-Trunk10] quit
[*PE1] bridge-domain 10
[*PE1-bd10] vxlan vni 10 split-horizon-mode
[*PE1-bd10] quit
[*PE1] interface eth-trunk 10.1 mode l2
[*PE1-Eth-Trunk10.1] encapsulation dot1q vid 10
[*PE1-Eth-Trunk10.1] rewrite pop single
[*PE1-Eth-Trunk10.1] bridge-domain 10
[*PE1-Eth-Trunk10.1] quit
[*PE1] commit
# Configure PE2.
<PE2> system-view
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] trunkport gigabitethernet 0/1/2
[*PE2-Eth-Trunk10] esi 0000.0000.0000.0000.1111
[*PE2-Eth-Trunk10] quit
[*PE2] bridge-domain 10
[*PE2-bd10] vxlan vni 10 split-horizon-mode
[*PE2-bd10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 10
[*PE2-Eth-Trunk10.1] rewrite pop single
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] commit
- Enable the inter-chassis VXLAN function.
# Configure PE1.
[~PE1] evpn
[*PE1-evpn] bypass-vxlan enable
[*PE1-evpn] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Configure EVPN instances.
# Configure the CPE.
<CPE> system-view
[~CPE] evpn vpn-instance evrf1 bd-mode
[*CPE-evpn-instance-evrf1] route-distinguisher 11:11
[*CPE-evpn-instance-evrf1] vpn-target 1:1 import-extcommunity
[*CPE-evpn-instance-evrf1] vpn-target 1:1 export-extcommunity
[*CPE-evpn-instance-evrf1] quit
[*CPE] bridge-domain 20
[*CPE-bd20] vxlan vni 20 split-horizon-mode
[*CPE-bd20] evpn binding vpn-instance evrf1
[*CPE-bd20] quit
[*CPE] commit
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 11:11
[*PE1-evpn-instance-evrf1] vpn-target 1:1 import-extcommunity
[*PE1-evpn-instance-evrf1] vpn-target 1:1 export-extcommunity
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Configure VPN instances.
# Configure the CPE.
[~CPE] ip vpn-instance vpn1
[*CPE-vpn-instance-vpn1] vxlan vni 100
[*CPE-vpn-instance-vpn1] ipv4-family
[*CPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 1:1
[*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity
[*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity
[*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity evpn
[*CPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity evpn
[*CPE-vpn-instance-vpn1-af-ipv4] quit
[*CPE-vpn-instance-vpn1] quit
[*CPE] commit
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] vxlan vni 100
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 1:1
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 import-extcommunity evpn
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 export-extcommunity evpn
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Establish BGP and BGP EVPN peer relationships.
# Configure the CPE.
[~CPE] bgp 100
[*CPE-bgp] peer 1.1.1.1 as-number 100
[*CPE-bgp] peer 1.1.1.1 connect-interface LoopBack 1
[*CPE-bgp] peer 2.2.2.2 as-number 100
[*CPE-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*CPE-bgp] ipv4-family vpn-instance vpn1
[*CPE-bgp-vpn1] advertise l2vpn evpn
[*CPE-bgp-vpn1] import-route direct
[*CPE-bgp-vpn1] quit
[*CPE-bgp] l2vpn-family evpn
[*CPE-bgp-af-evpn] undo policy vpn-target
[*CPE-bgp-af-evpn] peer 1.1.1.1 enable
[*CPE-bgp-af-evpn] peer 1.1.1.1 advertise irb
[*CPE-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
[*CPE-bgp-af-evpn] peer 2.2.2.2 enable
[*CPE-bgp-af-evpn] peer 2.2.2.2 advertise irb
[*CPE-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*CPE-bgp-af-evpn] quit
[*CPE-bgp] quit
[*CPE] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*PE1-bgp] peer 4.4.4.4 as-number 100
[*PE1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] auto-frr
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] undo policy vpn-target
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] peer 2.2.2.2 advertise irb
[*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*PE1-bgp-af-evpn] peer 4.4.4.4 enable
[*PE1-bgp-af-evpn] peer 4.4.4.4 advertise irb
[*PE1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Create a VXLAN tunnel between the CPE and the PE1-PE2 pair and a bypass VXLAN tunnel between PE1 and PE2.
# Configure the CPE.
[~CPE] interface nve 1
[*CPE-Nve1] source 4.4.4.4
[*CPE-Nve1] vni 20 head-end peer-list protocol bgp
[*CPE-Nve1] quit
[*CPE] commit
# Configure PE1.
[~PE1] interface nve 1
[*PE1-Nve1] source 3.3.3.3
[*PE1-Nve1] bypass source 1.1.1.1
[*PE1-Nve1] mac-address 00e0-fc12-7890
[*PE1-Nve1] vni 10 head-end peer-list protocol bgp
[*PE1-Nve1] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Bind VPN instances to VBDIF interfaces.
# Configure the CPE.
[~CPE] interface vbdif20
[*CPE-Vbdif20] ip binding vpn-instance vpn1
[*CPE-Vbdif20] ip address 10.1.30.1 24
[*CPE-Vbdif20] arp collect host enable
[*CPE-Vbdif20] vxlan anycast-gateway enable
[*CPE-Vbdif20] quit
[*CPE] commit
# Configure PE1.
[~PE1] interface vbdif10
[*PE1-Vbdif10] ip binding vpn-instance vpn1
[*PE1-Vbdif10] ip address 10.1.10.1 24
[*PE1-Vbdif10] arp collect host enable
[*PE1-Vbdif10] vxlan anycast-gateway enable
[*PE1-Vbdif10] mac-address 00e0-fc12-3456
[*PE1-Vbdif10] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Verify the configuration.
Run the display vxlan tunnel command on a PE to check VXLAN tunnel information. The following example uses the command output on PE1.
[~PE1] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
-------------------------------------------------------------------
4026531841  1.1.1.1         2.2.2.2         up     dynamic  0033h12m
4026531842  3.3.3.3         4.4.4.4         up     dynamic  0033h12m
Configuration Files
PE1 configuration file
# sysname PE1 # evpn bypass-vxlan enable # evpn vpn-instance evrf1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 1:1 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity vpn-target 1:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity evpn vxlan vni 100 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # isis 1 network-entity 10.0000.0000.0010.00 frr # interface Eth-Trunk10 trunkport gigabitethernet 0/1/2 esi 0000.0000.0000.0000.1111 # interface Eth-Trunk10.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.10.1 255.255.255.0 mac-address 00e0-fc12-3456 vxlan anycast-gateway enable arp collect host enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.20.1 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown eth-trunk 10 # interface GigabitEthernet0/1/3 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 1.1.1.1 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 2.2.2.2 enable peer 4.4.4.4 enable # ipv4-family vpn-instance vpn1 import-route direct auto-frr advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 2.2.2.2 enable peer 2.2.2.2 advertise irb peer 2.2.2.2 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise irb peer 4.4.4.4 advertise encap-type vxlan # return
PE2 configuration file
# sysname PE2 # evpn bypass-vxlan enable # evpn vpn-instance evrf1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 1:1 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity vpn-target 1:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity evpn vxlan vni 100 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # isis 1 network-entity 10.0000.0000.0020.00 frr # interface Eth-Trunk10 trunkport gigabitethernet 0/1/2 esi 0000.0000.0000.0000.1111 # interface Eth-Trunk10.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.10.1 255.255.255.0 mac-address 00e0-fc12-3456 vxlan anycast-gateway enable arp collect host enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.20.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown eth-trunk 10 # interface GigabitEthernet0/1/3 undo shutdown ip address 10.1.2.1 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 2.2.2.2 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 4.4.4.4 enable # ipv4-family vpn-instance vpn1 import-route direct auto-frr advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 advertise irb peer 1.1.1.1 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise irb peer 4.4.4.4 advertise encap-type vxlan # return
CE1 configuration file
# sysname CE1 # vlan batch 10 # interface Eth-Trunk10 port link-type trunk port trunk allow-pass vlan 10 # interface GigabitEthernet0/1/1 undo shutdown eth-trunk 10 # interface GigabitEthernet0/1/2 undo shutdown eth-trunk 10 # return
CPE configuration file
# sysname CPE # evpn vpn-instance evrf1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 1:1 apply-label per-instance vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity vpn-target 1:1 export-extcommunity evpn vpn-target 1:1 import-extcommunity evpn vxlan vni 100 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # isis 1 network-entity 20.0000.0000.0001.00 frr # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.1.30.1 255.255.255.0 vxlan anycast-gateway enable arp collect host enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.2.2 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # interface Nve1 source 4.4.4.4 vni 20 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 2.2.2.2 enable # ipv4-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 advertise irb peer 1.1.1.1 advertise encap-type vxlan peer 2.2.2.2 enable peer 2.2.2.2 advertise irb peer 2.2.2.2 advertise encap-type vxlan # return
Example for Configuring VXLAN over IPsec in an Active-Active Scenario
In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network, which enhances VXLAN access reliability and implements rapid convergence in the case of a fault. IPsec encapsulation encrypts the transmitted packets, securing packet transmission.
Networking Requirements
On the network shown in Figure 1-1127, CE1 is dual-homed to PE1 and PE2, and PE1 and PE2 use the same virtual address as the VTEP address of the source NVE interface. In this way, the CPE is aware of only one remote NVE interface and establishes a static VXLAN tunnel with the anycast VTEP address to communicate with the PEs. Because VXLAN packets are transmitted in plain text, which is insecure, IPsec encapsulation is used to encrypt the packets and secure transmission.
In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.
| Device | Interface | IP Address |
| --- | --- | --- |
| PE1 | GE 0/1/1 | 10.1.20.1/24 |
|  | GE 0/1/2 | 192.168.1.1/24 |
|  | GE 0/1/3 | 10.1.1.1/24 |
|  | Loopback 0 | 1.1.1.1/32 |
|  | Loopback 1 | 3.3.3.3/32 |
|  | Loopback 2 | 5.5.5.5/32 |
| PE2 | GE 0/1/1 | 10.1.20.2/24 |
|  | GE 0/1/2 | 192.168.2.1/24 |
|  | GE 0/1/3 | 10.1.2.1/24 |
|  | Loopback 0 | 2.2.2.2/32 |
|  | Loopback 1 | 3.3.3.3/32 |
|  | Loopback 2 | 5.5.5.5/32 |
| CE1 | GE 0/1/1 | 192.168.1.2/24 |
|  | GE 0/1/2 | 192.168.2.2/24 |
| CPE | GE 0/1/1 | 10.1.1.2/24 |
|  | Loopback 0 | 4.4.4.4/32 |
|  | Loopback 1 | 6.6.6.6/32 |
Configuration Roadmap
The configuration roadmap is as follows:
- Configure an IGP on the PEs and the CPE to implement Layer 3 network connectivity.
- Configure service access points on PE1 and PE2 so that CE1 can be dual-homed to PE1 and PE2.
- Establish static VXLAN tunnels between the PEs and CPE so that the PEs and CPE can communicate.
- Establish a bypass VXLAN tunnel between PE1 and PE2 so that PE1 and PE2 can communicate.
- (Optional) Configure a UDP port on the PEs to prevent replicated packets from being received.
- Configure IPsec on the PEs and CPE and establish IPsec tunnels.
Data Preparation
To complete the configuration, you need the following data:
Interfaces and their IP addresses
EVPN instance names
VPN targets of the received and sent routes in EVPN instances
Preshared key
SHA2-256 as the ESP authentication algorithm and AES 256 as the ESP encryption algorithm for the IPsec proposal
SHA2-256 as the authentication algorithm and HMAC-SHA2-256 as the integrity algorithm for the IKE proposal
Procedure
- Assign IP addresses to device interfaces, including loopback interfaces.
For configuration details, see Configuration Files in this section.
- Configure an IGP. In this example, IS-IS is used.
For configuration details, see Configuration Files in this section.
- Enable EVPN capabilities.
# Configure PE1.
<PE1> system-view
[~PE1] evpn
[*PE1-evpn] vlan-extend private enable
[*PE1-evpn] vlan-extend redirect enable
[*PE1-evpn] local-remote frr enable
[*PE1-evpn] bypass-vxlan enable
[*PE1-evpn] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Establish an IBGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*PE1-bgp] ipv4-family unicast
[*PE1-bgp-af-ipv4] undo synchronization
[*PE1-bgp-af-ipv4] peer 2.2.2.2 enable
[*PE1-bgp-af-ipv4] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] undo policy vpn-target
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Create a VXLAN tunnel.
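The NVE-side commands for this step are not listed one by one in this procedure. For reference, the following minimal sketch, excerpted from the PE1 configuration file later in this section, shows the anycast VTEP address (3.3.3.3) used as the tunnel source, PE1's own address (1.1.1.1) used as the bypass source, and the static peer pointing at the CPE:
#
interface Nve1
 source 3.3.3.3                          //Anycast VTEP address shared by PE1 and PE2.
 bypass source 1.1.1.1                   //PE1's own VTEP address for the bypass VXLAN tunnel (2.2.2.2 on PE2).
 mac-address 00e0-fc12-7890
 vni 10 head-end peer-list protocol bgp
 vni 10 head-end peer-list 4.4.4.4       //Static peer: the CPE's VTEP address.
#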
- Configure PEs to provide access for CEs.
# Configure PE1.
[*PE1] e-trunk 1
[*PE1-e-trunk-1] priority 10
[*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*PE1-e-trunk-1] quit
[*PE1] interface eth-trunk 1
[*PE1-Eth-Trunk1] mac-address 00e0-fc12-3456
[*PE1-Eth-Trunk1] mode lacp-static
[*PE1-Eth-Trunk1] e-trunk 1
[*PE1-Eth-Trunk1] e-trunk mode force-master
[*PE1-Eth-Trunk1] es track evpn-peer 2.2.2.2
[*PE1-Eth-Trunk1] esi 0000.0001.0001.0001.0001
[*PE1-Eth-Trunk1] quit
[*PE1] interface eth-trunk1.1 mode l2
[*PE1-Eth-Trunk1.1] encapsulation dot1q vid 1
[*PE1-Eth-Trunk1.1] rewrite pop single
[*PE1-Eth-Trunk1.1] bridge-domain 10
[*PE1-Eth-Trunk1.1] quit
[~PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- (Optional) Configure a UDP port.
# Configure PE1.
[~PE1] evpn enhancement port 1345
[*PE1] commit
The same UDP port number must be set for the PEs in the active state.
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Configure IPsec on PE1.
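The step-by-step commands for this step are omitted here. For reference, the following is a minimal sketch of the IPsec-related configuration on PE1, excerpted from the PE1 configuration file later in this section (the preshared key is shown as a placeholder, and the service-location setting depends on the board protection mode):
#
acl number 3000
 rule 5 permit ip source 3.3.3.3 0 destination 4.4.4.4 0    //Match VXLAN traffic between the local anycast VTEP and the CPE VTEP.
#
ike proposal 10
 encryption-algorithm aes-cbc 256
 dh group14
 authentication-algorithm sha2-256
 integrity-algorithm hmac-sha2-256
#
ike peer b
 pre-shared-key ******                   //Preshared key shared with the CPE.
 ike-proposal 10
 remote-address 4.4.4.4
#
service-location 1
 location follow-forwarding-mode         //Use this configuration in 1:1 board protection mode.
#
service-instance-group group1
 service-location 1
#
ipsec proposal tran1
 esp authentication-algorithm sha2-256
 esp encryption-algorithm aes 256
#
ipsec policy map1 10 isakmp
 security acl 3000
 ike-peer b
 proposal tran1
 local-address 3.3.3.3
#
interface Tunnel1
 ip address 10.11.1.1 255.255.255.255
 tunnel-protocol ipsec
 ipsec policy map1
 service-instance-group group1
#
ip route-static 6.6.6.6 255.255.255.255 GigabitEthernet0/1/3 10.1.1.2    //Route to the CPE's IPsec tunnel address.
ip route-static 4.4.4.4 255.255.255.255 Tunnel1 6.6.6.6                  //Direct traffic destined for the CPE VTEP into the IPsec tunnel.
#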
- Configure IPsec on the CPE.
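Similarly, a minimal sketch of the IPsec-related configuration on the CPE, excerpted from the CPE configuration file later in this section, is as follows (the CPE uses an IPsec policy template because it responds to the PEs, and the preshared key is again a placeholder):
#
acl number 3000
 rule 5 permit ip
#
ike proposal 10
 encryption-algorithm aes-cbc 256
 dh group14
 authentication-algorithm sha2-256
 integrity-algorithm hmac-sha2-256
#
ike peer 1
 pre-shared-key ******                   //Same preshared key as on the PEs.
 ike-proposal 10
 remote-address 5.5.5.5                  //The PEs' shared IPsec tunnel address.
#
service-location 1
 location follow-forwarding-mode         //Use this configuration in 1:1 board protection mode.
#
service-instance-group group1
 service-location 1
#
ipsec proposal tran1
 esp authentication-algorithm sha2-256
 esp encryption-algorithm aes 256
#
ipsec policy-template temp1 1
 security acl 3000
 ike-peer 1
 proposal tran1
 local-address 6.6.6.6
#
ipsec policy 1 1 isakmp template temp1
#
interface Tunnel1
 ip address 10.22.2.2 255.255.255.255
 tunnel-protocol ipsec
 ipsec policy 1
 service-instance-group group1
#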
Configuration Files
PE1 configuration file
# sysname PE1 #
evpn enhancement port 1345 #
evpn vlan-extend private enable vlan-extend redirect enable local-remote frr enable bypass-vxlan enable # evpn vpn-instance evpn1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evpn1 # acl number 3000 rule 5 permit ip source 3.3.3.3 0 destination 4.4.4.4 0 # e-trunk 1 priority 10 peer-address 2.2.2.2 source-address 1.1.1.1 # isis 1 network-entity 10.0000.0000.0001.00 frr
# ike proposal 10 encryption-algorithm aes-cbc 256 dh group14 authentication-algorithm sha2-256 integrity-algorithm hmac-sha2-256 # ike peer b pre-shared-key %$%$THBGMJK2659z"C(T{J"-,.2n%$%$ ike-proposal 10 remote-address 4.4.4.4 # service-location 1 location follow-forwarding-mode //Use this configuration in 1:1 board protection mode. location slot 9 //Use this configuration in non-1:1 board protection mode. # service-instance-group group1 service-location 1 # ipsec proposal tran1 esp authentication-algorithm sha2-256 esp encryption-algorithm aes 256 # ipsec policy map1 10 isakmp security acl 3000 ike-peer b proposal tran1 local-address 3.3.3.3 # interface Eth-Trunk1 mac-address 00e0-fc12-3456 mode lacp-static e-trunk 1 e-trunk mode force-master es track evpn-peer 2.2.2.2 esi 0000.0001.0001.0001.0001 # interface Eth-Trunk1.1 mode l2 encapsulation dot1q vid 1 rewrite pop single bridge-domain 10 # interface GigabitEthernet 0/1/1 undo shutdown ip address 10.1.20.1 255.255.255.0 # interface GigabitEthernet 0/1/2 undo shutdown eth-trunk 1 # interface GigabitEthernet 0/1/3 undo shutdown ip address 10.1.1.1 255.255.255.0 # interface LoopBack0 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 5.5.5.5 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 1.1.1.1 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp vni 10 head-end peer-list 4.4.4.4 # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 2.2.2.2 enable # l2vpn-family evpn undo policy vpn-target peer 2.2.2.2 enable peer 2.2.2.2 advertise encap-type vxlan # interface Tunnel1 ip address 10.11.1.1 255.255.255.255 tunnel-protocol ipsec ipsec policy map1 service-instance-group group1 # ip route-static 6.6.6.6 255.255.255.255 GigabitEthernet0/1/3 10.1.1.2 ip route-static 4.4.4.4 255.255.255.255 Tunnel1 6.6.6.6 # return
PE2 configuration file
# sysname PE2 #
evpn enhancement port 1345 #
evpn vlan-extend redirect enable vlan-extend private enable local-remote frr enable bypass-vxlan enable # evpn vpn-instance evpn1 bd-mode route-distinguisher 22:22 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evpn1 # acl number 3000 rule 5 permit ip source 3.3.3.3 0 destination 4.4.4.4 0
# ike proposal 10 encryption-algorithm aes-cbc 256 dh group14 authentication-algorithm sha2-256 integrity-algorithm hmac-sha2-256 # ike peer b pre-shared-key %$%$THBGMJK2659z"C(T{J"-,.2n%$%$ ike-proposal 10 remote-address 2.2.2.2 # service-location 1 location follow-forwarding-mode //Use this configuration in 1:1 board protection mode. location slot 9 //Use this configuration in non-1:1 board protection mode. # service-instance-group group1 service-location 1 # ipsec proposal tran1 esp authentication-algorithm sha2-256 esp encryption-algorithm aes 256 # ipsec policy map1 10 isakmp security acl 3000 ike-peer b proposal tran1 local-address 5.5.5.5
# e-trunk 1 priority 10 peer-address 1.1.1.1 source-address 2.2.2.2 # isis 1 network-entity 10.0000.0000.0002.00 frr # interface Eth-Trunk1 mac-address 00e0-fc12-3456 mode lacp-static e-trunk 1 e-trunk mode force-master es track evpn-peer 1.1.1.1 esi 0000.0001.0001.0001.0001 # interface Eth-Trunk1.1 mode l2 encapsulation dot1q vid 1 rewrite pop single bridge-domain 10 # interface GigabitEthernet 0/1/1 undo shutdown ip address 10.1.20.2 255.255.255.0 # interface GigabitEthernet 0/1/2 undo shutdown eth-trunk 1 # interface GigabitEthernet 0/1/3 undo shutdown ip address 10.1.2.1 255.255.255.0 # interface LoopBack0 ip address 2.2.2.2 255.255.255.255 isis enable 1 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 5.5.5.5 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 2.2.2.2 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp vni 10 head-end peer-list 4.4.4.4 # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 advertise encap-type vxlan #
interface Tunnel1 ip address 10.11.1.1 255.255.255.0 tunnel-protocol ipsec ipsec policy map1 service-instance-group group1 # ip route-static 6.6.6.6 255.255.255.255 GigabitEthernet0/1/3 10.1.2.2 ip route-static 4.4.4.4 255.255.255.255 Tunnel1 6.6.6.6 # return
CE configuration file
# sysname CE # vlan batch 1 to 4094 # interface Eth-Trunk1 portswitch port link-type trunk port trunk allow-pass vlan 1 # interface GigabitEthernet 0/1/1 undo shutdown eth-trunk 1 # interface GigabitEthernet 0/1/2 undo shutdown eth-trunk 1 # return
CPE configuration file
# sysname CPE # bridge-domain 10 vxlan vni 10 split-horizon-mode # acl number 3000 rule 5 permit ip
# ike proposal 10 encryption-algorithm aes-cbc 256 dh group14 authentication-algorithm sha2-256 integrity-algorithm hmac-sha2-256 # ike peer 1 pre-shared-key %$%$THBGMJK2659z"C(T{J"-,.2n%$%$ ike-proposal 10 remote-address 5.5.5.5 # service-location 1 location follow-forwarding-mode //Use this configuration in 1:1 board protection mode. location slot 9 //Use this configuration in non-1:1 board protection mode. # service-instance-group group1 service-location 1 # ipsec proposal tran1 esp authentication-algorithm sha2-256 esp encryption-algorithm aes 256 # ipsec policy-template temp1 1 # security acl 3000 ike-peer 1 proposal tran1 local-address 6.6.6.6 # ipsec policy 1 1 isakmp template temp1
# isis 1 network-entity 20.0000.0000.0001.00 frr # interface GigabitEthernet 0/1/1 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 # interface GigabitEthernet 0/1/1.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack0 ip address 4.4.4.4 255.255.255.255 isis enable 1 # interface LoopBack1 ip address 6.6.6.6 255.255.255.255 isis enable 1 # interface Nve1 source 4.4.4.4 vni 10 head-end peer-list 3.3.3.3 #
interface Tunnel1 ip address 10.22.2.2 255.255.255.255 tunnel-protocol ipsec ipsec policy 1 service-instance-group group1 # ip route-static 5.5.5.5 255.255.255.255 GigabitEthernet0/1/1 192.168.1.1 # return
Example for Configuring the Static VXLAN Active-Active Scenario (in VLAN-Aware Bundle Mode)
In a scenario where a data center is interconnected with an enterprise site, a CE is dual-homed to a VXLAN network. This enhances VXLAN access reliability, improves the stability of user services, and enables rapid convergence if a fault occurs. The VLAN-aware bundle access mode allows different VLANs configured on different physical interfaces to access the same EVPN instance while ensuring that traffic from these VLANs remains isolated.
Networking Requirements
On the network shown in Figure 1-1128, CE1 is dual-homed to PE1 and PE2 through an Eth-Trunk. PE1 and PE2 use a virtual address as the source virtual tunnel end point (VTEP) address of a Network Virtualization Edge (NVE) interface, that is, an anycast VTEP address. In this way, the CPE detects only one remote NVE interface, and a static VXLAN tunnel is established between the CPE and the anycast VTEP address.
The packets from the CPE can reach CE1 through either PE1 or PE2. However, single-homed CEs may exist, such as CE2 and CE3. As a result, after reaching a PE, the packets from the CPE may need to be forwarded by the other PE to a single-homed CE. Therefore, a bypass VXLAN tunnel needs to be established between PE1 and PE2.
To allow different VLANs configured on different physical interfaces to access the same EVPN instance while ensuring that traffic from these VLANs remains isolated, configure the CE to access PEs in VLAN-aware mode.
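For reference, the binding that makes this possible is shown in the following minimal excerpt from the PE configuration files later in this section: both bridge domains are bound to the same EVPN instance evpn1, each with a BD tag that matches its access VLAN.
#
bridge-domain 10
 vxlan vni 10 split-horizon-mode
 evpn binding vpn-instance evpn1 bd-tag 100    //VLAN 100 traffic maps to BD 10.
#
bridge-domain 20
 vxlan vni 20 split-horizon-mode
 evpn binding vpn-instance evpn1 bd-tag 200    //VLAN 200 traffic maps to BD 20.
#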
In this example, interfaces 1 through 3 represent GE 0/1/1, GE 0/1/2, and GE 0/1/3, respectively.
| Device | Interface | IP Address |
| --- | --- | --- |
| PE1 | GE 0/1/1 | 10.1.20.1/24 |
|  | GE 0/1/2 | - |
|  | GE 0/1/3 | 10.1.1.1/24 |
|  | Loopback1 | 1.1.1.1/32 |
|  | Loopback2 | 3.3.3.3/32 |
| PE2 | GE 0/1/1 | 10.1.20.2/24 |
|  | GE 0/1/2 | - |
|  | GE 0/1/3 | 10.1.2.1/24 |
|  | Loopback1 | 2.2.2.2/32 |
|  | Loopback2 | 3.3.3.3/32 |
| CE1 | GE 0/1/1 | - |
|  | GE 0/1/2 | - |
| CPE | GE 0/1/1 | 10.1.1.2/24 |
|  | GE 0/1/2 | 10.1.2.2/24 |
|  | GE 0/1/3 | - |
|  | Loopback1 | 4.4.4.4/32 |
Configuration Roadmap
The configuration roadmap is as follows:
- Configure an IGP on each PE and the CPE to ensure Layer 3 connectivity.
- Configure fast traffic switching on PE1 and PE2. If a PE fails, this configuration allows downstream traffic on the CPE to be switched to the other PE, which then forwards the traffic to a CE.
- Establish a BGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
- On each of PE1 and PE2, create an EVPN instance in BD mode, create BDs, and bind each BD to the EVPN instance with a BD tag.
- Configure the same anycast VTEP address (virtual address) on PE1 and PE2 as the NVE interface's source address, which is used to establish a VXLAN tunnel with the CPE. Establish static VXLAN tunnels between the PEs and CPE so that the PEs and CPE can communicate.
- On PE1 and PE2, configure service access points and set the same ESI for the access links of CE1 so that CE1 is dual-homed to PE1 and PE2.
- Enable inter-chassis VXLAN on PE1 and PE2, configure different bypass addresses for the PEs, and establish a bypass VXLAN tunnel between the PEs, allowing communication between PE1 and PE2.
Data Preparation
To complete the configuration, you need the following data:
Interfaces and their IP addresses
VPN and EVPN instance names
Import and export VPN targets for the VPN and EVPN instances
Procedure
- Assign IP addresses to device interfaces, including loopback interfaces.
For configuration details, see Configuration Files in this section.
- Configure an IGP on each PE and the CPE. IS-IS is used in this example.
For configuration details, see Configuration Files in this section.
- Configure fast traffic switching on each PE.
# Configure PE1.
[~PE1] evpn
[*PE1-evpn] vlan-extend private enable
[*PE1-evpn] vlan-extend redirect enable
[*PE1-evpn] local-remote frr enable
[*PE1-evpn] bypass-vxlan enable
[*PE1-evpn] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Establish an IBGP EVPN peer relationship between PE1 and PE2 so that they can exchange VXLAN EVPN routes.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Establish VXLAN tunnels.
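The tunnel commands themselves are not listed step by step in this procedure. For reference, the following minimal sketch is excerpted from the PE1 configuration file later in this section; 3.3.3.3 is the anycast VTEP address, 1.1.1.1 is PE1's bypass source (2.2.2.2 on PE2), and 4.4.4.4 is the CPE's VTEP address:
#
interface Nve1
 source 3.3.3.3
 bypass source 1.1.1.1
 mac-address 00e0-fc12-7890
 vni 10 head-end peer-list protocol bgp
 vni 10 head-end peer-list 4.4.4.4
 vni 20 head-end peer-list protocol bgp
 vni 20 head-end peer-list 4.4.4.4
#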
- Configure PEs to provide access for CEs.
# Configure PE1.
[*PE1] e-trunk 1
[*PE1-e-trunk-1] priority 10
[*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*PE1-e-trunk-1] quit
[*PE1] interface eth-trunk 1
[*PE1-Eth-Trunk1] mac-address 00e0-fc12-3456
[*PE1-Eth-Trunk1] mode lacp-static
[*PE1-Eth-Trunk1] e-trunk 1
[*PE1-Eth-Trunk1] e-trunk mode force-master
[*PE1-Eth-Trunk1] es track evpn-peer 2.2.2.2
[*PE1-Eth-Trunk1] esi 0000.0001.0001.0001.0001
[*PE1-Eth-Trunk1] quit
[*PE1] interface eth-trunk1.1 mode l2
[*PE1-Eth-Trunk1.1] encapsulation dot1q vid 100
[*PE1-Eth-Trunk1.1] bridge-domain 10
[*PE1-Eth-Trunk1.1] quit
[*PE1] interface eth-trunk1.2 mode l2
[*PE1-Eth-Trunk1.2] encapsulation dot1q vid 200
[*PE1-Eth-Trunk1.2] bridge-domain 20
[*PE1-Eth-Trunk1.2] quit
[~PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details, see Configuration Files in this section.
- Verify the configuration.
Run the display vxlan tunnel command on the PEs to check VXLAN tunnel information. The following example uses the command output on PE1.
[~PE1] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID   Source          Destination     State  Type     Uptime
-----------------------------------------------------------------------------
4026531841  3.3.3.3         4.4.4.4         up     static   00:30:12
4026531842  1.1.1.1         2.2.2.2         up     dynamic  00:12:28
Run the display bgp evpn all routing-table command on PE1. The command output shows that EVPN routes carrying Ethernet tag IDs are received from PE2.
[~PE1] display bgp evpn all routing-table
Local AS number : 100 BGP Local router ID is 1.1.1.1 Status codes: * - valid, > - best, d - damped, x - best external, a - add path, h - history, i - internal, s - suppressed, S - Stale Origin : i - IGP, e - EGP, ? - incomplete EVPN address family: Number of A-D Routes: 4 Route Distinguisher: 11:11 Network(ESI/EthTagId) NextHop *> 0000.0001.0001.0001.0001:100 0.0.0.0 * i 3.3.3.3 *> 0000.0001.0001.0001.0001:200 0.0.0.0 * i 3.3.3.3 EVPN-Instance evpn1: Number of A-D Routes: 4 Network(ESI/EthTagId) NextHop *> 0000.0001.0001.0001.0001:100 0.0.0.0 i 3.3.3.3 *> 0000.0001.0001.0001.0001:200 0.0.0.0 i 3.3.3.3 EVPN address family: Number of Inclusive Multicast Routes: 4 Route Distinguisher: 11:11 Network(EthTagId/IpAddrLen/OriginalIp) NextHop *> 100:32:3.3.3.3 0.0.0.0 * i 3.3.3.3 *> 200:32:3.3.3.3 0.0.0.0 * i 3.3.3.3 EVPN-Instance evpn1: Number of Inclusive Multicast Routes: 4 Network(EthTagId/IpAddrLen/OriginalIp) NextHop *> 100:32:3.3.3.3 0.0.0.0 * i 3.3.3.3 *> 200:32:3.3.3.3 0.0.0.0 * i 3.3.3.3
Configuration Files
PE1 configuration file
# sysname PE1 # evpn vlan-extend private enable vlan-extend redirect enable local-remote frr enable bypass-vxlan enable # evpn vpn-instance evpn1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evpn1 bd-tag 100 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evpn1 bd-tag 200 # e-trunk 1 priority 10 peer-address 2.2.2.2 source-address 1.1.1.1 # isis 1 network-entity 10.0000.0000.0001.00 # interface Eth-Trunk1 mac-address 00e0-fc12-3456 mode lacp-static e-trunk 1 e-trunk mode force-master es track evpn-peer 2.2.2.2 esi 0000.0001.0001.0001.0001 # interface Eth-Trunk1.1 mode l2 encapsulation dot1q vid 100 bridge-domain 10 # interface Eth-Trunk1.2 mode l2 encapsulation dot1q vid 200 bridge-domain 20 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.20.1 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown eth-trunk 1 # interface GigabitEthernet0/1/3 undo shutdown ip address 10.1.1.1 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 1.1.1.1 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp vni 10 head-end peer-list 4.4.4.4 vni 20 head-end peer-list protocol bgp vni 20 head-end peer-list 4.4.4.4 # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack0 # ipv4-family unicast undo synchronization peer 2.2.2.2 enable # l2vpn-family evpn undo policy vpn-target peer 2.2.2.2 enable peer 2.2.2.2 advertise encap-type vxlan # return
PE2 configuration file
# sysname PE2 # evpn vlan-extend private enable vlan-extend redirect enable local-remote frr enable bypass-vxlan enable # evpn vpn-instance evpn1 bd-mode route-distinguisher 11:11 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evpn1 bd-tag 100 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evpn1 bd-tag 200 # e-trunk 1 priority 10 peer-address 1.1.1.1 source-address 2.2.2.2 # isis 1 network-entity 10.0000.0000.0002.00 # interface Eth-Trunk1 mac-address 00e0-fc12-3456 mode lacp-static e-trunk 1 e-trunk mode force-master es track evpn-peer 1.1.1.1 esi 0000.0001.0001.0001.0001 # interface Eth-Trunk1.1 mode l2 encapsulation dot1q vid 100 bridge-domain 10 # interface Eth-Trunk1.2 mode l2 encapsulation dot1q vid 200 bridge-domain 20 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.20.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown eth-trunk 1 # interface GigabitEthernet0/1/3 undo shutdown ip address 10.1.2.1 255.255.255.0 isis enable 1 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 isis enable 1 # interface LoopBack2 ip address 3.3.3.3 255.255.255.255 isis enable 1 # interface Nve1 source 3.3.3.3 bypass source 2.2.2.2 mac-address 00e0-fc12-7890 vni 10 head-end peer-list protocol bgp vni 10 head-end peer-list 4.4.4.4 vni 20 head-end peer-list protocol bgp vni 20 head-end peer-list 4.4.4.4 # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 advertise encap-type vxlan # return
CE configuration file
# sysname CE # vlan batch 1 to 4094 # interface Eth-Trunk1 port link-type trunk port trunk allow-pass vlan 100 200 # interface GigabitEthernet0/1/1 undo shutdown eth-trunk 1 # interface GigabitEthernet0/1/2 undo shutdown eth-trunk 1 # return
CPE configuration file
# sysname CPE # bridge-domain 10 vxlan vni 10 split-horizon-mode # bridge-domain 20 vxlan vni 20 split-horizon-mode # isis 1 network-entity 20.0000.0000.0001.00 # interface GigabitEthernet0/1/1 undo shutdown ip address 10.1.1.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.1.2.2 255.255.255.0 isis enable 1 # interface GigabitEthernet0/1/3 undo shutdown esi 0000.0000.0000.0000.0017 # interface GigabitEthernet0/1/3.1 mode l2 encapsulation dot1q vid 100 bridge-domain 10 # interface GigabitEthernet0/1/3.2 mode l2 encapsulation dot1q vid 200 bridge-domain 20 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 isis enable 1 # interface Nve1 source 4.4.4.4 vni 10 head-end peer-list 3.3.3.3 vni 20 head-end peer-list 3.3.3.3 # return
Example for Configuring IPv4 NFVI Distributed Gateway
This section provides an example for configuring an IPv4 NFVI distributed gateway in a typical usage scenario.
Networking Requirements
Huawei's NFVI telecommunications (telco) cloud is a networking solution that incorporates Data Center Interconnect (DCI) and data center network (DCN) technologies. Mobile phone IPv4 traffic enters the DCN and accesses its virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE). After being processed by the vUGW and vMSE, the traffic is forwarded through the DCN and over the Internet to the destination devices. Similarly, response traffic sent over the Internet from the destination devices to the mobile phones undergoes the same process. For this to take place and to ensure that the traffic is balanced within the DCN, you need to deploy the NFVI distributed gateway function on the DCN.
Interfaces 1 through 5 in this example represent GE 0/1/1, GE 0/1/2, GE 0/1/3, GE 0/1/4, and GE 0/1/5, respectively.
Figure 1-1129 shows the network on which the NFVI distributed gateway function is deployed. DCGW1 and DCGW2 are the DCN's border gateways. The DCGWs exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions (VNFs). As virtualized NEs, VNF1 and VNF2 can be deployed separately to implement the functions of the vUGW and vMSE. VNF1 and VNF2 are connected to L2GW/L3GW1 and L2GW/L3GW2 through respective interface process units (IPUs).
The EVPN VXLAN active-active gateway function is deployed on DCGW1 and DCGW2. Specifically, a bypass VXLAN tunnel is set up between DCGW1 and DCGW2. In addition, they use a virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between them.
The NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can be deployed as DCGWs or L2GW/L3GWs in this networking.
| Device | Interface | IP Address and Mask |
| --- | --- | --- |
| DCGW1 | GE 0/1/1 | 10.6.1.1/24 |
|  | GE 0/1/2 | 10.6.2.1/24 |
|  | Loopback 0 | 9.9.9.9/32 |
|  | Loopback 1 | 3.3.3.3/32 |
|  | Loopback 2 | 33.33.33.33/32 |
| DCGW2 | GE 0/1/1 | 10.6.1.2/24 |
|  | GE 0/1/2 | 10.6.3.1/24 |
|  | Loopback 0 | 9.9.9.9/32 |
|  | Loopback 1 | 4.4.4.4/32 |
|  | Loopback 2 | 44.44.44.44/32 |
| L2GW/L3GW1 | GE 0/1/1 | 10.6.4.1/24 |
|  | GE 0/1/2 | 10.6.2.2/24 |
|  | GE 0/1/3 | - |
|  | GE 0/1/4 | - |
|  | GE 0/1/5 | - |
|  | Loopback 1 | 1.1.1.1/32 |
| L2GW/L3GW2 | GE 0/1/1 | 10.6.4.2/24 |
|  | GE 0/1/2 | 10.6.3.2/24 |
|  | GE 0/1/3 | - |
|  | GE 0/1/4 | - |
|  | Loopback 1 | 2.2.2.2/32 |
Configuration Roadmap
The configuration roadmap is as follows:
- Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.
- Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.
- Configure an L3VPN instance and bind it to a VBDIF interface on each DCGW and each L2GW/L3GW.
- Configure BGP EVPN on each DCGW and each L2GW/L3GW.
- Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.
- On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.
- On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to keep the original next hop of the static VPN routes.
- On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.
- Configure each DCGW to establish a VPN BGP peer relationship with a VNF.
- Configure load balancing on each DCGW and each L2GW/L3GW.
Procedure
- Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
- Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.
For configuration details, see Configuration Files in this section.
- Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] evpn vpn-instance evrf1 bd-mode
[*DCGW1-evpn-instance-evrf1] route-distinguisher 1:1
[*DCGW1-evpn-instance-evrf1] vpn-target 1:1
[*DCGW1-evpn-instance-evrf1] quit
[*DCGW1] evpn vpn-instance evrf2 bd-mode
[*DCGW1-evpn-instance-evrf2] route-distinguisher 2:2
[*DCGW1-evpn-instance-evrf2] vpn-target 2:2
[*DCGW1-evpn-instance-evrf2] quit
[*DCGW1] evpn vpn-instance evrf3 bd-mode
[*DCGW1-evpn-instance-evrf3] route-distinguisher 3:3
[*DCGW1-evpn-instance-evrf3] vpn-target 3:3
[*DCGW1-evpn-instance-evrf3] quit
[*DCGW1] evpn vpn-instance evrf4 bd-mode
[*DCGW1-evpn-instance-evrf4] route-distinguisher 4:4
[*DCGW1-evpn-instance-evrf4] vpn-target 4:4
[*DCGW1-evpn-instance-evrf4] quit
[*DCGW1] bridge-domain 10
[*DCGW1-bd10] vxlan vni 100 split-horizon-mode
[*DCGW1-bd10] evpn binding vpn-instance evrf1
[*DCGW1-bd10] quit
[*DCGW1] bridge-domain 20
[*DCGW1-bd20] vxlan vni 110 split-horizon-mode
[*DCGW1-bd20] evpn binding vpn-instance evrf2
[*DCGW1-bd20] quit
[*DCGW1] bridge-domain 30
[*DCGW1-bd30] vxlan vni 120 split-horizon-mode
[*DCGW1-bd30] evpn binding vpn-instance evrf3
[*DCGW1-bd30] quit
[*DCGW1] bridge-domain 40
[*DCGW1-bd40] vxlan vni 130 split-horizon-mode
[*DCGW1-bd40] evpn binding vpn-instance evrf4
[*DCGW1-bd40] quit
[*DCGW1] commit
Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.
- Configure an L3VPN instance on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] vxlan vni 200
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] interface vbdif10
[*DCGW1-Vbdif10] ip binding vpn-instance vpn1
[*DCGW1-Vbdif10] ip address 10.1.1.1 24
[*DCGW1-Vbdif10] arp generate-rd-table enable
[*DCGW1-Vbdif10] vxlan anycast-gateway enable
[*DCGW1-Vbdif10] mac-address 00e0-fc00-0002
[*DCGW1-Vbdif10] quit
[*DCGW1] interface vbdif20
[*DCGW1-Vbdif20] ip binding vpn-instance vpn1
[*DCGW1-Vbdif20] ip address 10.2.1.1 24
[*DCGW1-Vbdif20] arp generate-rd-table enable
[*DCGW1-Vbdif20] vxlan anycast-gateway enable
[*DCGW1-Vbdif20] mac-address 00e0-fc00-0003
[*DCGW1-Vbdif20] quit
[*DCGW1] interface vbdif30
[*DCGW1-Vbdif30] ip binding vpn-instance vpn1
[*DCGW1-Vbdif30] ip address 10.3.1.1 24
[*DCGW1-Vbdif30] arp generate-rd-table enable
[*DCGW1-Vbdif30] vxlan anycast-gateway enable
[*DCGW1-Vbdif30] mac-address 00e0-fc00-0001
[*DCGW1-Vbdif30] quit
[*DCGW1] interface vbdif40
[*DCGW1-Vbdif40] ip binding vpn-instance vpn1
[*DCGW1-Vbdif40] ip address 10.4.1.1 24
[*DCGW1-Vbdif40] arp generate-rd-table enable
[*DCGW1-Vbdif40] vxlan anycast-gateway enable
[*DCGW1-Vbdif40] mac-address 00e0-fc00-0004
[*DCGW1-Vbdif40] quit
[*DCGW1] commit
Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.
- Configure BGP EVPN on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] ip ip-prefix uIP index 10 permit 10.10.10.10 32
[*DCGW1] route-policy stopuIP deny node 10
[*DCGW1-route-policy] if-match ip-prefix uIP
[*DCGW1-route-policy] quit
[*DCGW1] route-policy stopuIP permit node 20
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] peer 1.1.1.1 as-number 100
[*DCGW1-bgp] peer 1.1.1.1 connect-interface LoopBack 1
[*DCGW1-bgp] peer 2.2.2.2 as-number 100
[*DCGW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*DCGW1-bgp] peer 4.4.4.4 as-number 100
[*DCGW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] peer 2.2.2.2 as-number 100
[*L2GW/L3GW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*L2GW/L3GW1-bgp] peer 3.3.3.3 as-number 100
[*L2GW/L3GW1-bgp] peer 3.3.3.3 connect-interface LoopBack 1
[*L2GW/L3GW1-bgp] peer 4.4.4.4 as-number 100
[*L2GW/L3GW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise arp
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise arp
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise arp
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] evpn
[*DCGW1-evpn] bypass-vxlan enable
[*DCGW1-evpn] quit
[*DCGW1] interface nve 1
[*DCGW1-Nve1] source 9.9.9.9
[*DCGW1-Nve1] bypass source 3.3.3.3
[*DCGW1-Nve1] mac-address 00e0-fc00-0009
[*DCGW1-Nve1] vni 100 head-end peer-list protocol bgp
[*DCGW1-Nve1] vni 110 head-end peer-list protocol bgp
[*DCGW1-Nve1] vni 120 head-end peer-list protocol bgp
[*DCGW1-Nve1] vni 130 head-end peer-list protocol bgp
[*DCGW1-Nve1] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] interface nve 1
[*L2GW/L3GW1-Nve1] source 1.1.1.1
[*L2GW/L3GW1-Nve1] vni 100 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] vni 110 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] vni 120 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] vni 130 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] interface GigabitEthernet0/1/3.1 mode l2
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] encapsulation dot1q vid 10
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] rewrite pop single
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] bridge-domain 10
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] quit
[*L2GW/L3GW1] interface GigabitEthernet0/1/4.1 mode l2
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] encapsulation dot1q vid 20
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] rewrite pop single
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] bridge-domain 20
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] quit
[*L2GW/L3GW1] interface GigabitEthernet0/1/5.1 mode l2
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] encapsulation dot1q vid 10
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] rewrite pop single
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] bridge-domain 10
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] quit
[*L2GW/L3GW1] ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.1.1.2 tag 1000
[*L2GW/L3GW1] ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.2.1.2 tag 1000
[*L2GW/L3GW1] ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.1.1.3 tag 1000
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to keep the original next hop of the static VPN routes.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] import-route static
[*L2GW/L3GW1-bgp-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] route-policy sp deny node 20
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.
# Configure DCGW1.
[~DCGW1] ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
[*DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ip address 33.33.33.33 255.255.255.255
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] advertise l2vpn evpn
[*DCGW1-bgp-vpn1] import-route direct
[*DCGW1-bgp-vpn1] network 0.0.0.0 0
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ip-prefix lp index 10 permit 33.33.33.33 32
[*DCGW1] route-policy dp permit node 10
[*DCGW1-route-policy] if-match tag 2000
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ip-prefix lp
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp deny node 20
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] export route-policy dp evpn
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
- Configure each DCGW to establish a VPN BGP peer relationship with a VNF.
# Configure DCGW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW1-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW1-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
[*DCGW1-bgp-vpn1] peer 6.6.6.6 as-number 100
[*DCGW1-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
[*DCGW1-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
# Configure DCGW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv4-family vpn-instance vpn1
[*DCGW2-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW2-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
[*DCGW2-bgp-vpn1] peer 6.6.6.6 as-number 100
[*DCGW2-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
[*DCGW2-bgp-vpn1] quit
[*DCGW2-bgp] quit
[*DCGW2] commit
- Configure load balancing on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] maximum load-balancing 16
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- Verify the configuration.
Run the display bgp vpnv4 vpn-instance vpn1 peer command on each DCGW. The command output shows that the VPN BGP peer relationship between the DCGW and VNF is in Established state. The following example uses the command output on DCGW1:
[~DCGW1] display bgp vpnv4 vpn-instance vpn1 peer
BGP local router ID : 10.6.1.1
Local AS number : 100
VPN-Instance vpn1, Router ID 10.6.1.1:
 Total number of peers : 2                 Peers in established state : 2
  Peer            V    AS  MsgRcvd  MsgSent  OutQ  Up/Down       State        PrefRcv
  5.5.5.5         4   100     8136     8135     0  0118h05m      Established        4
  6.6.6.6         4   100     8140     8167     0  0118h07m      Established        0
Run the display bgp vpnv4 vpn-instance vpn1 routing-table command on each DCGW. The command output shows that the DCGW has received the mobile phone route (destined for 10.10.10.10 in this example) from the VNF and the next hop of the route is the VNF IP address. The following example uses the command output on DCGW1:
[~DCGW1] display bgp vpnv4 vpn-instance vpn1 routing-table
BGP Local router ID is 10.6.1.1 Status codes: * - valid, > - best, d - damped, x - best external, a - add path, h - history, i - internal, s - suppressed, S - Stale Origin : i - IGP, e - EGP, ? - incomplete RPKI validation codes: V - valid, I - invalid, N - not-found VPN-Instance vpn1, Router ID 10.6.1.1: Total Number of Routes: 20 Network NextHop MED LocPrf PrefVal Path/Ogn *>i 5.5.5.5/32 1.1.1.1 0 100 0 ? * i 1.1.1.1 0 100 0 ? i 5.5.5.5 0 100 0 ? *>i 6.6.6.6/32 1.1.1.1 0 100 0 ? * i 2.2.2.2 0 100 0 ? * i 2.2.2.2 0 100 0 ? *> 10.1.1.0/24 0.0.0.0 0 0 ? * i 5.5.5.5 0 100 0 ? *> 10.1.1.1/32 0.0.0.0 0 0 ? *> 10.2.1.0/24 0.0.0.0 0 0 ? * i 5.5.5.5 0 100 0 ? *> 10.2.1.1/32 0.0.0.0 0 0 ? *> 10.3.1.0/24 0.0.0.0 0 0 ? *> 10.3.1.1/32 0.0.0.0 0 0 ? *> 10.4.1.0/24 0.0.0.0 0 0 ? *> 10.4.1.1/32 0.0.0.0 0 0 ? *>i 10.10.10.10/32 5.5.5.5 0 100 0 ? *> 33.33.33.33/32 0.0.0.0 0 0 ? *>i 44.44.44.44/32 9.9.9.9 0 100 0 ? *> 127.0.0.0/8 0.0.0.0 0 0 ?
Run the display ip routing-table vpn-instance vpn1 command on each DCGW. The command output shows the mobile phone routes in the VPN routing table on the DCGW, with VBDIF interfaces as their outbound interfaces.
[~DCGW1] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route ------------------------------------------------------------------------------ Routing Table : vpn1 Destinations : 20 Routes : 23 Destination/Mask Proto Pre Cost Flags NextHop Interface 0.0.0.0/0 Static 60 0 DB 0.0.0.0 NULL0 5.5.5.5/32 IBGP 255 0 RD 10.2.1.2 Vbdif20 IBGP 255 0 RD 10.1.1.2 Vbdif10 6.6.6.6/32 IBGP 255 0 RD 10.1.1.3 Vbdif10 IBGP 255 0 RD 10.3.1.2 Vbdif30 IBGP 255 0 RD 10.4.1.2 Vbdif40 10.1.1.0/24 Direct 0 0 D 10.1.1.1 Vbdif10 10.1.1.1/32 Direct 0 0 D 127.0.0.1 Vbdif10 10.1.1.255/32 Direct 0 0 D 127.0.0.1 Vbdif10 10.2.1.0/24 Direct 0 0 D 10.2.1.1 Vbdif20 10.2.1.1/32 Direct 0 0 D 127.0.0.1 Vbdif20 10.2.1.255/32 Direct 0 0 D 127.0.0.1 Vbdif20 10.3.1.0/24 Direct 0 0 D 10.3.1.1 Vbdif30 10.3.1.1/32 Direct 0 0 D 127.0.0.1 Vbdif30 10.3.1.255/32 Direct 0 0 D 127.0.0.1 Vbdif30 10.4.1.0/24 Direct 0 0 D 10.4.1.1 Vbdif40 10.4.1.1/32 Direct 0 0 D 127.0.0.1 Vbdif40 10.4.1.255/32 Direct 0 0 D 127.0.0.1 Vbdif40 10.10.10.10/32 IBGP 255 0 RD 5.5.5.5 Vbdif20 IBGP 255 0 RD 5.5.5.5 Vbdif10 33.33.33.33/32 Direct 0 0 D 127.0.0.1 LoopBack2 44.44.44.44/32 IBGP 255 0 RD 4.4.4.4 VXLAN 127.0.0.0/8 Direct 0 0 D 127.0.0.1 InLoopBack0 255.255.255.255/32 Direct 0 0 D 127.0.0.1 InLoopBack0
Configuration Files
DCGW1 configuration file
# sysname DCGW1 # evpn bypass-vxlan enable # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 11:11 apply-label per-instance export route-policy dp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0002 vxlan anycast-gateway enable # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.2.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0003 vxlan anycast-gateway enable # interface Vbdif30 ip binding vpn-instance vpn1 ip address 10.3.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0001 vxlan anycast-gateway enable # interface Vbdif40 ip binding vpn-instance vpn1 ip address 10.4.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0004 vxlan anycast-gateway enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.1.1 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.2.1 255.255.255.0 # interface LoopBack0 ip address 9.9.9.9 255.255.255.255 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface LoopBack2 ip binding vpn-instance vpn1 ip address 33.33.33.33 255.255.255.255 # interface Nve1 source 9.9.9.9 bypass source 3.3.3.3 mac-address 00e0-fc00-0009 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 2.2.2.2 enable peer 4.4.4.4 enable # ipv4-family vpn-instance vpn1 network 0.0.0.0 0 import-route direct maximum load-balancing 16 advertise l2vpn evpn peer 5.5.5.5 as-number 100 peer 5.5.5.5 connect-interface LoopBack2 peer 5.5.5.5 route-policy p1 export peer 6.6.6.6 as-number 100 peer 6.6.6.6 connect-interface LoopBack2 peer 6.6.6.6 route-policy p1 export # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 capability-advertise add-path both peer 1.1.1.1 advertise add-path path-number 16 peer 1.1.1.1 advertise encap-type vxlan peer 2.2.2.2 enable peer 2.2.2.2 capability-advertise add-path both peer 2.2.2.2 advertise add-path path-number 16 peer 2.2.2.2 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise encap-type vxlan peer 4.4.4.4 route-policy stopuIP export # ospf 1 area 0.0.0.0 network 3.3.3.3 0.0.0.0 network 9.9.9.9 0.0.0.0 network 
10.6.1.0 0.0.0.255 network 10.6.2.0 0.0.0.255 # route-policy dp permit node 10 if-match tag 2000 # route-policy dp permit node 15 if-match ip-prefix lp # route-policy dp deny node 20 # route-policy p1 deny node 10 # route-policy stopuIP deny node 10 if-match ip-prefix uIP # route-policy stopuIP permit node 20 # ip ip-prefix lp index 10 permit 33.33.33.33 32 ip ip-prefix uIP index 10 permit 10.10.10.10 32 # ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000 # return
DCGW2 configuration file
# sysname DCGW2 # evpn bypass-vxlan enable # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 22:22 apply-label per-instance export route-policy dp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0002 vxlan anycast-gateway enable # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.2.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0003 vxlan anycast-gateway enable # interface Vbdif30 ip binding vpn-instance vpn1 ip address 10.3.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0001 vxlan anycast-gateway enable # interface Vbdif40 ip binding vpn-instance vpn1 ip address 10.4.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0004 vxlan anycast-gateway enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.1.2 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.3.1 255.255.255.0 # interface LoopBack0 ip address 9.9.9.9 255.255.255.255 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # interface LoopBack2 ip binding vpn-instance vpn1 ip address 44.44.44.44 255.255.255.255 # interface Nve1 source 9.9.9.9 bypass source 4.4.4.4 mac-address 00e0-fc00-0009 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 3.3.3.3 as-number 100 peer 3.3.3.3 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 2.2.2.2 enable peer 3.3.3.3 enable # ipv4-family vpn-instance vpn1 network 0.0.0.0 0 import-route direct maximum load-balancing 16 advertise l2vpn evpn peer 5.5.5.5 as-number 100 peer 5.5.5.5 connect-interface LoopBack2 peer 5.5.5.5 route-policy p1 export peer 6.6.6.6 as-number 100 peer 6.6.6.6 connect-interface LoopBack2 peer 6.6.6.6 route-policy p1 export # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 capability-advertise add-path both peer 1.1.1.1 advertise add-path path-number 16 peer 1.1.1.1 advertise encap-type vxlan peer 2.2.2.2 enable peer 2.2.2.2 capability-advertise add-path both peer 2.2.2.2 advertise add-path path-number 16 peer 2.2.2.2 advertise encap-type vxlan peer 3.3.3.3 enable peer 3.3.3.3 advertise encap-type vxlan peer 3.3.3.3 route-policy stopuIP export # ospf 1 area 0.0.0.0 network 4.4.4.4 0.0.0.0 network 9.9.9.9 0.0.0.0 network 
10.6.1.0 0.0.0.255 network 10.6.3.0 0.0.0.255 # route-policy dp permit node 10 if-match tag 2000 # route-policy dp permit node 15 if-match ip-prefix lp # route-policy dp deny node 20 # route-policy p1 deny node 10 # route-policy stopuIP deny node 10 if-match ip-prefix uIP # route-policy stopuIP permit node 20 # ip ip-prefix lp index 10 permit 44.44.44.44 32 ip ip-prefix uIP index 10 permit 10.10.10.10 32 # ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000 # return
L2GW/L3GW1 configuration file
# sysname L2GW/L3GW1 # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 33:33 apply-label per-instance export route-policy sp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0002 vxlan anycast-gateway enable arp collect host enable # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.2.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0003 vxlan anycast-gateway enable arp collect host enable # interface Vbdif30 ip binding vpn-instance vpn1 ip address 10.3.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0001 vxlan anycast-gateway enable arp collect host enable # interface Vbdif40 ip binding vpn-instance vpn1 ip address 10.4.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0004 vxlan anycast-gateway enable arp collect host enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.4.1 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.2.2 255.255.255.0 # interface GigabitEthernet0/1/3.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface GigabitEthernet0/1/4.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface GigabitEthernet0/1/5.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Nve1 source 1.1.1.1 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 3.3.3.3 as-number 100 peer 3.3.3.3 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 2.2.2.2 enable peer 3.3.3.3 enable peer 4.4.4.4 enable # ipv4-family vpn-instance vpn1 import-route static maximum load-balancing 16 advertise l2vpn evpn import-route-multipath # l2vpn-family evpn undo policy vpn-target bestroute add-path path-number 16 peer 2.2.2.2 enable peer 2.2.2.2 advertise arp peer 2.2.2.2 advertise encap-type vxlan peer 3.3.3.3 enable peer 3.3.3.3 advertise arp peer 3.3.3.3 capability-advertise add-path both peer 3.3.3.3 advertise add-path path-number 16 peer 3.3.3.3 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise arp peer 4.4.4.4 capability-advertise add-path both peer 4.4.4.4 advertise add-path path-number 16 peer 4.4.4.4 advertise encap-type vxlan # ospf 1 
area 0.0.0.0 network 1.1.1.1 0.0.0.0 network 10.6.2.0 0.0.0.255 network 10.6.4.0 0.0.0.255 # route-policy sp permit node 10 if-match tag 1000 apply gateway-ip origin-nexthop # route-policy sp deny node 20 # ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.1.1.2 tag 1000 ip route-static vpn-instance vpn1 5.5.5.5 255.255.255.255 10.2.1.2 tag 1000 ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.1.1.3 tag 1000 # return
L2GW/L3GW2 configuration file
# sysname L2GW/L3GW2 # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv4-family route-distinguisher 44:44 apply-label per-instance export route-policy sp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ip address 10.1.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0002 vxlan anycast-gateway enable arp collect host enable # interface Vbdif20 ip binding vpn-instance vpn1 ip address 10.2.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0003 vxlan anycast-gateway enable arp collect host enable # interface Vbdif30 ip binding vpn-instance vpn1 ip address 10.3.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0001 vxlan anycast-gateway enable arp collect host enable # interface Vbdif40 ip binding vpn-instance vpn1 ip address 10.4.1.1 255.255.255.0 arp generate-rd-table enable mac-address 00e0-fc00-0004 vxlan anycast-gateway enable arp collect host enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.4.2 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.3.2 255.255.255.0 # interface GigabitEthernet0/1/3.1 mode l2 encapsulation dot1q vid 30 rewrite pop single bridge-domain 30 # interface GigabitEthernet0/1/4.1 mode l2 encapsulation dot1q vid 40 rewrite pop single bridge-domain 40 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # interface Nve1 source 2.2.2.2 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 3.3.3.3 as-number 100 peer 3.3.3.3 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 3.3.3.3 enable peer 4.4.4.4 enable # ipv4-family vpn-instance vpn1 import-route static maximum load-balancing 16 advertise l2vpn evpn import-route-multipath # l2vpn-family evpn undo policy vpn-target bestroute add-path path-number 16 peer 1.1.1.1 enable peer 1.1.1.1 advertise arp peer 1.1.1.1 advertise encap-type vxlan peer 3.3.3.3 enable peer 3.3.3.3 advertise arp peer 3.3.3.3 capability-advertise add-path both peer 3.3.3.3 advertise add-path path-number 16 peer 3.3.3.3 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise arp peer 4.4.4.4 capability-advertise add-path both peer 4.4.4.4 advertise add-path path-number 16 peer 4.4.4.4 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 2.2.2.2 0.0.0.0 network 10.6.3.0 0.0.0.255 network 10.6.4.0 0.0.0.255 # route-policy sp 
permit node 10 if-match tag 1000 apply gateway-ip origin-nexthop # route-policy sp deny node 20 # ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.3.1.2 tag 1000 ip route-static vpn-instance vpn1 6.6.6.6 255.255.255.255 10.4.1.2 tag 1000 # return
VNF1 configuration file
For details, see the configuration file of a specific device model.
VNF2 configuration file
For details, see the configuration file of a specific device model.
Example for Configuring IPv6 NFVI Distributed Gateway
This section provides an example for configuring an IPv6 NFVI distributed gateway in a typical usage scenario.
Networking Requirements
Huawei's NFVI telecommunications (telco) cloud is a networking solution that incorporates Data Center Interconnect (DCI) and data center network (DCN) technologies. Mobile phone IPv6 traffic enters the DCN and accesses its virtualized unified gateway (vUGW) and virtual multiservice engine (vMSE). After being processed by the vUGW and vMSE, the traffic is forwarded through the DCN and over the Internet to the destination devices. Likewise, response traffic sent over the Internet from the destination devices to the mobile phones undergoes the same process. To support this and to ensure that traffic is balanced within the DCN, you need to deploy the NFVI distributed gateway function on the DCN.
Interfaces 1 through 5 in this example represent GE 0/1/1, GE 0/1/2, GE 0/1/3, GE 0/1/4, and GE 0/1/5, respectively.
Figure 1-1130 shows the DCN on which the NFVI distributed gateway is deployed. DCGW1 and DCGW2 are the DCN's border gateways. The DCGWs exchange Internet routes with the external network. L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions (VNFs). As virtualized NEs, VNF1 and VNF2 can be deployed separately to implement the functions of the vUGW and vMSE. VNF1 and VNF2 are connected to L2GW/L3GW1 and L2GW/L3GW2 through respective interface processing units (IPUs).
The EVPN VXLAN active-active gateway function is deployed on DCGW1 and DCGW2. Specifically, a bypass VXLAN tunnel is set up between DCGW1 and DCGW2. In addition, they use a virtual anycast VTEP address to establish VXLAN tunnels with L2GW/L3GW1 and L2GW/L3GW2.
The distributed gateway function is deployed on L2GW/L3GW1 and L2GW/L3GW2, and a VXLAN tunnel is established between them.
The NetEngine 8100 M, NetEngine 8000E M, and NetEngine 8000 M can be deployed as a DCGW or an L2GW/L3GW in this networking.
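The active-active gateway deployment described above boils down to a few commands on each DCGW, which appear again in the VXLAN tunnel configuration step and in the configuration files later in this section. As a minimal sketch for DCGW1, the bypass VXLAN capability is enabled and the NVE interface uses the anycast VTEP address 9.9.9.9 as its source and DCGW1's own address 3.3.3.3 as the bypass source:
[~DCGW1] evpn
[*DCGW1-evpn] bypass-vxlan enable
[*DCGW1-evpn] quit
[*DCGW1] interface nve 1
[*DCGW1-Nve1] source 9.9.9.9
[*DCGW1-Nve1] bypass source 3.3.3.3
[*DCGW1-Nve1] quit
[*DCGW1] commit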
Device | Interface | IP Address and Mask
---|---|---
DCGW1 | GE 0/1/1 | 10.6.1.1/24
| GE 0/1/2 | 10.6.2.1/24
| Loopback 0 | 9.9.9.9/32
| Loopback 1 | 3.3.3.3/32
| Loopback 2 | 2001:db8:33::33/128
DCGW2 | GE 0/1/1 | 10.6.1.2/24
| GE 0/1/2 | 10.6.3.1/24
| Loopback 0 | 9.9.9.9/32
| Loopback 1 | 4.4.4.4/32
| Loopback 2 | 2001:db8:44::44/128
L2GW/L3GW1 | GE 0/1/1 | 10.6.4.1/24
| GE 0/1/2 | 10.6.2.2/24
| GE 0/1/3 | -
| GE 0/1/4 | -
| GE 0/1/5 | -
| Loopback 1 | 1.1.1.1/32
L2GW/L3GW2 | GE 0/1/1 | 10.6.4.2/24
| GE 0/1/2 | 10.6.3.2/24
| GE 0/1/3 | -
| GE 0/1/4 | -
| Loopback 1 | 2.2.2.2/32
Configuration Roadmap
The configuration roadmap is as follows:
- Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.
- Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.
- Configure an L3VPN instance and bind it to a VBDIF interface on each DCGW and each L2GW/L3GW.
- Configure BGP EVPN on each DCGW and each L2GW/L3GW.
- Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.
- On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.
- On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to keep the original next hop of the static VPN routes.
- On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.
- Configure each DCGW to establish a VPN BGP peer relationship with a VNF.
- Configure load balancing on each DCGW and each L2GW/L3GW.
Procedure
- Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
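As a minimal sketch (the addresses are taken from the DCGW1 configuration file at the end of this section), the physical and loopback interfaces on DCGW1 are addressed as follows; the other devices are addressed according to the preceding table:
[~DCGW1] interface GigabitEthernet0/1/1
[*DCGW1-GigabitEthernet0/1/1] undo shutdown
[*DCGW1-GigabitEthernet0/1/1] ip address 10.6.1.1 255.255.255.0
[*DCGW1-GigabitEthernet0/1/1] quit
[*DCGW1] interface GigabitEthernet0/1/2
[*DCGW1-GigabitEthernet0/1/2] undo shutdown
[*DCGW1-GigabitEthernet0/1/2] ip address 10.6.2.1 255.255.255.0
[*DCGW1-GigabitEthernet0/1/2] quit
[*DCGW1] interface LoopBack0
[*DCGW1-LoopBack0] ip address 9.9.9.9 255.255.255.255
[*DCGW1-LoopBack0] quit
[*DCGW1] interface LoopBack1
[*DCGW1-LoopBack1] ip address 3.3.3.3 255.255.255.255
[*DCGW1-LoopBack1] quit
[*DCGW1] commit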
- Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure Layer 3 communication. OSPF is used in this example.
For configuration details, see Configuration Files in this section.
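A minimal OSPF sketch for DCGW1, matching the networks in its configuration file (each of the other devices advertises its own loopback and interface networks in the same way):
[~DCGW1] ospf 1
[*DCGW1-ospf-1] area 0.0.0.0
[*DCGW1-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[*DCGW1-ospf-1-area-0.0.0.0] network 9.9.9.9 0.0.0.0
[*DCGW1-ospf-1-area-0.0.0.0] network 10.6.1.0 0.0.0.255
[*DCGW1-ospf-1-area-0.0.0.0] network 10.6.2.0 0.0.0.255
[*DCGW1-ospf-1-area-0.0.0.0] quit
[*DCGW1-ospf-1] quit
[*DCGW1] commit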
- Configure an EVPN instance and bind it to a BD on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] evpn vpn-instance evrf1 bd-mode
[*DCGW1-evpn-instance-evrf1] route-distinguisher 1:1
[*DCGW1-evpn-instance-evrf1] vpn-target 1:1
[*DCGW1-evpn-instance-evrf1] quit
[*DCGW1] evpn vpn-instance evrf2 bd-mode
[*DCGW1-evpn-instance-evrf2] route-distinguisher 2:2
[*DCGW1-evpn-instance-evrf2] vpn-target 2:2
[*DCGW1-evpn-instance-evrf2] quit
[*DCGW1] evpn vpn-instance evrf3 bd-mode
[*DCGW1-evpn-instance-evrf3] route-distinguisher 3:3
[*DCGW1-evpn-instance-evrf3] vpn-target 3:3
[*DCGW1-evpn-instance-evrf3] quit
[*DCGW1] evpn vpn-instance evrf4 bd-mode
[*DCGW1-evpn-instance-evrf4] route-distinguisher 4:4
[*DCGW1-evpn-instance-evrf4] vpn-target 4:4
[*DCGW1-evpn-instance-evrf4] quit
[*DCGW1] bridge-domain 10
[*DCGW1-bd10] vxlan vni 100 split-horizon-mode
[*DCGW1-bd10] evpn binding vpn-instance evrf1
[*DCGW1-bd10] quit
[*DCGW1] bridge-domain 20
[*DCGW1-bd20] vxlan vni 110 split-horizon-mode
[*DCGW1-bd20] evpn binding vpn-instance evrf2
[*DCGW1-bd20] quit
[*DCGW1] bridge-domain 30
[*DCGW1-bd30] vxlan vni 120 split-horizon-mode
[*DCGW1-bd30] evpn binding vpn-instance evrf3
[*DCGW1-bd30] quit
[*DCGW1] bridge-domain 40
[*DCGW1-bd40] vxlan vni 130 split-horizon-mode
[*DCGW1-bd40] evpn binding vpn-instance evrf4
[*DCGW1-bd40] quit
[*DCGW1] commit
Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.
- Configure an L3VPN instance on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] vxlan vni 200
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] route-distinguisher 11:11
[*DCGW1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] interface vbdif10
[*DCGW1-Vbdif10] ip binding vpn-instance vpn1
[*DCGW1-Vbdif10] ipv6 enable
[*DCGW1-Vbdif10] ipv6 address 2001:db8:1::1 64
[*DCGW1-Vbdif10] ipv6 nd generate-rd-table enable
[*DCGW1-Vbdif10] vxlan anycast-gateway enable
[*DCGW1-Vbdif10] mac-address 00e0-fc00-0002
[*DCGW1-Vbdif10] quit
[*DCGW1] interface vbdif20
[*DCGW1-Vbdif20] ip binding vpn-instance vpn1
[*DCGW1-Vbdif20] ipv6 enable
[*DCGW1-Vbdif20] ipv6 address 2001:db8:2::1 64
[*DCGW1-Vbdif20] ipv6 nd generate-rd-table enable
[*DCGW1-Vbdif20] vxlan anycast-gateway enable
[*DCGW1-Vbdif20] mac-address 00e0-fc00-0003
[*DCGW1-Vbdif20] quit
[*DCGW1] interface vbdif30
[*DCGW1-Vbdif30] ip binding vpn-instance vpn1
[*DCGW1-Vbdif30] ipv6 enable
[*DCGW1-Vbdif30] ipv6 address 2001:db8:3::1 64
[*DCGW1-Vbdif30] ipv6 nd generate-rd-table enable
[*DCGW1-Vbdif30] vxlan anycast-gateway enable
[*DCGW1-Vbdif30] mac-address 00e0-fc00-0001
[*DCGW1-Vbdif30] quit
[*DCGW1] interface vbdif40
[*DCGW1-Vbdif40] ip binding vpn-instance vpn1
[*DCGW1-Vbdif40] ipv6 enable
[*DCGW1-Vbdif40] ipv6 address 2001:db8:4::1 64
[*DCGW1-Vbdif40] ipv6 nd generate-rd-table enable
[*DCGW1-Vbdif40] vxlan anycast-gateway enable
[*DCGW1-Vbdif40] mac-address 00e0-fc00-0004
[*DCGW1-Vbdif40] quit
[*DCGW1] commit
Repeat this step for DCGW2 and each L2GW/L3GW. For configuration details, see Configuration Files in this section.
- Configure BGP EVPN on DCGW1 and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128
[*DCGW1] route-policy stopuIP deny node 10
[*DCGW1-route-policy] if-match ipv6 address prefix-list uIP
[*DCGW1-route-policy] quit
[*DCGW1] route-policy stopuIP permit node 20
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] peer 1.1.1.1 as-number 100
[*DCGW1-bgp] peer 1.1.1.1 connect-interface LoopBack 1
[*DCGW1-bgp] peer 2.2.2.2 as-number 100
[*DCGW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*DCGW1-bgp] peer 4.4.4.4 as-number 100
[*DCGW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise encap-type vxlan
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] peer 2.2.2.2 as-number 100
[*L2GW/L3GW1-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*L2GW/L3GW1-bgp] peer 3.3.3.3 as-number 100
[*L2GW/L3GW1-bgp] peer 3.3.3.3 connect-interface LoopBack 1
[*L2GW/L3GW1-bgp] peer 4.4.4.4 as-number 100
[*L2GW/L3GW1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise nd
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise encap-type vxlan
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise nd
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise nd
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- Configure a VXLAN tunnel on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] evpn
[*DCGW1-evpn] bypass-vxlan enable
[*DCGW1-evpn] quit
[*DCGW1] interface nve 1
[*DCGW1-Nve1] source 9.9.9.9
[*DCGW1-Nve1] bypass source 3.3.3.3
[*DCGW1-Nve1] mac-address 00e0-fc00-0009
[*DCGW1-Nve1] vni 100 head-end peer-list protocol bgp
[*DCGW1-Nve1] vni 110 head-end peer-list protocol bgp
[*DCGW1-Nve1] vni 120 head-end peer-list protocol bgp
[*DCGW1-Nve1] vni 130 head-end peer-list protocol bgp
[*DCGW1-Nve1] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] interface nve 1
[*L2GW/L3GW1-Nve1] source 1.1.1.1
[*L2GW/L3GW1-Nve1] vni 100 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] vni 110 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] vni 120 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] vni 130 head-end peer-list protocol bgp
[*L2GW/L3GW1-Nve1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- On each L2GW/L3GW, configure a Layer 2 sub-interface that connects to a VNF and static VPN routes to the VNF.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] interface GigabitEthernet0/1/3.1 mode l2
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] encapsulation dot1q vid 10
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] rewrite pop single
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] bridge-domain 10
[*L2GW/L3GW1-GigabitEthernet0/1/3.1] quit
[*L2GW/L3GW1] interface GigabitEthernet0/1/4.1 mode l2
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] encapsulation dot1q vid 20
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] rewrite pop single
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] bridge-domain 20
[*L2GW/L3GW1-GigabitEthernet0/1/4.1] quit
[*L2GW/L3GW1] interface GigabitEthernet0/1/5.1 mode l2
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] encapsulation dot1q vid 10
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] rewrite pop single
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] bridge-domain 10
[*L2GW/L3GW1-GigabitEthernet0/1/5.1] quit
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:1::2 tag 1000
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:2::2 tag 1000
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:1::3 tag 1000
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- On each L2GW/L3GW, configure BGP EVPN to import static VPN routes, and configure a route policy for the L3VPN instance to keep the original next hop of the static VPN routes.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-6-vpn1] import-route static
[*L2GW/L3GW1-bgp-6-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW/L3GW1-bgp-6-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply ipv6 gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] route-policy sp deny node 20
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- On each DCGW, configure default static routes for the VPN instance and loopback routes used to establish a VPN BGP peer relationship with a VNF. Then configure a route policy for the L3VPN instance so that the DCGW can advertise only the default static routes and loopback routes through BGP EVPN.
# Configure DCGW1.
[~DCGW1] ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
[*DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ipv6 enable
[*DCGW1-LoopBack2] ipv6 address 2001:db8:33::33 128
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] advertise l2vpn evpn
[*DCGW1-bgp-6-vpn1] import-route direct
[*DCGW1-bgp-6-vpn1] network :: 0
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ipv6-prefix lp index 10 permit 2001:db8:33::33 128
[*DCGW1] route-policy dp permit node 10
[*DCGW1-route-policy] if-match tag 2000
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ipv6 address prefix-list lp
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp deny node 20
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] export route-policy dp evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
- Configure each DCGW to establish a VPN BGP peer relationship with a VNF.
# Configure DCGW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
# Configure DCGW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv6-family vpn-instance vpn1
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
[*DCGW2-bgp-6-vpn1] quit
[*DCGW2-bgp] quit
[*DCGW2] commit
- Configure load balancing on each DCGW and each L2GW/L3GW.
# Configure DCGW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] maximum load-balancing 16
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DCGW2. For configuration details, see Configuration Files in this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-6-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-6-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration Files in this section.
- Verify the configuration.
Run the display bgp vpnv6 vpn-instance vpn1 peer command on each DCGW. The command output shows that the VPN BGP peer relationship between the DCGW and each VNF is Established. The following example uses the command output on DCGW1:
[~DCGW1] display bgp vpnv6 vpn-instance vpn1 peer
BGP local router ID : 9.9.9.9 Local AS number : 100 Total number of peers : 2 Peers in established state : 2 VPN-Instance vpn1, Router ID 9.9.9.9: Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv 2001:DB8:5::5 4 100 7136 7135 0 0118h05m Established 4 2001:DB8:6::6 4 100 7140 7167 0 01:59:11 Established 0
Run the display bgp vpnv6 vpn-instance vpn1 routing-table command on each DCGW. The command output shows that the DCGW has received the mobile phone route (destined for 2001:DB8:10::10 in this example) from the VNF and the next hop of the route is the VNF IP address. The following example uses the command output on DCGW1:
[~DCGW1] display bgp vpnv6 vpn-instance vpn1 routing-table
BGP Local router ID is 9.9.9.9 Status codes: * - valid, > - best, d - damped, x - best external, a - add path, h - history, i - internal, s - suppressed, S - Stale Origin : i - IGP, e - EGP, ? - incomplete RPKI validation codes: V - valid, I - invalid, N - not-found VPN-Instance vpn1, Router ID 9.9.9.9: Total Number of Routes: 19 *> Network : :: PrefixLen : 0 NextHop : :: LocPrf : MED : 0 PrefVal : 32768 Label : Path/Ogn : i * i NextHop : ::FFFF:9.9.9.9 LocPrf : 100 MED : 0 PrefVal : 0 Label : 200/NULL Path/Ogn : i *> Network : 2001:DB8:1:: PrefixLen : 64 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:1::1 PrefixLen : 128 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:2:: PrefixLen : 64 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:2::1 PrefixLen : 128 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:3:: PrefixLen : 64 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:3::1 PrefixLen : 128 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:4:: PrefixLen : 64 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:4::1 PrefixLen : 128 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *>i Network : 2001:DB8:5::5 PrefixLen : 128 NextHop : ::FFFF:1.1.1.1 LocPrf : 100 MED : 0 PrefVal : 0 Label : Path/Ogn : ? * i NextHop : ::FFFF:1.1.1.1 LocPrf : 100 MED : 0 PrefVal : 0 Label : Path/Ogn : ? *>i Network : 2001:DB8:6::6 PrefixLen : 128 NextHop : ::FFFF:1.1.1.1 LocPrf : 100 MED : 0 PrefVal : 0 Label : Path/Ogn : ? * i NextHop : ::FFFF:2.2.2.2 LocPrf : 100 MED : 0 PrefVal : 0 Label : Path/Ogn : ? * i NextHop : ::FFFF:2.2.2.2 LocPrf : 100 MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:10::10 PrefixLen : 128 NextHop : 2001:DB8:5::5 LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *> Network : 2001:DB8:33::33 PrefixLen : 128 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ? *>i Network : 2001:DB8:44::44 PrefixLen : 128 NextHop : ::FFFF:9.9.9.9 LocPrf : 100 MED : 0 PrefVal : 0 Label : 200/NULL Path/Ogn : ? *> Network : FE80:: PrefixLen : 10 NextHop : :: LocPrf : MED : 0 PrefVal : 0 Label : Path/Ogn : ?
Run the display ipv6 routing-table vpn-instance vpn1 command on each DCGW. The command output shows the mobile phone routes in the VPN routing table on the DCGW and the outbound interfaces of the routes are VBDIF interfaces.
[~DCGW1] display ipv6 routing-table vpn-instance vpn1
Routing Table : vpn1 Destinations : 15 Routes : 19 Destination : :: PrefixLength : 0 NextHop : :: Preference : 60 Cost : 0 Protocol : Static RelayNextHop : :: TunnelID : 0x0 Interface : NULL0 Flags : DB Destination : 2001:DB8:1:: PrefixLength : 64 NextHop : 2001:DB8:1::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif10 Flags : D Destination : 2001:DB8:1::1 PrefixLength : 128 NextHop : ::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif10 Flags : D Destination : 2001:DB8:2:: PrefixLength : 64 NextHop : 2001:DB8:2::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif20 Flags : D Destination : 2001:DB8:2::1 PrefixLength : 128 NextHop : ::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif20 Flags : D Destination : 2001:DB8:3:: PrefixLength : 64 NextHop : 2001:DB8:3::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif30 Flags : D Destination : 2001:DB8:3::1 PrefixLength : 128 NextHop : ::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif30 Flags : D Destination : 2001:DB8:4:: PrefixLength : 64 NextHop : 2001:DB8:4::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif40 Flags : D Destination : 2001:DB8:4::1 PrefixLength : 128 NextHop : ::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif40 Flags : D Destination : 2001:DB8:5::5 PrefixLength : 128 NextHop : 2001:DB8:2::2 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : 2001:DB8:2::2 TunnelID : 0x0 Interface : Vbdif20 Flags : RD Destination : 2001:DB8:5::5 PrefixLength : 128 NextHop : 2001:DB8:1::2 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : 2001:DB8:1::2 TunnelID : 0x0 Interface : Vbdif10 Flags : RD Destination : 2001:DB8:6::6 PrefixLength : 128 NextHop : 2001:DB8:1::3 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : 2001:DB8:1::3 TunnelID : 0x0 Interface : Vbdif10 Flags : RD Destination : 2001:DB8:6::6 PrefixLength : 128 NextHop : 2001:DB8:4::2 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : 2001:DB8:4::2 TunnelID : 0x0 Interface : Vbdif40 Flags : RD Destination : 2001:DB8:6::6 PrefixLength : 128 NextHop : 2001:DB8:3::2 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : 2001:DB8:3::2 TunnelID : 0x0 Interface : Vbdif30 Flags : RD Destination : 2001:DB8:10::10 PrefixLength : 128 NextHop : 2001:DB8:5::5 Preference : 0 Cost : 0 Protocol : IBGP RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif10 Flags : D Destination : 2001:DB8:10::10 PrefixLength : 128 NextHop : 2001:DB8:5::5 Preference : 0 Cost : 0 Protocol : IBGP RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif20 Flags : D Destination : 2001:DB8:33::33 PrefixLength : 128 NextHop : ::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : LoopBack2 Flags : D Destination : 2001:DB8:44::44 PrefixLength : 128 NextHop : ::FFFF:4.4.4.4 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : :: TunnelID : 0x0000000027f0000001 Interface : VXLAN Flags : RD Destination : FE80:: PrefixLength : 10 NextHop : :: Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : NULL0 Flags : DB
Configuration Files
DCGW1 configuration file
# sysname DCGW1 # evpn bypass-vxlan enable # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 11:11 apply-label per-instance export route-policy dp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:1::1/64 mac-address 00e0-fc00-0002 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif20 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:2::1/64 mac-address 00e0-fc00-0003 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif30 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:3::1/64 mac-address 00e0-fc00-0001 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif40 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:4::1/64 mac-address 00e0-fc00-0004 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.1.1 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.2.1 255.255.255.0 # interface LoopBack0 ip address 9.9.9.9 255.255.255.255 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # interface LoopBack2 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:33::33/128 # interface Nve1 source 9.9.9.9 bypass source 3.3.3.3 mac-address 00e0-fc00-0009 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 2.2.2.2 enable peer 4.4.4.4 enable # ipv6-family vpn-instance vpn1 network :: 0 import-route direct maximum load-balancing 16 advertise l2vpn evpn peer 2001:db8:5::5 as-number 100 peer 2001:db8:5::5 connect-interface LoopBack2 peer 2001:db8:5::5 route-policy p1 export peer 2001:db8:6::6 as-number 100 peer 2001:db8:6::6 connect-interface LoopBack2 peer 2001:db8:6::6 route-policy p1 export # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 capability-advertise add-path both peer 1.1.1.1 advertise add-path path-number 16 peer 1.1.1.1 advertise encap-type vxlan peer 2.2.2.2 enable peer 2.2.2.2 capability-advertise add-path both peer 2.2.2.2 advertise add-path path-number 16 peer 2.2.2.2 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise encap-type vxlan peer 4.4.4.4 route-policy stopuIP 
export # ospf 1 area 0.0.0.0 network 3.3.3.3 0.0.0.0 network 9.9.9.9 0.0.0.0 network 10.6.1.0 0.0.0.255 network 10.6.2.0 0.0.0.255 # route-policy dp permit node 10 if-match tag 2000 # route-policy dp permit node 15 if-match ipv6 address prefix-list lp # route-policy dp deny node 20 # route-policy p1 deny node 10 # route-policy stopuIP deny node 10 if-match ipv6 address prefix-list uIP # route-policy stopuIP permit node 20 # ip ipv6-prefix lp index 10 permit 2001:db8:33::33 128 ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128 # ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000 # return
DCGW2 configuration file
# sysname DCGW2 # evpn bypass-vxlan enable # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 22:22 apply-label per-instance export route-policy dp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:1::1/64 mac-address 00e0-fc00-0002 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif20 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:2::1/64 mac-address 00e0-fc00-0003 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif30 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:3::1/64 mac-address 00e0-fc00-0001 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif40 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:4::1/64 mac-address 00e0-fc00-0004 ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.1.2 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.3.1 255.255.255.0 # interface LoopBack0 ip address 9.9.9.9 255.255.255.255 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # interface LoopBack2 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:44::44 128 # interface Nve1 source 9.9.9.9 bypass source 4.4.4.4 mac-address 00e0-fc00-0009 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 3.3.3.3 as-number 100 peer 3.3.3.3 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 2.2.2.2 enable peer 3.3.3.3 enable # ipv6-family vpn-instance vpn1 network :: 0 import-route direct maximum load-balancing 16 advertise l2vpn evpn peer 2001:db8:5::5 as-number 100 peer 2001:db8:5::5 connect-interface LoopBack2 peer 2001:db8:5::5 route-policy p1 export peer 2001:db8:6::6 as-number 100 peer 2001:db8:6::6 connect-interface LoopBack2 peer 2001:db8:6::6 route-policy p1 export # l2vpn-family evpn undo policy vpn-target peer 1.1.1.1 enable peer 1.1.1.1 capability-advertise add-path both peer 1.1.1.1 advertise add-path path-number 16 peer 1.1.1.1 advertise encap-type vxlan peer 2.2.2.2 enable peer 2.2.2.2 capability-advertise add-path both peer 2.2.2.2 advertise add-path path-number 16 peer 2.2.2.2 advertise encap-type vxlan peer 3.3.3.3 enable peer 3.3.3.3 advertise encap-type vxlan peer 3.3.3.3 route-policy stopuIP 
export # ospf 1 area 0.0.0.0 network 4.4.4.4 0.0.0.0 network 9.9.9.9 0.0.0.0 network 10.6.1.0 0.0.0.255 network 10.6.3.0 0.0.0.255 # route-policy dp permit node 10 if-match tag 2000 # route-policy dp permit node 15 if-match ipv6 address prefix-list lp # route-policy dp deny node 20 # route-policy p1 deny node 10 # route-policy stopuIP deny node 10 if-match ipv6 address prefix-list uIP # route-policy stopuIP permit node 20 # ip ipv6-prefix lp index 10 permit 2001:db8:44::44 128 ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128 # ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000 # return
L2GW/L3GW1 configuration file
# sysname L2GW/L3GW1 # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 33:33 apply-label per-instance export route-policy sp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:1::1/64 mac-address 00e0-fc00-0002 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif20 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:2::1/64 mac-address 00e0-fc00-0003 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif30 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:3::1/64 mac-address 00e0-fc00-0001 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif40 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:4::1/64 mac-address 00e0-fc00-0004 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.4.1 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.2.2 255.255.255.0 # interface GigabitEthernet0/1/3.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface GigabitEthernet0/1/4.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface GigabitEthernet0/1/5.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # interface Nve1 source 1.1.1.1 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 2.2.2.2 as-number 100 peer 2.2.2.2 connect-interface LoopBack1 peer 3.3.3.3 as-number 100 peer 3.3.3.3 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 2.2.2.2 enable peer 3.3.3.3 enable peer 4.4.4.4 enable # ipv6-family vpn-instance vpn1 import-route static maximum load-balancing 16 advertise l2vpn evpn import-route-multipath # l2vpn-family evpn undo policy vpn-target bestroute add-path path-number 16 peer 2.2.2.2 enable peer 2.2.2.2 advertise nd peer 2.2.2.2 advertise encap-type vxlan peer 3.3.3.3 enable peer 3.3.3.3 advertise nd peer 3.3.3.3 capability-advertise add-path both peer 3.3.3.3 advertise add-path path-number 16 peer 3.3.3.3 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise nd peer 4.4.4.4 capability-advertise add-path both peer 4.4.4.4 advertise add-path 
path-number 16 peer 4.4.4.4 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 1.1.1.1 0.0.0.0 network 10.6.2.0 0.0.0.255 network 10.6.4.0 0.0.0.255 # route-policy sp permit node 10 if-match tag 1000 apply ipv6 gateway-ip origin-nexthop # route-policy sp deny node 20 # ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:1::2 tag 1000 ipv6 route-static vpn-instance vpn1 2001:db8:5::5 128 2001:db8:2::2 tag 1000 ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:1::3 tag 1000 # return
L2GW/L3GW2 configuration file
# sysname L2GW/L3GW2 # evpn vpn-instance evrf1 bd-mode route-distinguisher 1:1 vpn-target 1:1 export-extcommunity vpn-target 1:1 import-extcommunity # evpn vpn-instance evrf2 bd-mode route-distinguisher 2:2 vpn-target 2:2 export-extcommunity vpn-target 2:2 import-extcommunity # evpn vpn-instance evrf3 bd-mode route-distinguisher 3:3 vpn-target 3:3 export-extcommunity vpn-target 3:3 import-extcommunity # evpn vpn-instance evrf4 bd-mode route-distinguisher 4:4 vpn-target 4:4 export-extcommunity vpn-target 4:4 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 44:44 apply-label per-instance export route-policy sp evpn vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 200 # bridge-domain 10 vxlan vni 100 split-horizon-mode evpn binding vpn-instance evrf1 # bridge-domain 20 vxlan vni 110 split-horizon-mode evpn binding vpn-instance evrf2 # bridge-domain 30 vxlan vni 120 split-horizon-mode evpn binding vpn-instance evrf3 # bridge-domain 40 vxlan vni 130 split-horizon-mode evpn binding vpn-instance evrf4 # interface Vbdif10 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:1::1/64 mac-address 00e0-fc00-0002 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif20 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:2::1/64 mac-address 00e0-fc00-0003 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif30 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:3::1/64 mac-address 00e0-fc00-0001 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface Vbdif40 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:db8:4::1/64 mac-address 00e0-fc00-0004 ipv6 nd collect host enable ipv6 nd generate-rd-table enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/1 undo shutdown ip address 10.6.4.2 255.255.255.0 # interface GigabitEthernet0/1/2 undo shutdown ip address 10.6.3.2 255.255.255.0 # interface GigabitEthernet0/1/3.1 mode l2 encapsulation dot1q vid 30 rewrite pop single bridge-domain 30 # interface GigabitEthernet0/1/4.1 mode l2 encapsulation dot1q vid 40 rewrite pop single bridge-domain 40 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # interface Nve1 source 2.2.2.2 vni 100 head-end peer-list protocol bgp vni 110 head-end peer-list protocol bgp vni 120 head-end peer-list protocol bgp vni 130 head-end peer-list protocol bgp # bgp 100 peer 1.1.1.1 as-number 100 peer 1.1.1.1 connect-interface LoopBack1 peer 3.3.3.3 as-number 100 peer 3.3.3.3 connect-interface LoopBack1 peer 4.4.4.4 as-number 100 peer 4.4.4.4 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 1.1.1.1 enable peer 3.3.3.3 enable peer 4.4.4.4 enable # ipv6-family vpn-instance vpn1 import-route static maximum load-balancing 16 advertise l2vpn evpn import-route-multipath # l2vpn-family evpn undo policy vpn-target bestroute add-path path-number 16 peer 1.1.1.1 enable peer 1.1.1.1 advertise nd peer 1.1.1.1 advertise encap-type vxlan peer 3.3.3.3 enable peer 3.3.3.3 advertise nd peer 3.3.3.3 capability-advertise add-path both peer 3.3.3.3 advertise add-path path-number 16 peer 3.3.3.3 advertise encap-type vxlan peer 4.4.4.4 enable peer 4.4.4.4 advertise nd peer 4.4.4.4 capability-advertise add-path both peer 4.4.4.4 advertise add-path path-number 16 peer 4.4.4.4 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 2.2.2.2 0.0.0.0 network 
10.6.3.0 0.0.0.255 network 10.6.4.0 0.0.0.255 # route-policy sp permit node 10 if-match tag 1000 apply ipv6 gateway-ip origin-nexthop # route-policy sp deny node 20 # ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:3::2 tag 1000 ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:4::2 tag 1000 # return
VNF1 configuration file
For details, see the configuration file of a specific device model.
VNF2 configuration file
For details, see the configuration file of a specific device model.
Example for Configuring Three-Segment VXLAN to Implement Layer 3 Interworking (IPv6 Services)
This section provides an example for configuring three-segment VXLAN to implement Layer 3 interworking between VMs in different DCs.
Networking Requirements
In Figure 1-1131, DC A and DC B reside in different BGP ASs. To allow intra-DC VM communication (between VMa1 and VMa2 in DC A, and between VMb1 and VMb2 in DC B), configure BGP EVPN on the devices in each DC to create VXLAN tunnels between distributed gateways. To allow IPv6 service interworking between DCs (for example, between VMa1 and VMb2), configure BGP EVPN on Leaf 2 and Leaf 3 to create another VXLAN tunnel. In this way, three-segment VXLAN tunnels are established to implement DC interconnection.
Interfaces 1 through 3 in this example represent GE 0/1/0, GE 0/2/0, and GE 0/3/0, respectively.
Device | Interface | IP Address | Device | Interface | IP Address
---|---|---|---|---|---
Device1 | GE 0/1/0 | 192.168.50.1/24 | Device2 | GE 0/1/0 | 192.168.60.1/24
| GE 0/2/0 | 192.168.1.1/24 | | GE 0/2/0 | 192.168.1.2/24
| LoopBack1 | 1.1.1.1/32 | | LoopBack1 | 2.2.2.2/32
Spine1 | GE 0/1/0 | 192.168.10.1/24 | Spine2 | GE 0/1/0 | 192.168.30.1/24
| GE 0/2/0 | 192.168.20.1/24 | | GE 0/2/0 | 192.168.40.1/24
| LoopBack1 | 3.3.3.3/32 | | LoopBack1 | 4.4.4.4/32
Leaf1 | GE 0/1/0 | 192.168.10.2/24 | Leaf4 | GE 0/1/0 | 192.168.40.2/24
| GE 0/2/0 | - | | GE 0/2/0 | -
| LoopBack1 | 5.5.5.5/32 | | LoopBack1 | 8.8.8.8/32
Leaf2 | GE 0/1/0 | 192.168.20.2/24 | Leaf3 | GE 0/1/0 | 192.168.30.2/24
| GE 0/2/0 | - | | GE 0/2/0 | -
| GE 0/3/0 | 192.168.50.2/24 | | GE 0/3/0 | 192.168.60.2/24
| LoopBack1 | 6.6.6.6/32 | | LoopBack1 | 7.7.7.7/32
Configuration Roadmap
The configuration roadmap is as follows:
- Configure IP addresses for each node.
- Configure an IGP for nodes to communicate with each other.
- Configure static routes for DCs to communicate with each other.
- Configure BGP EVPN on DC A and DC B to create VXLAN tunnels between distributed gateways.
- Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.
Data Preparation
To complete the configuration, you need the following data:
- VLAN IDs of VMs
- BD IDs
- VNI IDs of BDs and VPN instances
Procedure
- Assign an IP address to each node interface, including the loopback interface.
For configuration details, see Configuration Files in this section.
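As a minimal sketch (taken from the Leaf 1 configuration file at the end of this section), the underlay and loopback addressing on Leaf 1 is as follows; the other nodes are addressed according to the preceding table:
[~Leaf1] interface GigabitEthernet0/1/0
[*Leaf1-GigabitEthernet0/1/0] undo shutdown
[*Leaf1-GigabitEthernet0/1/0] ip address 192.168.10.2 255.255.255.0
[*Leaf1-GigabitEthernet0/1/0] quit
[*Leaf1] interface LoopBack1
[*Leaf1-LoopBack1] ip address 5.5.5.5 255.255.255.255
[*Leaf1-LoopBack1] quit
[*Leaf1] commit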
- Configure an IGP. In this example, OSPF is used.
For configuration details, see Configuration Files in this section.
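A minimal OSPF sketch for Spine 1, matching its configuration file (each node within a DC advertises its loopback and underlay networks in the same way):
[~Spine1] ospf 1
[*Spine1-ospf-1] area 0.0.0.0
[*Spine1-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[*Spine1-ospf-1-area-0.0.0.0] network 192.168.10.0 0.0.0.255
[*Spine1-ospf-1-area-0.0.0.0] network 192.168.20.0 0.0.0.255
[*Spine1-ospf-1-area-0.0.0.0] quit
[*Spine1-ospf-1] quit
[*Spine1] commit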
- Configure static routes for DCs to communicate with each other.
For configuration details, see Configuration Files in this section.
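A minimal sketch for Leaf 2, taken from its configuration file: the static routes point to Device 1 so that Leaf 2 can reach Leaf 3's VTEP address (7.7.7.7) and the links toward DC B. Leaf 3, Device 1, and Device 2 are configured symmetrically.
[~Leaf2] ip route-static 7.7.7.7 255.255.255.255 192.168.50.1
[*Leaf2] ip route-static 192.168.1.0 255.255.255.0 192.168.50.1
[*Leaf2] ip route-static 192.168.60.0 255.255.255.0 192.168.50.1
[*Leaf2] commit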
- Configure BGP EVPN on DC A and DC B to create VXLAN tunnels between distributed gateways.
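The full configuration is provided in Configuration Files in this section. As a minimal sketch, the BGP EVPN peering between Leaf 1 and Leaf 2 in DC A, drawn from the Leaf 1 configuration file, is as follows:
[~Leaf1] bgp 100
[*Leaf1-bgp] peer 6.6.6.6 as-number 100
[*Leaf1-bgp] peer 6.6.6.6 connect-interface LoopBack1
[*Leaf1-bgp] l2vpn-family evpn
[*Leaf1-bgp-af-evpn] peer 6.6.6.6 enable
[*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise irbv6
[*Leaf1-bgp-af-evpn] peer 6.6.6.6 advertise encap-type vxlan
[*Leaf1-bgp-af-evpn] quit
[*Leaf1-bgp] ipv6-family vpn-instance vpn1
[*Leaf1-bgp-6-vpn1] import-route direct
[*Leaf1-bgp-6-vpn1] advertise l2vpn evpn
[*Leaf1-bgp-6-vpn1] quit
[*Leaf1-bgp] quit
[*Leaf1] commit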
- Configure BGP EVPN on Leaf 2 and Leaf 3 to establish a VXLAN tunnel between them.
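The key point in this step is that Leaf 2 and Leaf 3 establish a multi-hop EBGP EVPN peer relationship and re-originate the EVPN routes received from their local DCs, which stitches the three VXLAN segments together. A minimal sketch for Leaf 2, drawn from its configuration file (Leaf 3 mirrors this configuration toward 6.6.6.6):
[~Leaf2] bgp 100
[*Leaf2-bgp] peer 7.7.7.7 as-number 200
[*Leaf2-bgp] peer 7.7.7.7 ebgp-max-hop 255
[*Leaf2-bgp] peer 7.7.7.7 connect-interface LoopBack1
[*Leaf2-bgp] l2vpn-family evpn
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 enable
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise irbv6
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise encap-type vxlan
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 import reoriginate
[*Leaf2-bgp-af-evpn] peer 7.7.7.7 advertise route-reoriginated evpn ipv6
[*Leaf2-bgp-af-evpn] quit
[*Leaf2-bgp] quit
[*Leaf2] commit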
- Verify the configuration.
After completing the configurations, run the display vxlan tunnel command on each leaf node to view VXLAN tunnel information. The following example uses the command output on Leaf 2.
[~Leaf2] display vxlan tunnel
Number of vxlan tunnel : 2 Tunnel ID Source Destination State Type Uptime --------------------------------------------------------------------- 4026531841 6.6.6.6 5.5.5.5 up dynamic 00:11:01 4026531842 6.6.6.6 7.7.7.7 up dynamic 00:12:11
Run the display ipv6 routing-table vpn-instance vpn1 command on each leaf node to view IP route information. The following example uses the command output on Leaf 1.
[~Leaf1] display ipv6 routing-table vpn-instance vpn1
Routing Table : vpn1 Destinations : 6 Routes : 6 Destination : 2001:DB8:10:: PrefixLength : 64 NextHop : 2001:DB8:10::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif10 Flags : D Destination : 2001:DB8:10::1 PrefixLength : 128 NextHop : ::1 Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : Vbdif10 Flags : D Destination : 2001:DB8:20:: PrefixLength : 64 NextHop : ::FFFF:6.6.6.6 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : :: TunnelID : 0x0000000027f0000001 Interface : VXLAN Flags : RD Destination : 2001:DB8:30:: PrefixLength : 64 NextHop : ::FFFF:6.6.6.6 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : :: TunnelID : 0x0000000027f0000001 Interface : VXLAN Flags : RD Destination : 2001:DB8:40:: PrefixLength : 64 NextHop : ::FFFF:6.6.6.6 Preference : 255 Cost : 0 Protocol : IBGP RelayNextHop : :: TunnelID : 0x0000000027f0000001 Interface : VXLAN Flags : RD Destination : FE80:: PrefixLength : 10 NextHop : :: Preference : 0 Cost : 0 Protocol : Direct RelayNextHop : :: TunnelID : 0x0 Interface : NULL0 Flags : DB
After the configurations are complete, VMa1 and VMb2 can communicate with each other.
Configuration Files
Spine 1 configuration file
# sysname Spine1 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.10.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.20.1 255.255.255.0 # interface LoopBack1 ip address 3.3.3.3 255.255.255.255 # ospf 1 area 0.0.0.0 network 3.3.3.3 0.0.0.0 network 192.168.10.0 0.0.0.255 network 192.168.20.0 0.0.0.255 # return
Leaf 1 configuration file
# sysname Leaf1 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 11:11 apply-label per-instance vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif10 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:DB8:10::1/64 ipv6 nd collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.10.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface LoopBack1 ip address 5.5.5.5 255.255.255.255 # interface Nve1 source 5.5.5.5 vni 10 head-end peer-list protocol bgp # bgp 100 peer 6.6.6.6 as-number 100 peer 6.6.6.6 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 6.6.6.6 enable # ipv6-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 6.6.6.6 enable peer 6.6.6.6 advertise irbv6 peer 6.6.6.6 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 5.5.5.5 0.0.0.0 network 192.168.10.0 0.0.0.255 # return
Leaf 2 configuration file
# sysname Leaf2 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 11:11 apply-label per-instance vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif20 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:DB8:20::1/64 ipv6 nd collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.20.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface GigabitEthernet0/3/0 undo shutdown ip address 192.168.50.2 255.255.255.0 # interface LoopBack1 ip address 6.6.6.6 255.255.255.255 # interface Nve1 source 6.6.6.6 vni 20 head-end peer-list protocol bgp # bgp 100 peer 5.5.5.5 as-number 100 peer 5.5.5.5 connect-interface LoopBack1 peer 7.7.7.7 as-number 200 peer 7.7.7.7 ebgp-max-hop 255 peer 7.7.7.7 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 5.5.5.5 enable peer 7.7.7.7 enable # ipv6-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 5.5.5.5 enable peer 5.5.5.5 advertise irbv6 peer 5.5.5.5 advertise encap-type vxlan peer 5.5.5.5 import reoriginate peer 5.5.5.5 advertise route-reoriginated evpn ipv6 peer 7.7.7.7 enable peer 7.7.7.7 advertise irbv6 peer 7.7.7.7 advertise encap-type vxlan peer 7.7.7.7 import reoriginate peer 7.7.7.7 advertise route-reoriginated evpn ipv6 # ospf 1 area 0.0.0.0 network 6.6.6.6 0.0.0.0 network 192.168.20.0 0.0.0.255 # ip route-static 7.7.7.7 255.255.255.255 192.168.50.1 ip route-static 192.168.1.0 255.255.255.0 192.168.50.1 ip route-static 192.168.60.0 255.255.255.0 192.168.50.1 # return
Spine 2 configuration file
# sysname Spine2 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.30.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.40.1 255.255.255.0 # interface LoopBack1 ip address 4.4.4.4 255.255.255.255 # ospf 1 area 0.0.0.0 network 4.4.4.4 0.0.0.0 network 192.168.30.0 0.0.0.255 network 192.168.40.0 0.0.0.255 # return
Leaf 3 configuration file
# sysname Leaf3 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 11:11 apply-label per-instance vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 10 vxlan vni 10 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif10 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:DB8:30::1/64 ipv6 nd collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.30.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 10 rewrite pop single bridge-domain 10 # interface GigabitEthernet0/3/0 undo shutdown ip address 192.168.60.2 255.255.255.0 # interface LoopBack1 ip address 7.7.7.7 255.255.255.255 # interface Nve1 source 7.7.7.7 vni 10 head-end peer-list protocol bgp # bgp 200 peer 6.6.6.6 as-number 100 peer 6.6.6.6 ebgp-max-hop 255 peer 6.6.6.6 connect-interface LoopBack1 peer 8.8.8.8 as-number 200 peer 8.8.8.8 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 6.6.6.6 enable peer 8.8.8.8 enable # ipv6-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 6.6.6.6 enable peer 6.6.6.6 advertise irbv6 peer 6.6.6.6 advertise encap-type vxlan peer 6.6.6.6 import reoriginate peer 6.6.6.6 advertise route-reoriginated evpn ipv6 peer 8.8.8.8 enable peer 8.8.8.8 advertise irbv6 peer 8.8.8.8 advertise encap-type vxlan peer 8.8.8.8 import reoriginate peer 8.8.8.8 advertise route-reoriginated evpn ipv6 # ospf 1 area 0.0.0.0 network 7.7.7.7 0.0.0.0 network 192.168.30.0 0.0.0.255 # ip route-static 6.6.6.6 255.255.255.255 192.168.60.1 ip route-static 192.168.1.0 255.255.255.0 192.168.60.1 ip route-static 192.168.50.0 255.255.255.0 192.168.60.1 # return
Leaf 4 configuration file
# sysname Leaf4 # evpn vpn-instance evrf1 bd-mode route-distinguisher 10:1 vpn-target 11:1 export-extcommunity vpn-target 11:1 import-extcommunity # ip vpn-instance vpn1 ipv6-family route-distinguisher 11:11 apply-label per-instance vpn-target 11:1 export-extcommunity evpn vpn-target 11:1 import-extcommunity evpn vxlan vni 5010 # bridge-domain 20 vxlan vni 20 split-horizon-mode evpn binding vpn-instance evrf1 # interface Vbdif20 ip binding vpn-instance vpn1 ipv6 enable ipv6 address 2001:DB8:40::1/64 ipv6 nd collect host enable vxlan anycast-gateway enable # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.40.2 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown # interface GigabitEthernet0/2/0.1 mode l2 encapsulation dot1q vid 20 rewrite pop single bridge-domain 20 # interface LoopBack1 ip address 8.8.8.8 255.255.255.255 # interface Nve1 source 8.8.8.8 vni 20 head-end peer-list protocol bgp # bgp 200 peer 7.7.7.7 as-number 200 peer 7.7.7.7 connect-interface LoopBack1 # ipv4-family unicast undo synchronization peer 7.7.7.7 enable # ipv6-family vpn-instance vpn1 import-route direct advertise l2vpn evpn # l2vpn-family evpn undo policy vpn-target peer 7.7.7.7 enable peer 7.7.7.7 advertise irbv6 peer 7.7.7.7 advertise encap-type vxlan # ospf 1 area 0.0.0.0 network 8.8.8.8 0.0.0.0 network 192.168.40.0 0.0.0.255 # return
Device 1 configuration file
# sysname Device1 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.50.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.1.1 255.255.255.0 # interface LoopBack1 ip address 1.1.1.1 255.255.255.255 # ip route-static 6.6.6.6 255.255.255.255 192.168.50.2 ip route-static 7.7.7.7 255.255.255.255 192.168.1.2 ip route-static 192.168.60.0 255.255.255.0 192.168.1.2 # return
Device 2 configuration file
# sysname Device2 # interface GigabitEthernet0/1/0 undo shutdown ip address 192.168.60.1 255.255.255.0 # interface GigabitEthernet0/2/0 undo shutdown ip address 192.168.1.2 255.255.255.0 # interface LoopBack1 ip address 2.2.2.2 255.255.255.255 # ip route-static 6.6.6.6 255.255.255.255 192.168.1.1 ip route-static 7.7.7.7 255.255.255.255 192.168.60.2 ip route-static 192.168.50.0 255.255.255.0 192.168.1.1 # return